Article ID: iaor20135425
Volume: 26
Issue: 4
Start Page Number: 242
End Page Number: 252
Publication Date: Dec 2013
Journal: OR Insight
Authors: Jim Freeman, Bland Tomkinson
Keywords: higher education, performance evaluation, portfolio analysis
A variety of assessment formats has evolved in higher education in recent years, many inspired by task-related activities in the working environment. Some are not new: at Masters level, the dissertation is long established, whereas at undergraduate level, projects and portfolios are proving increasingly popular. Portfolios are particularly favoured for professional subjects. Implementing these alternative forms of assessment is not always straightforward, even when strict rubrics are applied. As a consequence, double-marking is frequently used in an effort to reduce the subjectivity of the marks awarded. Unfortunately, as recent studies have shown, this strategy too can prove problematic, especially when there is an irreconcilable disagreement between first and second examiners. In this article, we focus on the issue of inter-marker conflict and, through a series of simple statistical models, offer insights into how final marks might be determined more fairly.
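The abstract does not describe the statistical models themselves. As a purely illustrative sketch (not the authors' method), the Python snippet below shows one common way to quantify inter-marker disagreement on double-marked work, using the mean paired difference and the Pearson correlation between first and second examiners' marks, followed by a naive reconciled mark formed by simple averaging. The marks data are invented for illustration.

```python
# Illustrative only: measure agreement between two examiners on
# double-marked work and form a naive combined mark by averaging.
# The marks below are invented; this is not the article's model.
from statistics import mean, stdev

first = [62, 58, 71, 45, 66, 80, 55, 49, 73, 60]   # first examiner's marks
second = [65, 54, 69, 52, 70, 74, 60, 47, 75, 58]  # second examiner's marks

diffs = [a - b for a, b in zip(first, second)]
mean_diff = mean(diffs)   # systematic leniency/severity gap between examiners
sd_diff = stdev(diffs)    # spread of disagreement across candidates

# Pearson correlation: do the two examiners rank candidates similarly?
mx, my = mean(first), mean(second)
cov = sum((a - mx) * (b - my) for a, b in zip(first, second))
r = cov / (sum((a - mx) ** 2 for a in first) ** 0.5
           * sum((b - my) ** 2 for b in second) ** 0.5)

# Naive reconciliation: the unweighted average of the two marks.
final = [round((a + b) / 2, 1) for a, b in zip(first, second)]

print(f"mean difference: {mean_diff:.2f} (sd {sd_diff:.2f})")
print(f"correlation between examiners: {r:.3f}")
print("averaged final marks:", final)
```

A large mean difference or low correlation would flag the kind of irreconcilable disagreement discussed in the article, in which case a simple average is unlikely to be a fair resolution and more careful modelling of marker effects is needed.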