Article ID: iaor2016724
Volume: 25
Issue: 1
Start Page Number: 77
End Page Number: 89
Publication Date: Jan 2016
Journal: European Journal of Information Systems
Authors: Baskerville Richard, Venable John, Pries-Heje Jan
Keywords: research, performance, computers: information
Evaluation of design artefacts and design theories is a key activity in Design Science Research (DSR), as it provides feedback for further development and, if done correctly, assures the rigour of the research. However, the extant DSR literature provides insufficient guidance to enable design science researchers to design and incorporate evaluation activities that achieve the goals and objectives of a DSR project. To address this gap, this paper develops, explicates, and provides evidence for the utility of a Framework for Evaluation in Design Science (FEDS), together with a process to guide design science researchers in developing a strategy for evaluating the artefacts they develop within a DSR project. A FEDS strategy considers why, when, how, and what to evaluate. FEDS includes a two-dimensional characterisation of DSR evaluation episodes (particular evaluations): one dimension is the functional purpose of the evaluation (formative or summative); the other is the paradigm of the evaluation (artificial or naturalistic). The FEDS evaluation design process comprises four steps: (1) explicate the goals of the evaluation, (2) choose the evaluation strategy or strategies, (3) determine the properties to evaluate, and (4) design the individual evaluation episode(s). The paper illustrates the framework with two examples and provides evidence of its utility via a naturalistic, summative evaluation of its use on an actual DSR project.