Article ID: | iaor20021514 |
Country: | Netherlands |
Volume: | 32 |
Issue: | 1 |
Start Page Number: | 27 |
End Page Number: | 40 |
Publication Date: | Nov 2001 |
Journal: | Decision Support Systems |
Authors: | Benbasat Izak, Suh Kil-Soo, Lee Hyun-Kyu |
Keywords: | performance |
Research on visual and auditory modalities in human–computer interfaces has aimed to make the interface resemble the way people naturally acquire information. The objective of this study is to compare the effectiveness of visual, auditory, and multi-modal interfaces for representing information across different problem domains. The results indicate that the visual and auditory modalities were each effective in different problem domains: the visual modality was generally appropriate for representing static events, whereas the auditory modality was appropriate for representing changing events. Multi-modal interfaces led to significantly better performance than either the auditory or the visual modality alone in a high-attention task; no statistically significant differences were observed for the low-attention task.