Exploring the Use of Conversations with Virtual Agents in Assessment Contexts

Abstract

Conversations with computer agents can be used to measure skills that are difficult to assess with traditional multiple-choice items. To achieve natural conversations in this form of assessment, we are exploring issues related to how test-takers interact with computer agents, such as which dialogue moves lead to interpretable responses, how the "cognitive characteristics" of computer agents influence interaction, how the system should adapt to test-taker responses, and how these interactions affect test-takers' emotions. In this presentation we discuss our current research addressing these questions, illustrating the key dimensions involved in designing a conversation space and how each design decision can impact multiple factors within assessment contexts.
