Conversation-Based Assessment (CBA) is a relatively new method of measuring student cognitive skills (e.g., science reasoning) through adaptive dialogues with automated characters. This approach combines the openness of natural language with the interactivity of spoken dialogue to engage students in verbal reasoning and constructive processes (i.e., cognition). These two dimensions differentiate CBA from other assessment item types (e.g., multiple choice and essays) by allowing greater freedom in responses along with the ability to adapt to and follow up on particular threads of information. The conversational exchange affords a rich data stream that can provide additional explanatory evidence of students’ cognition, exhibited through conversational content and dialogue paths. The current work, built on the AutoTutor dialogue engine, discusses the affordances and constraints of CBA and how this approach may complement and enhance other methods of measuring cognitive skills.
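
To make the adaptive, follow-up-driven structure described above concrete, the sketch below shows a minimal turn loop for a single conversation-based assessment item. It is a hypothetical illustration under assumed names (run_item, classify_response, DialogueLog), not the AutoTutor engine's actual API; a real system would use semantic matching or a trained language model rather than the placeholder scorer shown here.

```python
# Hypothetical sketch of a conversation-based assessment turn loop.
# None of these names come from AutoTutor; they only illustrate the
# general pattern of adapting follow-up prompts to an open student
# response while logging the dialogue path as assessment evidence.

from dataclasses import dataclass, field


@dataclass
class DialogueLog:
    """Records conversational content and the path taken through the item."""
    turns: list = field(default_factory=list)

    def record(self, prompt: str, response: str, score: str) -> None:
        self.turns.append({"prompt": prompt, "response": response, "score": score})


def classify_response(response: str, expectation: str) -> str:
    """Placeholder scorer: a real system would compare the response against
    the targeted expectation with semantic matching, not substring lookup."""
    return "full" if expectation.lower() in response.lower() else "partial"


def run_item(main_question: str, expectation: str,
             follow_ups: list, get_response) -> DialogueLog:
    """Ask the main question, then adapt follow-up prompts to the response."""
    log = DialogueLog()
    response = get_response(main_question)
    score = classify_response(response, expectation)
    log.record(main_question, response, score)

    # Adapt: keep following up on the thread until the expectation is met
    # or the scripted follow-ups are exhausted.
    for prompt in follow_ups:
        if score == "full":
            break
        response = get_response(prompt)
        score = classify_response(response, expectation)
        log.record(prompt, response, score)
    return log


if __name__ == "__main__":
    log = run_item(
        "Why does the ball slow down after it is thrown?",
        expectation="friction",
        follow_ups=[
            "What force acts against the ball's motion?",
            "Think about the surfaces in contact. What happens there?",
        ],
        get_response=input,  # in practice, a spoken or typed dialogue interface
    )
    print(log.turns)
```

In this toy loop, the logged turns correspond to the rich data stream the section describes: both what the student said (conversational content) and which follow-ups were needed (the dialogue path) are available as evidence of reasoning.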