Conversational discourse is a cognitive and social process influenced by both discourse content and pragmatic factors, such as the participants’ prior knowledge; these factors may also shape how simulated conversations with virtual agents unfold, with implications for design. This study explored the effects of question content and of a virtual agent’s perceived expertise on students’ interactions with a conversation-based assessment (CBA) measuring science inquiry skills. Twenty-four middle school students were randomly assigned to work with a High- or Low-Knowledge virtual peer to collect data and generate weather predictions. Students evaluated their own data relative to the peer’s: they could either "Choose" which note to keep or "Agree/Disagree" with the peer’s suggested choice of note. Students rated the peer as more expert in the High-Knowledge condition, but peer expertise did not affect performance. The Agree/Disagree condition, however, improved the accuracy of students’ note choices and yielded marginally higher pre-post learning gains.