In spoken dialogue between people and machines, the computer must understand not only what the speaker means but also what she does not. The computer begins with a considerable disadvantage: even the best speech recognition technology can provide error-ridden transcriptions of human speech under real-world telephone conditions. The work reported here examines how, and how well, people use context to interpret noisy transcribed utterances in a challenging domain. Models learned from this experiment highlight two aspects of this human skill: the ability to detect a context-supported match, and the ability to recognize when the quality of attempted matches is so poor that the match should be questioned. These models can then be applied by a spoken dialogue system to find the correct interpretation of users' spoken requests, despite incorrect speech recognition.