Humans use metaphor to turn abstract referents and discourse structures into gesture. The semantic relation between abstract communicative intentions and their physical realization in gesture, however, has not been fully addressed. We hypothesize that a limited set of primary metaphors and image schemas underlies a wide range of gestures. Our analysis of a video corpus supports this view: over 90% of the gestures in the corpus are structured by image schemas via a limited set of primary metaphors. This analysis informs the extension of a computational model that grounds communicative intentions in a physical, embodied context through those primary metaphors and image schemas. The model is used to generate gesture performances for virtual characters.