Affective and Non-affective Meaning in Words and Pictures

Abstract

When people see a snake, they are likely to activate both affective information (e.g., dangerous) and non-affective information (e.g., animal). According to the Affective Primacy Hypothesis, the affective information has priority, and its activation can precede identification of the ontological category of a stimulus. Alternatively, according to the Cognitive Primacy Hypothesis, perceivers must know what they are looking at before they can make an affective judgment about it. We propose that neither hypothesis holds at all times. In two experiments, we show that the relative speed with which affective and non-affective information is activated by words and pictures depends upon the contexts in which the stimuli are processed. These data support a view according to which words and pictures do not “have” meanings; rather, they are cues to activate patterns of stored knowledge, the specifics of which are co-determined by the item itself and the context in which it occurs.
