From Words to Sentences & Back: Characterizing Context-dependent Meaning Representations in the Brain


Recent machine learning systems in vision and language processing have drawn attention to single-word vector spaces, where concepts are represented by a set of attributes based on textual and perceptual input. However, such representations still fall short of symbol grounding. In contrast, Grounded Cognition theories such as CAR (Concept Attribute Representation; Binder, 2009) provide a brain-based componential semantic representation. Building on this theory, this research aims to understand an intriguing effect of grounding, i.e., how word meaning changes depending on context. CAR representations of words are mapped to fMRI images of subjects reading different sentences, and the contribution of each word is determined through multiple linear regression and the FGREP neural network. As a result, the FGREP model identifies significant changes in the CARs for the same word used across sentences. In future work, such context-modified word vectors could be used as representations for a more robust natural language processing system.
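The regression step can be sketched as a least-squares mapping from sentence-level CAR vectors to fMRI activation patterns. This is a minimal illustration, not the authors' pipeline: the dimensions, the synthetic data, and the choice of summing word CARs into a sentence vector are all assumptions (66 attributes follows Binder's feature set; the voxel count is arbitrary).

```python
import numpy as np

# Hypothetical dimensions: 66 CAR attributes (Binder's feature set),
# 200 voxels per flattened fMRI image, 80 stimulus sentences.
rng = np.random.default_rng(0)
n_sentences, n_attrs, n_voxels = 80, 66, 200

# Assumed sentence representation: sum of the CAR vectors of its words
# (synthetic stand-ins here, since the real data is not available).
X = rng.standard_normal((n_sentences, n_attrs))
# Observed fMRI activation per sentence (also synthetic).
Y = rng.standard_normal((n_sentences, n_voxels))

# Multiple linear regression: find W minimizing ||X @ W - Y||^2,
# i.e. a linear map from semantic attributes to voxel activations.
W, residuals, rank, _ = np.linalg.lstsq(X, Y, rcond=None)

# Predicted activation pattern for one sentence's CAR vector.
y_pred = X[0] @ W
print(W.shape, y_pred.shape)  # (66, 200) (200,)
```

A fitted `W` of this kind is what lets the per-attribute contribution of each word be inspected; the FGREP step then adjusts the word vectors themselves rather than the mapping.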
