Modeling morphological affixation with interpretable recurrent networks: sequential rebinding controlled by hierarchical attention

Abstract

This paper proposes a recurrent neural network model that learns to perform morphological affixation, a fundamental operation of linguistic cognition, and has interpretable relations to descriptions of morphology at the computational and algorithmic levels. The model represents morphological sequences (stems and affixes) with distributed representations that support binding of symbols to ordinal positions and position-based unbinding. Construction of an affixed form is controlled at the implementation level by shifting attention between morphemes and across positions within each morpheme. The model successfully learns patterns of prefixation, suffixation, and infixation, unifying these at all levels of description around the theoretical notion of a pivot. Connections of the present proposal to neural coding of ordinal position, and to computational models of serial recall, are noted.
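The abstract's notion of binding symbols to ordinal positions, with position-based unbinding, can be illustrated with a minimal tensor-product sketch in the style of Smolensky-type distributed representations. This is a hypothetical illustration, not the paper's actual model: the vector dimensions, the random filler vectors, and the use of exactly orthonormal role vectors are all assumptions made here so that unbinding is exact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions for illustration: filler (symbol) vectors of size 8,
# role (position) vectors of size 4, and a 3-symbol sequence.
d_f, d_r, n_pos = 8, 4, 3

# Random filler vectors for three symbols of a toy stem "kat".
symbols = {s: rng.standard_normal(d_f) for s in "kat"}

# Orthonormal position (role) vectors, so position-based unbinding is exact.
roles = np.linalg.qr(rng.standard_normal((d_r, n_pos)))[0].T  # shape (n_pos, d_r)

# Bind: the sequence is the sum of outer products filler_i (x) role_i.
seq = "kat"
T = sum(np.outer(symbols[s], roles[i]) for i, s in enumerate(seq))

# Unbind ordinal position 1 by contracting with its role vector;
# orthonormality of roles makes this recover the filler exactly.
f = T @ roles[1]
print(np.allclose(f, symbols[seq[1]]))  # → True
```

With merely random (non-orthonormal) role vectors, unbinding would be approximate and typically cleaned up with a nearest-neighbor step over the filler vocabulary; the orthonormal choice above keeps the demonstration exact.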
