The face is a central communication channel, providing information about the identities of our interaction partners and the potential mental states expressed by their facial motor configurations. Although it is well known that infants' ability to recognise people follows a developmental trajectory, it remains an open question how face identity recognition skills develop and, in particular, how facial expression and identity processing interact during this development. We propose that acquiring information about the facial motor configurations observed in face stimuli encountered throughout development is sufficient to develop a face-space representation. This representation encodes the observed face stimuli as points in a multidimensional psychological space that supports both facial identity and expression recognition. We validate our hypothesis through computational simulations and suggest potential implications of this account for the available findings on face processing.
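To make the face-space idea concrete, the following is a minimal sketch, not the model used in the simulations: it assumes synthetic "motor configuration" vectors as face stimuli, a PCA-like projection to define the space, and nearest-neighbour readout; all names, dimensions, and the generative assumptions are hypothetical.

```python
# Illustrative face-space sketch (assumption: faces are identity prototypes
# plus expression offsets in a "motor configuration" space; not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
n_identities, n_expressions, dim = 5, 4, 50

# Hypothetical generative process for the observed stimuli.
identities = rng.normal(size=(n_identities, dim))
expressions = 0.5 * rng.normal(size=(n_expressions, dim))

faces, id_labels, expr_labels = [], [], []
for i in range(n_identities):
    for e in range(n_expressions):
        faces.append(identities[i] + expressions[e] + 0.1 * rng.normal(size=dim))
        id_labels.append(i)
        expr_labels.append(e)
faces = np.array(faces)

# Face-space: project each stimulus onto the principal axes of the observed
# set, so every face becomes a point in a low-dimensional psychological space.
centred = faces - faces.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
face_space = centred @ vt[:10].T  # 10-D face-space coordinates

def nearest_neighbour_label(query, points, labels):
    """Classify a face-space point by its nearest stored exemplar."""
    distances = np.linalg.norm(points - query, axis=1)
    return labels[int(np.argmin(distances))]

# Leave-one-out check: the same geometry can serve both identity and
# expression recognition, as the abstract suggests.
correct_id = correct_expr = 0
for k in range(len(face_space)):
    others = np.delete(face_space, k, axis=0)
    correct_id += nearest_neighbour_label(
        face_space[k], others, np.delete(id_labels, k)) == id_labels[k]
    correct_expr += nearest_neighbour_label(
        face_space[k], others, np.delete(expr_labels, k)) == expr_labels[k]

print(f"identity acc: {correct_id / len(face_space):.2f}, "
      f"expression acc: {correct_expr / len(face_space):.2f}")
```

Under these toy assumptions, identity clusters and expression offsets occupy partly separable directions of the same space, which is one way the shared representation could assist both recognition tasks.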