The role of perception in sighted infant motor development is well-established, but what are the processes by which an infant manages to handle the complex, high-dimensional visual input? Clearly, the input has to be modeled in terms of low-dimensional codes so that plans may be made in a more abstract space. While a number of computational studies have investigated the question of motor control, the question of how the input dimensionality is reduced for motor control purposes remains unexplored. In this work we propose a mapping that, starting from eye-centered visual input, organizes the perceptual images in a lower-dimensional space so that perceptually similar arm poses remain close. In low-noise situations, we find that the dimensionality of this discovered lower-dimensional embedding matches the degrees of freedom of the motion. We further show how complex reaching and obstacle-avoidance motions may be learned in this lower-dimensional motor space. The computational study suggests a possible mechanism for models in psychology that argue for high orders of dimensionality reduction in moving from task space to specific actions.
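The abstract does not name a specific embedding algorithm, so the following is only a minimal illustrative sketch of the general idea: high-dimensional images of a simulated two-link arm are embedded by a generic nonlinear manifold-learning method (Isomap is used here purely as a stand-in), and the residual reconstruction error is examined as the embedding dimension grows. Under the stated low-noise assumption, the error should flatten once the embedding dimension reaches the arm's degrees of freedom. The renderer, pose ranges, and image size are all hypothetical choices made for this sketch.

```python
# Illustrative sketch only (not the paper's implementation): a 2-DOF planar arm
# is rasterized into small binary images, and a generic manifold-learning
# method (Isomap, chosen as a stand-in) embeds the pixel vectors so that
# perceptually similar poses stay close in the low-dimensional space.
import numpy as np
from sklearn.manifold import Isomap

def render_arm(theta1, theta2, size=32, l1=0.5, l2=0.4):
    """Rasterize a two-link arm (shoulder at the image center) into a binary image."""
    img = np.zeros((size, size))
    elbow = np.array([l1 * np.cos(theta1), l1 * np.sin(theta1)])
    hand = elbow + np.array([l2 * np.cos(theta1 + theta2),
                             l2 * np.sin(theta1 + theta2)])
    for a, b in [(np.zeros(2), elbow), (elbow, hand)]:
        for t in np.linspace(0.0, 1.0, 50):       # sample points along each link
            x, y = (1 - t) * a + t * b
            r = int((y + 1) / 2 * (size - 1))     # map [-1, 1] to a pixel index
            c = int((x + 1) / 2 * (size - 1))
            img[r, c] = 1.0
    return img.ravel()

# Sample random poses (2 degrees of freedom) and build the image dataset.
rng = np.random.default_rng(0)
thetas = rng.uniform([0.0, 0.2], [np.pi, 2.5], size=(800, 2))
X = np.array([render_arm(t1, t2) for t1, t2 in thetas])   # 800 x 1024 pixel vectors

# Embed with increasing dimensionality; in this low-noise setting the residual
# error drops sharply up to d = 2 (the arm's DOF) and flattens afterwards.
for d in range(1, 5):
    emb = Isomap(n_neighbors=10, n_components=d)
    emb.fit(X)
    print(f"dim {d}: reconstruction error = {emb.reconstruction_error():.4f}")
```

In this toy setting the "knee" in the reconstruction-error curve provides a simple proxy for the intrinsic dimensionality claim made in the abstract; how the actual study discovers the embedding and learns reaching policies on it is described in the body of the paper.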