The relative amount of information contributed by learning and by pre-specification in an SRN trained to compute sameness
Juan Valle-Lisboa
Abstract
We analyze the conditions under which Simple Recurrent Networks (SRNs) learn and generalize sameness. This task is difficult for a generic SRN: several properties of the network have to be established prior to any learning for generalization to occur. We show that when the initial weights are restricted to a set of narrow intervals, a network can learn sameness from a limited set of examples. The intervals depend on the particular training set, and we obtained them from a series of simulations using the complete training set. This allows us to approximate the relative amounts of information provided by the initial structure and by the examples. Although we did not arrive at a general rule, in all the cases we examined the initial structure provides much more information than the examples. This shows that if something similar to artificial neural networks (ANNs) operates in the brain, a rich innate structure is needed to support the learning of general functions.
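As a concrete illustration of the setup, the sketch below builds an Elman-style SRN on a toy sameness task, with every weight drawn from a narrow interval rather than a broad random range. The layer sizes, interval bounds, and training pairs are illustrative assumptions, not the values or the estimation procedure used in the paper, and the finite-difference update is only a dependency-free stand-in for the actual training algorithm.

```python
# Minimal sketch of an Elman-style SRN on a toy sameness task.
# Layer sizes, weight intervals, and the training set are illustrative
# placeholders, not the values used in the study.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 4, 6, 1   # one-hot items, hidden units, same/different output

def init_in_interval(shape, lo, hi):
    """Pre-specification step: draw each weight from a narrow interval."""
    return rng.uniform(lo, hi, size=shape)

# Hypothetical narrow intervals; the paper estimates them per training set.
W_xh = init_in_interval((N_HID, N_IN), 0.4, 0.6)    # input -> hidden
W_hh = init_in_interval((N_HID, N_HID), -0.1, 0.1)  # context -> hidden
W_hy = init_in_interval((N_OUT, N_HID), 0.4, 0.6)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(seq):
    """Elman recurrence: the context layer is the previous hidden state."""
    h = np.zeros(N_HID)
    for x in seq:
        h = sigmoid(W_xh @ x + W_hh @ h)
    return sigmoid(W_hy @ h)[0]            # P('same') after seeing both items

def one_hot(i):
    v = np.zeros(N_IN)
    v[i] = 1.0
    return v

# Sameness task: the target is 1 iff the two items in the sequence match.
pairs = [(i, j) for i in range(N_IN) for j in range(N_IN)]
X = [(one_hot(i), one_hot(j)) for i, j in pairs]
y = np.array([float(i == j) for i, j in pairs])

def loss():
    preds = np.array([forward(seq) for seq in X])
    return float(np.mean((preds - y) ** 2))

# Crude central-difference gradient descent; a stand-in for backprop
# through time, kept dependency-free for the sake of the sketch.
LR, EPS = 1.0, 1e-5
for epoch in range(300):
    for W in (W_xh, W_hh, W_hy):
        grad = np.zeros_like(W)
        for idx in np.ndindex(*W.shape):
            old = W[idx]
            W[idx] = old + EPS; up = loss()
            W[idx] = old - EPS; down = loss()
            W[idx] = old
            grad[idx] = (up - down) / (2 * EPS)
        W -= LR * grad

print(f"final mean squared error: {loss():.4f}")
```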
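One hedged way to make the information comparison concrete, assuming a uniform prior over each weight: constraining a weight to a narrow interval contributes log2(full range / interval width) bits, while each binary same/different label contributes at most one bit. All numbers below are illustrative assumptions tied to the toy network above, not the paper's measurements.

```python
# Back-of-the-envelope comparison of pre-specified vs. learned information,
# under an assumed uniform prior; every number here is illustrative.
import math

FULL_RANGE = 20.0        # assumed prior range per weight, e.g. [-10, 10]
INTERVAL_WIDTH = 0.2     # assumed width of each narrow initial interval
N_WEIGHTS = 66           # weights in the toy SRN above: 6*4 + 6*6 + 1*6

bits_per_weight = math.log2(FULL_RANGE / INTERVAL_WIDTH)  # about 6.6 bits
bits_structure = N_WEIGHTS * bits_per_weight

N_EXAMPLES = 16          # all ordered pairs of 4 items
bits_examples = N_EXAMPLES * 1.0  # a binary label carries at most 1 bit

print(f"initial structure: ~{bits_structure:.0f} bits")
print(f"training examples: <= {bits_examples:.0f} bits")
```

On these toy numbers the pre-specified intervals contribute on the order of hundreds of bits against at most sixteen from the labels, which is the qualitative pattern the paper reports across its cases.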