For a single conditioned (i.e., predictive) stimulus, Hebbian learning actually works fine. The problem arises when there are several conditioned stimuli: Hebbian learning would then create too many associations, and in an unbalanced way. For example, suppose both a bell and a green light predict food. Simple Hebbian learning would associate each of those stimuli with the food, and the association strengths would be computed independently of each other; as a result, the predictions may interfere with each other and lead to poor prediction. This has been investigated in a famous twist on the basic classical conditioning experiment: after the main experiment, a second experiment is run in which both the bell and a newly introduced green light predict food. In this case, the dog will not learn to associate the green light with the food, because the connection from the bell is already enough to predict the food, and there is no need to construct an additional association from the light. This is in contrast to what Hebbian learning would do. The brain apparently tries to be economical and constructs only those connections that are necessary for predicting the food. Therefore, the association strength of one conditioned stimulus also depends on the associations of the other stimuli. This is why most research assumes a supervised model, which typically learns the several association strengths in a balanced way, and thus explains these experiments better than simple Hebbian learning does. A basic supervised learning rule accomplishing this is the Rescorla-Wagner model; see, e.g., the review by Miller et al. (1995).
It further models the dynamics of learning, as in the bell/light example above, where it is important that the bell is associated with the food first and the light is introduced only later; the existing association with the bell “blocks” the development of a new association with the light.
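The blocking behavior described above can be illustrated with a minimal simulation of the Rescorla-Wagner rule. The sketch below is a simplified illustration with arbitrarily chosen parameter values (learning rate, trial counts), not a reproduction of any specific experiment. The key point is that all stimuli present on a trial share a single prediction error, so each association strength depends on the others:

```python
# Minimal sketch of the Rescorla-Wagner learning rule.
# All stimuli present on a trial are updated by the SAME prediction error,
# so one stimulus's association depends on the others' -- unlike Hebbian
# learning, where each association would be computed independently.

def rescorla_wagner(trials, n_stimuli=2, alpha=0.1, lam=1.0):
    """trials: list of (stimulus_vector, reward) pairs.
    alpha: learning rate; lam: asymptotic association strength (both illustrative)."""
    w = [0.0] * n_stimuli
    for x, r in trials:
        # Shared prediction error: actual outcome minus summed prediction.
        error = r * lam - sum(wi * xi for wi, xi in zip(w, x))
        for i in range(n_stimuli):
            w[i] += alpha * error * x[i]  # only present stimuli are updated
    return w

# Blocking experiment: phase 1 trains the bell alone, phase 2 adds the light.
phase1 = [((1, 0), 1)] * 100   # bell alone -> food
phase2 = [((1, 1), 1)] * 100   # bell + light -> food
w_bell, w_light = rescorla_wagner(phase1 + phase2)
# After phase 1 the bell already predicts the food, so the prediction error
# in phase 2 is near zero and the light acquires almost no association.
```

Running this, `w_bell` ends up close to 1 while `w_light` stays near 0: the bell's established association leaves no residual error for the light to absorb, which is exactly the blocking effect.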