Tik-61.261 Principles of Neural Computing
Raivio, Venna

Exercise 2
1. The error-correction learning rule may be implemented by using inhibition to subtract the desired response (target value) from the output, and then applying the anti-Hebbian rule. Discuss this interpretation of error-correction learning.
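As a starting point for the discussion, the equivalence can be checked numerically. The sketch below is a minimal illustration, assuming a single linear neuron and an arbitrary target function (both are illustrative choices, not part of the exercise): subtracting the desired response from the output via inhibition and then applying the anti-Hebbian update reproduces the error-correction (delta) rule exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=3)          # synaptic weights of a single linear neuron
eta = 0.1                       # learning rate (assumed value)

for _ in range(1000):
    x = rng.normal(size=3)      # presynaptic (input) signal
    d = 2.0 * x[0] - x[1]       # desired response (arbitrary illustrative target)
    y = w @ x                   # postsynaptic (output) signal

    # Inhibition subtracts the desired response from the output ...
    y_inhibited = y - d
    # ... and the anti-Hebbian rule (negated Hebbian product) is applied:
    w += -eta * y_inhibited * x
    # This equals the error-correction rule w += eta * (d - y) * x,
    # since -(y - d) = (d - y); the weights converge toward [2, -1, 0].
```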

2. Figure 1 shows a two-dimensional set of data points. Some of the data points belong to class $\mathcal{C}_1$ and the rest belong to class $\mathcal{C}_2$. Construct the decision boundary produced by the nearest neighbor rule applied to this data sample.
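The boundary can be traced computationally as a sanity check for the hand construction. The 1-NN decision boundary is piecewise linear: it is made up of segments of perpendicular bisectors between samples of opposite classes (edges of the Voronoi tessellation that separate the classes). The sample coordinates below are hypothetical stand-ins, since the actual points of Figure 1 are not reproduced here.

```python
import numpy as np

# Hypothetical sample points standing in for the two classes of Figure 1.
class_a = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0]])
class_b = np.array([[3.0, 3.0], [2.5, 2.0], [3.5, 2.5]])

points = np.vstack([class_a, class_b])
labels = np.array([0, 0, 0, 1, 1, 1])

def nn_classify(q):
    """Nearest-neighbor rule: assign q the label of its closest sample."""
    dists = np.linalg.norm(points - q, axis=1)
    return labels[np.argmin(dists)]

# Scanning a grid reveals the piecewise-linear boundary between the
# regions labeled 0 and 1.
xx, yy = np.meshgrid(np.linspace(-1, 4, 50), np.linspace(-1, 4, 50))
grid_labels = np.array([nn_classify(q) for q in np.c_[xx.ravel(), yy.ravel()]])
```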

3. A generalized form of Hebb's rule is described by the relation

$$\Delta w_{kj}(n) = \alpha F(x_j(n))\, G(y_k(n)) - \beta w_{kj}(n)\, F(x_j(n))$$

where $x_j(n)$ and $y_k(n)$ are the presynaptic and postsynaptic signals, respectively; $F(\cdot)$ and $G(\cdot)$ are functions of their respective arguments; and $\Delta w_{kj}(n)$ is the change produced in the synaptic weight $w_{kj}$ at time $n$ in response to the signals $x_j(n)$ and $y_k(n)$. Find the balance point and the maximum depression that are defined by this rule.
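One way to set up the problem (assuming the rule has the common textbook form $\Delta w_{kj}(n) = \alpha F(x_j(n))G(y_k(n)) - \beta w_{kj}(n)F(x_j(n))$; the original equation was lost in extraction, so this form is an assumption): the balance point is the weight value at which the update vanishes,

```latex
\Delta w_{kj}(n)
  = F(x_j(n))\left[\alpha\, G(y_k(n)) - \beta\, w_{kj}(n)\right] = 0
\quad\Longrightarrow\quad
w_{kj} = \frac{\alpha}{\beta}\, G(y_k(n)).
```

The maximum depression is then the most negative value $\Delta w_{kj}(n)$ can take; if $G(\cdot)$ is nonnegative with minimum value $0$, this is $-\beta\, w_{kj}(n)\, F(x_j(n))$, attained when $G(y_k(n)) = 0$.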

4. An input signal of unit amplitude is applied repeatedly to a synaptic connection whose initial value is also unity. Calculate the variation in the synaptic weight with time using the following rules:
1. The simple form of Hebb's rule described by

$$\Delta w_{kj}(n) = \eta\, y_k(n)\, x_j(n),$$

assuming the learning rate $\eta = 0.1$.
2. The covariance rule described by

$$\Delta w_{kj}(n) = \eta\, (x_j(n) - \bar{x})(y_k(n) - \bar{y}),$$

assuming that the time-averaged values of the presynaptic signal and postsynaptic signal are $\bar{x} = 0$ and $\bar{y} = 1.0$, respectively.
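The weight trajectories can be simulated directly. The sketch below assumes a linear neuron, $y = w x$ (an assumption; the exercise only fixes the input amplitude and the initial weight), the simple Hebbian update $\Delta w = \eta y x$ with $\eta = 0.1$, and the covariance update $\Delta w = \eta (x - \bar{x})(y - \bar{y})$ with $\bar{x} = 0$ and $\bar{y} = 1.0$:

```python
eta = 0.1          # learning rate
x = 1.0            # input signal of unit amplitude
n_steps = 5

# --- Simple Hebb rule: dw = eta * y * x, with linear output y = w * x.
# Each step multiplies the weight by (1 + eta), so it grows geometrically.
w_hebb = [1.0]
for _ in range(n_steps):
    y = w_hebb[-1] * x
    w_hebb.append(w_hebb[-1] + eta * y * x)

# --- Covariance rule: dw = eta * (x - x_bar) * (y - y_bar).
# With y = w * x = 1 and y_bar = 1, the update is zero: the weight sits
# at its balance point and never moves.
x_bar, y_bar = 0.0, 1.0
w_cov = [1.0]
for _ in range(n_steps):
    y = w_cov[-1] * x
    w_cov.append(w_cov[-1] + eta * (x - x_bar) * (y - y_bar))
```

The contrast is the point of the exercise: plain Hebbian learning drives the weight to grow without bound, while the covariance rule stabilizes it.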

5. Formulate the expression for the output $y_j$ of neuron $j$ in the network of Figure 2. You may use the following notations:

$x_i$ = ith input signal
$w_{ji}$ = synaptic weight from input $i$ to neuron $j$
$c_{kj}$ = weight of lateral connection from neuron $k$ to neuron $j$
$v_j$ = induced local field of neuron $j$

What is the condition that would have to be satisfied for neuron $j$ to be the winning neuron?
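The winning condition can be checked numerically. In the sketch below the weight matrix and input are illustrative assumptions (they are not taken from Figure 2), and the lateral connections are omitted so that only the feedforward contribution to the induced local field is computed; the winner is the neuron whose field dominates all others.

```python
import numpy as np

# Hypothetical three-neuron competitive layer with two inputs.
W = np.array([[0.2, 0.8],    # W[j, i]: weight from input i to neuron j
              [0.9, 0.1],
              [0.5, 0.5]])
x = np.array([1.0, 0.0])     # input signal

# Feedforward induced local field v_j = sum_i W[j, i] * x[i]
# (lateral-connection terms of Figure 2 are omitted in this sketch).
v = W @ x

# Winning condition: v[j] >= v[k] for all k != j.
winner = int(np.argmax(v))
```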

Jarkko Venna 2005-04-13