Tik-61.261 Principles of Neural Computing
Raivio, Venna
Exercise 2, 4.2.2004
The error-correction learning rule may be implemented by using
inhibition to subtract the desired response (target value) from the
output, and then applying the anti-Hebbian rule. Discuss this
interpretation of error-correction learning.
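As a starting point for the discussion, the equivalence can be checked numerically. The sketch below (all numeric values are illustrative assumptions, not from the exercise) compares the error-correction update $\eta(d - y)x$ with an anti-Hebbian update applied to the signal $e = y - d$, i.e. the output with the desired response subtracted by inhibition:

```python
import numpy as np

# Illustrative check: the error-correction update eta*(d - y)*x equals an
# anti-Hebbian update -eta*e*x applied to e = y - d, the output with the
# desired response subtracted via inhibition. All values below are assumed.
eta = 0.1
x = np.array([1.0, -0.5])   # presynaptic input (illustrative)
w = np.array([0.2, 0.4])    # synaptic weights (illustrative)
d = 1.0                     # desired response (target value)

y = w @ x                   # linear neuron output
e = y - d                   # error signal after subtracting the target

dw_error_correction = eta * (d - y) * x
dw_anti_hebbian = -eta * e * x   # anti-Hebbian rule on the error signal

assert np.allclose(dw_error_correction, dw_anti_hebbian)
```

The two updates coincide term by term, which is the identity the question asks you to interpret.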
Figure 1 shows a two-dimensional set of data points. Part of the data points belongs to class $\mathcal{C}_1$ and the other part belongs to class $\mathcal{C}_2$. Construct the decision boundary produced by the nearest-neighbor rule applied to this data sample.
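The rule itself is simple to state in code. The sketch below uses hypothetical stand-in points (not the actual data of Figure 1): a query is assigned the class of its closest training sample, so the resulting decision boundary is piecewise linear, made of Voronoi-cell edges between points of different classes:

```python
import numpy as np

# Hypothetical stand-in for the data of Figure 1 (the real coordinates
# are in the figure, not reproduced here).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
labels = np.array([1, 1, 1, 2])   # classes C1 and C2

def nn_classify(q):
    """Return the class of the training point nearest to query q."""
    dists = np.linalg.norm(X - q, axis=1)
    return labels[np.argmin(dists)]

print(nn_classify(np.array([0.2, 0.1])))  # -> 1 (nearest point is in C1)
print(nn_classify(np.array([1.8, 1.9])))  # -> 2 (nearest point is in C2)
```

Evaluating `nn_classify` on a dense grid and marking where the predicted label changes traces out exactly the boundary the exercise asks you to construct.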
A generalized form of Hebb's rule is described by the relation
$$\Delta w_{kj}(n) = \alpha F(y_k(n))\,G(x_j(n)) - \beta w_{kj}(n) F(y_k(n)),$$
where $x_j(n)$ and $y_k(n)$ are the presynaptic and postsynaptic signals, respectively; $F(\cdot)$ and $G(\cdot)$ are functions of their respective arguments; and $\Delta w_{kj}(n)$ is the change produced in the synaptic weight $w_{kj}(n)$ at time $n$ in response to the signals $x_j(n)$ and $y_k(n)$. Find the balance point and the maximum depression that are defined by this rule.
An input signal of unit amplitude is applied repeatedly to a
synaptic connection whose initial value is also unity. Calculate the
variation in the synaptic weight with time using the following rules:
The simple form of Hebb's rule described by
$$\Delta w_{kj}(n) = \eta\, y_k(n) x_j(n),$$
assuming a fixed learning-rate parameter $\eta$.
The covariance rule described by
$$\Delta w_{kj}(n) = \eta\,(x_j(n) - \bar{x})(y_k(n) - \bar{y}),$$
assuming given time-averaged values $\bar{x}$ and $\bar{y}$ of the presynaptic and postsynaptic signals, respectively.
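A short simulation shows the qualitative difference between the two rules (the values of $\eta$, $\bar{x}$, and $\bar{y}$ below are assumptions for illustration, not the ones given in the exercise). With a unit input applied repeatedly and $y(n) = w(n)\,x$, the simple Hebb rule grows the weight geometrically, while the covariance rule is driven by the deviations from the time averages:

```python
# Illustrative simulation; eta, x_bar, y_bar are assumed values.
# Unit input x = 1 applied repeatedly; initial weight w(0) = 1;
# the postsynaptic signal is taken as y(n) = w(n) * x.
eta, x, x_bar, y_bar = 0.1, 1.0, 0.5, 0.5

w_hebb = 1.0
w_cov = 1.0
for n in range(5):
    y = w_hebb * x
    w_hebb += eta * y * x                      # simple Hebb: dw = eta*y*x
    y = w_cov * x
    w_cov += eta * (x - x_bar) * (y - y_bar)   # covariance rule

print(w_hebb)  # grows geometrically: w(n+1) = (1 + eta) * w(n)
print(w_cov)
```

The unbounded growth of `w_hebb` is the instability of the plain Hebbian rule that the covariance form is meant to temper.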
Formulate the expression for the output $y_j$ of neuron $j$ in the network of Figure 2. You may use the following notation:
$x_i$ : $i$th input signal,
$w_{ji}$ : synaptic weight from input $i$ to neuron $j$,
$c_{jk}$ : weight of the lateral connection from neuron $k$ to neuron $j$,
$v_j$ : induced local field of neuron $j$.
What is the condition that would have to be satisfied for neuron $j$ to be the winning neuron?
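As a concrete starting point, the feedforward part of such a network can be sketched as follows (the dimensions and weight values are illustrative assumptions). Each neuron $j$ forms the induced local field $v_j = \sum_i w_{ji} x_i$; the lateral connections $c_{jk}$ then implement the winner-takes-all competition, which this sketch approximates by simply taking the largest field:

```python
import numpy as np

# Sketch of the feedforward stage of a competitive network like Figure 2.
# Sizes and weight values are illustrative assumptions. The lateral
# connections c_jk would implement the competition dynamically; here the
# winner is read off directly from the induced local fields.
x = np.array([0.8, 0.2, 0.5])    # input signals x_i
W = np.array([[0.9, 0.1, 0.3],   # feedforward weights w_ji (row j, col i)
              [0.2, 0.7, 0.4]])

v = W @ x                         # induced local fields v_j = sum_i w_ji*x_i
winner = int(np.argmax(v))        # neuron j wins if v_j > v_k for all k != j
print(winner)  # -> 0
```

The winning condition coded in the `argmax` line, $v_j > v_k$ for all $k \neq j$, is the answer the last part of the question is after.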
Figure 1:
Data points belonging to classes $\mathcal{C}_1$ and $\mathcal{C}_2$ are plotted
with 'x' and '*', respectively.
Figure 2:
Simple competitive learning network with feedforward
connections from the source nodes to the neurons, and lateral
connections among the neurons.