Self-Organizing Feature Map
The basic idea of the SOFM is to incorporate into the competitive learning rule some degree of sensitivity to the neighborhood or to the training history. This provides a way to avoid totally unlearned neurons, and it helps preserve certain topological properties that the feature mapping should retain.
Suppose that an input pattern has N features and is represented by a vector $\mathbf{x}$ in an N-dimensional pattern space. The network maps the input pattern to an output space, which is taken to be a one-dimensional or two-dimensional array of output nodes possessing a certain topological ordering. The question is how to train the network so that this ordered relationship is preserved. Kohonen proposed allowing the output nodes to interact laterally, leading to the self-organizing feature map.
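For concreteness, a two-dimensional output array can be represented by the grid coordinates of its nodes. The small helper below (an illustrative assumption, not from the text) builds such a layout for use in the training sketch further below.

```python
import numpy as np

def make_grid(rows, cols):
    """Grid coordinates r_j of a rows x cols array of output nodes."""
    return np.array([(r, c) for r in range(rows) for c in range(cols)],
                    dtype=float)
```

Each row of the returned array is the position of one output node, so distances between rows measure topological closeness in the output space.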
The most prominent feature of the SOFM is the concept of excitatory learning within a neighborhood around the winning neuron, where the size of the neighborhood decreases with each iteration. The training phase proceeds as follows.
First, a winning neuron is selected as the one with the shortest Euclidean distance between its weight vector and the input vector,
$$i^* = \arg\min_i \|\mathbf{x} - \mathbf{w}_i\|,$$
where $\mathbf{w}_i$ denotes the weight vector corresponding to the $i$th output neuron. Let $i^*$ denote the index of the winner, and let $I^*$ denote the set of indices corresponding to a defined neighborhood of the winner $i^*$. Then the weights associated with the winner and its neighboring neurons are updated by
$$\Delta \mathbf{w}_j = \eta\,(\mathbf{x} - \mathbf{w}_j)$$
for all indices $j \in I^*$, where $\eta$ is a small positive learning rate. The amount of updating may also be weighted according to a preassigned "neighborhood function" $\Lambda(j, i^*)$:
$$\Delta \mathbf{w}_j = \eta\,\Lambda(j, i^*)\,(\mathbf{x} - \mathbf{w}_j)$$
for all $j$. For example, a neighborhood function may be chosen as
$$\Lambda(j, i^*) = \exp\!\left(-\frac{\|\mathbf{r}_j - \mathbf{r}_{i^*}\|^2}{2\sigma^2}\right),$$
where $\mathbf{r}_j$ represents the position of neuron $j$ in the output space. The convergence of the feature map depends on a proper choice of the learning rate $\eta(t)$; one common choice is $\eta(t) = 1/t$. The size of the neighborhood (i.e., the width $\sigma(t)$) should likewise decrease gradually over the course of training.
Each weight update should be immediately succeeded by the normalization
$$\mathbf{w}_j \leftarrow \frac{\mathbf{w}_j}{\|\mathbf{w}_j\|}.$$
In the retrieving phase, every output neuron calculates the Euclidean distance between its weight vector and the input vector, and the winning neuron is the one with the shortest distance.
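Below is a minimal sketch of one training iteration under the rule above, assuming a two-dimensional grid of output neurons. The decay schedules and all names (sofm_train_step, eta0, sigma0, tau) are illustrative assumptions, not prescriptions from the text.

```python
import numpy as np

def sofm_train_step(x, W, positions, t, eta0=0.5, sigma0=2.0, tau=1000.0):
    """One SOFM training iteration (illustrative sketch).

    x         : (N,) input vector
    W         : (M, N) weight matrix, one row per output neuron
    positions : (M, 2) grid coordinates r_j of the output neurons
    t         : current iteration (1-based), used to decay eta and sigma
    """
    # Learning rate and neighborhood width both shrink over time,
    # e.g. eta(t) = eta0 / t and an exponentially decaying sigma(t).
    eta = eta0 / t
    sigma = sigma0 * np.exp(-t / tau)

    # Winner: the output neuron whose weight vector is closest to x.
    i_star = np.argmin(np.linalg.norm(W - x, axis=1))

    # Gaussian neighborhood function Lambda(j, i*) over grid positions.
    d2 = np.sum((positions - positions[i_star]) ** 2, axis=1)
    lam = np.exp(-d2 / (2.0 * sigma ** 2))

    # Weighted excitatory update, immediately followed by normalization.
    W += eta * lam[:, None] * (x - W)
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    return i_star
```

The winner computation in the first step doubles as the retrieval rule: presenting a pattern and taking the arg-min distance gives the responding neuron.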
Competitive Learning with History Sensitivity
Incorporating some history or frequency sensitivity into the competitive learning rule provides another way to alleviate the problem of totally unlearned neurons or prejudiced training. There are two approaches:
1. Modulate the selection of the winner by the frequency sensitivity.
2. Modulate the rate of training (the learning rate) by the frequency sensitivity.
As an example of the first approach, we present the following competitive learning rule (a code sketch follows the description):
1. Select the winner $i^*$ as the neuron with, for example, the smallest frequency-weighted Euclidean distance,
$$i^* = \arg\min_i F(i)\,\|\mathbf{x} - \mathbf{w}_i\|.$$
2. Update the weights associated with the winner,
$$\mathbf{w}_{i^*} \leftarrow \mathbf{w}_{i^*} + \eta\,(\mathbf{x} - \mathbf{w}_{i^*}).$$
This technique is called frequency-sensitive competitive learning, where the parameter $F(i^*)$ is a function of how frequently the $i^*$th node has been selected as the winner. For example, $F(i^*)$ may be chosen as the raw win count $n_{i^*}(t)$ or as the relative win frequency $n_{i^*}(t)/t$, where $n_{i^*}(t)$ denotes the number of winning times by the $i^*$th node up to time $t$.
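The sketch below implements this rule, assuming the count-based choice $F(i^*) = n_{i^*}(t)$; the name fscl_step and its parameters are illustrative assumptions.

```python
import numpy as np

def fscl_step(x, W, wins, eta=0.1):
    """One frequency-sensitive competitive learning step (illustrative sketch).

    x    : (N,) input vector
    W    : (M, N) weight matrix, one row per output neuron
    wins : (M,) win counts n_i(t); initialize to np.ones(M) so that the
           first competition reduces to plain nearest-neighbor selection
    """
    # Winner selection modulated by F(i) = n_i(t): frequent winners have
    # their distances inflated, so rarely chosen neurons still get trained.
    scores = wins * np.linalg.norm(W - x, axis=1)
    i_star = np.argmin(scores)

    # Only the winner's weights move toward the input.
    W[i_star] += eta * (x - W[i_star])
    wins[i_star] += 1
    return i_star
```

Modulating the learning rate instead (the second approach) would amount to scaling eta by a decreasing function of wins[i_star] while leaving the winner selection unmodulated.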