A basic competitive learning network has one layer of input neurons
and one layer of output neurons. An input pattern x is a sample
point in an n-dimensional real or binary vector space. Binary-valued
(1 or 0) local representations are commonly used for the output
nodes. That is, there are as many output neurons as there are classes,
and each output node represents one pattern category.
A competitive learning network comprises a feedforward excitatory
network and a lateral inhibitory network. The feedforward network
usually implements an excitatory Hebbian learning rule: when an input
cell persistently participates in firing an output cell, the input
cell's influence in firing that output cell is increased. The lateral
network is inhibitory in nature. It serves the important role of
selecting the winner, often via a competitive learning process
following the "winner-take-all" scheme. In a winner-take-all circuit,
the output unit receiving the largest input is assigned a full value
(e.g., 1), whereas all other units are suppressed to a 0 value. The
winner-take-all circuit is usually implemented by a (digital or
analog) MAXNET network. Another example of a lateral network is
Kohonen's self-organizing feature map. By allowing the output nodes
to interact via the lateral network, the neural model can be trained
to preserve certain topological orderings of the input patterns.
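As a minimal sketch of the winner-take-all selection (Python with NumPy; the function name and values are illustrative, and the iterative MAXNET dynamics are replaced here by a direct argmax):

```python
import numpy as np

def winner_take_all(u):
    """Assign 1 to the output unit receiving the largest input; suppress all others to 0."""
    y = np.zeros_like(u)
    y[np.argmax(u)] = 1.0
    return y

# The second unit receives the largest net input and wins the competition.
print(winner_take_all(np.array([0.2, 0.9, 0.5])))  # -> [0. 1. 0.]
```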
Unsupervised classification procedures are often based on some kind
of clustering strategy, which forms groups of similar patterns. The clustering
technique is very useful for pattern classification problems. Furthermore,
it plays a pivotal role in many competitive learning networks. For a clustering
procedure, it is necessary to define a similarity measure to be used for
evaluating how close the patterns are. The most common measure is the
Euclidean distance,

$$d(\mathbf{x}, \mathbf{w}) = \left[ \sum_{i=1}^{n} (x_i - w_i)^2 \right]^{1/2},$$

and a popular variant is the weighted Euclidean distance,

$$d(\mathbf{x}, \mathbf{w}) = \left[ \sum_{i=1}^{n} a_i (x_i - w_i)^2 \right]^{1/2},$$

where the weighting factors $a_i \geq 0$ reflect the relative importance
of each coordinate.
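A short sketch of both measures (Python/NumPy; the per-coordinate factors `a` are assumed given):

```python
import numpy as np

def euclidean(x, w):
    """Standard Euclidean distance between pattern x and prototype w."""
    return np.sqrt(np.sum((x - w) ** 2))

def weighted_euclidean(x, w, a):
    """Weighted Euclidean distance; a holds one nonnegative factor per coordinate."""
    return np.sqrt(np.sum(a * (x - w) ** 2))

x = np.array([1.0, 0.0, 1.0])
w = np.array([0.5, 0.5, 0.5])
print(euclidean(x, w))                                # ~0.866
print(weighted_euclidean(x, w, np.array([1., 2., 1.])))  # second coordinate counts double
```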
Basic Competitive Learning Networks
Using no supervision from any teacher, unsupervised networks adapt the
weights based only on the input patterns. One popular
scheme for such adaptation is the competitive learning rule, which allows
the units to compete for the exclusive right to respond to a particular
input pattern. It can be viewed as a sophisticated clustering technique,
whose objective is to divide a set of input patterns into a number of clusters
such that the patterns of the same cluster exhibit a certain degree of
similarity. The training rules are often the Hebbian rule for the feedforward
network and the winner-take-all (WTA) rule for the lateral network, as
the sketch below illustrates.
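The following is a hedged end-to-end sketch of one training pass (Python/NumPy; the data, the learning rate g, and the common "move the winner toward the input" update are our illustrative choices, which the subsections below refine):

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_clusters, g = 4, 2, 0.1
W = rng.random((n_inputs, n_clusters))      # one weight column per output unit
patterns = rng.integers(0, 2, (20, n_inputs)).astype(float)

for x in patterns:
    u = W.T @ x                             # feedforward excitation of each output
    j = np.argmax(u)                        # lateral network: winner-take-all
    W[:, j] += g * (x - W[:, j])            # move only the winner toward the input
```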
Minimal Learning Model
A basic competitive learning model consists of a feedforward network and a
lateral network with fixed output nodes (a fixed number of clusters). The
input and output nodes are assumed to have binary values. When and only
when both the $i$th input and the $j$th output are high is the connecting
weight updated ($\Delta w_{ij} \neq 0$); otherwise $\Delta w_{ij} = 0$. The
strength of the synaptic weight connecting input $i$ with output $j$ is
designated by $w_{ij}$. Given the $k$th stimulus, a possible learning rule is

$$\Delta w_{ij} = \begin{cases} g \, \dfrac{c_{ik}}{n_k} & \text{if unit } j \text{ wins on stimulus } k, \\[4pt] 0 & \text{otherwise,} \end{cases}$$

where $g$ is a small positive constant, $n_k = \sum_i c_{ik}$ is the number
of active input units for the stimulus pattern $k$, and $c_{ik} = 1$ if
input unit $i$ is high for the $k$th stimulus pattern and $c_{ik} = 0$
otherwise.
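A minimal sketch of this rule (Python/NumPy; the function name and learning rate are illustrative):

```python
import numpy as np

def minimal_update(W, x, g=0.05):
    """Minimal competitive rule: only the winner's weights grow, by g * c_ik / n_k."""
    c = (x > 0).astype(float)   # c_ik: 1 where input unit i is high for this stimulus
    n_k = c.sum()               # number of active input units
    j = np.argmax(W.T @ x)      # winner-take-all selection
    W[:, j] += g * c / n_k      # update only the winning column
    return W
```

Under this rule the winner's weights can only grow, which motivates the normalized variant below.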
Training rules based on Normalized Weights
In order to ensure a fair competition environment, the sum of all the
weights linked to each output node should be normalized. If
$w_{1j}, \ldots, w_{nj}$ are the weights connected to an output node $j$,
then $\sum_{i=1}^{n} w_{ij} = 1$.
Then, if a unit wins the competition, each of its input lines gives
up some proportion $g$ of its weight and that weight is then distributed
equally among the active input lines:

$$\Delta w_{ij} = \begin{cases} g \, \dfrac{c_{ik}}{n_k} - g \, w_{ij} & \text{if unit } j \text{ wins on stimulus } k, \\[4pt] 0 & \text{otherwise.} \end{cases}$$

One important feature of this learning rule is that renormalization
is incorporated into the updating rule: since $\sum_i c_{ik}/n_k = 1$ and
$\sum_i w_{ij} = 1$, the weight changes sum to zero, so the sum of synaptic
weights to any output remains 1.
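A sketch of the normalized rule with a check that each column still sums to 1 (Python/NumPy; names are illustrative):

```python
import numpy as np

def normalized_update(W, x, g=0.05):
    """Winner gives up a fraction g of each weight; g is redistributed over active lines."""
    c = (x > 0).astype(float)
    n_k = c.sum()
    j = np.argmax(W.T @ x)
    W[:, j] += g * c / n_k - g * W[:, j]   # the changes sum to zero across column j
    return W

W = np.full((4, 2), 0.25)                  # each column initially sums to 1
W = normalized_update(W, np.array([1.0, 0.0, 1.0, 0.0]))
print(W.sum(axis=0))                       # -> [1. 1.]
```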
Training rules for Leaky Learning
In order to prevent the possibility of totally unlearned neurons, a leaky
learning rule is introduced. Since a unit never learns unless it wins,
it is possible that one of the units will never win, and therefore never
learn. One way to avoid this problem is to have all
the weights in the network take part in the training, with different degrees
of strength. This is proposed in the following leaky learning rule:

$$\Delta w_{ij} = \begin{cases} g_w \left( \dfrac{c_{ik}}{n_k} - w_{ij} \right) & \text{if unit } j \text{ wins on stimulus } k, \\[4pt] g_l \left( \dfrac{c_{ik}}{n_k} - w_{ij} \right) & \text{if unit } j \text{ loses on stimulus } k. \end{cases}$$

In this rule the parameter $g_l$ is made
an order of magnitude smaller than $g_w$.
Therefore, slower learning occurs at the losing units than at the
winning units. This change has the property that it slowly moves the losing
units into the region where the actual stimuli lie, at which point they
begin to capture some input patterns and the ordinary dynamics of competitive
learning take over.
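A sketch of the leaky rule (Python/NumPy; the rates g_w and g_l are illustrative, with g_l an order of magnitude smaller than g_w):

```python
import numpy as np

def leaky_update(W, x, g_w=0.05, g_l=0.005):
    """All units learn; losers move toward the stimulus ten times more slowly."""
    c = (x > 0).astype(float)
    n_k = c.sum()
    j = np.argmax(W.T @ x)
    rates = np.full(W.shape[1], g_l)   # losing units use the small rate g_l
    rates[j] = g_w                     # the winning unit uses the full rate g_w
    W += rates * (c[:, None] / n_k - W)
    return W
```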
Kohonen's self-organizing feature map