Hierarchically Structured Model
The basic ideas for the neocognitron are described as follows:
In summary, the neocognitron is a self-organized, competitive-learning, hierarchical
multilayer network. It is useful for pattern classification without supervised
learning, especially when the input patterns may be shifted in position or distorted
in shape. However, it is difficult to determine how well the network can cope
with deformation of patterns, because there is no mathematical measure
or proper model for such a study. Moreover, although the neocognitron appears
attractive for its high biological fidelity, it incurs a vast computational
cost that is not easily affordable for most applications.
Hierarchical Representation Structure. The neocognitron uses successive
(say, M) stages to recognize patterns. Processing progresses stage by stage
from the input layer to the output layer. The levels of representation in
the layers exhibit a hierarchical structure. More precisely, the first
layer (or the first few layers) extracts local features, such as a line
at a particular orientation, while more global features are extracted in later
layers. The objective of such a hierarchical representation structure is
that, by going deeper through successive layers, the position of the symbol
in the input pattern becomes less important.
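As a rough illustration of this idea, the Python sketch below (not the neocognitron itself) repeatedly applies plain max pooling as a stand-in for the position-tolerant stage of each level; the window size, the number of stages, and the toy patterns are assumptions chosen only to show that two slightly shifted inputs converge to the same deep representation.

```python
import numpy as np

def c_pool(feature_map, window=2):
    """Crude stand-in for a position-tolerant sublayer: max over small windows,
    so small shifts inside a window no longer change the output."""
    h, w = feature_map.shape
    h2, w2 = h // window, w // window
    trimmed = feature_map[:h2 * window, :w2 * window]
    return trimmed.reshape(h2, window, w2, window).max(axis=(1, 3))

def deep_representation(pattern, stages=3):
    """Pass a pattern through several such stages (M = 3 here, assumed)."""
    out = pattern
    for _ in range(stages):
        out = c_pool(out)
    return out

# Two copies of the same short line, shifted by one pixel.
a = np.zeros((16, 16)); a[5, 3:9] = 1.0
b = np.zeros((16, 16)); b[6, 4:10] = 1.0

# After a few stages the representations coincide: exact position has stopped mattering.
print(np.array_equal(deep_representation(a), deep_representation(b)))  # True
```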
Intra-Layer Structure. Each layer consists of one simple sublayer of S-cells
and one complex sublayer of C-cells. Each sublayer of S-cells or C-cells
is divided into subgroups according to the features to which they
respond, and the cells in each subgroup are arranged in a two-dimensional array.
The S-cells match an input pattern with
the templates defined by the receptive fields of the cells, while the C-cells
receive excitatory signals from their corresponding S-cells.
Recall that, just like the S-cell sublayer, each
sublayer of C-cells is divided into subgroups
according to the features to which they respond. All the cells in a subgroup
receive input connections with the same spatial distribution, but allowing
a certain degree of positional shift. In other words, each C-cell
receives signals from a group of S-cells
that extract the same feature but at slightly different positions.
The C-cell is activated if at least one
of these S-cells is active. Even if a
positional shift of the stimulus feature causes a nearby S-cell
to be activated instead of the original one, the same C-cell
will respond; thereby, the effect of a small shift is nullified.
Based on such a position-readjusting process, local features extracted
in a lower stage can be smoothly and gradually integrated into more global features.
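A minimal sketch of this intra-layer arrangement is given below, assuming 3x3 binary templates for the S-cells and a simple OR over a 3x3 neighbourhood for the C-cells; the function names, the matching threshold, and the neighbourhood radius are illustrative choices, not the original model's parameters.

```python
import numpy as np

def s_cell_plane(image, template, threshold=0.99):
    """One S-cell subgroup: each S-cell compares its receptive field with a
    template and fires only when the match is strong enough."""
    th, tw = template.shape
    h, w = image.shape
    out = np.zeros((h - th + 1, w - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + th, j:j + tw]
            match = np.sum(patch * template)
            out[i, j] = 1.0 if match >= threshold * np.sum(template * template) else 0.0
    return out

def c_cell_plane(s_plane, radius=1):
    """One C-cell subgroup: each C-cell fires if at least one S-cell of the
    same feature is active anywhere within its small neighbourhood."""
    h, w = s_plane.shape
    out = np.zeros_like(s_plane)
    for i in range(h):
        for j in range(w):
            window = s_plane[max(0, i - radius):i + radius + 1,
                             max(0, j - radius):j + radius + 1]
            out[i, j] = 1.0 if window.max() > 0 else 0.0
    return out

# A horizontal-line template and a toy input containing that line.
template = np.array([[0., 0., 0.],
                     [1., 1., 1.],
                     [0., 0., 0.]])
image = np.zeros((8, 8))
image[4, 2:5] = 1.0

s = s_cell_plane(image, template)   # responds only at the line's exact position
c = c_cell_plane(s)                 # also responds at nearby, slightly shifted positions
print(int(s.sum()), int(c.sum()))   # the C-plane is active over a wider area than the S-plane
```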
Feature-Extracting Cells (S-cells): Connections
converging on these cells are variable and are reinforced by learning or training.
The process of learning and the mechanism of feature extraction by S-cells
are based on the self-organizing networks discussed before. Briefly, only the
one cell that gives the maximum response has its input connections reinforced.
After learning is finished, the S-cells
can extract features from the input pattern. Only when a relevant feature
is presented at a certain position in the input layer will the corresponding
S-cell be activated.
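The winner-take-all reinforcement just described can be sketched as follows; the learning rate, the weight normalization, and the flattened 3x3 receptive field are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small group of competing S-cells, each holding a weight vector over the
# same receptive field (here flattened to 9 inputs).
n_cells, n_inputs = 4, 9
weights = rng.random((n_cells, n_inputs)) * 0.1

def train_step(weights, pattern, rate=0.5):
    """Competitive reinforcement: only the cell with the maximum response
    has its input connections strengthened toward the presented feature."""
    responses = weights @ pattern
    winner = int(np.argmax(responses))
    weights[winner] += rate * pattern                    # reinforce the winner's connections
    weights[winner] /= np.linalg.norm(weights[winner])   # keep weights bounded (assumed normalization)
    return winner

# Repeatedly present one local feature; the same cell keeps winning and its
# template converges toward that feature.
feature = np.array([0., 0., 0., 1., 1., 1., 0., 0., 0.])
for _ in range(5):
    w = train_step(weights, feature)
print("winning cell:", w)
print("learned template:\n", weights[w].reshape(3, 3).round(2))
```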
Position-Readjusting Cells (C-cells): This sublayer
immediately follows the S-cell sublayer. The C-cells
are used to compensate for positional errors. Connections from
S-cells to C-cells are fixed and invariable.
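The effect of these fixed connections can be checked with a single C-cell: a one-cell shift of the stimulus activates a neighbouring S-cell, yet the same C-cell responds. The neighbourhood radius below is an assumed value.

```python
import numpy as np

def c_cell_response(s_activities, center, radius=1):
    """A single C-cell with fixed, unmodifiable connections to the S-cells in a
    small neighbourhood: it fires if any of those S-cells is active."""
    i, j = center
    window = s_activities[max(0, i - radius):i + radius + 1,
                          max(0, j - radius):j + radius + 1]
    return 1.0 if window.max() > 0 else 0.0

# The same feature detected at two slightly different positions.
s_original = np.zeros((5, 5)); s_original[2, 2] = 1.0   # feature at its original position
s_shifted  = np.zeros((5, 5)); s_shifted[2, 3] = 1.0    # feature shifted by one cell

# Both activate the same C-cell, so the positional error is absorbed.
print(c_cell_response(s_original, (2, 2)), c_cell_response(s_shifted, (2, 2)))  # 1.0 1.0
```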
Inter-Layer Structure. The inter-layer structure, that is, the mapping
from a C-cell sublayer to the S-cells in the next layer, provides further
shift tolerance to combat deformation of the training pattern.
Final Layer. Finally, each cell
of the final (recognition) layer integrates all the information of the
input pattern. Due to the competitive-learning nature, only one cell in
the final layer, corresponding to the category of the input pattern, will
be activated. Other cells respond to the patterns of other categories.
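A toy readout of such a final layer might look as follows; the category labels and activity values are invented solely to illustrate the competitive, one-winner behaviour.

```python
import numpy as np

# Hypothetical final-layer activities, one recognition cell per category
# (the numbers are made up for illustration only).
categories = ["A", "B", "C", "D"]
activities = np.array([0.12, 0.87, 0.05, 0.31])

# Competitive readout: the maximally responding cell is the only one that stays
# active, and its index names the recognized category.
winner = int(np.argmax(activities))
output = np.zeros_like(activities)
output[winner] = 1.0
print("recognized category:", categories[winner])   # -> B
print("final-layer output:", output)                 # all other cells silent
```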