Structures of Neural Networks
The major structural factors are connection structures, network size,
and ACON versus OCON.
Interlayer and Intralayer Connection Structures
A neural network comprises two building blocks: neurons and weights.
The behavior of the network depends largely on the interaction between
these building blocks. There are three types of neuron layers: input, hidden,
and output layers. Two layers of neurons communicate via a weight connection
network. There are four types of weighted connections: feedforward, feedback,
lateral, and time-delayed connections (see the accompanying figure).
The synaptic connections may be fully or locally interconnected (see
figures). A neural network may also be either a single-layer feedback
model or a multilayer feedforward model. It is possible to cascade several
single-layer feedback neural nets to form a larger net.
Feedforward connections: In all of these neural models, data from neurons
of a lower layer are propagated forward to neurons of an upper layer via
feedforward connection networks.
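As a minimal sketch, feedforward propagation through two weight layers might look like the following; the layer sizes, random weights, and sigmoid activation are hypothetical choices for illustration:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    # Hypothetical sizes: 3 input, 4 hidden, and 2 output neurons.
    W1 = rng.standard_normal((4, 3))  # input -> hidden weight layer
    W2 = rng.standard_normal((2, 4))  # hidden -> output weight layer

    x = np.array([0.5, -1.0, 0.25])   # input pattern
    h = sigmoid(W1 @ x)               # propagate forward to the hidden layer
    y = sigmoid(W2 @ h)               # and then to the output layer
    print(y)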
Feedback connections: Feedback networks bring data from neurons
of an upper layer back to neurons of a lower layer.
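A sketch of the idea: the upper layer's output is routed back down through a separate feedback weight matrix on each time step. The sizes, random weights, and tanh activation are assumed for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    W_fwd = 0.3 * rng.standard_normal((3, 3))  # lower -> upper (feedforward)
    W_fb = 0.3 * rng.standard_normal((3, 3))   # upper -> lower (feedback)

    x = np.array([1.0, 0.0, -1.0])
    for _ in range(5):
        upper = np.tanh(W_fwd @ x)  # forward pass to the upper layer
        x = np.tanh(W_fb @ upper)   # feedback brings data back down
    print(x)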
Lateral connections: One typical example of a lateral network is the
winner-take-all circuit, which serves the important role of selecting the winner. In the
feature-map example, by allowing neurons to interact via the lateral network,
a certain topological ordering relationship can be preserved. Another example
is the lateral orthogonalization network, which forces the network
to extract orthogonal components.
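A minimal winner-take-all sketch in this spirit: each unit is inhibited by the summed activity of its rivals through lateral connections until only the largest activation survives. The initial activations and the inhibition constant are hypothetical:

    import numpy as np

    a = np.array([0.2, 0.9, 0.4, 0.7])  # initial activations, one per unit
    eps = 0.1                           # lateral inhibition strength (assumed)
    while np.count_nonzero(a) > 1:
        # each unit is suppressed by the total activity of the other units
        a = np.maximum(0.0, a - eps * (a.sum() - a))
    print(np.argmax(a))                 # index of the winning unit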
Time-delayed connections: Delay elements may be incorporated into
the connections to yield temporal dynamics models. These models are more suitable
for temporal pattern recognition.
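A sketch of a time-delayed connection as a tapped delay line: the neuron weights the current input together with delayed copies of it, so its output depends on temporal context. The delay depth, weights, and input sequence are assumed:

    import numpy as np
    from collections import deque

    D = 3                             # number of delay taps (assumed)
    w = np.array([0.5, 0.3, 0.2])     # one weight per delayed copy
    taps = deque([0.0] * D, maxlen=D) # delay line, most recent input first

    for x_t in [1.0, 0.5, -0.25, 0.75]:   # a short input sequence
        taps.appendleft(x_t)               # shift the delay line
        y_t = np.tanh(w @ np.array(taps))  # output mixes current and past inputs
        print(y_t)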
Sizes of Neural Networks
In a feedforward multilayer neural net, there are one or more layers of
hidden neuron units between the input and output neuron layers.
The size of the network depends on the number of layers and the number of
hidden units per layer.
Number of layers: The number of layers is very often
counted according to the number of weight layers (instead of neuron layers).
Number of hidden units: The number of hidden units is directly related
to the capabilities of the network. For the best network performance (e.g.,
generalization), an optimal number of hidden units
must be properly determined.
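Since each added hidden unit brings its own set of incoming and outgoing weights, the total weight count grows directly with the hidden-layer size. A quick sketch of this trade-off, counting weights and biases for a single-hidden-layer net with hypothetical input and output sizes:

    def num_weights(n_in, n_hidden, n_out):
        # weights plus biases for a single-hidden-layer feedforward net
        return (n_in + 1) * n_hidden + (n_hidden + 1) * n_out

    for h in (5, 10, 20, 40):
        print(h, num_weights(n_in=100, n_hidden=h, n_out=36))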
ACON versus OCON
The issue at hand is how many networks should be used for multicategory
classification. Typically, one output node is used to represent one class.
For example, in an alphanumeric recognition problem there are 36 classes,
so there are 36 output nodes in total. Given an input pattern in the retrieving
phase, the winner (i.e., the class that wins the recognition) is usually
the output node with the maximum value among all the output values.
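In code, this winner selection is simply an argmax over the output nodes (the output values below are hypothetical):

    import numpy as np

    outputs = np.array([0.1, 0.05, 0.8, 0.3])  # one value per class node
    winner = int(np.argmax(outputs))           # class with the maximum output
    print(winner)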
Two plausible network structures are All-Class-in-One-Network (ACON)
and One-Class-in-One-Network (OCON). In the ACON approach, all the
classes are lumped into one giant super-network. It is sometimes advantageous
to decompose a huge network into many subnets, so that each subnet has
a small size. For example, a 36-output net can be decomposed into 12 subnets,
each responsible for 3 outputs. The most extreme decomposition is the so-called
OCON structure, where one subnet is devoted to one class only. Although
the number of subnets in the OCON structure is relatively large, each individual
subnet is considerably smaller than the ACON super-network. This
may be explained by the figures: the ACON super-network is partitioned into
many subnets by eliminating all the "cross-class" connections in the upper layer.
For convenience, all the subnets are assumed to have a uniform size,
say k hidden units each. The number of hidden units of the ACON super-network is
K. (Obviously, k << K.) The ACON
and OCON structures differ significantly in size and speed, that is, in the total number
of synaptic weights and the training time. Let us denote the input and
output vector dimensions as n and N. The total number of synaptic weights
for the ACON structure is (N + n) x K. Likewise, the number
for the OCON structure is N x (n + 1) x k, which is approximately
N x n x k. Two extreme situations are analyzed below. When N
is relatively small (compared with n), ACON could have a comparable
number of weights to OCON, or even fewer. If
N is very large, then OCON could have
a major advantage in terms of network size.
In addition, OCON seems to prevail over ACON in training and recognition
speed when the number of classes is large.
In the ACON approach, the single super-network has the burden of having
to simultaneously satisfy all the classes, so the number of hidden units
K is expected to be very large.
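The size comparison can be checked numerically with the two weight-count formulas above; the particular values of n, N, K, and k below are hypothetical:

    def acon_weights(n, N, K):
        return (N + n) * K      # (N + n) x K synaptic weights

    def ocon_weights(n, N, k):
        return N * (n + 1) * k  # N x (n + 1) x k, roughly N x n x k

    n, N = 100, 36              # input dimension and number of classes (assumed)
    K, k = 200, 10              # hidden units: ACON supernet vs. each OCON subnet
    print(acon_weights(n, N, K))  # 27200: ACON smaller here, since N << n
    print(ocon_weights(n, N, k))  # 36360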