Decision-Based Neural Networks
In a decision-based neural network (DBNN), the teacher only tells the correctness
of the classification for each training pattern. The teacher is a set of
symbols labeling the correct class for each input pattern. The objective of the learning
phase is to find a set of weights that yields the correct classification. In
the retrieving phase, the objective is to determine to which class a pattern
belongs, based on the winner of the output values. The output values are
functions of the input values and the network weights, and are called discriminant functions.
Let us first focus on the binary classification problem, where the pattern
space is divided into two regions. Each class occupies its own region.
In the clearly separable case, the two classes are separated by the decision
boundary, defined as the hypersurface on which the two discriminant functions
have equal scores. Two subnetworks are adopted. The output of the first
subnet is a function of the input x and the weights w_1:
φ_1(x, w_1).
This is the discriminant function of
the first subnet. Similarly, the second subnet has a discriminant function:
φ_2(x, w_2).
The classification is decided based on the values of the discriminant functions.
More precisely, if
φ_1(x, w_1) > φ_2(x, w_2),
then the pattern is classified to "F". Otherwise, it is classified to "M".
The teacher in this figure points out
the correct class for each training pattern, M or F. In the DBNN, there
is no need for training if a correct decision is made. If the decision
is incorrect, then the weights (w_1 and w_2)
will have to be updated. Once the network completes the learning phase,
it is ready for use in the retrieving phase, where it recalls the pattern
classification based on the trained discriminant functions.
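As a concrete sketch of this decision-based learning rule (assuming, purely for illustration, linear discriminant functions φ_i(x, w_i) = w_i · x, a reinforced/anti-reinforced update, and hypothetical choices for the learning rate, the number of epochs, and a 0/1 label encoding for "M"/"F"):

    import numpy as np

    def train_dbnn_two_subnets(X, labels, eta=0.1, epochs=20):
        """Decision-based training sketch for a two-subnet binary DBNN.

        X      : (N, d) array of input patterns (append a constant 1 to each
                 pattern beforehand if a bias term is wanted).
        labels : length-N sequence of correct classes, 0 ("M") or 1 ("F").
        Assumes linear discriminant functions phi_i(x, w_i) = w_i . x.
        """
        w = np.zeros((2, X.shape[1]))        # one weight vector per subnet
        for _ in range(epochs):
            for x, c in zip(X, labels):
                scores = w @ x               # phi_1(x, w_1) and phi_2(x, w_2)
                winner = int(np.argmax(scores))
                if winner != c:              # update only on an incorrect decision
                    w[c] += eta * x          # reinforce the correct subnet
                    w[winner] -= eta * x     # anti-reinforce the wrong winner
        return w

    def classify(w, x):
        """Retrieving phase: the class is the winner of the discriminant scores."""
        return int(np.argmax(w @ x))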
For any binary classification problem, the two classes can also be separated
by a single-output network, see this figure.
Now the discriminant function of this network is chosen to be
φ(x, w) = φ_1(x, w_1) − φ_2(x, w_2).
At the network's output, a binary decision d is made based on the value
of the discriminant function φ(x, w).
In other words, the decision boundary is characterized by
φ(x, w) = 0.
Note that the distribution of training patterns dictates the decision regions,
which in turn determine the choice of proper discriminant functions.
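For the single-output formulation, a minimal sketch (again assuming linear discriminants only for illustration) takes the binary decision d from the sign of φ(x, w):

    import numpy as np

    def single_output_decision(x, w1, w2):
        """Sketch of the single-output decision: phi(x, w) = phi_1 - phi_2.

        Returns d = +1 ("F") when phi > 0 and d = -1 ("M") otherwise;
        points with phi(x, w) = 0 lie exactly on the decision boundary.
        """
        phi = np.dot(w1, x) - np.dot(w2, x)
        return 1 if phi > 0 else -1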
Two classes of patterns are linearly separable if they can be
separated by a linear hyperplane decision boundary. In other words, the
decision boundary can be characterized by a linear discriminant function:
φ(x, w) = w_1 x_1 + w_2 x_2 + ... + w_n x_n + w_0.
For two-dimensional inputs, for example, the decision boundary is
w_1 x_1 + w_2 x_2 + w_0 = 0,
which is a straight line in the pattern space.
An example of a linear separating hyperplane is illustrated in this figure.
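To make the two-dimensional case concrete, the snippet below evaluates a hypothetical boundary line (the weight values are made-up examples, not taken from the text) and reports on which side of it each sample point falls:

    # Hypothetical 2-D linear decision boundary: 1.0*x1 + 2.0*x2 - 1.0 = 0
    w1, w2, w0 = 1.0, 2.0, -1.0

    def side_of_boundary(x1, x2):
        """The sign of the linear discriminant tells which side of the line a point lies on."""
        phi = w1 * x1 + w2 * x2 + w0
        if phi > 0:
            return "class F"
        if phi < 0:
            return "class M"
        return "on the boundary"

    for point in [(0.0, 0.0), (1.0, 1.0), (1.0, 0.0)]:
        print(point, "->", side_of_boundary(*point))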
A set of pattern vectors is said to be linearly nonseparable (or simply
nonseparable) if the vectors are not linearly separable. See this figure.
It is common that patterns from different classes overlap in
the border area; to handle this situation, some probabilistic criteria
can be adopted. It would then be effective to use a nonlinear decision boundary.
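One common way to obtain such a nonlinear decision boundary, offered here only as an illustrative sketch rather than the method prescribed above, is to keep a linear discriminant but apply it to nonlinearly expanded (e.g. quadratic) features of the input:

    import numpy as np

    def quadratic_features(x1, x2):
        """Map a 2-D input to quadratic features; a linear discriminant in this
        space corresponds to a conic (nonlinear) boundary in the original plane."""
        return np.array([x1, x2, x1 * x1, x2 * x2, x1 * x2, 1.0])

    # Hypothetical weights for illustration: boundary x1^2 + x2^2 = 1 (a circle)
    w = np.array([0.0, 0.0, 1.0, 1.0, 0.0, -1.0])

    def classify_nonlinear(x1, x2):
        phi = np.dot(w, quadratic_features(x1, x2))
        return "class M (inside)" if phi < 0 else "class F (outside)"

    print(classify_nonlinear(0.2, 0.3))   # well inside the unit circle
    print(classify_nonlinear(1.5, 0.0))   # outside the unit circle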