Decision-Based Neural Networks
The Linear Perceptron is applicable only when
the classes of patterns are known to be separable by linear decision boundaries.
In contrast, the nonlinear perceptron offers a much greater domain
of practical applications. In training a complex network, the key lies
in the following distributive, decision-based credit-assignment principle:
The gradient vector of the discriminant function $\phi(\mathbf{x}, \mathbf{w})$
with respect to $\mathbf{w}$ is denoted $\nabla_{\mathbf{w}} \phi(\mathbf{x}, \mathbf{w})$.
When to update? In the decision-based learning rule, weight updating
is performed only when misclassification occurs.
Which subnets to update? The learning rule is distributive and localized.
It applies reinforced learning to the subnet corresponding to the correct
class and antireinforced learning to the (unduly) winning subnet.
How to update? Because the decision boundary depends on the discriminant
function $\phi(\mathbf{x}, \mathbf{w})$, it is natural to
adjust the boundary by adjusting the weight vector $\mathbf{w}$ either in the
direction of the gradient of the discriminant function (i.e., reinforced
learning) or opposite to that direction (i.e., antireinforced learning):

$$\mathbf{w}^{(m+1)} = \mathbf{w}^{(m)} \pm \eta \, \nabla_{\mathbf{w}} \phi(\mathbf{x}, \mathbf{w}^{(m)}),$$

where $\eta$ is a positive learning rate.
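As a minimal sketch of these two update directions, assuming the gradient of the discriminant function is available as a callable (the names `grad_phi`, `reinforced_update`, and `antireinforced_update` are illustrative, not from the original):

```python
def reinforced_update(w, x, grad_phi, eta=0.1):
    # Move w along the positive gradient: the discriminant value at x rises.
    return w + eta * grad_phi(x, w)

def antireinforced_update(w, x, grad_phi, eta=0.1):
    # Move w against the gradient: the discriminant value at x falls.
    return w - eta * grad_phi(x, w)
```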
Decision-Based Learning Rule
Suppose that $\{\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(M)}\}$ is a set of given
training patterns, each corresponding to one of the $L$ classes $\{\omega_i, \; i = 1, \ldots, L\}$.
Each class is modeled by a subnet with a discriminant function, say,
$\phi(\mathbf{x}, \mathbf{w}_i)$, $i = 1, \ldots, L$. Suppose that the $m$th training pattern $\mathbf{x}^{(m)}$
is known to belong to class $\omega_i$, and

$$j = \arg\max_{k} \, \phi(\mathbf{x}^{(m)}, \mathbf{w}_k).$$

That is, the winning class for the pattern $\mathbf{x}^{(m)}$ is the $j$th class (subnet).
Note that $\mathbf{w}_k^{(m+1)} = \mathbf{w}_k^{(m)}$ for all $k \neq i$ and $k \neq j$;
that is, those weights remain unchanged. Just like the LP,
the $M$ training patterns will be repeatedly used for as many sweeps
as required for convergence.
When $j = i$, the pattern $\mathbf{x}^{(m)}$ is
already correctly classified and no update is needed.
When $j \neq i$, that is, $\mathbf{x}^{(m)}$
is still misclassified, then the following update is performed:

$$\mathbf{w}_i^{(m+1)} = \mathbf{w}_i^{(m)} + \eta \, \nabla_{\mathbf{w}} \phi(\mathbf{x}^{(m)}, \mathbf{w}_i^{(m)}), \qquad
\mathbf{w}_j^{(m+1)} = \mathbf{w}_j^{(m)} - \eta \, \nabla_{\mathbf{w}} \phi(\mathbf{x}^{(m)}, \mathbf{w}_j^{(m)}).$$
In this learning rule, the reinforced learning moves $\mathbf{w}_i$ along
the positive gradient direction, so the value of the discriminant function
will increase, enhancing the chance that the correct subnet wins the pattern in the future.
The antireinforced learning moves $\mathbf{w}_j$ along the negative gradient
direction, so the value of the discriminant function will decrease, suppressing
the chance of its future selection.
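Putting these pieces together, here is a minimal sketch of one training sweep, assuming the discriminant function and its gradient are supplied as callables so any of the basis functions discussed below can be plugged in (all names here, such as `dbnn_sweep`, are illustrative):

```python
import numpy as np

def dbnn_sweep(W, X, labels, phi, grad_phi, eta=0.1):
    """One sweep of decision-based learning over the M training patterns.

    W        : list of per-class weight vectors (one per subnet)
    X        : (M, N) array of training patterns
    labels   : true class index for each pattern
    phi      : discriminant function phi(x, w) -> scalar
    grad_phi : gradient of phi with respect to w
    """
    for x, i in zip(X, labels):
        # The winner is the subnet with the largest discriminant value.
        j = int(np.argmax([phi(x, w) for w in W]))
        if j == i:
            continue  # already correctly classified: no update
        # Misclassified: reinforce the correct subnet, antireinforce the winner.
        W[i] = W[i] + eta * grad_phi(x, W[i])
        W[j] = W[j] - eta * grad_phi(x, W[j])
    return W
```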
In the special linear case, the discriminant
function adopted is based on the Linear Basis Function (LBF):

$$\phi(\mathbf{x}, \mathbf{w}) = \mathbf{w}^{T} \mathbf{x} + w_0.$$

Then the gradient in the updating
formula is simply

$$\nabla_{\mathbf{w}} \phi(\mathbf{x}, \mathbf{w}) = \mathbf{x},$$

which leads to the linear perceptron rule.
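Plugged into the sweep sketched above, the LBF case reduces to the familiar perceptron updates; the homogeneous form $\mathbf{w}^{T}\mathbf{x}$ is assumed here, with any bias absorbed by augmenting $\mathbf{x}$ with a constant 1:

```python
import numpy as np

# LBF discriminant and its gradient: grad_phi(x, w) = x,
# so the updates become W[i] += eta * x and W[j] -= eta * x,
# which is exactly the linear perceptron rule.
def lbf(x, w):
    return np.dot(w, x)

def lbf_grad(x, w):
    return x
```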
Radial Basis Function This is used in an example of a nonlinear
decision-based learning rule. An RBF discriminant function is a function
of the radius between the pattern $\mathbf{x}$ and a centroid $\mathbf{w}_l$:

$$\phi(\mathbf{x}, \mathbf{w}_l) = -\|\mathbf{x} - \mathbf{w}_l\|^2$$

is used for each subnet $l$. So the centroid $\mathbf{w}_l$
closest to the present pattern $\mathbf{x}$ is the winner. By applying the decision-based
learning formula to the last equation and noting that $\nabla_{\mathbf{w}_l} \phi(\mathbf{x}, \mathbf{w}_l) = 2(\mathbf{x} - \mathbf{w}_l)$,
the following learning rules can be derived:

$$\mathbf{w}_i^{(m+1)} = \mathbf{w}_i^{(m)} + \eta \, (\mathbf{x}^{(m)} - \mathbf{w}_i^{(m)}), \qquad
\mathbf{w}_j^{(m+1)} = \mathbf{w}_j^{(m)} - \eta \, (\mathbf{x}^{(m)} - \mathbf{w}_j^{(m)}),$$

where the constant factor 2 has been absorbed into the learning rate $\eta$.
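In code, the RBF case plugs into the same sweep; a sketch under the negative-squared-distance form assumed above:

```python
import numpy as np

# RBF discriminant: the closest centroid has the largest (least negative) value.
def rbf(x, w):
    return -np.sum((x - w) ** 2)

# Gradient 2 * (x - w); the factor of 2 can be absorbed into eta, so the
# updates move the correct centroid toward the pattern and the wrongly
# winning centroid away from it.
def rbf_grad(x, w):
    return 2.0 * (x - w)
```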
Elliptic Basis Function The basic RBF version of the DBNN discussed
before is based on the assumption that the feature space is uniformly weighted
in all directions. In practice, however, different features may have varying
degrees of importance depending on the way they are measured. This leads
to the adoption of a more versatile elliptic discriminant function. The
most general form of a second-order basis function is the (skewed) hyperelliptic
basis function. In practice and for most applications, the EBF
discriminant function is confined to the following (upright) version. The
discriminant function (for each subnet $l$) can be generalized to
an (upright) elliptic function:

$$\phi(\mathbf{x}, \mathbf{w}_l) = -\sum_{n=1}^{N} \alpha_{ln} \, (x_n - w_{ln})^2,$$

where $N$ is the dimension of the input patterns, and
$\mathbf{w}_l$ is the vector comprising all the weight parameters $\{\alpha_{ln}, w_{ln}\}$.
The learning formula can be derived by applying the decision-based updating
rule to this equation.
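A sketch of this upright EBF discriminant and its gradients follows; storing the centroid `w` and the per-feature weights `alpha` as separate arrays is an illustrative choice, not prescribed by the original:

```python
import numpy as np

def ebf(x, w, alpha):
    # Upright elliptic discriminant: phi = -sum_n alpha_n * (x_n - w_n)^2.
    return -np.sum(alpha * (x - w) ** 2)

def ebf_grads(x, w, alpha):
    # Gradients of phi with respect to the centroid and the feature weights:
    #   d(phi)/dw_n     =  2 * alpha_n * (x_n - w_n)
    #   d(phi)/dalpha_n = -(x_n - w_n)^2
    d = x - w
    return 2.0 * alpha * d, -(d ** 2)
```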
Hierarchical DBNN Structure