The approach is application-motivated, theoretically based, and implementation-oriented. The main applications are signal processing and pattern recognition. The algorithmic treatment combines mathematical theory with heuristic justification for neural models. The ultimate objective is the implementation of digital neurocomputers, embracing VLSI, adaptive, digital, and parallel processing technologies.
Association, Clustering and Classification
In this application, input static patterns or temporal signals are to be classified or recognized. Ideally, a classifier should be trained so that when a slightly distorted version of a stimulus is presented, it can still be correctly recognized. Equivalently, the network should have a certain noise-immunity feature; that is, it should be able to recover a "clean" signal from noisy environments or channels. This is critical to holographic and associative retrieval applications.
In many classification problems, an implied task is information completion, that is, recovering the original pattern given only partial information. There are two kinds of pattern completion problems: temporal and static. The proper use of contextual information is key to successful recognition.
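As a concrete illustration of associative retrieval and pattern completion, the following is a minimal sketch of a Hopfield-style auto-associative memory with Hebbian (outer-product) weights; it stores two bipolar patterns and recovers one of them from a corrupted probe. The stored patterns, network size, and function names are assumptions invented for this example, not specifics from the text.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian (outer-product) weights for bipolar patterns, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, probe, steps=10):
    """Synchronous updates until the state stops changing."""
    s = probe.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1          # break ties toward +1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Two stored bipolar (+1/-1) patterns; the values are arbitrary illustrations.
stored = np.array([[ 1,  1,  1, -1, -1, -1,  1, -1],
                   [-1,  1, -1,  1, -1,  1, -1,  1]])
W = train_hopfield(stored)

# A noisy/partial version of the first pattern: two entries flipped.
probe = stored[0].copy()
probe[0] *= -1
probe[5] *= -1
print(recall(W, probe))               # ideally recovers stored[0]
```

Here the corrupted probe plays the role of the "partial information" mentioned above: the stored weights supply the missing context needed to complete the pattern.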
Regression and Generalization
Linear or nonlinear regression provides a smooth and robust curve fitting to training patterns. It can be extended to an interpolation problem. The system is trained on a large set of training samples via a supervised learning procedure. A network is considered successfully trained if it can closely approximate the teacher values over the trained data space and provide smooth interpolations over the untrained data space. The objective of generalization is to yield a correct output response to an input stimulus on which the network has not been trained before. The system must induce the salient features of the input stimuli and detect their regularity. Such regularity-discovery ability is critical to many applications. It enables the system to function competently throughout the entire data space, even though it has been trained with only a limited body of exemplary patterns.
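The sketch below illustrates this supervised curve-fitting setting: a one-hidden-layer network is trained by gradient descent on noisy samples of a curve and then queried at untrained points, where smooth outputs indicate interpolation and generalization. The underlying curve, network size, learning rate, and epoch count are illustrative assumptions, not values taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: noisy samples of an underlying curve.
x_train = np.linspace(-3, 3, 25).reshape(-1, 1)
y_train = np.sin(x_train) + 0.05 * rng.standard_normal(x_train.shape)

# One-hidden-layer network: y = W2 * tanh(W1 x + b1) + b2
hidden = 16
W1 = rng.standard_normal((1, hidden)) * 0.5
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.05
for epoch in range(4000):
    h = np.tanh(x_train @ W1 + b1)           # hidden activations
    y_hat = h @ W2 + b2                       # network output
    err = y_hat - y_train
    # Backpropagate the mean-squared-error gradient.
    gW2 = h.T @ err / len(x_train)
    gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x_train.T @ gh / len(x_train)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Query points not in the training set: smooth predictions here
# indicate interpolation over the untrained data space.
x_test = np.array([[-2.5], [0.7], [2.2]])
print(np.tanh(x_test @ W1 + b1) @ W2 + b2)
```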
Optimization
Neural nets offer an appealing tool for optimization applications, which usually involve finding a global minimum of an energy function.
Once the energy function is defined, the determination of the synaptic weights is relatively straightforward. In some applications, the energy function is directly available; in others, it must be derived from the given cost criterion and special constraints. A major difficulty associated with the optimization problem is the high likelihood that a solution converges to a local optimum instead of the global optimum. To combat this problem, several statistical techniques have been proposed, for example, stochastic simulated annealing (SSA) and mean-field annealing.
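For concreteness, the following is a minimal simulated-annealing sketch: a Metropolis acceptance rule with a geometric cooling schedule applied to a one-dimensional multimodal energy. The energy function, perturbation width, and schedule parameters are assumptions chosen for the example, not specifics from the text.

```python
import math
import random

random.seed(1)

def energy(x):
    """Illustrative multimodal energy with several local minima."""
    return 0.1 * x * x + math.sin(3 * x)

def simulated_annealing(x0, t0=2.0, t_min=1e-3, alpha=0.95, moves_per_t=50):
    """Metropolis acceptance with a geometric cooling schedule."""
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    while t > t_min:
        for _ in range(moves_per_t):
            x_new = x + random.gauss(0, 0.5)        # random perturbation
            e_new = energy(x_new)
            # Accept downhill moves always; uphill moves with Boltzmann probability.
            if e_new < e or random.random() < math.exp(-(e_new - e) / t):
                x, e = x_new, e_new
                if e < best_e:
                    best_x, best_e = x, e
        t *= alpha                                   # cool down
    return best_x, best_e

print(simulated_annealing(x0=4.0))
```

Because uphill moves are occasionally accepted at high temperature, the search can escape local optima; as the temperature decreases, the process settles toward a low-energy state.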
Artificial Neural Networks