One of the problems that limits the use of neural networks is the so-called stability–plasticity dilemma: the network cannot learn new information without damaging information it has already stored. This effect is particularly apparent in the multi-layer perceptron. When such a network is trained on a new pattern, the whole network can break down, i.e. all previously stored information is lost, because training changes the network's weights. To teach the network the required new information, we are often forced to start training over from scratch, which can mean a considerable delay of hours or even days. The network we are going to describe here handles the stability–plasticity dilemma quite well. It was developed by the mathematician and biologist S. Grossberg. Adaptive Resonance Theory (ART) was developed to model a large parallel self-organizing architecture for pattern recognition. A key property of an ART network is that it can switch between a plastic mode and a stable mode without damaging already stored information. The plastic mode is a learning mode, in which the network's parameters may be modified; the stable mode is a mode in which the weights are fixed and the network behaves like a finished classifier.
Note that the ART network exists in three basic modifications (ART-1, ART-2 and ART-3). The basic modification, which will be described here, is ART-1. ART-2 works with real-valued rather than binary inputs. ART-3 uses a structure similar to ART-2, but its model is described by equations expressing the dynamics of the chemical carriers of information; this modification assumes that the inputs to the network arrive continuously and change continuously.
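The interplay of learning and stability described above can be illustrated with a minimal sketch of a simplified, fast-learning ART-1 classifier for binary inputs. This is not the original Grossberg model (which is formulated as a dynamical system); the choice function, the vigilance parameter `rho`, and the tie-breaking constant `beta` below follow the common simplified fast-learning formulation, and the function name `art1_train` is our own.

```python
def art1_train(patterns, rho=0.6, beta=0.5):
    """Simplified fast-learning ART-1: binary inputs, prototypes stored as bit vectors."""
    categories = []          # learned binary prototype vectors
    labels = []              # category index assigned to each input pattern
    for I in patterns:
        norm_I = sum(I)      # number of active bits in the input
        # Rank existing categories by the choice function |I AND w| / (beta + |w|).
        order = sorted(
            range(len(categories)),
            key=lambda j: -sum(a & b for a, b in zip(I, categories[j]))
                          / (beta + sum(categories[j])),
        )
        assigned = None
        for j in order:
            match = sum(a & b for a, b in zip(I, categories[j]))
            # Vigilance test: resonance only if the match covers a large
            # enough fraction of the input (this is the "stable" check).
            if norm_I > 0 and match / norm_I >= rho:
                # Resonance: refine the prototype toward the common bits
                # (fast learning); stored knowledge is only narrowed, never erased.
                categories[j] = [a & b for a, b in zip(I, categories[j])]
                assigned = j
                break
            # Otherwise reset: suppress this category and try the next one.
        if assigned is None:
            # No category passed the vigilance test: allocate a new one
            # instead of overwriting an existing prototype (plasticity).
            categories.append(list(I))
            assigned = len(categories) - 1
        labels.append(assigned)
    return categories, labels
```

For example, `art1_train([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]], rho=0.6)` groups the first two patterns into one category with prototype `[1, 1, 0, 0]` and puts the third into a new category, rather than distorting the first prototype. Raising `rho` toward 1 makes the network more "vigilant": patterns must match a prototype more closely, so more categories are created.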