Computer Engineering & Information Technology Department, Amirkabir University of Technology
The feature map represented by the set of weight vectors of the basic SOM (Self-Organizing Map) provides a good approximation to the input space from which the sample vectors come. However, the time-decreasing learning rate and neighborhood function of the basic SOM algorithm limit its ability to adapt its weights to a changing environment. To deal with non-stationary input distributions and changing environments, we propose a modified SOM algorithm called the "Time Adaptive SOM", or TASOM, which automatically adjusts the learning rate and neighborhood function of each neuron independently. Each neuron's learning rate is determined by a function of the distance between an input vector and its weight vector. The width of each neuron's neighborhood function is updated by a function of the distance between the weight vector of that neuron and the weight vectors of its neighboring neurons. A single parameter initialization suffices for the lifetime of the TASOM, enabling it to work with stationary as well as non-stationary input distributions without retraining. The proposed TASOM is tested on five different input distributions, and its performance is compared with that of the basic SOM in each case. The quantization errors of the TASOM in all of these experiments are lower than those of the basic SOM. Moreover, the TASOM converges faster than the basic SOM. These experiments demonstrate that the TASOM is stable and convergent. The TASOM network is also tested in non-stationary environments in which the input distribution changes completely from one distribution to another. In these changing environments, the TASOM moves its weights gradually from the old distribution to the clusters of the new distribution. This property is comparable to the memory of the human brain, which gradually forgets old memories as it stores new sensory data.
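The mechanism described above can be illustrated with a minimal sketch. The function `tasom_step` below is an assumption-laden illustration, not the paper's exact algorithm: the closeness functions `f` and `g`, the smoothing constants `alpha` and `beta`, and the scale parameters `s_f` and `s_g` are all placeholders for the monotone functions and constants the paper defines. What the sketch does reflect from the abstract is that each neuron keeps its own learning rate, driven by the distance between the input and that neuron's weight vector, and its own neighborhood width, driven by the distance between its weight vector and those of its neighbors.

```python
import numpy as np

def tasom_step(x, W, eta, sigma, alpha=0.1, beta=0.1, s_f=1.0, s_g=1.0):
    """One update of a 1-D TASOM-like map (illustrative sketch only).

    x     : (d,)  current input vector
    W     : (n, d) weight vectors, one row per neuron, updated in place
    eta   : (n,)  per-neuron learning rates, updated in place
    sigma : (n,)  per-neuron neighborhood widths, updated in place
    The squashing functions f and g below are assumptions standing in
    for the paper's own monotone update functions.
    """
    n = W.shape[0]
    dists = np.linalg.norm(W - x, axis=1)
    c = int(np.argmin(dists))            # winning (best-matching) neuron

    # Each learning rate tracks that neuron's input-to-weight distance:
    # far from the input -> larger rate, close -> smaller rate.
    f = dists / (dists + s_f)            # assumed squashing into [0, 1)
    eta += alpha * (f - eta)

    # Each neighborhood width tracks the spread between the neuron's
    # weight vector and the weight vectors of its lattice neighbors.
    for i in range(n):
        lo, hi = max(0, i - 1), min(n, i + 2)
        spread = np.mean(np.linalg.norm(W[lo:hi] - W[i], axis=1))
        g = spread / (spread + s_g)      # assumed squashing into [0, 1)
        sigma[i] += beta * (g - sigma[i])

    # SOM-style weight update, but with per-neuron eta and sigma.
    idx = np.arange(n)
    h = np.exp(-((idx - c) ** 2) / (2.0 * (sigma + 1e-9) ** 2))
    W += (eta * h)[:, None] * (x - W)
    return c
```

Because `eta` and `sigma` are driven by current distances rather than by elapsed time, they rise again whenever the input distribution shifts, which is how a map of this kind can follow a non-stationary distribution without retraining.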