
by Dirk Brockmann

This explorable illustrates the dynamics of a self-organizing map (SOM), specifically a neural network known as the Kohonen map. The Kohonen map is a model for the self-organization of biological neural networks: it captures how the brain can learn to map signals from an input space, e.g. visual stimuli in the visual field, onto a two-dimensional layer of neurons, e.g. the visual cortex, in such a way that neighborhood relations in the stimulus space are preserved as well as possible. Neurons that are neighbors in the neural network should respond to stimuli that are also close in stimulus space.

Often the stimulus space is higher-dimensional or more complicated than the two-dimensional neural layer, and the challenge is to represent this complex space of stimuli in two dimensions as well as possible.

Press Play and keep on reading…

This is how it works

The Kohonen map has two components, a stimulus space and a lattice of neurons. Neurons respond to stimuli in the stimulus space and dynamically rearrange which stimuli they respond to most strongly.

The stimulus space

The stimulus space here is a region in the plane with a boundary you can choose on the right. The stimuli are generated randomly at positions $(x_s,y_s)$ inside the chosen region. They are illustrated by red flashes at every iteration of the dynamics.

The neural network

Neurons are arranged internally as a two-dimensional lattice, e.g. a $16 \times 16$ lattice. You can choose different lattices with the radio buttons. Each neuron is labeled by its position $(n,m)$ in the lattice. The lattice itself is not depicted, so you don’t see it initially.

Each neuron also has a position $(x_n,y_n)$ in the stimulus space, depicted by a white node. Each neuron’s position in the stimulus space corresponds to the stimulus of maximal response.

Initially, the neurons’ maximum response points are scattered randomly in the stimulus space.

The neurons also interact laterally with their neighbors in the internal neural lattice. The lateral connections are also shown in the stimulus space and initially look like a mess, because the neurons’ maximal response points are randomly scattered and do not resemble the lattice topology of the hidden neural network.

Winner-takes-it-all dynamics

This is how the dynamics works: When a stimulus is presented at position $(x_s,y_s)$ the neuron closest to the stimulus in stimulus space is the winner. This winner neuron then changes its node position $(x_n,y_n)$ towards the stimulus like so:

$$ x_n(t+1) = x_n(t) + \delta \times (x_s - x_n)/L $$

$$ y_n(t+1) = y_n(t) + \delta \times (y_s - y_n)/L $$

where $L=\sqrt{(x_s-x_n)^2+(y_s-y_n)^2}$ is the distance between the stimulus and the neuron’s preferred stimulus.

The magnitude of change $\delta$ is controlled by the magnitude slider.

Lateral interaction

Now, the key point is that not only the winner $(n_w,m_w)$ changes its position in stimulus space; all the other neurons in the network do, too. The magnitude of their change, however, decreases with the distance in the neural lattice. So instead of a fixed change $\delta$, each neuron $(n,m)$ changes its position according to a Gaussian defined on the neural lattice and peaked at the winner neuron, like so:

$$ \delta_{n,m} = \delta \times \exp \left ( -\frac{(n-n_w)^2+(m-m_w)^2}{\sigma^2} \right) $$

The range of this lateral influence of the winner on the other neurons is controlled by the lateral interaction parameter $\sigma$.

Observe this

If you press Play, the nodes will rearrange and spread across the stimulus space very quickly. As you decrease the lateral interaction, you effectively decrease the tension in the network, and it will eventually cover the entire stimulus space.

If you decrease the magnitude of change, the network will become smoother and smoother.

Now, the neural network is a rectangular lattice, and if you change the stimulus space to something that isn’t rectangular, the network has to compromise and find a way to squeeze a lattice structure into the stimulus space.

If you change the network to a $256\times 1$ one-dimensional lattice, you will see how the network attempts to cover the two-dimensional stimulus space.


Related Explorables:

Horde of the Flies

The Vicsek-Model

Eigenartig

The spatial hypercycle model

Hopfed Turingles

Pattern Formation in a simple reaction-diffusion system

Into the Dark

Collective intelligence