Spiking neural networks for time series predictions
Deep neural networks (DNN) are certainly one of the major advances of the last decades. On one hand, their performance comes at the price of high computational complexity and energy consumption. On the other hand, SNN offer models with lower computational complexity and a significant reduction in energy consumption, while still delivering excellent performance on classification tasks such as image and sound recognition. Thanks to this classification ability, SNN are also often found in time series processing.
What are Spiking neural networks?
SNN take a different approach to information transmission than standard neural networks: they try to imitate biological neural networks. Instead of values that change continuously over time, SNN work on discrete events produced at specific moments. They receive spike trains as input and produce spike trains as output.
Overview of Spiking neural networks
At every time-step, each neuron holds a value analogous to the electric potential of a biological neuron. This value changes according to the mathematical model of the neuron. If it exceeds a threshold, the neuron sends a single impulse to each neuron downstream of it, and its value is then reset below its resting value. After some time, the value of the neuron returns to its resting value.
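This threshold/spike/reset loop can be sketched in a few lines of Python; all numerical values (threshold, leak factor, inputs) are illustrative, not taken from a specific neuron model:

```python
# Minimal sketch of the threshold/spike/reset loop described above.
# All numerical values are illustrative.
def simulate(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Simulate one neuron over discrete time-steps.

    The potential decays toward 0 (the 'leak'), accumulates the input,
    and the neuron emits a spike (1) whenever the potential crosses the
    threshold, after which it is reset below its resting value.
    """
    potential = 0.0
    spikes = []
    for i in input_current:
        potential = leak * potential + i   # leaky integration
        if potential >= threshold:
            spikes.append(1)               # one impulse sent downstream
            potential = reset              # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate([0.6, 0.6, 0.6, 0.0, 0.0, 0.6, 0.6]))
```

The output is itself a spike train: a list of 0s and 1s, one entry per time-step.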
Several models
SNN are built on mathematical descriptions of biological neurons. There are two groups of models used to describe SNN:
- conductance-based models, which describe how action potentials in neurons are initiated and propagated
- threshold-based models, which generate a spike whenever the potential reaches a given threshold
Leaky Integrate-and-fire
We are going to understand how the Leaky Integrate-and-Fire (LIF) model works. We start from its theoretical circuit: a resistor R and a capacitor C in parallel, driven by an input current I(t). There are two cases to study, and for each we need to find the governing equation and its solution.
Case where the input current I is constant
We apply Kirchhoff's current law at the green point of the circuit:

$$I(t) = I_R(t) + I_C(t)$$

And the characteristic relation of a capacitor:

$$I_C(t) = C\,\frac{du}{dt}$$

We express the intensity through the resistor with Ohm's law, $I_R(t) = \frac{u(t)}{R}$, which gives the differential equation:

$$\tau\,\frac{du}{dt} = -u(t) + R\,I, \qquad \text{with } \tau = RC$$
Resolution of the differential equation
Homogeneous solution
We set $I = 0$; the equation becomes $\tau\,\frac{du}{dt} = -u(t)$, whose solution is $u_h(t) = A\,e^{-t/\tau}$.
Particular solution
We assume that $I$ is constant, so a constant particular solution works: $u_p = R\,I$.
General solution
We assume that the neuron has just fired at $t = 0$, so the potential starts from the reset value $u(0) = 0$.
So:

$$u(t) = R\,I\left(1 - e^{-t/\tau}\right)$$
Calculation of the firing frequency
The model can be made more precise by introducing a refractory time $\Delta$ during which the neuron cannot fire. We are interested in evaluating the firing frequency when the constant input is strong enough to reach the threshold $\vartheta$, i.e. when $R\,I > \vartheta$. Solving $u(t^*) = \vartheta$ gives the time needed to reach the threshold:

$$t^* = \tau\,\ln\!\left(\frac{R\,I}{R\,I - \vartheta}\right)$$

Then, we can define the firing frequency as the inverse of the total gap between two impulses (including the down-time). The firing frequency is then:

$$f = \frac{1}{\Delta + t^*} = \frac{1}{\Delta + \tau\,\ln\!\left(\frac{R\,I}{R\,I - \vartheta}\right)}$$
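As a sanity check, the closed-form frequency $f = 1/\big(\Delta + \tau \ln\frac{RI}{RI - \vartheta}\big)$ can be compared with a direct Euler simulation of the circuit. All constants below are illustrative choices, not values from the article:

```python
import math

# Illustrative constants (not from the article)
R, C = 1.0, 10e-3           # resistance (ohm) and capacitance (farad)
tau = R * C                 # membrane time constant: 10 ms
I = 2.0                     # constant input current, chosen so that R*I > theta
theta = 1.0                 # firing threshold
delta = 2e-3                # refractory period: 2 ms

# Closed-form firing frequency
t_star = tau * math.log(R * I / (R * I - theta))
f_analytic = 1.0 / (delta + t_star)

# Euler simulation of tau du/dt = -u + R*I, with reset to 0 after a spike
dt, u, t, spike_times = 1e-5, 0.0, 0.0, []
while t < 0.2:
    u += dt * (-u + R * I) / tau
    if u >= theta:
        spike_times.append(t)
        u = 0.0             # reset after firing
        t += delta          # neuron stays silent during the refractory time
    t += dt

f_simulated = (len(spike_times) - 1) / (spike_times[-1] - spike_times[0])
print(f_analytic, f_simulated)  # the two values should be close
```

The simulated inter-spike interval matches the analytic $\Delta + t^*$ up to the discretization error of the Euler scheme.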
Case where the input current I(t) varies over time
We apply Kirchhoff's current law at the green point in the same way and get the following relation:

$$\tau\,\frac{du}{dt} = -u(t) + R\,I(t)$$

with $\tau = RC$ the membrane time constant.
Solution of the differential equation
The homogeneous solution is again $u_h(t) = A\,e^{-t/\tau}$, where $A$ is fixed by the initial condition. We assume that the neuron has just fired at $t = 0$, so that $u(0) = u_r$, and solve the full equation by variation of parameters. We then get:

$$u(t) = u_r\,e^{-t/\tau} + \frac{R}{\tau}\int_0^t e^{-(t-s)/\tau}\,I(s)\,ds$$
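The solution $u(t) = u_r\,e^{-t/\tau} + \frac{R}{\tau}\int_0^t e^{-(t-s)/\tau} I(s)\,ds$ can be verified numerically against an Euler integration of the circuit equation. The sketch below uses an illustrative sinusoidal sub-threshold input and $u_r = 0$:

```python
import math

# Illustrative constants and input (not from the article)
R, tau = 1.0, 10e-3
def I(s):                   # sinusoidal sub-threshold input current
    return 0.5 * (1 + math.sin(2 * math.pi * 50 * s))

dt, T = 1e-5, 0.05
n = int(T / dt)

# Euler integration of tau du/dt = -u + R*I(t), starting from u(0) = 0
u = 0.0
for k in range(n):
    u += dt * (-u + R * I(k * dt)) / tau

# Closed-form solution at t = T: (R/tau) * integral of exp(-(T-s)/tau) I(s) ds,
# evaluated with the trapezoidal rule (the u(0) term vanishes since u(0) = 0)
integral = 0.0
for k in range(n):
    s0, s1 = k * dt, (k + 1) * dt
    f0 = math.exp(-(T - s0) / tau) * I(s0)
    f1 = math.exp(-(T - s1) / tau) * I(s1)
    integral += 0.5 * (f0 + f1) * dt
u_closed_form = (R / tau) * integral

print(u, u_closed_form)     # the two values should be close
```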
Conclusion
We recall that the LIF neuron obeys $\tau\,\frac{du}{dt} = -u(t) + R\,I(t)$: it integrates its input with a leak, fires when $u$ reaches the threshold $\vartheta$, and is then reset.
Benefits - the model does not keep an increased potential forever, contrary to leak-free models, where the potential is held until the arrival of a new impulse.
Drawbacks - the model does not take neuronal adaptation into account, so it cannot describe spike trains whose frequency adapts over time.
Architecture of a SNN
Even if SNN rely on a unique operating principle, they remain neural networks. We find the usual building blocks: an input layer, one or more layers of spiking neurons, and an output layer, connected by synaptic weights.
How to train a Spiking Neural Network?
Unfortunately, even today there is no efficient supervised learning method that can train a SNN. The way SNN operate does not allow the use of the classical learning methods that are appropriate for a traditional neural network: the spike function is not differentiable, so standard back-propagation does not apply directly. Training a SNN can therefore be a tough task.
Spike-Timing-Dependent Plasticity (STDP)
It is an unsupervised learning mechanism. Training is carried out layer by layer; in other words, the training of the current layer starts only when the training of the previous layer is finished. Neurons of a layer compete with each other, and those which fire first trigger STDP and learn from their inputs:
We measure the learning convergence of the $\ell$-th layer as

$$C_\ell = \sum_f \sum_{i,j} \frac{w_{f,i,j}\left(1 - w_{f,i,j}\right)}{n_w}$$

where $w_{f,i,j}$ are the synaptic weights of the layer and $n_w$ is their total number. $C_\ell$ tends to zero as training drives each weight towards either 0 or 1.
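As an illustration, here is a minimal sketch of a simplified pair-based STDP update of the kind used in such networks: the sign of the weight change depends only on the order of the pre- and post-synaptic spikes, and the factor $w(1-w)$ keeps the weights in $[0, 1]$. The learning rates `a_plus` and `a_minus` are illustrative assumptions:

```python
# Simplified pair-based STDP update: potentiate when the pre-synaptic
# neuron fires before the post-synaptic one, depress otherwise.
# The learning rates a_plus and a_minus are illustrative assumptions.
def stdp_update(w, t_pre, t_post, a_plus=0.004, a_minus=0.003):
    if t_pre <= t_post:                  # causal pairing: potentiation
        return w + a_plus * w * (1 - w)
    return w - a_minus * w * (1 - w)     # anti-causal pairing: depression

w = 0.5
for _ in range(100):                     # repeated causal pairings
    w = stdp_update(w, t_pre=1.0, t_post=2.0)
print(w)                                 # the weight drifts toward 1
```

Because the update is proportional to $w(1-w)$, weights saturate at 0 or 1, which is exactly what the convergence measure above detects.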
Applications of SNN
- Prosthetics: visual and auditory neuroprostheses
- Robotics: Brain Corporation develops robots using SNN and SyNAPSE develops processors and neuromorphic systems.
- Computer Vision: the IBM TrueNorth digital neuroprocessor includes a million programmable neurons and 256 million programmable synapses to simulate the operation of the neurons of the visual cortex.
- Telecommunications: Qualcomm is actively investigating the possibility of integrating SNN into telecommunication devices.
Some results
Architecture | Neural Coding | Learning-type | Learning-rule | Accuracy (%) |
---|---|---|---|---|
Dendritic neurons | Rate-based | Supervised | Morphology learning | 90.3 |
Convolutional SNN | Spike-based | Supervised | Tempotron rule | 91.3 |
Two layer network | Spike-based | Unsupervised | STDP | 93.5 |
Spiking RBM | Rate-based | Supervised | Contrastive divergence | 94.1 |
Two layer network | Spike-based | Unsupervised | STDP | 95.0 |
Convolutional SNN | Rate-based | Supervised | Back-propagation | 99.1 |
Proposed SDNN | Spike-based | Unsupervised | STDP | 98.4 |
Conclusion
Contrary to classical neural networks, where the output is an immediate modulation of the signal intensity (through an activation function), SNN apply a modulation over time: the output is an accumulation of impulses over time. To train a SNN, we must apply new learning methods. SNN perform well on tasks such as voice recognition and computer vision.