Poisson-like noise in the brain
The spiking activity of cortical neurons is highly irregular and not very repeatable across trials, so it is often approximated as a Poisson process. But why does the brain seem to be so noisy? There are several theories, and I’d like to discuss a few of them here.
Hypothesis 1. Neurons are rate coding and the variability is noise
Because of the nature of biological processes, temporal fluctuations of ion channels, the probabilistic behavior of synaptic transmission, and so on act as noise sources on the spike train produced by a neuron (e.g. [Carandini]). The neuron is therefore a probabilistic processing unit, and the underlying signal can only be recovered by averaging out the noise over trials, over the population, or over time. Linear-nonlinear-Poisson (LNP) modeling of neural activity is a popular example of this view: the instantaneous rate is the signal, and the observed spikes result from Poisson spike generation [Paninski].
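The LNP cascade is easy to sketch in a few lines. Here is a minimal toy simulation; the filter shape, gain, bin size, and stimulus are all assumed values chosen for illustration, not taken from any fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy LNP simulation; all parameters below are made-up illustrative values
dt = 0.001                          # 1 ms time bins
T = 2000                            # number of bins
stim = rng.normal(size=T)           # white-noise stimulus

# Linear stage: convolve the stimulus with a (hypothetical) biphasic filter
t = np.arange(0.0, 0.1, dt)
k = np.exp(-t / 0.02) * np.sin(2 * np.pi * t / 0.05)
k /= np.linalg.norm(k)              # normalize so the drive has unit variance
drive = np.convolve(stim, k)[:T]

# Nonlinear stage: exponential nonlinearity maps drive to a non-negative rate (Hz)
rate = 5.0 * np.exp(drive)

# Poisson stage: conditionally independent spike counts in each bin
spikes = rng.poisson(rate * dt)

print(f"{spikes.sum()} spikes in {T * dt:.0f} s")
```

Under this hypothesis, the "signal" is `rate` and everything downstream of it is noise, which is why decoding methods average spike counts over trials or over a population.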
Hypothesis 2. Deterministic transformation of random input
On the opposite side of the spectrum are theories in which each neuron is mostly a deterministic processing unit, and the Poisson-like response originates from the variability of the natural statistics of the world combined with an uncontrollable internal state. For example, if white Gaussian noise is presented to a leaky integrate-and-fire neuron, the resulting firing activity is a renewal process. The corresponding inter-spike interval distribution is usually not analytically tractable, but when the firing rate is low (i.e., when the typical inter-spike interval is much longer than the membrane time constant), it is well approximated by a Poisson process [Stevens] (sometimes referred to as a deterministic Poisson process).
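This regime is easy to reproduce numerically. The sketch below simulates a leaky integrate-and-fire neuron driven by white Gaussian noise; the time constant, threshold, drive, and noise level are assumed toy values chosen to put the neuron in the low-rate, noise-driven regime, where the ISI coefficient of variation (CV) should be close to the Poisson value of 1:

```python
import numpy as np

rng = np.random.default_rng(1)

# LIF neuron driven by white Gaussian noise; all parameters are assumed
# toy values (dimensionless voltage), not fits to any real neuron
dt = 1e-4                 # 0.1 ms steps
tau = 20e-3               # 20 ms membrane time constant
v_th, v_reset = 1.0, 0.0  # threshold and reset
mu, sigma = 0.7, 0.5      # subthreshold mean drive and noise amplitude

n_steps = int(100.0 / dt)  # 100 s of simulated time
noise = rng.normal(size=n_steps) * sigma * np.sqrt(dt / tau)

v = 0.0
spike_times = []
for i in range(n_steps):
    v += dt * (mu - v) / tau + noise[i]   # Euler step of the OU dynamics
    if v >= v_th:                         # threshold crossing: spike and reset
        spike_times.append(i * dt)
        v = v_reset

isi = np.diff(spike_times)
cv = isi.std() / isi.mean()               # a Poisson process would give CV = 1
print(f"rate ~ {len(spike_times) / 100.0:.1f} Hz, ISI CV ~ {cv:.2f}")
```

The dynamics are fully deterministic given the input; the Poisson-like output statistics come entirely from the stochastic drive plus the separation of timescales (mean ISI much longer than `tau`).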
Hypothesis 3. Maximizing information transfer
If one views the sensory system as maximizing the mutual information between input and output under certain constraints, the spike trains generated by a stochastic input should be close to a Poisson process, because the Poisson process has maximum entropy among point processes with the same rate [Rényi]. Several STDP-like learning rules have been derived from such principles [Toyoizumi]. These derivations do not require the neurons themselves to be stochastic; in fact, probabilistic models are simply easier to work with. But in principle, the neurons are trying to make their spike trains as close to homogeneous Poisson as possible.
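A quick way to see the maximum-entropy property for renewal processes: a homogeneous Poisson process has exponential inter-spike intervals, and the exponential is the maximum-entropy distribution on the positive reals with a fixed mean. The check below compares differential entropies within the gamma family of ISI distributions at a fixed mean ISI (the 10 Hz rate is an assumed example value); the exponential is the shape-1 member:

```python
from scipy.stats import gamma

# Among gamma ISI distributions with the same mean, the exponential
# (shape k = 1, i.e. the Poisson process) has the largest entropy
mean_isi = 0.1  # assumed mean inter-spike interval (s), i.e. a 10 Hz rate

entropies = {}
for k in [0.25, 0.5, 1.0, 2.0, 4.0]:
    # shape k with scale = mean_isi / k keeps the mean fixed at mean_isi
    entropies[k] = gamma(k, scale=mean_isi / k).entropy()
    print(f"shape k = {k:>4}: differential entropy = {entropies[k]:+.3f} nats")
```

Both bursty (k < 1) and regular (k > 1) spike trains at the same rate carry less entropy per interval, which is why an information-maximizing objective pushes the output toward Poisson statistics.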
Hypothesis 4. Brain uses variability to represent uncertainty of belief
This is the main idea of the probabilistic population code (PPC) [Ma]. The source of variability is the uncertainty of the world or of the brain’s model (model mismatch), and the population encoding model P(r|s) summarizes it. By Bayes’ rule, the posterior belief about the stimulus given the population response, P(s|r), is proportional to P(r|s)P(s). If P(r|s) is composed of independent Poisson neurons whose rates are proportional to a corresponding population of tuning curves, then the stimulus uncertainty (assuming a normal distribution) is represented by the gain of the population firing rate, and Bayesian inference from multiple sources reduces to simply adding the corresponding firing rates. Note that the coding scheme is still a rate code, but it uses the population gain and variability to perform Bayesian inference.
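The gain-encodes-uncertainty claim can be checked with a toy decoder. The sketch below assumes a population of independent Poisson neurons with Gaussian tuning curves (all numbers made up) and computes the flat-prior posterior over the stimulus from one draw of spike counts; a higher gain yields more spikes and a proportionally narrower posterior:

```python
import numpy as np

rng = np.random.default_rng(2)

# Independent Poisson neurons with Gaussian tuning curves; the preferred
# stimuli, tuning width, and gains are all assumed toy values
prefs = np.linspace(-10.0, 10.0, 41)    # preferred stimuli
sigma_tc = 2.0                          # tuning-curve width

def tuning(s, gain):
    """Mean spike counts of the whole population for stimulus s."""
    return gain * np.exp(-(s - prefs) ** 2 / (2 * sigma_tc ** 2))

def posterior(counts, gain, s_grid):
    """Flat-prior posterior over s from the Poisson likelihood."""
    logp = np.array([np.sum(counts * np.log(tuning(s, gain)) - tuning(s, gain))
                     for s in s_grid])
    p = np.exp(logp - logp.max())       # normalize in a numerically stable way
    return p / p.sum()

def posterior_std(p, s_grid):
    m = np.sum(s_grid * p)
    return np.sqrt(np.sum((s_grid - m) ** 2 * p))

s_true = 1.5
s_grid = np.linspace(-10.0, 10.0, 401)

# Same stimulus, two gains: more spikes -> narrower posterior
r_low = rng.poisson(tuning(s_true, gain=2.0))
r_high = rng.poisson(tuning(s_true, gain=50.0))
w_low = posterior_std(posterior(r_low, 2.0, s_grid), s_grid)
w_high = posterior_std(posterior(r_high, 50.0, s_grid), s_grid)
print(f"posterior std: gain 2 -> {w_low:.2f}, gain 50 -> {w_high:.2f}")
```

In this setting the log posterior is linear in the spike counts, which is why combining two independent populations amounts to adding their count vectors before decoding.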
Hypothesis 5. Sampling hypothesis
Related to PPC, an alternative way of representing a posterior is to sample from it, in this case as a population firing pattern. This is known as the sampling hypothesis [Fiser]. Under this view, neural variability mainly reflects the breadth of the posterior: if the brain had no uncertainty about the stimulus, the posterior would be very narrow, and the firing pattern would have little variability.
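A minimal cartoon of this idea, with entirely made-up numbers: suppose that on each trial the network state is one sample from the posterior over the stimulus, and a readout neuron with a Gaussian tuning curve responds to that sample. Trial-to-trial variability of its rate then tracks the posterior width directly:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sampling-hypothesis toy: each "trial" is one posterior sample; the
# tuning curve (peak 20 Hz at s = 0, unit width) is an assumed example
def tuning(s):
    return 20.0 * np.exp(-(s - 0.0) ** 2 / (2 * 1.0 ** 2))

def trial_rates(post_std, n_trials=2000):
    samples = rng.normal(loc=0.0, scale=post_std, size=n_trials)
    return tuning(samples)              # one firing rate per trial

certain = trial_rates(post_std=0.1)     # narrow posterior: little to sample over
uncertain = trial_rates(post_std=2.0)   # broad posterior: large swings per trial

print(f"rate std across trials: certain {certain.std():.2f} Hz, "
      f"uncertain {uncertain.std():.2f} Hz")
```

Unlike in Hypothesis 1, averaging this variability away would discard information: the spread across trials is itself the representation of uncertainty.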
These hypotheses are not mutually exclusive; rather, they are different theories that explain, or make use of, the “random nature” of spiking.
- Carandini, M. Amplification of trial-to-trial response variability by neurons in visual cortex. PLoS Biology, 2004, 2(9).
- Paninski, L. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Comput. Neural Syst., 2004, 15, 243-262
- Stevens, C. & Zador, A. When is an Integrate-and-fire Neuron like a Poisson Neuron? Advances in Neural Information Processing Systems, 1996, 8, 103-109
- Rényi, A. On an extremal property of the Poisson process. Annals of the Institute of Statistical Mathematics, 1964, 16, 129-133
- Toyoizumi, T.; Pfister, J.-P.; Aihara, K. & Gerstner, W. Optimality Model of Unsupervised Spike-Timing-Dependent Plasticity: Synaptic Memory and Weight Distribution. Neural Comp., 2007, 19, 639-671
- Ma, W. J.; Beck, J. M.; Latham, P. E. & Pouget, A. Bayesian inference with probabilistic population codes. Nature Neuroscience, 2006, 9, 1432-1438
- Fiser, J.; Berkes, P.; Orbán, G. & Lengyel, M. Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences, 2010, 14, 119-130