
Bayesian Spike Triggered Covariance Analysis

2011/12/08

A widely used tool in neural characterization, where one is interested in the stimulus (or behavior) features that a neuron is sensitive to, is spike-triggered averaging (STA), also known as reverse correlation analysis [Dayan & Abbott]. To obtain the STA, at the occurrence of each spike one averages the stimulus in a window time-locked to the spike, i.e., the stimulus that potentially caused the spike (or the behavior caused by the spike).
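As a rough illustration (a minimal sketch, not code from the paper; the array names are hypothetical), the STA of a discretized experiment is just a spike-weighted average of the stimulus vectors, where each row of the stimulus matrix holds the stimulus window preceding that time bin:

```python
import numpy as np

def spike_triggered_average(stimulus, spikes):
    """Spike-triggered average (STA).

    stimulus : (T, d) array; row t holds the stimulus window
               preceding time bin t (the candidate cause of a spike)
    spikes   : (T,) array of spike counts per time bin
    """
    # Weight each stimulus window by its spike count and normalize
    # by the total number of spikes.
    return spikes @ stimulus / spikes.sum()
```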

It essentially estimates the first-order Volterra expansion of the neural response function, that is, it approximates the neuron as a linear system. Although a neuron is not really a linear system, STA works well in practice. Moreover, it is a consistent estimator for a linear-nonlinear-Poisson (LNP) model if the stimulus is white Gaussian noise [Bussgang 1952, in Dayan & Abbott]. In [Paninski 2003] this condition is extended to any radially symmetric stimulus distribution that induces a non-zero mean response.

When the neuron’s feature space is low-dimensional but not 1-dimensional, STA is not sufficient, since it recovers only a 1-dimensional subspace. Spike-triggered covariance (STC) is an extension of STA that can consistently estimate the filters of a multi-dimensional LNP model [Paninski 2003]. Let us denote the zero-mean stimulus distribution as p(x) and the spike-triggered distribution as q(x). Then STA is the mean of \hat{q}(x) (the empirical estimate of q(x)), and STC is given by the eigenvectors of the covariance matrix of \hat{q}(x). STC is only a consistent estimator when the stimulus distribution is Gaussian [for details, see Paninski 2003].
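Continuing the sketch above (again with hypothetical names, and assuming a zero-mean Gaussian stimulus), STC centers the spike-triggered ensemble on the STA, forms its covariance, and reads the feature subspace off the eigenvectors whose eigenvalues differ most from the raw stimulus variance:

```python
import numpy as np

def spike_triggered_covariance(stimulus, spikes):
    """Eigen-decomposition of the spike-triggered covariance (STC).

    stimulus : (T, d) array of zero-mean stimulus vectors
    spikes   : (T,) array of spike counts per time bin
    """
    n_sp = spikes.sum()
    sta = spikes @ stimulus / n_sp            # spike-triggered mean (STA)
    centered = stimulus - sta                 # center the ensemble on the STA
    stc = (centered * spikes[:, None]).T @ centered / (n_sp - 1)
    # Eigenvectors whose eigenvalues differ most from the raw stimulus
    # variance span the recovered multi-dimensional feature subspace.
    eigvals, eigvecs = np.linalg.eigh(stc)
    return sta, eigvals, eigvecs
```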

STA/STC are moment-based estimators and do not come with a probabilistic model. Analogous to how PPCA (probabilistic principal component analysis) provided a generative model for PCA, allowing Bayesian extensions of PCA, we formulate the STA/STC problem as maximum likelihood estimation in a generative model. Inspired by iSTAC [Pillow & Simoncelli 2006], we extend the LNP model (figure) with an exponentiated quadratic nonlinearity. This allows us to put priors on the features and develop Bayesian estimators. We further extend it to a general family of models that allows consistent estimation under arbitrary stimulus distributions and a flexible class of nonlinearities. This result will be presented at Neural Information Processing Systems (NIPS) 2011. If you are coming to NIPS, it’s poster W88!

Figure: Linear-Nonlinear-Poisson (LNP) model with quadratic nonlinearity
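To make the generative view concrete, here is a minimal sketch (my own illustration under the stated assumptions, not the estimator from the paper) of the Poisson negative log-likelihood when the rate is an exponentiated quadratic in the stimulus, \lambda(x) = \exp(a + b^\top x + x^\top C x). Maximizing it plays the role that the moment computations play for STA/STC, and placing priors on b and C gives the Bayesian extension:

```python
import numpy as np

def neg_log_likelihood(params, stimulus, spikes, dt=1.0):
    """Poisson negative log-likelihood of an LNP model with an
    exponentiated quadratic nonlinearity: rate = exp(a + b.x + x'Cx).

    params   : (a, b, C) with scalar a, (d,) vector b, (d, d) symmetric C
    stimulus : (T, d) array of stimulus vectors
    spikes   : (T,) array of spike counts per time bin
    dt       : bin width
    """
    a, b, C = params
    quad = np.einsum('ti,ij,tj->t', stimulus, C, stimulus)
    log_rate = a + stimulus @ b + quad
    rate = np.exp(log_rate) * dt
    # Poisson log-likelihood, up to a constant in the data:
    #   sum_t [ y_t * log(rate_t) - rate_t ]
    return -(spikes @ np.log(rate) - rate.sum())
```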
