Interesting talks/posters from COSYNE 2009

2009/03/06

I attended the COSYNE (Computational and Systems Neuroscience) conference and workshops last week. It was my second time presenting a poster there; the first was in 2007, when I was working on continuous-time signal processing tools for spike trains. This year, I presented a poster on a point process model for precisely timed spike trains, and received great feedback.

(The DOI links in this post might not work for some time.)

The major trend in spike train modeling has clearly shifted to the GLM (generalized linear model). The GLM assumes that the conditional intensity function has an exponential-family form given external processes, including the firing history, input stimulation, LFP, and other experimental conditions. These models can be estimated easily by maximum likelihood (in MATLAB, the function glmfit in the Statistics Toolbox does the job), and precise spike timings are predicted very well by them. I should try this on my own datasets. In one of the posters, the authors used the 50 previous spike timings to estimate the parameters (model order selected by ABIC); if this is generally the case, a fairly long spike train would be required.
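
As a concrete illustration (my own sketch, not from any particular poster), this is roughly how such a point-process GLM can be fit in MATLAB with glmfit. The vectors spikes (binned spike counts) and stim (a stimulus covariate) are assumed to be given; the 50-bin history order echoes the poster above and would normally be chosen by model selection such as ABIC.

```matlab
% Fit a Poisson GLM with spike-history and stimulus covariates.
% spikes: vector of spike counts per time bin (assumed given)
% stim:   stimulus covariate, same length as spikes (assumed given)
nHist = 50;                               % history order (assumed)
T = numel(spikes);
X = zeros(T - nHist, nHist + 1);          % design matrix
for k = 1:nHist
    X(:, k) = spikes(nHist - k + 1 : T - k);  % count k bins in the past
end
X(:, end) = stim(nHist + 1 : T);
y = spikes(nHist + 1 : T);
% Canonical log link: conditional intensity lambda = exp(b0 + X*b)
[b, dev, stats] = glmfit(X, y, 'poisson');
lambdaHat = glmval(b, X, 'log');          % predicted intensity per bin
```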

Hideaki Shimazaki, S. Amari, E. Brown, and S. Grün had a poster on a joint point process model based on the GLM, deriving a recursive Bayesian filter to track the non-stationary higher-order correlation structure. Shimazaki said the method is tractable up to around 10 neurons.

There were a couple of very interesting posters about the LSM (liquid state machine). One was by Prashant Joshi and Jochen Triesch, titled “Optimizing microcircuits through reward modulated STDP”. Ideally, the liquid should have the separation property and represent maximal information about the input. Joshi and Triesch used a set of simplified, practically equivalent conditions: (1) maximizing the entropy of the liquid, (2) minimizing the eigen-spread, (3) maximizing decorrelation, and (4) having a large number of principal components. They used a stochastic-gradient-like variant of the STDP learning rule, quite different from Izhikevich’s dopamine-signaled STDP model. They computed the change of the cost function (negatively proportional to the reward) over two successive inputs, and modified the network using STDP scaled by the difference in cost function value (see the sketch below). It is interesting that this somewhat ad hoc method can drive the network to a consistently enhanced configuration. A theoretical analysis, as well as applying the same approach to other cost functions, would be interesting follow-ups.
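
My reading of their update scheme, as a rough sketch (the helper functions below are hypothetical placeholders, not their code): plain STDP proposes a weight change, and the cost difference between two successive input presentations gates how much of that change is applied.

```matlab
% Reward-gated STDP update, sketched under the assumptions above.
% liquidStateFor and simulateWithSTDP are hypothetical helpers;
% cost() is whichever liquid-quality measure is being optimized
% (entropy, eigen-spread, decorrelation, ...).
cPrev = cost(liquidStateFor(input1, W));
[~, dW] = simulateWithSTDP(input2, W);   % weight change plain STDP proposes
cCurr = cost(liquidStateFor(input2, W));
reward = -(cCurr - cPrev);               % reward is the negative cost change
W = W + eta * reward * dW;               % apply STDP scaled by the reward
```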

In my opinion, the cost function should be to maximize the information transfer from the input to the liquid state over a time window. Also, the update of the network weights should only consider one input: present the input twice with static STDP, compute the cost function difference between the two presentations, and scale the STDP by it to obtain the actual update.

There was also a related talk by Cristina Savin and Jochen Triesch, titled “Developing a working memory with reward-modulated STDP”, where they trained a recurrent neural network and the readout (winner-take-all type) using a reinforcement-learning-style paradigm. By incrementally increasing the delay between the input and the response timing, and using the aforementioned reward-modulated STDP of Izhikevich, they showed that the working memory capacity could be enhanced.

The second poster on the LSM was by Peter Latham and Edward Wallace, titled “Temporal memory and network dynamics”. This is a negative result on creating edge-of-chaos-like networks. Edge-of-chaos dynamics are preferable because the liquid can maintain the separation of two inputs for a longer time. Full chaos, however, is bad: in a bounded phase space, the trajectories end up at the same typical distance from each other and mix completely, so from an observation it is impossible to tell what the input was. They showed that a randomly connected network with a given weight scaling factor is highly likely to be chaotic, and that only very sparse connectivity can lead to the edge of chaos. Biological networks are not that sparse.
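
The weight-scaling argument can be illustrated with a toy computation (my own, not from the poster). For a random network with connection probability p and i.i.d. Gaussian weights of variance g^2/(pN), the spectral radius of the weight matrix concentrates near g, and g > 1 is the classic criterion for chaotic dynamics in such networks.

```matlab
% Estimate the spectral radius of a sparse random weight matrix.
N = 1000;                     % network size (assumed)
p = 0.1;                      % connection probability (assumed)
g = 1.5;                      % gain; g > 1 tends toward chaos
mask = rand(N) < p;           % sparse random connectivity
W = (g / sqrt(p * N)) * (randn(N) .* mask);
radius = max(abs(eig(W)));    % approaches g for large N
fprintf('spectral radius = %.2f\n', radius);
```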

In the workshop, Martin Nawrot’s talk on “The effect of cortical network state evolution on the amount and dynamics of single neuron variability” was interesting. He observed that the Fano factor (FF) and the coefficient of variation (CV) differ significantly depending on the state of the network: during spontaneous activity the network showed FF >> 1, while during tasks the FF tended to be smaller. He also mentioned the bias of the FF estimator with finite data: when the window size is small, the FF is biased towards 1.
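
For reference, a minimal sketch of the two statistics, assuming spikeTimes is a vector of spike times in seconds and win is the counting-window length. With a small window most counts are 0 or 1, which is exactly the finite-data bias towards FF = 1 mentioned in the talk.

```matlab
% Fano factor of windowed spike counts and CV of interspike intervals.
edges = 0 : win : max(spikeTimes);
counts = histc(spikeTimes, edges);   % spike counts per window
counts = counts(1:end-1);            % drop the edge bin histc appends
FF = var(counts) / mean(counts);     % Fano factor
isi = diff(sort(spikeTimes));        % interspike intervals
CV = std(isi) / mean(isi);           % coefficient of variation
```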

Compared to 2007, I felt that the Bayesian approach was dominant in almost all areas except network dynamics. Another frequently mentioned topic was the off-response in visual, auditory, and olfactory systems.

In summary, the three keywords of COSYNE 2009 were: Bayesian, GLM, and off-response.

Update (May 6th, 2009): They have finally published the conference report, but the DOIs are not yet activated.
