Extended KS and CM test for point processes


Traditionally in neuroscience, the information in a spike train is thought to be carried by the total number of spikes within a small interval, known as the rate code. Beyond the spike count, the precise pattern of action potentials is often discarded. Hence, in many applications where the signal the brain encodes is expected to change, neuroscientists first check whether the mean rate of their observed spike train changes. Detecting changes in the mean firing rate has been successful in virtually every area of neuroscience, including sensory processing, working memory, decision making, and motor systems. However, much less attention has been given to the information contained in the precise firing pattern.

Information theoretic analysis and other temporal code analyses are currently the widely used options for probing such information, but there has been a lack of tools for detecting arbitrary differences between spike train observations under different conditions. Assuming no prior knowledge, a statistical divergence is a natural choice of statistic for this problem. For real-valued random variables, the Kullback-Leibler divergence, the Hellinger distance, or the total variation distance would theoretically serve one’s purpose; however, they are not always easy to estimate. On the other hand, simpler non-parametric test statistics such as the Kolmogorov-Smirnov (KS) statistic or the Cramér-von Mises (CM) statistic are very easy to estimate and powerful enough for many cases. Both the KS and CM statistics can discriminate between arbitrary distributions. However, the KS test requires a total ordering on the underlying space, while the CM statistic requires square integrability of the difference of distribution functions. Hence it is non-trivial to extend them to the point process domain, where such structures do not exist naturally.
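To make the real-valued baseline concrete, here is a minimal sketch of the classical two-sample KS and CM statistics computed directly from empirical CDFs. The function name and the particular (unnormalized) CM form are my own illustrative choices, not the estimators from the paper, which operate on point processes rather than real-valued samples.

```python
import numpy as np

def ks_cm_statistics(x, y):
    """Two-sample KS and Cramér-von Mises statistics for real-valued samples.

    KS: the supremum of |F_x(t) - F_y(t)| over t.
    CM: a sum of squared CDF differences over the pooled sample,
        scaled by nm/(n+m)^2 (one common two-sample form).
    """
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    pooled = np.concatenate([x, y])
    # Empirical CDFs of both samples evaluated at every pooled point.
    f_x = np.searchsorted(x, pooled, side="right") / len(x)
    f_y = np.searchsorted(y, pooled, side="right") / len(y)
    diff = f_x - f_y
    ks = np.max(np.abs(diff))
    n, m = len(x), len(y)
    cm = n * m / (n + m) ** 2 * np.sum(diff ** 2)
    return ks, cm
```

Both statistics depend only on the ordering of the pooled sample, which is exactly why extending them beyond a totally ordered space is the non-trivial part.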

NIPS 2010 starts tomorrow, and we have a poster for the previously mentioned extension of the KS and CM tests to point process divergences, which I developed with Sohan Seth. You can get a glimpse of the poster below, but as you can see it is fairly compressed. So if you are attending NIPS 2010, please stop by our poster and we’ll be glad to explain the details (Wednesday Dec 8th, W59, “A novel family of non-parametric cumulative based divergences for point processes”).

One application of a point process divergence is sensory neural prosthesis. The goal of sensory prosthetics is to artificially recreate the activation and inactivation of the brain so that a percept of sensory stimulation is formed. The current approach is to electrically stimulate the thalamus along the sensory pathway. However, given only a limited number of electrodes, it is difficult to exactly mimic the natural firing pattern of the peripheral nerves. Therefore, a search over stimulation parameters is necessary to find the best possible setting. Austin Brockmeier analyzed a set of stimulation patterns delivered to the VPL of an anesthetized rat (from the Francis Lab) and demonstrated the usefulness and advantage of the proposed divergences compared to the mean rate. I am very pleased to see the method working in practice.

The implementation is straightforward and is included in the IOCANE open source project.
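As a toy illustration of why a distributional statistic can succeed where the mean rate fails (this uses invented data and only spike counts per trial, not the full timing structure that the proposed divergences exploit, so it is not the paper's estimator): two ensembles of trials can have nearly identical mean spike counts yet clearly different count distributions, and a simple KS comparison of the empirical CDFs exposes the difference.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy data: 500 trials per condition with the SAME mean
# spike count (5 per trial) but different distributions:
# Poisson(5) versus an equal mixture of Poisson(1) and Poisson(9).
counts_a = rng.poisson(5, size=500)
counts_b = np.where(rng.random(500) < 0.5,
                    rng.poisson(1, size=500),
                    rng.poisson(9, size=500))

# A mean-rate comparison sees almost no difference ...
mean_gap = abs(counts_a.mean() - counts_b.mean())

# ... but the KS statistic between the empirical count distributions
# (sup-norm distance between the two empirical CDFs) is large.
grid = np.arange(0, 30)
cdf_a = (counts_a[:, None] <= grid).mean(axis=0)
cdf_b = (counts_b[:, None] <= grid).mean(axis=0)
ks_counts = np.max(np.abs(cdf_a - cdf_b))
```

Here `mean_gap` stays near zero while `ks_counts` is far from zero, which is the kind of difference a rate-only analysis would miss entirely.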
