
9th Black Board Day (BBD9)

2014/04/28

BBD9 logo

Every last Sunday of April, I organize a small workshop called BBD, where we discuss logic, math, and science on a blackboard (this year it was actually on a blackboard, unlike the past three years!).

The main theme was paradox. A paradox is a puzzling contradiction: by some chain of reasoning, one derives two contradictory conclusions. Consistency is an essential quality of a reasoning system, that is, the system should not be able to produce contradictions on its own. True paradoxes are therefore hazardous to the very foundations of rationality, but fortunately most paradoxes are only apparent and can be resolved. Today (April 27th, 2014), we had several paradoxes presented:

Memming: I presented the Pinocchio paradox, a fun variant of the Liar paradox. Pinocchio's nose grows if and only if Pinocchio says something false. What happens when Pinocchio says "My nose grows now" (or "My nose will grow now")? It either grows or it does not. If it grows, he is telling the truth, so it should not grow; if the statement is false, then his nose should grow, which makes the statement true again. Our natural language allows self-reference, but is it really logically possible? (In the incompleteness theorem, Gödel numbering allows self-reference using arithmetic.) There are several possible resolutions: Pinocchio cannot utter that statement, Pinocchio's world is inconsistent (and hence cannot have a physical reality attached to it), Pinocchio cannot know the truth value, and so on. In any case, a good logical system should not be able to produce such paradoxes.

Jonathan Pillow, continuing the fairy tale theme, presented the Sleeping Beauty paradox. A coin is tossed, and Sleeping Beauty will be awakened once if it lands heads, twice if it lands tails. Every time she is awakened, she is asked "What is your belief that the coin was heads?", given a drug that erases the memory of this awakening, and put back to sleep. One argument (the "halfer" position) says that since her a priori belief was 1/2 and each awakening provides no new evidence, her belief should not change and she should answer 1/2. The other argument (the "thirder" position) says that she is twice as likely to be awakened after a tails toss, so the probability should be 1/3. If a reward were assigned to making a correct guess, the thirder position gives the correct probability to guess with, but does that mean her belief must match it? This paradox is still under debate and has not been fully resolved.

Kyle Mandi presented the classic Zeno's paradox, which challenges the intuition that an infinite sum of finite quantities must be infinite. He also showed Gabriel's horn, a simple (infinite) object with finite volume but infinite surface area. If you were to pour paint into this horn, a finite amount would fill it, yet you could never paint its entire surface (hence its nickname, the painter's paradox).
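
For completeness, here is the quick calculus behind both examples: Zeno's sum is the geometric series \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1, and for Gabriel's horn (the surface obtained by revolving y = 1/x, x \geq 1, around the x-axis) the volume is V = \pi \int_1^\infty x^{-2}\, dx = \pi, which is finite, while the surface area is S = 2\pi \int_1^\infty \frac{1}{x}\sqrt{1 + x^{-4}}\, dx \geq 2\pi \int_1^\infty \frac{dx}{x} = \infty.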

Karin Knudson introduced the Banach-Tarski paradox: a solid unit sphere in 3D can be decomposed into 5 pieces which, using only translations and rotations, can be reassembled into two solid unit spheres. More generally, if two bounded sets A, B with non-empty interior are given in R^n with n \geq 3, then one can find a finite decomposition such that each piece of A is congruent to the corresponding piece of B. The construction requires some group theory, non-measurable sets, and the axiom of choice (fortunately).

Harold Gutch told us about the Borel-Kolmogorov paradox. What is the conditional distribution on a great circle when points are uniformly distributed on the surface of a sphere? One argument says it should be uniform by symmetry, but a simple sampling scheme in spherical coordinates shows that it should be proportional to the cosine of the latitude. The lesson: never take conditional probabilities on sets of measure zero (not to be confused with conditional densities). He also told us about a recipe from Jaynes' book (ch. 15) for producing infinitely many paradoxes, based on ill-defined limits of series.
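
A quick Monte Carlo illustration of the point (my own sketch in NumPy, not from the talk): shrinking two different neighborhoods onto the same meridian gives two different "conditional" distributions along it.

```python
# Borel-Kolmogorov paradox, illustrated by sampling: two ways of shrinking a
# neighborhood onto the same great circle (the meridian in the x-z plane)
# give different limiting distributions of latitude along that circle.
import numpy as np

rng = np.random.default_rng(0)
n, eps = 2_000_000, 0.01

# uniform points on the unit sphere
xyz = rng.normal(size=(n, 3))
xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)
x, y, z = xyz.T
lat = np.arcsin(z)                    # latitude in [-pi/2, pi/2]
lon = np.arctan2(y, x)                # longitude in (-pi, pi]

band = np.abs(y) < eps                                      # constant-width band
wedge = (np.abs(lon) < eps) | (np.pi - np.abs(lon) < eps)   # thin longitude wedge

for name, mask in [("constant-width band", band), ("longitude wedge", wedge)]:
    hist, _ = np.histogram(lat[mask], bins=9, range=(-np.pi / 2, np.pi / 2),
                           density=True)
    print(f"{name:20s}", np.round(hist, 2))
# The band gives a flat histogram over latitude (the "uniform by symmetry"
# answer); the wedge gives one proportional to cos(latitude). Both
# neighborhoods shrink onto the same great circle as eps -> 0.
```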

Andrew Tan presented Rosser's extension of Gödel's first incompleteness theorem, with a statement R that colloquially says "for every proof of me, there is a shorter disproof." For any consistent system T that contains PA (the Peano axioms), there exists such an R_T, which is neither provable nor disprovable within T. Also, by the second incompleteness theorem, the consistency of PA ("con(PA)") implies G_{PA}; together with the first incompleteness theorem, which says G_{PA} is neither provable nor disprovable within PA, this implies that PA augmented with either "con(PA)" or "not con(PA)" is consistent. The latter seems paradoxical, since a consistent system appears to declare its own inconsistency, and the natural number system we are familiar with is not a model of it. The paradox is resolved by a non-standard model of arithmetic. References: [V. Gitman's blog post and talk slides] [S. Reid, arXiv 2013]

I had a wonderful time, and I really appreciate my friends for joining me in this event!

Lobster olfactory scene analysis

2014/03/20

Recently, there was a press release and a YouTube video from the University of Florida about one of my recent papers on the neural code in the lobster olfactory system, along with coverage by others [e.g. 1, 2, 3, 4]. I decided to write a bit about it from my own perspective. In general, I am interested in understanding how neurons process and represent information in their output, through which they communicate with other neurons and collectively compute. In this paper, we show how a subset of olfactory neurons can be used like a stopwatch to measure temporal patterns of smell.

Unlike vision and audition, the olfactory world is perceived through filaments of odor plume riding on top of complex, chaotic turbulence. Therefore, you are not in constant contact with the odor (say the scent of freshly baked chocolate chip cookies) while you search for the source (cookies!). You might not smell it at all for long periods of time, even if the target is nearby, depending on the air flow. Dogs are well known to be good at this task, and so are many other animals. We study lobsters. Lobsters rely heavily on olfaction to track, avoid, and detect odor sources such as other lobsters, predators, and food, so it is important for them to constantly analyze olfactory sensory information to put together an olfactory scene. In the auditory system, the minuscule temporal difference between the sound arriving at each of your ears is a critical cue for estimating the direction of the sound source. Similarly, one critical component of olfactory scene analysis is the temporal structure of the odor pattern. Therefore, we wanted to find out how neurons encode and process this information.

The neurons we study are a subtype of olfactory sensory neurons. Sensory neurons detect signals and encode them into a temporal pattern of activity so that they can be processed by downstream neurons. Thus, it was very surprising when we (Dr. Yuriy Bobkov) found that these neurons spontaneously generate signals, in the form of regular bursts of action potentials, even in the absence of odor stimuli [Bobkov & Ache 2007]. We wondered why a sensory system would generate its own signal, because downstream neurons would not know whether the signals sent by these neurons were caused by external odor stimuli (smell) or generated spontaneously. However, we realized that they can work like little clocks. When external odor molecules stimulate the neuron, it sends a signal in a time-dependent manner. Each neuron is too noisy to be a precise clock, but there is a whole population of these neurons, and together they can measure the temporal aspects critical for olfactory scene analysis. These temporal aspects, combined with other cues such as local flow information and navigation history, can in turn be used to track targets and estimate distances to sources. Furthermore, this temporal memory was previously believed to be formed in the brain, but our results suggest a simple yet effective mechanism at the very front end, the sensors themselves.
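
To give a flavor of the idea, here is a deliberately simplified toy in Python (my own caricature, not the actual model or analysis from the paper): a population of imprecise, spontaneously bursting "clocks" that are phase-reset by an odor pulse can still report how much time has passed since the pulse.

```python
# Toy illustration (not the model from the paper): noisy, spontaneously
# bursting "clock" neurons are phase-reset by an odor pulse. Individually
# each clock is imprecise, but the population phase still tracks the time
# elapsed since the pulse.
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 200
period = 1.0 + 0.05 * rng.normal(size=n_neurons)   # ~1 s burst period, 5% jitter

def population_phase(t_elapsed):
    """Circular mean phase (in cycles) of the population t_elapsed after a reset."""
    phases = (t_elapsed / period) % 1.0             # each neuron's phase
    z = np.mean(np.exp(2j * np.pi * phases))        # population phase vector
    return (np.angle(z) / (2 * np.pi)) % 1.0, np.abs(z)

for t in [0.1, 0.25, 0.4, 0.8, 2.0]:
    mean_phase, coherence = population_phase(t)
    print(f"t = {t:.2f}s  population phase = {mean_phase:.2f} cycles, "
          f"coherence = {coherence:.2f}")
# Within one burst cycle the mean phase reads out elapsed time directly; over
# longer intervals the clocks drift apart and the coherence decays, which
# itself carries coarser information about how long ago the pulse occurred.
```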

Applications: Current electronic nose technology is mostly focused on discriminating 'what' the odor is. We bring to the table how animals might use the 'when' information to reconstruct the 'where' information, putting together an olfactory scene. Perhaps it could inspire novel search strategies for odor-tracking robots. Another possibility is to build neuromorphic chips that emulate artificial neurons using the same principle, encoding temporal patterns into instantaneously accessible information. This could be part of a low-power sensory processing unit in a robot. The principle we found is likely not limited to lobsters and could be shared by other animals and sensory modalities.

EDIT: There's an article in The Analytical Scientist about this paper.

References:

  • Bobkov, Y. V. and Ache, B. W. (2007). Intrinsically bursting olfactory receptor neurons. J Neurophysiol, 97(2):1052-1057.
  • Park, I. M., Bobkov, Y. V., Ache, B. W., and Príncipe, J. C. (2014). Intermittency coding in the primary olfactory system: A neural substrate for olfactory scene analysis. The Journal of Neuroscience, 34(3):941-952. [pdf]
This work by I. Memming Park is licensed under a Creative Commons Attribution 4.0 International License.

Scalable models workshop recap

2014/03/08


Evan and I wrote a summary of the COSYNE 2014 workshop we organized!

Originally posted on Scalable models for high-dimensional neural data:

[ This blog post is collaboratively written by Evan and Memming ]
The Scalable Models workshop was a remarkable success! It attracted a huge crowd from the wee morning hours till the 7:30 pm close of the day. We attracted so much attention that we had to relocate from our original (tiny) allotted room (Superior A) to a (huge) lobby area (Golden Cliff). The talks offered both philosophical perspectives and methodological aspects, reflecting diverse viewpoints and approaches to high-dimensional neural data. Many of the discussions continued the next day in our sister workshop. Here we summarize each talk:

Konrad Körding – Big datasets of spike data: why it is coming and why it is useful

Konrad started off the workshop by posing some philosophical questions about how big data might change the way we do science. He argued that neuroscience is rife with theories (for instance, how uncertainty is…


A guide to discrete entropy estimators

2014/02/09

Shannon's entropy is the fundamental building block of information theory: a theory of communication, compression, and randomness. Entropy has a very simple definition, H = - \sum_i p_i \log_2(p_i), where p_i is the probability of the i-th symbol. However, estimating entropy from observations is surprisingly difficult and is still an active area of research. Typically, one does not have enough samples compared to the number of possible symbols (the so-called "undersampled regime"), there is no unbiased estimator [Paninski 2003], and the convergence rate of a consistent estimator can be arbitrarily slow [Antos and Kontoyiannis, 2001]. There are many estimators that aim to overcome these difficulties to some degree. Deciding which estimator to use can be overwhelming, so here's my recommendation in the form of a flow chart:
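
To see how badly the naive approach fails, here is a small toy simulation (my own, not from any of the papers below) of the plug-in estimator \hat{H} = -\sum_i \hat{p}_i \log_2 \hat{p}_i on a uniform 1000-symbol source:

```python
# The naive "plug-in" entropy estimate is severely biased downward when the
# number of samples is comparable to the number of symbols.
import numpy as np

rng = np.random.default_rng(0)
K = 1000                        # number of symbols
true_H = np.log2(K)             # entropy of the uniform distribution, ~9.97 bits

def plugin_entropy(samples, K):
    counts = np.bincount(samples, minlength=K)
    phat = counts / counts.sum()
    nz = phat > 0
    return -np.sum(phat[nz] * np.log2(phat[nz]))

for n in [100, 1000, 10000, 100000]:
    estimates = [plugin_entropy(rng.integers(K, size=n), K) for _ in range(20)]
    print(f"n = {n:>6}: plug-in = {np.mean(estimates):5.2f} bits "
          f"(true = {true_H:.2f})")
# With 100 samples of a 1000-symbol source the plug-in estimate is off by
# several bits; the estimators discussed below are designed to close this gap.
```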

Which entropy estimator is best for me? Follow the arrows!


Let me explain them one by one. First of all, if you have continuous (analogue) observations, read the title of this post. CDM, PYM, DPM, and NSB are Bayesian estimators, meaning that they make explicit probabilistic assumptions. These estimators provide posterior distributions or credible intervals as well as point estimates of entropy. Note that the assumptions made by these estimators do not have to hold for them to be good entropy estimators. In fact, even when the true distribution is outside the assumed class, these estimators are consistent, and they often give reasonable answers even in the undersampled regime.

Nemenman-Shafee-Bialek (NSB) uses a mixture of Dirichlet priors chosen so that the implied prior on entropy is approximately uninformative. This reduces the bias of the estimator significantly in the undersampled regime, because a priori the distribution could have any entropy.
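
A rough way to see why the mixture is needed (my own illustration, not NSB's derivation): distributions sampled from a single symmetric Dirichlet prior have entropies that concentrate tightly around a value set by the concentration parameter \alpha, so a fixed Dirichlet prior is far from uninformative about entropy.

```python
# Entropies of distributions drawn from a symmetric Dirichlet prior
# concentrate around an alpha-dependent value; NSB mixes over alpha so the
# implied prior on entropy becomes roughly flat.
import numpy as np

rng = np.random.default_rng(0)
K = 1000

def entropy_bits(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

for alpha in [0.01, 0.1, 1.0, 10.0]:
    H = [entropy_bits(rng.dirichlet(alpha * np.ones(K))) for _ in range(200)]
    print(f"alpha = {alpha:5}: entropy = {np.mean(H):5.2f} +/- {np.std(H):.2f} "
          f"bits  (max possible {np.log2(K):.2f})")
# Each alpha pins the prior entropy to a narrow band; only by mixing over
# alpha does the prior let the data speak about the entropy.
```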

Centered Dirichlet mixture (CDM) is a Bayesian estimator with a special prior designed for binary observations. It comes in two flavors, depending on whether your observations are close to independent (DBer) or the total number of 1s is a good summary statistic (DSyn).

Pitman-Yor mixture (PYM) and Dirichlet process mixture (DPM) are for an infinite or unknown number of symbols. In many cases, natural data have a vast number of possible symbols, as with species counts or language, and follow power-law (scale-free) distributions. Power-law distributions can hide a lot of entropy in their tails, in which case PYM is recommended. If you expect the sorted tail probabilities to decay exponentially, then DPM is appropriate. See my previous post for more.

Non-Bayesian estimators come in many different flavors:

Best upper bound (BUB) estimator is a bias correction method which bounds the maximum error in entropy estimation.

Coverage-adjusted estimator (CAE) uses the Good-Turing estimate of the "coverage" (1 minus the unobserved probability mass) and combines it with a Horvitz-Thompson estimator of the entropy.
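
A minimal sketch of the idea, assuming the usual Chao-Shen form of the coverage adjustment (check the CAE references below for the authoritative formula):

```python
# Coverage-adjusted (Chao-Shen style) entropy estimate: shrink the empirical
# probabilities by the Good-Turing coverage estimate, then apply a
# Horvitz-Thompson correction for symbols that may have been missed.
import numpy as np

def cae_entropy_bits(samples):
    """Coverage-adjusted entropy estimate (bits) from a 1-D array of symbols."""
    n = len(samples)
    counts = np.unique(samples, return_counts=True)[1]
    f1 = np.sum(counts == 1)                 # number of singletons
    coverage = 1.0 - f1 / n                  # Good-Turing estimate of observed mass
    p_adj = coverage * counts / n            # coverage-adjusted probabilities
    # probability that each observed symbol shows up at least once in n draws
    inclusion = 1.0 - (1.0 - p_adj) ** n
    return -np.sum(p_adj * np.log2(p_adj) / inclusion)

rng = np.random.default_rng(0)
p = rng.dirichlet(0.5 * np.ones(2000))       # a skewed 2000-symbol distribution
true_H = -np.sum(p * np.log2(p))
samples = rng.choice(len(p), size=500, p=p)  # heavily undersampled
print(f"true: {true_H:.2f} bits, CAE: {cae_entropy_bits(samples):.2f} bits")
```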

The James-Stein (JS) estimator regularizes entropy by plugging in a mixture of the uniform distribution and the empirical histogram, with the mixture weight set by James-Stein shrinkage. The main advantage of JS is that it also produces an estimate of the distribution.
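
A sketch of the recipe in code (the closed-form shrinkage intensity is the one I recall from the Hausser & Strimmer paper; treat their R package entropy as the reference implementation):

```python
# James-Stein entropy estimate: shrink the empirical frequencies toward the
# uniform distribution, then plug the shrunken distribution into the entropy
# formula. Also returns the shrunken distribution itself.
import numpy as np

def js_entropy_bits(counts):
    counts = np.asarray(counts, dtype=float)
    n, K = counts.sum(), len(counts)
    theta_ml = counts / n                     # empirical frequencies
    target = np.ones(K) / K                   # shrink toward the uniform distribution
    num = 1.0 - np.sum(theta_ml ** 2)
    den = (n - 1.0) * np.sum((target - theta_ml) ** 2)
    lam = 1.0 if den == 0 else min(1.0, max(0.0, num / den))  # shrinkage intensity
    theta_js = lam * target + (1.0 - lam) * theta_ml
    nz = theta_js > 0
    return -np.sum(theta_js[nz] * np.log2(theta_js[nz])), theta_js

counts = np.array([10, 4, 2, 1, 1, 0, 0, 0, 0, 0])   # toy histogram, K = 10
H, theta = js_entropy_bits(counts)
print(f"JS entropy = {H:.2f} bits")
```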

The Unseen estimator uses a Poissonization of the fingerprint and linear programming to find a plausible underlying fingerprint, and uses its entropy as the estimate.

Other notable estimators include (1) the bias correction method by Panzeri & Treves (1996), which has been popular for a long time, (2) the Grassberger estimator, and (3) an asymptotic expansion of NSB that only works in the extremely undersampled regime and is inconsistent [Nemenman 2011]. These methods are faster than the others, if you need speed.

There are many software packages available. Our estimators, CDMentropy and PYMentropy, are implemented in MATLAB under the BSD license (by now you have surely noticed that this is shameless self-promotion!). For R, some of these estimators are implemented in a package called entropy (on CRAN, written by the authors of the JS estimator). There is also a python package called pyentropy. Targeting a more neuroscience-specific audience, the Spike Train Analysis Toolkit contains a few of these estimators implemented in MATLAB/C.

References

  • A. Antos and I. Kontoyiannis. Convergence properties of functional estimates for discrete distributions. Random Structures & Algorithms, 19(3-4):163–193, 2001.
  • E. Archer*, I. M. Park*, and J. Pillow. Bayesian estimation of discrete entropy with mixtures of stick-breaking priors. In P. Bartlett, F. Pereira, C. Burges, L. Bottou, and K. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 2024–2032. MIT Press, Cambridge, MA, 2012. [PYMentropy]
  • E. Archer*, I. M. Park*, J. Pillow. Bayesian Entropy Estimation for Countable Discrete Distributions. arXiv:1302.0328, 2013. [PYMentropy]
  • E. Archer, I. M. Park, and J. Pillow. Bayesian entropy estimation for binary spike train data using parametric prior knowledge. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, 2013. [CDMentropy]
  • A. Chao and T. Shen. Nonparametric estimation of Shannon’s index of diversity when there are unseen species in sample. Environmental and Ecological Statistics, 10(4):429–443, 2003. [CAE]
  • P. Grassberger. Estimating the information content of symbol sequences and efficient codes. Information Theory, IEEE Transactions on, 35(3):669–675, 1989.
  • J. Hausser and K. Strimmer. Entropy inference and the James-Stein estimator, with application to nonlinear gene association networks. The Journal of Machine Learning Research, 10:1469–1484, 2009. [JS]
  • I. Nemenman. Coincidences and estimation of entropies of random variables with large cardinalities. Entropy, 13(12):2013–2023, 2011. [Asymptotic NSB]
  • I. Nemenman, F. Shafee, and W. Bialek. Entropy and inference, revisited. In Advances in Neural Information Processing Systems 14, pages 471–478. MIT Press, Cambridge, MA, 2002. [NSB]
  • I. Nemenman, W. Bialek, and R. Van Steveninck. Entropy and information in neural spike trains: Progress on the sampling problem. Physical Review E, 69(5):056111, 2004. [NSB]
  • L. Paninski. Estimation of entropy and mutual information. Neural Computation, 15:1191–1253, 2003. [BUB]
  • S. Panzeri and A. Treves. Analytical estimates of limited sampling biases in different information measures. Network: Computation in Neural Systems, 7:87–107, 1996.
  • P. Valiant and G. Valiant. Estimating the Unseen: Improved Estimators for Entropy and other Properties. In Advances in Neural Information Processing Systems 26, pp. 2157-2165, 2013. [UNSEEN]
  • V. Q. Vu, B. Yu, and R. E. Kass. Coverage-adjusted entropy estimation. Statistics in medicine, 26 (21):4039–4060, 2007. [CAE]

NIPS 2013

2013/12/13

This year, NIPS (Neural Information Processing Systems) had a record registration of 1900+ (it has been growing over the years) with a 25% acceptance rate. This year, most of the reviews and rebuttals are also available online. I was one of the many who were live-tweeting via #NIPS2013 throughout the main meeting and workshops.

Compared to previous years, it seemed like there was less machine learning in the invited/keynote talks. I also noticed more industry engagement (Zuckerberg from Facebook was here (also this), and so was the Amazon drone) as well as increasing interest in neuroscience. My subjective list of trendy topics for the meeting: low dimensionality, deep learning (and dropout), graphical models, theoretical neuroscience, computational neuroscience, big data, online learning, one-shot learning, and calcium imaging. Next year, NIPS will be in Montreal, Canada.

I presented 3 papers in the main meeting (and hence missed the first two days of poster sessions), and attended 2 workshops (High-Dimensional Statistical Inference in the Brain; Acquiring and analyzing the activity of large neural ensembles; Terry Sejnowski gave the first talk in both). Below are the talks/posters/papers that I found interesting as a computational neuroscientist / machine learning enthusiast.

Theoretical Neuroscience

Neural Reinforcement Learning (Posner lecture): A Neural Substrate of Prediction and Reward
Peter Dayan

He described how theoretical quantities in reinforcement learning, such as the TD error, correlate with neuromodulators such as dopamine. Then he went on to the Q-learning (max) and SARSA (mean) learning rules. The third point of the talk was the difference between model-based and model-free reinforcement learning. Model-based learning can use how the world (state space) is organized and plan accordingly, while model-free learning learns values associated with each state. Human fMRI evidence shows an interesting mixture of model-based and model-free learning.
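
For readers who have not seen them, these are the textbook forms of the two update rules he contrasted (generic Python, not anything from the talk); the TD error is the quantity that correlates with dopaminergic activity.

```python
# Q-learning backs up the max over next actions (off-policy); SARSA backs up
# the action actually taken under the current policy (on-policy).
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    td_error = r + gamma * np.max(Q[s_next]) - Q[s, a]    # off-policy (max)
    Q[s, a] += alpha * td_error
    return td_error                                        # the "dopamine-like" signal

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    td_error = r + gamma * Q[s_next, a_next] - Q[s, a]     # on-policy (taken action)
    Q[s, a] += alpha * td_error
    return td_error

Q = np.zeros((5, 2))                  # 5 states, 2 actions
print(q_learning_update(Q, s=0, a=1, r=1.0, s_next=2))
print(sarsa_update(Q, s=0, a=1, r=0.0, s_next=2, a_next=0))
```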

A Memory Frontier for Complex Synapses
Subhaneil Lahiri, Surya Ganguli


Despite its molecular complexity, most systems-level neural models describe a synapse as a scalar-valued strength. Biophysical evidence suggests discrete states within the synapse and discrete levels of synaptic strength, which is troublesome because memory is quickly overwritten for discrete/binary-valued synapses. Surya talked about how to maximize memory capacity (measured as the area under the SNR-versus-time curve) for synapses with hidden states, over all possible Markovian models. Using first-passage times, they ordered the states and derived an upper bound: the area is bounded by O(\sqrt{N}(M-1)), where M and N denote the number of internal states per synapse and the number of synapses, respectively. Therefore, fewer synapses with more internal states per synapse are better for longer memories.

A theory of neural dimensionality, dynamics and measurement: the neuroscientist and the single neuron (workshop)
Surya Ganguli

Several recent studies showed low-dimensional state-space structure in trial-averaged population activity (e.g., Churchland et al. 2012, Mante et al. 2013). Surya asks what would happen to the PCA analysis of neural trajectories if we recorded from 1 billion neurons. He defines the participation ratio D = \frac{\left(\sum_i \lambda_i \right)^2}{\sum_i \lambda_i^2} as a measure of dimensionality, and through a series of clever upper bounds, estimates the dimensionality of the neural state space needed to capture 95% of the variance given the task complexity. In addition, assuming incoherence (mixed or complex tuning), neural measurements can be seen as random projections of the high-dimensional space; together with low-dimensional dynamics, the data recover the correct true dimension. He claims that with current task designs the neural state space is limited by task complexity, and we would not see higher dimensions as we increase the number of simultaneously observed neurons.
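
The participation ratio is easy to compute from data; here is a small sketch (my own, with a made-up random-projection example) showing that 1000 "neurons" driven by 4-dimensional latent dynamics give D close to 4:

```python
# Participation ratio D = (sum of eigenvalues)^2 / (sum of squared eigenvalues)
# of the population covariance, computed from a toy random-projection dataset.
import numpy as np

def participation_ratio(X):
    """X: time (or trials) x neurons matrix of activity."""
    X = X - X.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    eigvals = np.clip(eigvals, 0, None)
    return eigvals.sum() ** 2 / np.sum(eigvals ** 2)

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 4))        # 4-dimensional latent dynamics
mixing = rng.normal(size=(4, 1000))       # random projection to 1000 "neurons"
X = latent @ mixing + 0.1 * rng.normal(size=(500, 1000))
print(participation_ratio(X))             # close to 4, not 1000
```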

Distributions of high-dimensional network states as knowledge base for networks of spiking neurons in the brain (workshop)
Wolfgang Maass

In a series of papers (Büsing et al. 2011, Pecevski et al. 2011, Habenschuss et al. 2013), Maass showed how noisy spiking neural networks can perform probabilistic inference via sampling. From Boltzmann machines (maximum entropy models) to constraint satisfaction problems (e.g., Sudoku), noisy SNNs can be designed to sample from the posterior, and they converge exponentially fast from any initial state. This is done by irreversible MCMC sampling over the neurons, and it can be generalized to continuous time and state space.
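
As a caricature of the neural-sampling idea (standard Gibbs sampling of a small Boltzmann machine, not the irreversible continuous-time samplers used in those papers): each binary "neuron" fires with a sigmoid probability of its summed input, and the resulting stochastic network activity constitutes samples from the model's equilibrium distribution.

```python
# Gibbs sampling in a small Boltzmann machine: asynchronous stochastic unit
# updates play the role of noisy neurons, and the network state sequence is a
# sample from the Boltzmann distribution defined by (W, b).
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2                       # symmetric couplings
np.fill_diagonal(W, 0.0)
b = rng.normal(scale=0.3, size=n)       # biases

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

s = rng.integers(0, 2, size=n).astype(float)
samples = []
for sweep in range(20000):
    for i in range(n):                  # asynchronous single-unit updates
        u = W[i] @ s + b[i]             # "membrane potential" of unit i
        s[i] = float(rng.random() < sigmoid(u))
    samples.append(s.copy())

samples = np.array(samples[2000:])      # drop burn-in
print("sampled mean activity:", np.round(samples.mean(axis=0), 2))
print("correlation of units 0 and 1:",
      np.round(np.corrcoef(samples[:, 0], samples[:, 1])[0, 1], 2))
```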

Epigenetics in Cortex (workshop)
Terry Sejnowski


Using an animal model of schizophrenia based on ketamine, which shows a similar decrease in gamma-band activity in the prefrontal cortex and a decrease in PV+ inhibitory neurons, it is known that Aza and Zeb (DNA methylation inhibitors) prevent this effect of ketamine. Furthermore, in Lister et al. 2013, they showed that a special type of DNA methylation (mCH) in the brain accumulates over the lifespan, coincides with synaptogenesis, and regulates gene expression.

Optimal Neural Population Codes for High-dimensional Stimulus Variables
Zhuo Wang, Alan Stocker, Daniel Lee

They extend their paper from the previous year to high-dimensional stimulus variables.

Computational Neuroscience

What can slice physiology tell us about inferring functional connectivity from spikes? (workshop)
Ian Stevenson

Our ability to infer functional connectivity among neurons is limited by data. Using current injection, he investigated exactly how much data is required to detect synapses of various strengths under the generalized linear model (GLM). He showed interesting scaling plots, both in terms of the (square root of the) firing rate and the (inverse) amplitude of the post-synaptic current.

Hierarchical Modular Optimization of Convolutional Networks Achieves Representations Similar to Macaque IT and Human Ventral Stream (main)
Mechanisms Underlying visual object recognition: Humans vs. Neurons vs. machines (tutorial)
Daniel L. Yamins*, Ha Hong*, Charles Cadieu, James J. DiCarlo

They built a model that can predict the (average) activity of V4 and IT neurons in response to objects. Current computer vision methods do not perform well under the high variability induced by transformation, rotation, etc., while IT neuron responses seem to be quite invariant to these. By optimizing a collection of convolutional deep networks in different hyperparameter (structural parameter) regimes and combining them, they showed that they can predict the average IT (and V4) responses reasonably well.

Least Informative Dimensions
Fabian Sinz, Anna Stockl, Jan Grewe, Jan Benda

Instead of maximizing the mutual information between the features and the target variable for dimensionality reduction, they propose to minimize the dependence between the non-feature space and the joint of the target variable and the feature space. As the dependence measure, they use HSIC (Hilbert-Schmidt independence criterion: the squared distance between the joint and the product of the marginals embedded in a Hilbert space). The optimization problem is non-convex, and to determine the dimension of the feature space, a series of hypothesis tests is necessary.
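
For reference, the (biased) empirical HSIC they build on is simple to compute; a sketch with Gaussian kernels (my own toy example, with an arbitrary kernel width sigma):

```python
# Biased empirical HSIC: with kernel Gram matrices K (on X) and L (on Y),
# HSIC = trace(K H L H) / (n-1)^2, where H centers the Gram matrices.
import numpy as np

def gaussian_gram(X, sigma=1.0):
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    n = X.shape[0]
    K, L = gaussian_gram(X, sigma), gaussian_gram(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(300, 1))
y_dep = x ** 2 + 0.1 * rng.normal(size=(300, 1))   # dependent but uncorrelated
y_ind = rng.normal(size=(300, 1))                  # independent
print(f"HSIC dependent:   {hsic(x, y_dep):.4f}")
print(f"HSIC independent: {hsic(x, y_ind):.4f}")   # much smaller
```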

Dimensionality, dynamics and (de)synchronisation in the auditory cortex (workshop)
Maneesh Sahani

Maneesh compared the underlying latent dynamical systems fit to the synchronized state (drowsy/inattentive/urethane/ketamine-xylazine) and the desynchronized state (awake/attentive/urethane+stimulus/fentanyl-medetomidine-midazolam). From the population response, he fit a 4-dimensional linear dynamical system, then transformed the dynamics matrix into a "true Schur form" such that 2 pairs of 2D dynamics could be visualized. He showed that the dynamics fit to either state were actually very similar.
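
A small sketch of that decomposition (my reconstruction of the idea, using scipy's real Schur form on a toy 4-D dynamics matrix):

```python
# The real ("true") Schur form of a 4-D dynamics matrix exposes 2x2 blocks on
# the diagonal, each corresponding to a plane of rotational/decaying dynamics
# that can be visualized separately.
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
# toy 4-D dynamics matrix built from two decaying rotations, slightly mixed
A = np.zeros((4, 4))
A[:2, :2] = 0.95 * np.array([[np.cos(0.3), -np.sin(0.3)],
                             [np.sin(0.3),  np.cos(0.3)]])
A[2:, 2:] = 0.90 * np.array([[np.cos(0.8), -np.sin(0.8)],
                             [np.sin(0.8),  np.cos(0.8)]])
A = A + 0.05 * rng.normal(size=(4, 4))

T, Z = schur(A, output='real')        # A = Z @ T @ Z.T, with Z orthogonal
print(np.round(T, 2))                 # block upper-triangular with 2x2 blocks
# Each 2x2 diagonal block of T defines a plane (spanned by columns of Z) in
# which the latent trajectory rotates and decays; projecting the trajectory
# onto those planes gives the two 2-D pictures of the dynamics.
```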

Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions (main)
Extracting information from calcium imaging data (workshop)
Eftychios A. Pnevmatikakis, Liam Paninski

Eftychios has been developing various methods to infer spike trains from calcium imaging movies. He showed a compressive sensing framework in which spiking activity can be inferred. A plausible implementation could use a digital micromirror device to produce "random" binary pixel patterns onto which the activity is projected.

Andreas Tolias (workshop talk)

Noise correlations in the brain are small (0.01 range; e.g., Renart et al. 2010). Anesthetized animals have higher firing rates and higher noise correlations (0.06 range). He showed how a latent variable model (GPFA) can be used to decompose the noise correlation into the part explained by the latent variable and the rest. Using 3D acousto-optical deflectors (AOD), he is observing 500 neurons simultaneously. He (and Dimitri Yatsenko) used latent-variable graphical lasso to enforce a sparse inverse covariance matrix, and found that the estimate is more accurate and very different from raw noise correlation estimates.

Whole-brain functional imaging and motor learning in the larval zebrafish (workshop)
Misha Ahrens

Using light-sheet microscopy, he imaged the calcium activity of 80,000 neurons simultaneously (~80% of all the neurons) at a 1-2 Hz sampling rate (Ahrens et al. 2013). From this big data set, collected while the fish was stimulated visually, Jeremy Freeman and Misha analyzed the dynamics (with PCA) and the tuning to orienting stimuli, and made very cool 3D visualizations.

Normative models and identification of nonlinear neural representations (workshop)
Matthias Bethge


In the first half of his talk, Matthias talked about probabilistic models of natural images (Theis et al. 2012), which I didn't understand very well. In the second half, he talked about an extension of the GQM (generalized quadratic model) called the STM (spike-triggered mixture) model. The model is a GQM with quadratic term \mathbf{x}^\top (\Sigma_0^{-1} - \Sigma_1^{-1}) \mathbf{x} if the spike-triggered and non-spike-triggered distributions are Gaussian with covariances \Sigma_0 and \Sigma_1. When both distributions are allowed to be mixtures of Gaussians, it turns out the nonlinearity becomes a soft-max of quadratic terms, making it an LNLN model. [code on github]
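
A schematic of that structure (my own toy in Python, not the fitted STM or the code from the talk): with mixture-of-Gaussians class-conditional distributions, Bayes' rule gives a spiking nonlinearity that is a sigmoid of a difference of log-sum-exps (soft-maxes) of quadratic forms.

```python
# P(spike | x) via Bayes' rule when the spike-triggered and non-spike-triggered
# stimulus distributions are mixtures of Gaussians: the log-likelihoods are
# log-sum-exps of quadratic forms, yielding an LNLN-style nonlinearity.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def mixture_logpdf(x, weights, means, covs):
    comps = [np.log(w) + multivariate_normal.logpdf(x, m, c)
             for w, m, c in zip(weights, means, covs)]
    return logsumexp(comps)                  # soft-max over quadratic terms

def p_spike(x, spike_mix, nospike_mix, prior_spike=0.1):
    log_odds = (mixture_logpdf(x, *spike_mix) - mixture_logpdf(x, *nospike_mix)
                + np.log(prior_spike / (1 - prior_spike)))
    return 1.0 / (1.0 + np.exp(-log_odds))

d = 2
spike_mix = ([0.5, 0.5], [np.ones(d), -np.ones(d)], [np.eye(d) * 0.5] * 2)
nospike_mix = ([1.0], [np.zeros(d)], [np.eye(d) * 2.0])
print(p_spike(np.array([1.0, 1.2]), spike_mix, nospike_mix))
print(p_spike(np.array([0.0, 0.0]), spike_mix, nospike_mix))
```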

Inferring neural population dynamics from multiple partial recordings of the same neural circuit
Srini Turaga, Lars Buesing, Adam M. Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, Jakob Macke

Under certain observability conditions, they stitch together partially overlapping neural recordings to recover the joint covariance matrix. We read this paper earlier in the UT Austin computational neuroscience journal club.

Machine Learning

Estimating the Unseen: Improved Estimators for Entropy and other Properties
Paul Valiant, Gregory Valiant

Using "Poissonization" of the fingerprint (a.k.a. Zipf plot, count histogram, pattern, hist-hist, collision statistics, etc.), they find the simplest distribution whose expected fingerprint is close to the observed fingerprint. This is done by first splitting the histogram into an "easy" part (symbols with many observations, more than roughly the square root of the number of observations) and a "hard" part, then applying two linear programs to the hard part to optimize the (scaled) distance and the support. The algorithm, "UNSEEN", has a free parameter that controls the error tolerance. Their theorem states that the total variation distance is bounded by 1/\sqrt{c} with only k = \frac{c\, n}{\log n} samples, where n denotes the support size. The resulting estimate of the fingerprint can be used to estimate entropy, unseen probability mass, support size, and total variation distance. (code in the appendix)
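
In case the jargon is unfamiliar, the fingerprint is just the histogram of the histogram; a two-line illustration:

```python
# F[k] = number of distinct symbols observed exactly k times.
from collections import Counter

samples = ["a", "b", "a", "c", "a", "d", "b", "e"]
counts = Counter(samples)                 # symbol -> number of occurrences
fingerprint = Counter(counts.values())    # occurrence count -> number of symbols
print(fingerprint)                        # three singletons, one pair, one triple
```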

A simple example of Dirichlet process mixture inconsistency for the number of components
Jeffrey W. Miller, Matthew T. Harrison

They had already shown that the number of clusters inferred by the DP mixture model is inconsistent (at the ICERM workshop in 2012, and at last year's NIPS workshop). In this paper they give theoretical examples, one of which says: if the true distribution is a normal distribution, then the probability that the number of components inferred by the DPM (with \alpha = 1) equals 1 goes to zero as the number of samples grows.

A Kernel Test for Three-Variable Interactions
Dino Sejdinovic, Arthur Gretton, Wicher Bergsma

To detect a 3-way interaction that has a 'V' structure, they made a kernelized version of the Lancaster interaction measure. Unfortunately, the Lancaster interaction measure is incorrect for 4 or more variables, and the corrected version becomes very complicated very quickly.

B-test: A Non-parametric, Low Variance Kernel Two-sample Test
Wojciech Zaremba, Arthur Gretton, Matthew Blaschko

This work brings both test power and computational speed (Gretton et al. 2012) to MMD by using a blocked estimator, making it more practical.

Robust Spatial Filtering with Beta Divergence
Wojciech Samek, Duncan Blythe, Klaus-Robert Müller, Motoaki Kawanabe

A supervised dimensionality reduction technique. They draw a connection between the generalized eigenvalue problem and the KL divergence, and generalize to the beta divergence to gain robustness to outliers in the data.

Optimizing Instructional Policies
Robert Lindsey, Michael Mozer, William J. Huggins, Harold Pashler

This paper presents a meta-active-learning problem where active learning is used to find the best policy to teach a system (e.g., human). This is related to curriculum learning, where examples are fed to the machine learning algorithm in a specially designed order (e.g., easy to hard). This gave me ideas to enhance Eleksius!

Reconciling "priors" & "priors" without prejudice?
Remi Gribonval, Pierre Machart

This paper connects Bayesian least squares (MMSE) estimation and MAP estimation under a Gaussian likelihood. Their theorem shows that an MMSE estimate under some prior is also a MAP estimate under some other prior (or equivalently, the solution of a regularized least squares problem).

There were many more interesting things, but I’m going to stop here! [EDIT: check out these blog posts by Paul Mineiro, hundalhh, Yisong Yue, Sebastien Bubeck, Davide Chicco]

CNS 2013

2013/07/26

CNS 2013 badge

The Computational NeuroScience (CNS) conference is held annually, alternating between America and Europe. This year it was held in Paris; next year it will be in Québec City, Canada. There are more theoretical and simulation-based studies than experimental ones. Among the experimental studies, there were a lot of oscillation- and synchrony-related subjects.

Disclaimer: I was occupied with several things and was not attending the conference 100%, so my selection is heavily biased. These notes are primarily for my future reference.

Simon Laughlin. The influence of metabolic energy on neural computation (keynote)

There are three main categories of energy cost in the brain: (1) maintenance, (2) spike generation, and (3) synapses. Assuming a finite energy budget for the brain, the optimal efficient coding strategy can vary from a small number of neurons firing at high rates to a large population with sparse coding [see Fig 3, Laughlin 2001]. Variation in cost ratios across animals may be associated with different coding strategies that optimize energy per bit. He illustrated the balance through various laws-of-diminishing-returns plots. He emphasized reverse engineering the brain, and concluded with the 10 principles of neural design (transcribed from the slides, thanks to the photo by @neuroflips):
(1) save on wire, (2) make components irreducibly small, (3) send only what is needed, (4) send at the lowest rate, (5) sparsify, (6) compute directly with analogue primitives, (7) mix analogue and digital, (8) adapt, match and learn, (9) complexify (elaborate to specialize), (10) compute with chemistry??????. (question marks are from the original slide)

Sophie Deneve. Rescuing the spike (keynote)

She proposed that the high trial-to-trial variability observed in the spike trains of single neurons is due to degeneracy in the population code. There are many ways the presynaptic population can evoke similar membrane potential fluctuations in a linear readout neuron; hence, she claims that through precisely controlled lateral inhibition, the neural code is precise at the population level but looks variable if we only observe a single neuron. She briefly mentioned how a linear dynamical system might be implemented in such a coding scheme, but it seemed limited in terms of what kinds of computation can be achieved.

There were several noise correlation (joint variability in the population activity) related talks:

Joel Zylberberg et al. Consistency requirements determine optimal noise correlations in neural populations

The "sign rule" says that if the noise correlation has the opposite sign of the signal correlation, linear Fisher information (and OLE performance) is improved (see Fig 1, Averbeck, Latham, Pouget 2006). They proved a theorem confirming the sign rule in a general setup, and furthermore showed that the optimal noise correlation does NOT necessarily obey the sign rule (see Hu, Zylberberg, Shea-Brown 2013). Data from the retina do not obey the sign rule; the noise correlation is positive even for cells tuned to the same direction, yet it is still near optimal according to their theory.

Federico Carnevale et al. The role of neural correlations in a decision-making task

During a vibration detection task, cross-correlations among neurons in the premotor cortex (in a 250 ms window) were shown to be dependent on behavior (see Carnevale et al. 2012). Federico told me that there were no sharp peaks in the cross-correlations. He further extrapolated the choice probability to the network level based on a multivariate Gaussian approximation, with a simplification that categorizes neurons into two classes (transient or sustained response).

Alex Pouget and Peter Latham each gave talks in the Functional role of correlations workshop.

Both talks were on Fisher information and the effect of noise correlations. Pouget's talk focused on "differential correlations", the noise in the direction of the manifold along which the tuning curves encode information (noise that looks like signal). Peter talked about why there are so many neurons in the brain, using linear Fisher information and additive noise (but I forgot the details!).

On the first day of the workshop, I participated in the New approaches to spike train analysis and neuronal coding workshop organized by Conor Houghton and Thomas Kreuz.

Florian Mormann. Measuring spike-field coherence and spike train synchrony

He emphasized using nonparametric statistics for testing the circular variable of interest: the phase of the LFP oscillation conditioned on spike timings. In the second part, he talked about the spike-distance (see Kreuz 2012), a smooth, time-scale-invariant measure of instantaneous synchrony among spike trains.

Rodrigo Quian Quiroga. Extracting information in time patterns and correlations with wavelets

Using Haar wavelet time bins as the feature space, he proposed a scale-free linear analysis of spike trains. In addition, he proposed discovering relevant temporal structure through feature selection using mutual information. The method doesn't seem to be able to find higher-order interactions between time bins.

Ralph Andrzejak. Detecting directional couplings between spiking signals and time-continuous signals

Using distance-based directional coupling analysis (see Chicharro, Andrzejak 2009; Andrzejak, Kreuz 2011), he showed that it is possible to detect unidirectional coupling between continuous signals and spike trains via spike train distances. He mentioned the possibility of using spectral Granger causality for a similar purpose.

Adrià Tauste Campo. Estimation of directed information between simultaneous spike trains in decision making

Bayesian conditional information estimation using context-tree weighting was used to infer directed information (analogous to Granger causality, but based on mutual information). A compact Markovian structure is learned for the binary time series.

I presented a poster on Bayesian entropy estimation in the main meeting, and gave a talk about nonparametric (kernel) methods for spike trains in the workshop.

8th Black Board Day (BBD8)

2013/05/03
Birthday cake for both Gödel and Shannon!

Last Sunday (April 28th, 2013) was the 8th Black Board Day (BBD), a small informal workshop I organize every year. It started 8 years ago on my hero Kurt Gödel's 100th birthday. This year, I found out that April 30th (1916) is Claude Shannon's birthday, so I decided the theme would be his information theory.

I started by introducing probabilistic reasoning as an extension of logic in an uncertain world (as Michael Buice told us in BBD7). I quickly introduced two key concepts: Shannon's entropy H(X) = -\sum_i p_i \log_2 p_i, which additively quantifies, in bits, the uncertainty of a sequence of independent random quantities, and mutual information I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X), which quantifies how much the uncertainty about X is reduced by knowledge of Y (and vice versa; it is symmetric). I showed a simple example of the source coding theorem, which states that a symbol sequence can be compressed at most down to the length given by its entropy (information content), and stated the noisy channel coding theorem, which provides an achievable limit on the information rate that can be passed through a channel (the channel capacity). Legend says that von Neumann told Shannon to use the word "entropy" due to its similarity to the concept in physics, so I gave a quick microcanonical picture that connects the Boltzmann entropy to Shannon's entropy.
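
To make the source coding theorem concrete, here is a standard textbook example (not one we went through on the board): a source emitting four symbols with probabilities (1/2, 1/4, 1/8, 1/8) has entropy H = \frac{1}{2}\cdot 1 + \frac{1}{4}\cdot 2 + \frac{1}{8}\cdot 3 + \frac{1}{8}\cdot 3 = 1.75 bits, and the prefix code \{0, 10, 110, 111\} achieves an average codeword length of exactly 1.75 bits per symbol, meeting the limit with equality.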

Andrew Tan: Holographic entanglement entropy

Andrew on the white board

Andrew wanted to show how space-time structure can be derived from holographic entanglement entropy, and furthermore to link it to graphical models such as the restricted Boltzmann machine. He gave overviews of quantum mechanics (deterministic linear dynamics of quantum states), the density matrix, von Neumann entropy, and entanglement entropy (the entropy of a reduced density matrix, where we assume partial observation and marginalize over the rest). Then he talked about the asymptotic behavior of the entropy for ground states and in the critical regime, introduced a parameterized family of Hamiltonians that gives rise to a specific dependence structure in space-time, and sketched how the dimension of the boundary and the area of the dependence structure behave. Unfortunately, we did not have enough time to finish what he wanted to tell us (see Swingle 2012 for details).
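
A tiny concrete example of the quantities he defined (my own toy, not from the talk): the entanglement entropy of one qubit of a Bell state is exactly one bit.

```python
# For the two-qubit Bell state (|00> + |11>)/sqrt(2), the reduced density
# matrix of one qubit is maximally mixed, so its von Neumann entropy
# (the entanglement entropy) equals 1 bit.
import numpy as np

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)           # |00> + |11>, normalized
rho = np.outer(psi, psi.conj())                              # full density matrix
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)  # partial trace over B

eigvals = np.linalg.eigvalsh(rho_A)
eigvals = eigvals[eigvals > 1e-12]
entanglement_entropy = -np.sum(eigvals * np.log2(eigvals))
print(rho_A)                       # 0.5 * identity
print(entanglement_entropy)        # 1.0 bit
```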

Jonathan Pillow: Information Schminformation

Information theory is widely applied in neuroscience and sometimes in machine learning. Jonathan sympathized with Shannon's 1956 note "The Bandwagon", which criticized the possible abuse and overselling of information theory. First, Jonathan focused on the derivation of a "universal" rate-distortion theory based on the information bottleneck principle. Then he continued with his recent ideas on optimal neural codes under different Bayesian distortion functions. He showed a multiple-choice exam example where maximizing mutual information can be worse, and a linear neural coding example for different cost functions.

Free discussion time!
