Liquid State Machines and Cultured Cortical Networks: What we have learned
The idea of the Liquid State Machine (LSM) is to create a nonlinear, effectively random transformation from the input signal into a spatio-temporal pattern (the liquid state), so that learning is achieved by adapting a readout from the liquid state. It is analogous to kernel methods in the sense that the input is transformed into a higher-dimensional feature space; in the LSM, however, the transformation is explicit, random, and realized by a dynamical system. In this project, the dynamical system (the liquid) is replaced by a living cultured cortical network, and a computer is used as the readout. The LSM has been suggested as a biologically plausible computational mechanism, and this project was the first to attempt the creation of a hybrid LSM system.
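The division of labor described above (fixed random liquid, trained linear readout) can be sketched with a conventional simulated liquid in place of the culture. All sizes, the tanh nonlinearity, the spectral-radius scaling, and the ridge regularizer below are illustrative choices for the sketch, not part of the project's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "liquid": a random recurrent tanh network.
n_in, n_res, T = 1, 100, 500
W_in = rng.uniform(-1, 1, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep the dynamics fading, near the edge of stability

u = np.sin(np.linspace(0, 20, T)).reshape(T, n_in)  # toy input signal
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    # Liquid state: a nonlinear spatio-temporal expansion of the input history.
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Only the linear readout is trained (ridge regression); the task here is
# to reproduce a delayed copy of the input, i.e. a short-term memory task.
delay = 5
y_target = np.roll(u[:, 0], delay)
A, b = states[delay:], y_target[delay:]
w_out = np.linalg.solve(A.T @ A + 1e-3 * np.eye(n_res), A.T @ b)
y_hat = A @ w_out
print("readout MSE:", np.mean((y_hat - b) ** 2))
```

The liquid is never adapted; swapping the simulated reservoir for recorded culture activity changes only where `states` comes from, which is the sense in which the hybrid system replaces the liquid.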
We faced several challenges during the project: (1) spontaneous dynamics that strongly drive the system, (2) nonstationarity, (3) noise and unreliable responses, and (4) unknown time scales. After analyzing the dynamics [Goswami2005, Park2006], an input stimulation pattern was developed that attenuates the highly recurrent activation caused by low-frequency stimulation and stabilizes short-term synaptic depression. One key property of the LSM is input-output separability. We tested the separability of the inputs from the spike train observations, and a simple algorithm that tracks non-stationarity achieved 99.8% correct classification [Dockendorf2008]. However, the input space under our current method is limited, which makes the system impractical in its current state. The balance between the input stimulation and the recurrent dynamics also needs to be adjusted, since the response does not propagate significantly, which an LSM requires in order to implement a universal filter.
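The flavor of such a separability test, though not the actual tracking algorithm of [Dockendorf2008], can be illustrated by asking whether a simple readout can recover which of two hypothetical stimulation classes produced a binned spike-count response. The rates, bin counts, and nearest-centroid rule below are all assumptions for the sketch, chosen to be easily separable.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical responses: two stimulation classes evoke Poisson spike counts
# in 10 time bins; class B fires at a higher mean rate in every bin
# (an assumed, easily separable case).
n_bins, n_trials = 10, 200
rate_a = rng.uniform(2.0, 6.0, n_bins)
rate_b = rate_a + 2.0
X_a = rng.poisson(rate_a, (n_trials, n_bins))
X_b = rng.poisson(rate_b, (n_trials, n_bins))

# Nearest-centroid readout: estimate class means on half the trials,
# classify held-out trials by Euclidean distance to the centroids.
half = n_trials // 2
c_a, c_b = X_a[:half].mean(0), X_b[:half].mean(0)

def classify(x):
    return 0 if np.linalg.norm(x - c_a) <= np.linalg.norm(x - c_b) else 1

preds = np.array([classify(x) for x in X_a[half:]] +
                 [classify(x) for x in X_b[half:]])
labels = np.array([0] * half + [1] * half)
acc = (preds == labels).mean()
print(f"held-out separability accuracy: {acc:.3f}")
```

In the real experiment the class-conditional statistics drift over time, which is why the actual algorithm must track non-stationarity rather than use fixed centroids.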
Although we could not achieve the original goal of implementing a full-fledged LSM, this project yielded numerous fruitful byproducts in both the biophysics of cortical cultures and adaptive filter theory. The spontaneously active dynamics gave us hints for designing computational devices with transiently stable states [Ozturk2005, Ozturk2006]. Fractal analysis and dynamical modeling revealed a long-range dependence structure and a power-law distribution of interspike intervals [Goswami2005, Park2006]. The effort to model the cultured neuronal network with biologically plausible components resulted in the development of novel rules for self-regulating synaptic plasticity and input scaling [Dockendorf2007]. This model also highlighted an important problem in the measurement of plasticity: instantaneous readouts of the weights were more highly correlated with the current synaptic depression parameters than with the actual weights in the model. For computation in the spike train domain, motivated by the analysis of the dynamics and the implementation of the readout, we successfully developed a new reproducing kernel Hilbert space (RKHS) for spike trains [Paiva2007a]. A large class of signal processing algorithms based on inner products can now be applied to point processes in a straightforward manner [Park2008a, Paiva2008a]. Other approaches to point process filtering, such as sequential estimation based on the particle filter [Wang2006] and a Wiener-filter-like solution for the integrate-and-fire neuron [Park2007], were developed in a similar spirit. We are currently working on quantifying the reliability and precision of the culture's responses with a new point process model that differs significantly from intensity-function-based theories.
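One concrete instance of such a spike-train inner product is the memoryless cross-intensity (mCI) kernel, which smooths each spike train with an exponential and integrates the product of the smoothed intensities; this reduces, up to a constant factor, to a double sum of Laplacian kernels over spike-time pairs. The spike times and kernel width below are toy values for illustration.

```python
import numpy as np

def mci_kernel(s1, s2, tau=0.01):
    """Memoryless cross-intensity (mCI) inner product between two spike
    trains, computed as a double sum of Laplacian kernels over all pairs
    of spike times (times and tau in seconds)."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    if s1.size == 0 or s2.size == 0:
        return 0.0
    return float(np.sum(np.exp(-np.abs(s1[:, None] - s2[None, :]) / tau)))

# With an inner product in hand, standard algorithms follow directly;
# e.g. a kernel-induced squared distance between two toy spike trains.
a = [0.010, 0.052, 0.130]
b = [0.012, 0.055, 0.210]
d2 = mci_kernel(a, a) + mci_kernel(b, b) - 2 * mci_kernel(a, b)
print("squared kernel distance:", d2)
```

Because the kernel is symmetric and positive definite, the induced distance is well defined, which is what lets inner-product-based algorithms (filters, PCA, clustering) operate on point processes directly.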
CRCNS (NSF) 2008 PI meeting