AISB Event Bulletin Item
CALL FOR PARTICIPATION: Computational Neurodynamics Group Seminar, 12th Oct 2011, LONDON
Title: The Retroaxonal Hypothesis: Towards Speech Recognition with Spiking Neuronal Networks
Speaker: Pierre Yger, Department of Bioengineering, Imperial College London
Wednesday 12th October, 16:00-17:30, Room 343, Huxley Building, South Kensington Campus
Abstract: Understanding the mechanisms by which synapses between neurons are modified is a fundamental goal in understanding the learning properties of the brain. To tackle this question, we use spiking neuronal networks that provide an accurate description of the ongoing activity often observed in vivo in awake animals. Following the idea of the Liquid State Machine, we aim to develop a general learning rule that makes spiking networks suitable for real-world information processing tasks, such as speech recognition. After being transformed by a cochlea model, spoken digits are turned into spike trains and used as raw inputs to train a generic spiking neuronal network governed by a learning rule derived from an optimization principle, similar to the theory of Bienenstock, Cooper and Munro. Using this learning rule, which is more flexible and stable than Spike Timing Dependent Plasticity, we will test a novel and ambitious hypothesis, recently proposed on the basis of biological evidence, called the "retroaxonal hypothesis": strengthening of a neuron's output synapses stabilizes recent changes in the same neuron's inputs. Supported by several lines of biological evidence, this could provide a useful mechanism for the development of computational units in such spiking neuronal networks.

Biography: Pierre Yger is a postdoctoral fellow under the supervision of Dr Kenneth D. Harris in the Department of Bioengineering at Imperial College. With a background in computer science, he completed a PhD in Yves Fregnac's lab in Gif-sur-Yvette, near Paris. Most of that work focused on the simulation and dynamics of generic spiking neuronal networks at a large scale, using simplified neuron models such as the integrate-and-fire. It used a particular framework, the balanced random network, to recreate a dynamical regime close to the one observed in vivo, in which neurons spike at low rates with an irregular discharge.
That work showed that neuronal networks, such as those observed in primary cortical areas in vivo, can operate at the border of a particular dynamical regime, deterministic chaos. It also offers a functional role for the ongoing activity in the brain: instead of being considered simply as noise corrupting the signal, it should be seen as a prior over the activity expected by the system. To establish a match between evoked and spontaneous activity, and to have evoked patterns replayed later in spontaneous activity (as has been shown in vivo), new rules of unsupervised learning with neuronal plasticity were explored, incorporating homeostatic constraints in the framework of metaplasticity. The main result is the design of a biophysically plausible plasticity rule, more general than current Spike Timing Dependent Plasticity, that allows long-lasting traces of the inputs to be stored in neuronal networks and links several experimental results on plasticity reported in the literature.
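To give a concrete flavour of the kind of rule the abstract refers to, the sketch below simulates a single leaky integrate-and-fire neuron whose input weights follow a textbook BCM-style rule with a sliding threshold (potentiation when recent postsynaptic activity exceeds the threshold, depression below it). This is a minimal illustration only, not the speaker's actual model: the single-neuron setup, the rate traces, and every parameter value are assumptions chosen for clarity.

```python
# Minimal sketch (illustrative, not the speaker's model): a leaky
# integrate-and-fire neuron with a BCM-style plasticity rule whose
# threshold slides with the square of recent postsynaptic activity.
import random

random.seed(1)

DT = 1.0            # time step (ms)
TAU_M = 20.0        # membrane time constant (ms)
TAU_TRACE = 50.0    # time constant of activity traces (ms)
TAU_THETA = 2000.0  # sliding-threshold time constant (ms)
V_THRESH, V_RESET = 1.0, 0.0
N_INPUTS = 50
P_SPIKE = 0.02      # input spike probability per ms per afferent
ETA = 0.5           # learning rate (illustrative)
Y_TARGET = 0.02     # target postsynaptic rate (spikes/ms)
W_MAX = 0.5         # hard bound keeping weights in a plausible range

w = [0.05] * N_INPUTS   # input weights
x = [0.0] * N_INPUTS    # low-pass traces of presynaptic activity
y = 0.0                 # low-pass trace of postsynaptic activity
theta = Y_TARGET        # BCM sliding threshold
v = 0.0                 # membrane potential
spike_count = 0

for step in range(20000):
    pre = [random.random() < P_SPIKE for _ in range(N_INPUTS)]
    # Leaky integrate-and-fire membrane dynamics
    v += DT * (-v / TAU_M) + sum(wi for wi, s in zip(w, pre) if s)
    spike = v >= V_THRESH
    if spike:
        v = V_RESET
        spike_count += 1
    # Low-pass traces approximating instantaneous firing rates
    for i, s in enumerate(pre):
        x[i] += DT / TAU_TRACE * ((1.0 if s else 0.0) - x[i])
    y += DT / TAU_TRACE * ((1.0 if spike else 0.0) - y)
    # BCM update: sign of (y - theta) decides potentiation vs depression
    phi = y * (y - theta)
    for i in range(N_INPUTS):
        w[i] = min(W_MAX, max(0.0, w[i] + ETA * phi * x[i]))
    # Threshold slides with the square of recent activity, which
    # stabilizes the rule around the target rate
    theta += DT / TAU_THETA * (y * y / Y_TARGET - theta)

print(spike_count, sum(w) / len(w), theta)
```

Because the threshold grows superlinearly with postsynaptic activity, runaway potentiation is self-limiting, which is one sense in which BCM-style rules are more stable than plain Hebbian or pair-based STDP updates.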