Yasemin Erden on BBC

AISB Committee member, and Philosophy Programme Director and Lecturer, Dr Yasemin J. Erden was interviewed by the BBC on 29 October 2013. Speaking on the Today programme for BBC Radio 4, as well as the Business Report for BBC World N...


AISB Convention 2014

AISB-50: a convention commemorating both 50 years since the founding of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (the AISB) and 60 years since the death of Alan Turing, founding fathe...


Mark Bishop on BBC ...

Mark Bishop, Chair of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour, appeared on Newsnight to discuss the ethics of ‘killer robots’. He was approached to give his view on a report raising questions on the et...


AISB YouTube Channel

The AISB has launched a YouTube channel: http://www.youtube.com/user/AISBTube. The channel currently holds a number of videos from the AISB 2010 Convention. Videos include the AISB round t...


Lighthill Debates

The Lighthill debates from 1973 are now available on YouTube.


Notice

AISB Event Bulletin Item

CALL FOR PARTICIPATION: Computational Neurodynamics Group Seminar, 12th Oct 2011, LONDON


Title: The Retroaxonal Hypothesis: Towards Speech Recognition with Spiking Neuronal Networks
Speaker: Pierre Yger, Department of Bioengineering, Imperial College London
Wednesday 12th October, 16:00-17:30, Room 343, Huxley Building, South Kensington Campus

Abstract: Understanding the mechanisms by which synapses between neurons are modified is fundamental
to understanding the learning properties of the brain. To tackle this question, we use spiking
neuronal networks that provide an accurate description of the ongoing activity often observed
in vivo in awake animals. Following the idea of the Liquid State Machine, we aim to develop a general
learning rule that makes spiking networks suitable for real-world information-processing tasks,
such as speech recognition. After being transformed by a cochlea model, spoken digits are turned into
spike trains and used as raw inputs to train a generic spiking neuronal network governed by a
learning rule derived from an optimization principle, similar to the Bienenstock, Cooper and Munro
(BCM) theory. Using this learning rule, which is more flexible and stable than Spike-Timing-Dependent
Plasticity, we will test a novel and ambitious hypothesis, recently proposed on the basis of
biological evidence, called the "retroaxonal hypothesis": strengthening of a neuron's output synapses
stabilizes recent changes in the same neuron's inputs. This could provide a useful mechanism for the
development of computational units in such spiking neuronal networks.
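The BCM-style rule mentioned in the abstract can be sketched in a simplified, rate-based form. This is an illustrative toy, not the optimization-derived spiking rule of the talk; the function name, learning rate, and threshold time constant are all hypothetical:

```python
import numpy as np

def bcm_update(w, x, y, theta, lr=0.01, tau_theta=100.0, dt=1.0):
    """One BCM-like step: dw = lr * x * y * (y - theta).
    The modification threshold theta slides towards y**2, which is
    what stabilises the rule (unlike a fixed-threshold Hebb rule)."""
    w = w + lr * x * y * (y - theta)
    theta = theta + (dt / tau_theta) * (y**2 - theta)
    return w, theta

rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=10)   # synaptic weights
theta = 1.0                          # initial modification threshold
for _ in range(200):
    x = rng.uniform(0.0, 1.0, size=10)  # presynaptic firing rates
    y = float(w @ x)                    # postsynaptic rate (linear neuron)
    w, theta = bcm_update(w, x, y, theta)
    w = np.clip(w, 0.0, 5.0)            # keep weights bounded
print(w.round(3), round(theta, 3))
```

Because theta chases the running average of y squared, synapses are depressed when activity is low relative to recent history and potentiated when it is high, giving the selectivity property of the original BCM theory.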

Biography: Pierre Yger is a postdoctoral fellow under the supervision of Dr Kenneth D. Harris in
the Department of Bioengineering at Imperial College London. With a background in computer science,
he completed a PhD in Yves Fregnac's lab in Gif-sur-Yvette, near Paris. Most of that work focused
on large-scale simulations and dynamics of generic spiking neuronal networks, using simplified
neuron models such as the integrate-and-fire. It used a particular framework, the balanced random
network, to recreate a dynamical regime close to the one observed in vivo, where neurons spike at
low rates with an irregular discharge, and showed that neuronal networks, like those observed in
primary cortical areas in vivo, can operate at the border of a particular dynamical regime,
deterministic chaos. This work offers a functional role for ongoing activity in the brain: instead
of being considered simply as noise corrupting the signal, it should be seen as a prior on the
activity expected by the system. To establish a match between evoked and spontaneous activity, and
to have evoked patterns replayed later in spontaneous activity (as has been shown in vivo), new
rules of unsupervised learning with neuronal plasticity were explored, incorporating homeostatic
constraints within the framework of metaplasticity. The main result is the design of a biophysically
plausible plasticity rule, more general than current Spike-Timing-Dependent Plasticity, that allows
networks to store long-lasting traces of their inputs and links several experimental results on
plasticity reported in the literature.
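The simplified integrate-and-fire model mentioned in the biography can be sketched as follows. All parameter values here are illustrative textbook defaults, not those of the speaker's simulations:

```python
import numpy as np

def simulate_lif(I, dt=0.1, tau=20.0, v_rest=-70.0, v_reset=-70.0,
                 v_th=-50.0, R=10.0):
    """Leaky integrate-and-fire neuron (Euler integration).
    Integrates dV/dt = (v_rest - V + R*I) / tau and, whenever V crosses
    the threshold v_th, records a spike time (in ms) and resets V.
    Returns the list of spike times and the voltage trace."""
    v = v_rest
    spikes, trace = [], []
    for i, i_t in enumerate(I):
        v += dt * (v_rest - v + R * i_t) / tau
        if v >= v_th:
            spikes.append(i * dt)
            v = v_reset
        trace.append(v)
    return spikes, np.array(trace)

# Constant suprathreshold drive for 200 ms: the steady-state voltage
# (v_rest + R*I = -45 mV) sits above threshold, so the neuron fires
# regularly -- the regular-firing regime, unlike the irregular low-rate
# discharge that balanced random networks reproduce.
I = np.full(2000, 2.5)
spikes, trace = simulate_lif(I)
print(f"{len(spikes)} spikes in 200 ms")
```

In a balanced random network, many such units are coupled with excitatory and inhibitory weights tuned so that the net input hovers near threshold, which is what produces the irregular, low-rate in-vivo-like activity described above.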