Hugh Gene Loebner

  The AISB were sad to learn last week of the passing of philanthropist and inventor Hugh Gene Loebner PhD, who died peacefully in his home in New York at the age of 74.  Hugh was founder and sponsor of The Loebner Prize, an an...


AI Europe 2016

  Partnership between AISB and AI Europe 2016: Next December 5th and 6th in London, AI Europe will bring together the European AI eco-system by gathering new tools and future technologies appearing in professional fields for th...


AISB convention 2017

  In the run-up to the AISB 2017 convention, I've asked Joanna Bryson, from the organising team, to answer a few questions about the convention and what comes with it. Mohammad Majid...


Harold Cohen

Harold Cohen, tireless computer art pioneer, dies at 87   [Harold Cohen at the Tate (1983), Aaron image in background]   Harold Cohen died at 87 in his studio on 27th April 2016 in Encinitas, California, USA. The first time I hear...


Dancing with Pixies?...

At TEDx Tottenham, London Mark Bishop (the former chair of the Society) demonstrates that if the ongoing EU flagship science project - the 1.6 billion dollar "Human Brain Project" - ultimately succeeds in understanding all as...


Computerised Minds. ...

A video sponsored by the society discusses Searle's Chinese Room Argument (CRA) and the heated debates surrounding it. In this video, which is accessible to the general public and those with an interest in AI, Olly's Philosophy Tube ...


Connection Science

All individual members of The Society for the Study of Artificial Intelligence and Simulation of Behaviour have a personal subscription to the Taylor & Francis journal Connection Science as part of their membership. How to Acce...



AISB event Bulletin Item

CALL FOR PARTICIPATION: Computational Neurodynamics Group Seminar, 12th Oct 2011, LONDON

Title: The Retroaxonal Hypothesis: Towards Speech Recognition with Spiking Neuronal Networks
Speaker: Pierre Yger, Department of Bioengineering, Imperial College, London
Wednesday 12th October, 16:00-17:30, Room 343, Huxley Building, South Kensington campus

Abstract: Understanding the mechanism by which synapses between neurons are modified is a fundamental
goal in the comprehension of the learning properties of the brain. To tackle this question, we use
spiking neuronal networks that provide an accurate description of the ongoing activity often observed
in vivo in awake animals. Following the idea of the Liquid State Machine, we aim to develop a general
learning rule that makes spiking networks suitable for real-world information processing tasks,
such as speech recognition. After being transformed by a cochlea model, spoken digits are turned into
spike trains and used as raw inputs to train a generic spiking neuronal network governed by a
learning rule derived from an optimization principle, similar to the Bienenstock, Cooper and Munro
(BCM) theory. Using this learning rule, which is more flexible and stable than Spike Timing Dependent
Plasticity, we will test a novel and ambitious hypothesis recently proposed on the basis of biological
evidence, the "retroaxonal hypothesis": strengthening of a neuron's output synapses stabilizes recent
changes in the same neuron's inputs. This could provide a useful mechanism for the development of
computational units in such spiking neuronal networks.
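The BCM-style plasticity the abstract refers to can be illustrated with a minimal rate-based sketch. The talk's actual rule is derived from an optimization principle and operates on spiking networks; the function name, parameter values, and linear-unit setup below are illustrative assumptions only:

```python
import numpy as np

def bcm_update(w, x, y, theta, eta=1e-4, tau_theta=10.0, dt=1.0):
    """One Euler step of a BCM-like rule (illustrative, not the speaker's rule).

    dw/dt    = eta * x * y * (y - theta)   : LTD below theta, LTP above it
    dtheta/dt = (y**2 - theta) / tau_theta : sliding threshold, stabilises learning
    """
    w = w + dt * eta * x * y * (y - theta)
    theta = theta + dt * (y**2 - theta) / tau_theta
    return w, theta

rng = np.random.default_rng(0)
w = rng.uniform(0.0, 0.1, size=50)   # synaptic weights onto one unit
theta = 1.0                          # initial modification threshold
for _ in range(200):
    x = rng.poisson(2.0, size=50).astype(float)  # presynaptic firing rates
    y = float(w @ x)                             # postsynaptic rate (linear unit)
    w, theta = bcm_update(w, x, y, theta)
```

The sliding threshold is what makes BCM-type rules more stable than plain Hebbian learning: whenever the output rate grows, the threshold grows faster (it tracks the square of the rate), turning further potentiation into depression.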

Biography: Pierre Yger is a postdoctoral fellow under the supervision of Dr Kenneth D. Harris in
the Department of Bioengineering at Imperial College. With a background in computer science, he
completed a PhD in Yves Fregnac's lab in Gif-sur-Yvette, near Paris. Most of this work focused
on the simulation and dynamics of generic spiking neuronal networks at large scale, using
simplified models of neurons such as the integrate-and-fire. It used a particular framework,
the balanced random network, to recreate a dynamical regime close to the one observed in vivo,
where neurons spike at low rates with an irregular discharge. It showed that neuronal networks,
such as those observed in primary areas of the cortex in vivo, can operate at the border of a
particular dynamical regime, deterministic chaos. This work offers a functional role for the
ongoing activity in the brain: instead of being considered simply as noise corrupting the signal,
it should be seen as a prior on the activity expected by the system. To establish a match between
evoked and spontaneous activity, and to have evoked patterns replayed later in spontaneous activity
(as has been shown in vivo), new rules of unsupervised learning with neuronal plasticity were
explored, incorporating homeostatic constraints in the framework of metaplasticity. The main
result is the design of a biophysically plausible plasticity rule, more general than current
Spike Timing Dependent Plasticity, that allows long-lasting traces of the inputs to be stored in
neuronal networks and links several experimental results on plasticity reported in the literature.
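The integrate-and-fire model mentioned in the biography can be sketched in a few lines. All parameter names and values here are illustrative assumptions (a single neuron driven by constant current, not the balanced random networks of the talk):

```python
import numpy as np

def simulate_lif(input_current, dt=0.1, tau_m=20.0, v_rest=-70.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Leaky integrate-and-fire neuron (Euler integration, illustrative values).

    Integrates tau_m * dV/dt = -(V - v_rest) + r_m * I(t);
    when V crosses v_thresh, a spike time is recorded and V is reset.
    """
    v = v_rest
    trace, spikes = [], []
    for step, i_ext in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
        if v >= v_thresh:
            spikes.append(step * dt)  # spike time in ms
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

# Constant suprathreshold current drives regular spiking.
trace, spikes = simulate_lif(np.full(1000, 3.0))  # 100 ms of constant input
```

With these numbers the steady-state voltage (-70 + 10*3 = -40 mV) sits above threshold, so the neuron fires periodically; replacing the constant input with fluctuating balanced excitation and inhibition is what produces the low-rate irregular discharge described above.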