AISB Convention 2015

The AISB Convention is an annual conference covering the range of AI and Cognitive Science, organised by the Society for the Study of Artificial Intelligence and Simulation of Behaviour. The 2015 Convention will be held at the Uni...


Yasemin Erden on BBC

AISB Committee member, Philosophy Programme Director and Lecturer Dr Yasemin J. Erden was interviewed for the BBC on 29 October 2013, speaking on the Today programme for BBC Radio 4, as well as the Business Report for BBC World N...


Mark Bishop on BBC ...

Mark Bishop, Chair of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour, appeared on Newsnight to discuss the ethics of ‘killer robots’. He was approached to give his view on a report raising questions on the et...


AISB YouTube Channel

The AISB has launched a YouTube channel. The channel currently holds a number of videos from the AISB 2010 Convention. Videos include the AISB round t...


Lighthill Debates

The Lighthill debates from 1973 are now available on YouTube.



AISB opportunities Bulletin Item

TELECOM ParisTech PhD opening: Emotional Behaviour Model


*PhD opening*

*Subject:* 3D multimodal emotional animation model

The project takes place within CECIL, an ANR project that will start in September 2009 and run for three years. The aim of the CECIL project is to contribute to the area of interaction systems by placing emotion at the heart of the interaction between the system and the human user. To meet this general objective, the project will clearly define (and disambiguate the definition of) a set of emotions and incorporate them into the reasoning processes of an embodied conversational agent (ECA). In particular, the agent will be endowed with the capability to express its emotions through different modalities, including facial expressions, gestures, and language.

The agent system is based on Greta, an existing ECA system. The system takes as input a text to be spoken by the agent, enriched with information on how it ought to be said (i.e. with which communicative acts). The behavioural engine computes the agent's synchronized verbal and nonverbal behaviours. The animation module follows the MPEG-4 and H-Anim standards.

*Work to be done:*
The objective of the work is to develop an embodied conversational agent (ECA) capable of managing its emotions and their expressions and of displaying expressive behaviours. That is, the ECA will have to determine which expressions to show in a particular context and be able to display them.
To this aim, we will develop a qualitative model of multimodal expression of emotions. The model will include not only verbal language but also other multimodal signals such as facial expressions, gaze direction (eye and head direction) and gestures. The model will also consider behaviour expressivity.
Emotions are expressed through the whole body, not solely through facial expressions: gestures, postures, and gaze direction are all signs of emotion. The multimodal behaviours are synchronized with each other to produce a coherent message. Moreover, the expression of an emotion does not correspond to a static expression. Its behaviours are dynamic, evolving through time, and they form a sequence of behaviours whose meaning conveys the emotion. The model of emotional behaviours should therefore be extended to the whole body and should be dynamic.
Evaluation of the model will be performed.

*Pre-requisite*: C++, 3D animation, emotion model
*Project Length*: 3 years PhD
*Place*: TELECOM ParisTech
*Stipend*: depends on the applicant's qualifications (around 1400 euros)
Catherine Pelachaud