Yasemin Erden on BBC

Dr Yasemin J. Erden, AISB Committee member and Philosophy Programme Director and Lecturer, was interviewed for the BBC on 29 October 2013. Speaking on the Today programme for BBC Radio 4, as well as the Business Report for BBC World N...



Mark Bishop on BBC

Mark Bishop, Chair of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB), appeared on Newsnight to discuss the ethics of ‘killer robots’. He was approached to give his view on a report raising questions on the et...



AISB YouTube Channel

The AISB has launched a YouTube channel: http://www.youtube.com/user/AISBTube. The channel currently holds a number of videos from the AISB 2010 Convention. Videos include the AISB round t...



Lighthill Debates

The Lighthill debates from 1973 are now available on YouTube.



Notice

AISB Opportunities Bulletin Item

TELECOM ParisTech PhD opening: 3D Gesture model


Contact: catherine.pelachaud@telecom-paristech.fr

*PhD opening*

*Subject:* 3D expressive communicative gesture model

*Context:*
The project takes place within GV-Lex, an ANR project that will start in early 2009 and last for three years. The project aims to endow the humanoid robot NAO, developed by Aldebaran, with the ability to read a story in an expressive manner. The expressive gesture model will be based at first on an existing virtual agent system, the ECA system Greta:
http://www.tsi.enst.fr/~pelachau/Greta/
The system takes as input a text to be said by the agent, enriched with information on how the text ought to be said (i.e., with which communicative acts). The behavioral engine selects the multimodal behaviors to display and synchronizes the agent's verbal and nonverbal behaviors.
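
To make the pipeline concrete, here is a minimal C++ sketch of the selection step. All types, act names, and signals below are hypothetical placeholders, not the actual Greta API: the idea is simply that each communicative act annotated on a span of text selects the nonverbal signals to display while that span is spoken.

    // Minimal sketch of the selection step, with hypothetical types and a
    // toy lexicon -- not the actual Greta API, just the idea of mapping
    // communicative acts annotated on the text to multimodal signals.
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // A span of input text tagged with a communicative act.
    struct AnnotatedSpan {
        std::string text;
        std::string communicativeAct;  // e.g. "greeting", "emphasis"
    };

    int main() {
        // Toy behavior lexicon: each act maps to the nonverbal signals
        // to display while the span is spoken.
        std::map<std::string, std::vector<std::string>> lexicon = {
            {"greeting", {"head_nod", "smile", "raise_hand"}},
            {"emphasis", {"beat_gesture", "eyebrow_raise"}},
        };
        std::vector<AnnotatedSpan> input = {
            {"Hello everyone", "greeting"},
            {"this point really matters", "emphasis"},
        };
        // Selection only; synchronization with speech timing is omitted.
        for (const auto& span : input) {
            std::cout << "say \"" << span.text << "\" with:";
            for (const auto& signal : lexicon[span.communicativeAct])
                std::cout << ' ' << signal;
            std::cout << '\n';
        }
    }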

*Work to be done:*
The work to be done concerns mainly the 3D Greta agent and the virtual agent representing the humanoid robot.
The animation modules for the humanoid robot and for the virtual agent are both script-based; that is, the animation is generated from a command of the type 'move the right arm forward with the palm up'. The two command languages should be made compatible, so that both encompass the limitations of the robot's movement capabilities and can produce equivalent movements on the robot and on the virtual agent. A repertoire of gestures will be established.
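
A minimal sketch of this compatibility constraint, assuming made-up joint names and ranges (not NAO's actual specification): a gesture command written in the shared language is clamped to the robot's narrower joint limits, so that the same script yields feasible and equivalent movements on both platforms.

    // Sketch: adapt a shared gesture command to the robot's joint limits.
    // Joint names and ranges are invented for illustration.
    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    struct JointTarget {
        std::string joint;  // e.g. "RShoulderPitch"
        double angleDeg;    // requested angle
    };

    struct GestureCommand {
        std::string name;   // e.g. "right_arm_forward_palm_up"
        std::vector<JointTarget> targets;
    };

    struct JointRange { double minDeg, maxDeg; };

    // The virtual agent is assumed unconstrained; the robot is not.
    JointRange robotRange(const std::string& joint) {
        if (joint == "RShoulderPitch") return {-120.0, 120.0};
        if (joint == "RWristYaw")      return {-105.0, 105.0};
        return {-90.0, 90.0};  // default, purely for the sketch
    }

    // Clamp each target so the command stays within what the robot can do.
    GestureCommand adaptForRobot(GestureCommand cmd) {
        for (auto& t : cmd.targets) {
            const JointRange r = robotRange(t.joint);
            t.angleDeg = std::clamp(t.angleDeg, r.minDeg, r.maxDeg);
        }
        return cmd;
    }

    int main() {
        GestureCommand g{"right_arm_forward_palm_up",
                         {{"RShoulderPitch", 140.0}, {"RWristYaw", 80.0}}};
        for (const auto& t : adaptForRobot(g).targets)
            std::cout << t.joint << " -> " << t.angleDeg << " deg\n";
    }
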
The animation module for the virtual agent should be made expressive. A first approach has already been implemented on the virtual agent: expressivity is defined over six dimensions, namely the overall activation of the movement, its spatiality, temporality, fluidity, power and repetitiveness. This model needs to be extended and refined to capture aspects of expressivity that have not yet been considered, such as tension or continuousness.
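
As a rough picture of how such dimensions might act on a gesture (the scaling rules below are illustrative guesses, not the model implemented in Greta), each parameter modulates a different property of a neutral stroke:

    // Sketch: expressivity parameters modulating a neutral gesture.
    // The scaling factors are illustrative, not Greta's actual model.
    #include <iostream>

    struct Expressivity {
        double spatial = 0.0;     // wider or narrower movement, in [-1, 1]
        double temporal = 0.0;    // faster or slower, in [-1, 1]
        double fluidity = 0.0;    // smoothness of the velocity profile
        double power = 0.0;       // strength/acceleration of the stroke
        double repetition = 0.0;  // tendency to repeat the stroke
    };

    struct Gesture {
        double amplitudeDeg;  // joint-space amplitude of the stroke
        double durationSec;   // duration of the stroke phase
        double peakVelocity;  // deg/s at the stroke peak
        int strokeCount;      // number of stroke repetitions
    };

    Gesture applyExpressivity(Gesture g, const Expressivity& e) {
        g.amplitudeDeg *= 1.0 + 0.5 * e.spatial;                 // spatiality
        g.durationSec  *= 1.0 - 0.3 * e.temporal;                // temporality
        g.peakVelocity *= 1.0 + 0.7 * e.power;                   // power
        g.strokeCount  += static_cast<int>(2.0 * e.repetition);  // repetitiveness
        // Fluidity would shape the velocity profile (continuity between
        // gesture phases); it is declared but left unmodeled here.
        return g;
    }

    int main() {
        Gesture beat{30.0, 0.6, 90.0, 1};
        Expressivity excited{0.8, 0.6, 0.4, 0.9, 0.5};
        Gesture out = applyExpressivity(beat, excited);
        std::cout << "amplitude " << out.amplitudeDeg << " deg, duration "
                  << out.durationSec << " s, strokes " << out.strokeCount << '\n';
    }
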
Finally, the gesture animation and expressivity models should be evaluated. An objective evaluation will be set up to measure the capabilities of the implementation, and a subjective evaluation will test how expressive the gesture animation is perceived to be on the robot and on the virtual agent when reading a story.

*Pre-requisites*: C++, 3D animation, behavior modeling
*Project Length*: 3-year PhD
*Place*: TELECOM ParisTech
*Stipend*: depends on the applicant's qualifications (around 1400 euros)
*Contact*:
Catherine Pelachaud
catherine.pelachaud@telecom-paristech.fr
http://www.tsi.enst.fr/~pelachau