
Notice

AISB opportunities Bulletin Item

TELECOM ParisTech PhD opening: 3D Gesture model


Contact: catherine.pelachaud@telecom-paristech.fr

*PhD opening*

*Subject:* 3D expressive communicative gesture model

*Context:*
The PhD takes place within GV-Lex, an ANR project starting in early 2009 and running for three years. The project aims to enable the humanoid robot NAO, developed by Aldebaran, to read a story in an expressive manner. The expressive gesture model will initially be based on an existing virtual agent system, the ECA system Greta:
http://www.tsi.enst.fr/~pelachau/Greta/
The system takes as input a text to be spoken by the agent. The text is enriched with information on how it ought to be said (i.e. with which communicative acts). The behavioral engine selects the multimodal behaviors to display and synchronizes the verbal and nonverbal behaviors of the agent.
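
As a rough illustration of this selection step, here is a minimal C++ sketch assuming a simple lookup from communicative acts to behaviors; all type, tag and behavior names are hypothetical and not taken from the Greta system.

#include <map>
#include <string>

// Hypothetical sketch: for each communicative act annotated in the input
// text, the engine looks up the multimodal behaviors (gesture, head
// movement, facial expression) to be synchronized with the corresponding
// stretch of speech. Names are illustrative only.
struct MultimodalBehavior {
    std::string gesture;
    std::string headMovement;
    std::string facialExpression;
};

const std::map<std::string, MultimodalBehavior> kBehaviorLexicon = {
    {"emphasis", {"beat_gesture",    "nod",        "raise_eyebrows"}},
    {"greeting", {"wave_right_hand", "nod",        "smile"}},
    {"negation", {"palm_down_sweep", "head_shake", "frown"}},
};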

*Work to be done:*
The work mainly concerns the 3D Greta agent and the virtual agent representing the humanoid robot.
The respective animation modules for the humanoid robot and the virtual agent are script-based: the animation is generated from a command language of the type "move the right arm forward with the palm up". Both languages should be made compatible, so that they both respect the limitations of the robot's movement capabilities and can produce equivalent movements on the robot and on the virtual agent. A repertoire of gestures will be established.
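
To illustrate what such a shared, platform-aware gesture command might look like, here is a minimal C++ sketch; the types, field names and the NAO profile values are hypothetical, not taken from the Greta or NAO APIs.

#include <iostream>
#include <string>
#include <vector>

// A gesture command defined once and played on either back-end.
struct GestureCommand {
    std::string effector;    // e.g. "right_arm"
    std::string direction;   // e.g. "forward"
    std::string handShape;   // e.g. "palm_up"
    double durationSec;      // duration of the stroke
};

// What a given body (robot or virtual agent) can actually do.
struct PlatformProfile {
    std::vector<std::string> supportedEffectors;
    double minStrokeDurationSec;  // a robot cannot move as fast as a virtual agent
};

// Returns true if the command can be realised on the given platform.
bool isRealisable(const GestureCommand& cmd, const PlatformProfile& platform) {
    bool effectorOk = false;
    for (const auto& e : platform.supportedEffectors)
        if (e == cmd.effector) effectorOk = true;
    return effectorOk && cmd.durationSec >= platform.minStrokeDurationSec;
}

int main() {
    GestureCommand cmd{"right_arm", "forward", "palm_up", 0.8};
    PlatformProfile nao{{"right_arm", "left_arm", "head"}, 0.5};
    std::cout << (isRealisable(cmd, nao) ? "ok on robot" : "needs adaptation") << "\n";
}

A real implementation would check each joint's range and speed limits rather than a single minimum stroke duration, but the idea of validating a shared script against the robot's capabilities is the same.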
The animation module for the virtual agent should be made expressive. A first approach has been implemented on the virtual agent: expressivity is defined over six dimensions, including the spatial extent of a movement, its temporal extent, fluidity, power and repetitiveness. This model needs to be extended and refined to cover aspects of expressivity that have not yet been considered, such as tension or continuousness.
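
As a sketch of how such expressivity parameters might modulate a neutral gesture stroke, the following C++ fragment assumes normalised values in [-1, 1] and a simple linear scaling; both assumptions are illustrative and not the actual Greta model.

#include <algorithm>

// Expressivity dimensions named in the announcement.
struct Expressivity {
    double spatialExtent;   // amplitude of the movement in space
    double temporalExtent;  // speed/duration of the stroke
    double fluidity;        // smoothness between consecutive gestures
    double power;           // acceleration/tension of the stroke
    double repetition;      // tendency to repeat the stroke
};

// A neutral stroke keyframe to be modulated before playback.
struct StrokeKeyframe {
    double amplitude;    // metres
    double durationSec;
};

// Example modulation: scale amplitude and duration by the expressivity values.
StrokeKeyframe applyExpressivity(StrokeKeyframe neutral, const Expressivity& e) {
    StrokeKeyframe out = neutral;
    out.amplitude   *= 1.0 + 0.5 * std::clamp(e.spatialExtent, -1.0, 1.0);
    out.durationSec *= 1.0 - 0.3 * std::clamp(e.temporalExtent, -1.0, 1.0);
    return out;
}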
Finally, the gesture animation and expressivity models should be evaluated. An objective evaluation will be set up to measure the capabilities of the implementation. A subjective evaluation will test how expressive the gesture animation is perceived to be, on both the robot and the virtual agent, when reading a story.

*Pre-requisites*: C++, 3D animation, behavior modeling
*Project Length*: 3-year PhD
*Place*: TELECOM ParisTech
*Stipend*: depends on the applicant's qualifications (around 1400 euros)
*Contact*:
Catherine Pelachaud
catherine.pelachaud@telecom-paristech.fr
http://www.tsi.enst.fr/~pelachau