AISB convention 2017

  In the run-up to the AISB 2017 convention, I've asked Joanna Bryson, from the organising team, to answer a few questions about the convention and what comes with it. Mohammad Majid al-Rifaie (https://twitter.com/mohmaj) Tu...



Harold Cohen

Harold Cohen, tireless computer art pioneer, dies at 87   Harold Cohen at the Tate (1983), Aaron image in background   Harold Cohen died at 87 in his studio on 27th April 2016 in Encinitas, California, USA. The first time I hear...



Dancing with Pixies?...

At TEDx Tottenham, London, Mark Bishop (the former chair of the Society) demonstrates that if the ongoing EU flagship science project, the 1.6 billion dollar "Human Brain Project", ultimately succeeds in understanding all as...



Computerised Minds. ...

A video sponsored by the society discusses Searle's Chinese Room Argument (CRA) and the heated debates surrounding it. In this video, which is accessible to the general public and those with an interest in AI, Olly's Philosophy Tube ...



Connection Science

All individual members of The Society for the Study of Artificial Intelligence and Simulation of Behaviour have a personal subscription to the Taylor & Francis journal Connection Science as part of their membership. How to Acce...



Notice

AISB Opportunities Bulletin Item

Job Opening - TELECOM ParisTech


Contact: catherine.pelachaud@telecom-paristech.fr

We are looking for a candidate with experience in 3D computer graphics.

*Context:*
The project takes place within the ECA (Embodied Conversational Agent) system Greta. This ECA system accepts as input a text to be said by the agent. The text is enriched with information on how it ought to be said (i.e. with which communicative acts). The behavioral engine computes the synchronized verbal and nonverbal behaviors of the agent. The animation module follows the MPEG-4 and H-Anim standards. This work is part of the EU project CALLAS (http://www.callas-newmedia.eu/). CALLAS aims to provide a new paradigm for investigating a more comprehensive set of emotions in multimodal interfaces tailored to New Media environments, changing the way we perceive contemporary and future media applications.

*Work to be done:*
The animation module designed so far implements arm movements only. It needs to be extended to the full upper body, in particular the torso and shoulders. The body animation also needs to produce more fluid motion, and a skinning algorithm needs to be added. The module will use the OGRE library for its rendering and skinning capabilities.

The animation concerns communicative movement. Such a movement is decomposed into several phases (preparation, stroke, hold, retraction). As body movements and speech are synchronized with each other, the animation module needs to be flexible and allow for adaptation of the movement.
The animation needs to work in real-time.
The animation will be driven through command-language instructions of the type move-shoulder-up or bend-torso-forward. This language follows the BML specification:
http://wiki.mindmakers.org/projects:bml:draft1.0
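The command format and phase decomposition described above can be sketched in C++ (the advert's prerequisite language). Everything here is an illustrative assumption: the identifiers (GestureCommand, parseCommand, schedule) and the default phase durations are hypothetical, and are not taken from the Greta codebase or the BML specification.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// The four phases a communicative movement is decomposed into
// (hypothetical enum; phase names are from the advert).
enum class Phase { Preparation, Stroke, Hold, Retraction };

// A parsed BML-style command such as "move-shoulder-up"
// (hypothetical structure for illustration).
struct GestureCommand {
    std::string verb;       // e.g. "move" or "bend"
    std::string bodyPart;   // e.g. "shoulder" or "torso"
    std::string direction;  // e.g. "up" or "forward"
};

// Split a hyphenated command into verb / body part / direction.
GestureCommand parseCommand(const std::string& cmd) {
    std::vector<std::string> parts;
    std::stringstream ss(cmd);
    std::string token;
    while (std::getline(ss, token, '-')) parts.push_back(token);
    GestureCommand g;
    if (parts.size() == 3) {
        g.verb = parts[0];
        g.bodyPart = parts[1];
        g.direction = parts[2];
    }
    return g;
}

struct PhaseTiming {
    Phase phase;
    double start;  // seconds
    double end;    // seconds
};

// Lay the four phases out around the stroke time so the stroke can be
// aligned with speech; durations (seconds) are illustrative defaults
// that a real module would adapt per movement.
std::vector<PhaseTiming> schedule(double strokeTime,
                                  double prep = 0.4, double stroke = 0.3,
                                  double hold = 0.2, double retract = 0.5) {
    double s = strokeTime;
    return {
        {Phase::Preparation, s - prep, s},
        {Phase::Stroke,      s,                  s + stroke},
        {Phase::Hold,        s + stroke,         s + stroke + hold},
        {Phase::Retraction,  s + stroke + hold,  s + stroke + hold + retract},
    };
}
```

Keeping the command grammar and the phase timetable separate like this is one way to meet the advert's flexibility requirement: the same parsed command can be rescheduled when speech timing changes, without reparsing.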

*Pre-requisite*: C++, 3D computer graphics
*Project Length*: 12 months
*Place*: TELECOM ParisTech
*Stipend*: around 2000 euros, depending on the applicant's qualifications
*Contact*:
Catherine Pelachaud
catherine.pelachaud@telecom-paristech.fr
http://www.tsi.enst.fr/~pelachau