Yasemin Erden on BBC

Dr Yasemin J. Erden, AISB Committee member and Philosophy Programme Director and Lecturer, was interviewed for the BBC on 29 October 2013. Speaking on the Today programme for BBC Radio 4, as well as the Business Report for BBC World N...


Mark Bishop on BBC

Mark Bishop, Chair of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour, appeared on Newsnight to discuss the ethics of ‘killer robots’. He was approached to give his view on a report raising questions on the et...


AISB YouTube Channel

The AISB has launched a YouTube channel: http://www.youtube.com/user/AISBTube. The channel currently holds a number of videos from the AISB 2010 Convention. Videos include the AISB round t...


Lighthill Debates

The Lighthill debates from 1973 are now available on YouTube.



AISB Opportunities Bulletin Item

Postdoc on a hybrid model for gesture animation of an avatar in a 3D world, IUT of Montreuil, University of Paris 8

Contact: pelachaud@iut.univ-paris8.fr

Hybrid model for gesture animation of an avatar in a 3D world

This project is part of the French ANR project MyBlog-3D, whose participants are I-Maginer, GET-ENST, GET-INT and the University of Paris 8. The goal of the project is to reinforce the mutual perception of users who communicate and share objects in a 3D virtual space, through the use of perceptual interfaces and 3D user representations. In particular, the users' gestures, acquired in real time via webcam, will be replayed by their avatars.
The acquired data are often partial: low camera quality, high computational load, or the user's location can impede the extraction of some information (e.g., gaze direction, hand configuration). The perception of users will therefore be completed so that it remains consistent with the observations and realistic in its reproduction. To achieve this, the idea is to combine data recognition (e.g., of arm motion) with a computational model of avatar animation. That is, the gestures shown by the user's avatar will be driven both by gestures extracted from the user's movements and by a computational model for autonomous ECAs (Embodied Conversational Agents).
In this project we focus on elaborating a gesture animation model, in particular for communicative gestures. As a first step, the candidate will refine an existing procedural animation module that models gestures from a symbolic representation. Gestures are described with a representation language; this language will have to be enriched to allow the description of complex gestures. The gesture engine will also have to be refined to generate dynamic gestures.
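To make the idea of a symbolic gesture representation concrete, here is a minimal illustrative sketch in C++. The structures and field names are hypothetical, not the project's actual representation language; they only show the kind of timed, symbolic phase description that a procedural engine could interpolate into animation.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of a symbolic gesture description: a gesture is a
// sequence of timed phases, each mapping to a symbolic hand shape and a
// normalized wrist position. A procedural animation engine would
// interpolate between successive phases to produce motion.
struct GesturePhase {
    std::string name;        // e.g. "preparation", "stroke", "retraction"
    double startTime;        // seconds relative to gesture onset
    std::string handShape;   // symbolic hand configuration, e.g. "open-flat"
    double wristHeight;      // normalized vertical wrist position in [0,1]
};

struct Gesture {
    std::string id;
    std::vector<GesturePhase> phases;  // ordered by startTime
};
```

Enriching such a language for "complex gestures", as the ad describes, would then amount to adding further symbolic dimensions (two-handed coordination, repetition, expressivity parameters) to each phase.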
In a second step, the project will deal with the elaboration of a hybrid model for gesture animation: the reproduction of gestures from partial data. The model will create gesture animations from the captured data, relying on the symbolic representation when data are missing. It will ensure continuity of movement as well as the integration of procedural animation for non-visible data. A typical example is hand animation when the hand is occluded from the camera: the hand motion will be computed procedurally and will have to be integrated into the captured arm movement.
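The hybrid fallback described above can be sketched per joint: use the captured value when the tracker provides one, fall back to the procedural model when it does not, and blend toward the chosen target so the motion stays continuous across dropouts. This is an illustrative sketch only; the names and the simple linear blend are assumptions, not the project's actual model.

```cpp
#include <cassert>
#include <cmath>
#include <optional>

// Hypothetical per-joint frame: captured tracker data may be absent
// (e.g., the hand is occluded from the camera), while the procedural
// value from the symbolic gesture model is always available.
struct JointFrame {
    std::optional<double> captured;  // tracker output, absent on occlusion
    double procedural;               // value from the symbolic gesture model
};

// Pick captured data when present, otherwise the procedural fallback,
// and blend from the previous pose toward the target (blend in [0,1])
// so the movement remains continuous when the data source switches.
double resolveJoint(const JointFrame& frame, double previous, double blend) {
    double target = frame.captured.value_or(frame.procedural);
    return previous + blend * (target - previous);
}
```

In a full system the same idea would apply to whole joint rotations, with the procedurally computed hand motion re-attached to the captured arm movement at the wrist.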

Prerequisites: C++, 3D animation
Project Length: 1 year, renewable for 1 more year.
Place: IUT of Montreuil, University of Paris 8. This position would be a perfect opportunity for somebody from an English-speaking country to move to Paris, as our institute is very near Paris!
Stipend: between 1200 and 2000 euros per month, net, depending on the applicant's qualifications
Contact: Catherine Pelachaud