Yasemin Erden on BBC

AISB Committee member and Philosophy Programme Director and Lecturer Dr Yasemin J. Erden was interviewed for the BBC on 29 October 2013. Speaking on the Today programme for BBC Radio 4, as well as the Business Report for BBC World N...



Mark Bishop on BBC ...

Mark Bishop, Chair of the Society for the Study of Artificial Intelligence and Simulation of Behaviour, appeared on Newsnight to discuss the ethics of ‘killer robots’. He was approached to give his view on a report raising questions on the et...



AISB YouTube Channel

The AISB has launched a YouTube channel: http://www.youtube.com/user/AISBTube. The channel currently holds a number of videos from the AISB 2010 Convention. Videos include the AISB round t...



Lighthill Debates

The Lighthill debates from 1973 are now available on YouTube.



Notice

AISB Miscellaneous Bulletin Item

CFP: International Journal of Humanoid Robotics - special issue

http://www.worldscinet.com/ijhr/mkt/callforpapers_details.shtml#active

Special Issue on the "Active Vision of Humanoids" to be published in the International Journal of Humanoid Robotics

Submission deadline extended to November 30, 2008
Publication: Spring 2009

Guest Editors: Yiannis Aloimonos and Giulio Sandini

Following the successful workshop on the Active Vision of Humanoids, held in Pittsburgh, PA, during the last Conference on Humanoid Robotics (November 2007)
(http://planning.cs.cmu.edu/humanoids07/index.shtml), we solicit papers for a special issue on the topic.

Practical computer vision systems are devoted to answering a set of practical questions, such as: Is there something moving independently in the video taken by a moving camera? What is it? Is there a human in the image? Who is it? Humans, on the other hand, are engaged in an ongoing process of analyzing images. As Stuart Geman wrote, "real world images have essentially infinite detail which can be perceived only by a process that is itself ongoing and essentially infinite. The more you look, the more you see". Considering a humanoid robot, how should we think about its vision: the way we think of a practical vision system, or the way we think of human vision?
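
As an illustration of the first "practical question" above, the following minimal sketch shows one common way to flag independent motion in footage from a moving camera: fit a homography to tracked background features to approximate the camera's ego-motion, then difference the compensated frames. It assumes OpenCV and NumPy are available; the input file name is hypothetical, and the approach is one possibility rather than anything prescribed by this call.

    # Minimal sketch: independent-motion detection under camera motion.
    # Assumes OpenCV (cv2) and NumPy; "video.mp4" is a hypothetical input.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("video.mp4")
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Track sparse features to estimate inter-frame camera motion.
        pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                           qualityLevel=0.01, minDistance=7)
        if pts_prev is None or len(pts_prev) < 4:
            prev_gray = gray
            continue
        pts_next, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts_prev, None)
        good_prev = pts_prev[status.flatten() == 1]
        good_next = pts_next[status.flatten() == 1]

        # A homography fitted to the dominant motion approximates the background/ego-motion.
        H, _ = cv2.findHomography(good_prev, good_next, cv2.RANSAC, 3.0)
        warped_prev = cv2.warpPerspective(prev_gray, H, (gray.shape[1], gray.shape[0]))

        # Residual differences after compensation suggest independently moving objects.
        diff = cv2.absdiff(gray, warped_prev)
        _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

        prev_gray = gray

    cap.release()
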

Papers are solicited that maintain some focus on the question of the humanoid's visual/motor architecture: How should its motion system be structured? Should it stabilize the images? Segment the scene into surfaces? Constantly check where it is with respect to its knowledge of the world? How should it build models of objects? How should it integrate cue information? How should it reach a decision? What is its perception of spatial layout? How should it learn, and what should it learn? Is there software available today that could provide humanoids with a basic visual front-end, and what would it be? Should we be developing visuo-motor representations? How could we build them, and how could we use them?

Many of the questions raised above are addressed in contemporary computer vision, but from the perspective of graphics and multimedia, image editing, and image databases. Existing approaches do not apply to the case of a real-time system moving the way humans do. The peculiarity of humanoid vision stems from its purpose of supporting the action of an anthropomorphic body (with hands and legs), as opposed to "just" being pattern recognition or image understanding. Thus, in some sense, the humanoid active vision envisioned for this special issue constitutes an evolution from the Active Vision of the 1990s to the "Action Vision" of the new millennium. Action Vision considers the motor system of the humanoid as an integral part of its perceptual machinery.

Papers addressing such topics or others relevant to the design of a humanoid's vision system should be submitted before the end of November for a target publication date of early 2009.