AISB event Bulletin Item

CFP: Semantic Robot Vision Challenge (SRVC)

Call for Participation for the Semantic Robot Vision Challenge (SRVC)
(including a robot league and a software-only league)


The Semantic Robot Vision Challenge is a new research competition
that is designed to push the state of the art in image understanding
and automatic acquisition of knowledge from large unstructured
databases of images (such as those generally found on the web).

The web contains vast collections of unstructured image databases that
could be exploited to generate useful models for image
recognition. For instance, Google Image Search only uses keywords
found in image filenames rather than the image data itself. However,
the thousands of images retrieved by such a search could be used to
generate models for automatically recognizing objects in the real
world. Current state-of-the-art computer vision research has shown
that such general object recognition is possible given highly
organized, manually structured databases. However, far less progress
has been made on image understanding from completely unstructured
databases. This is an area we are very interested in encouraging.

In this competition, teams will be required to demonstrate a robot
(or a software agent) that has the ability to:

   1. Autonomously connect to the Internet and build an object
      classification database sufficient to identify a number of
      objects found on a textual list.

   2. Use this classification database to autonomously search an
      indoor environment for the objects in its list.
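The two required capabilities can be sketched as a minimal two-phase
pipeline. Everything below is a toy stand-in with hypothetical names; a
real entry would substitute a web image crawler for phase 1 and a
trained visual recognizer for phase 2:

```python
# Minimal sketch of the two SRVC phases. All names here are
# hypothetical; a real entry would plug in a web image crawler
# for phase 1 and a trained visual recognizer for phase 2.

from typing import Dict, List

def build_classification_db(object_list: List[str]) -> Dict[str, List[str]]:
    """Phase 1: for each named object, gather web data and build a
    model. In this toy stand-in the 'model' is just the keyword list."""
    db = {}
    for name in object_list:
        # Real system: query an image search engine with `name`,
        # filter the noisy results, and train a recognizer.
        db[name] = name.lower().split()
    return db

def search_environment(db: Dict[str, List[str]],
                       observations: Dict[str, str]) -> Dict[str, str]:
    """Phase 2: scan the environment (here, text-labelled camera
    views) and record where each listed object was first found."""
    found = {}
    for location, visible in observations.items():
        for obj, keywords in db.items():
            if any(kw in visible.lower() for kw in keywords):
                found.setdefault(obj, location)
    return found

# Toy run over a two-location "indoor environment".
db = build_classification_db(["red stapler", "coffee mug"])
found = search_environment(db, {
    "desk": "a red stapler next to some papers",
    "shelf": "books and a coffee mug",
})
```

The point of the two-phase structure is that phase 1 happens with no
human-curated training data: the classification database must be built
autonomously from whatever the web search returns.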

Integrating a mobile robot with the vision research adds an
interesting layer of complexity that a purely computer-vision
competition would not offer. Semantic understanding of individual
objects can also be extended to scene understanding: scene context can
then guide the search toward the areas where each object is most
likely to be found (e.g. a stapler is usually on a desk rather than on
the floor or a wall).
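A simple way to picture scene context guiding the search is a prior
over object/location pairs, searched most-probable-first. The names and
probabilities below are invented for illustration only:

```python
# Illustrative scene-context prior for guiding an object search.
# The object/location names and probabilities are invented for
# illustration; a real system would learn such priors from data.

from typing import List

PRIOR = {
    "stapler":     {"desk": 0.7, "shelf": 0.2, "floor": 0.1},
    "wastebasket": {"floor": 0.8, "desk": 0.1, "shelf": 0.1},
}

def search_order(obj: str) -> List[str]:
    """Locations to visit, most probable first."""
    prior = PRIOR[obj]
    return sorted(prior, key=prior.get, reverse=True)

order = search_order("stapler")  # visit the desk before shelf or floor
```

Even a crude prior like this lets the robot spend its limited search
time in the rooms and surfaces where a listed object is plausible.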

We encourage anyone in these research fields to participate and help
us advance the state of the art in image understanding research. At
the end of the competition, we will hold a workshop so that the
specific technical aspects of each entry can be presented and
discussed.

The competition will be held as part of the Mobile Robot Competition
and Exhibition at The Twenty-Second National Conference on Artificial
Intelligence (AAAI) in Vancouver, British Columbia, Canada.

Event dates are: July 22-25, 2007

For more information, please visit: