Keynote Speakers


Three plenary talks will be delivered by leading researchers and engaging speakers:

From biology to robots: the RobotCub project

Dr. Giorgio Metta

1 - 2 pm

Wednesday 1st April, 2010 @ Phoenix Square

Simulating and taking inspiration from biology is certainly not a new endeavor in robotics (Atkeson et al., 2000; Sandini, 1997; Metta, 1999). However, the use of humanoid robots as tools to study human cognitive skills is a relatively new area of research, one which fully acknowledges the importance of embodiment and interaction with the environment for the emergence of motor skills, perception, sensorimotor coordination, and cognition (Lungarella, Metta, Pfeifer, & Sandini, 2003).

The guiding philosophy - and main motivation - is that cognition cannot be hand-coded but has to be the result of a developmental process through which the system becomes progressively more skilled and acquires the ability to understand events, contexts, and actions, initially dealing with immediate situations and increasingly acquiring a predictive capability (Vernon, Metta, & Sandini, 2007).

To pursue this research, a humanoid robot (iCub) has been developed as a result of the collaborative project RobotCub, supported by the European Commission through the "Cognitive Systems and Robotics" Unit E5 of IST. The iCub has been designed with the goal of studying human cognition and therefore embeds a sophisticated set of sensors providing vision, touch, proprioception, and audition, as well as a large number of actuators (53) providing dexterous motor abilities. The project is "open", in the sense of open source, to build a critical mass of research groups contributing their ideas and algorithms to advance knowledge on human cognition (Nosengo, 2009).

The aim of the talk will be: i) to present the approach and motivation, ii) to illustrate the technological choices made, and iii) to present some initial results obtained.

Artificial Ethical Intelligence: Technical, Conceptual and Ethical Challenges

Prof. Steve Torrance

1 - 2 pm

Tuesday 30th March, 2010 @ Phoenix Square

Artificial Ethical Intelligence (AEI) is to ethical intelligence what AI is to general intelligence.  As with AI proper, the field of AEI has many ramifications, and presents a number of challenges.  It is also of growing urgency, as cognitive and robotic agent systems become increasingly autonomous, and are designed to function in increasingly open and unstructured environments.  The technical challenges that are raised by this field can only be addressed against the background of a number of conceptual, or philosophical, questions  which concern the ways in which AI and artificial ethics research impacts upon issues in traditional ethics, and vice versa. 

For instance:  What is the relation between ‘functional’ autonomy, considered as a goal in artificial agent research, and ‘moral’ autonomy as elaborated by moral philosophers over centuries? To what extent could an AEI research programme provide us with a full-blooded artificial ethical agent?  If only rather impoverished approximations to full ethical agency are possible – at least in the short run – then might artificial moral agents be worse than useless in real-life situations?  Also, is ethical intelligence all there is to ethical agency, or are there affective, phenomenological, social or other factors which are perhaps less amenable to AI methods?  How much genuine moral responsibility might artificial ethical agents (in the shorter or the longer term) be accorded? 

There are also more speculative questions that address the wider implications of this kind of research.  Given current cognitive technology R&D platforms, could there also be artificial ethical ‘patients’ (or recipients) as well as ethical ‘agents’ (or producers) – i.e. artificial agents towards which we have responsibilities, rather than ones which may have duties, responsibilities, etc. to us?  Does it make sense to envisage artificial agents as either ethical agents or ethical recipients if they are not also taken to be conscious creatures?  How might social, economic, legal and cultural relationships change if our moral constituency is broadened to include this new kind of ethical actor? 

This talk will address these and other issues.  Many of these questions have been implicit in AI from the beginning, but now they have to be addressed with more focus and clarity as AI enters into the age of Artificial Ethics.

Numbers to die 4

Prof. Harold Thimbleby

Swansea University

1 - 2 pm

Monday 29th March, 2010 @ Phoenix Square

When learning to program, students find it a bit confusing that a sequence of characters like 12.3 needs converting into a number, one which just happens to be 12.3 too!

By all the evidence, professional programmers still find it confusing: almost every system has number parsing flaws, and many systems (from common programming languages, office software, and medical devices to handheld calculators) even have multiple, inconsistent defects. In safety critical domains, such problems may lead to death as well as to incorrect attribution of user error. The good news is that correct parsing can reduce risk; in particular, we show how to achieve an estimated halving of death rates from "out by ten" drug overdoses, by knowing how to correctly convert sequences of characters like 12.3 into 12.3 and not, say, 123.
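The "out by ten" failure mode described above can be sketched in a few lines. This is a hypothetical illustration, not code from the talk: the function names and the permissive parser's behaviour are assumptions chosen to mirror the kind of defect reported, where a parser silently discards characters it does not recognise instead of rejecting the input.

```python
import re

# A well-formed decimal numeral: digits, optionally a point and more digits.
STRICT = re.compile(r"\d+(?:\.\d+)?")

def parse_strict(text: str) -> float:
    """Accept only a well-formed decimal numeral; raise on anything else."""
    if not STRICT.fullmatch(text):
        raise ValueError(f"malformed numeral: {text!r}")
    return float(text)

def parse_permissive(text: str) -> float:
    """Keep digits and the first '.'; silently ignore everything else.
    A slip like '12,3' (comma for point) quietly becomes 123.0 -- ten
    times the intended value."""
    kept, seen_dot = [], False
    for ch in text:
        if ch.isdigit():
            kept.append(ch)
        elif ch == "." and not seen_dot:
            kept.append(ch)
            seen_dot = True
    if not any(c.isdigit() for c in kept):
        raise ValueError(f"no digits in: {text!r}")
    return float("".join(kept))
```

Here `parse_strict` surfaces the error to the user, while `parse_permissive` makes a silent guess; the talk's argument is that only the first behaviour is defensible in a safety-critical device such as a drug-infusion pump.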

People attending this keynote lecture will learn how to get spreadsheets to generate arbitrary expense claims for attending this conference, but the serious, really scary thought is: what else is going wrong with our programming and with our education of programmers if we can't even get the elementary stuff right?

The research reported was supported by a Royal Society Leverhulme Trust Senior Research Fellowship.