AISB event Bulletin Item

CALL FOR PAPERS: Workshop on Reinforcement Learning with Generalized Feedback: Beyond Numeric Rewards, Sep 23rd 2013, Prague, CZECH REPUBLIC

Part of ECML/PKDD 2013


Reinforcement learning is traditionally formalized within the /Markov
Decision Process/ (MDP) framework: by taking actions in a stochastic and
possibly unknown environment, an agent moves between the states of that
environment and, after each action, receives a numeric, possibly delayed
reward signal. The agent's learning task then consists of developing a
strategy that allows it to act optimally, that is, to devise a policy
(a mapping from states to actions) that maximizes its long-term
(cumulative) reward.
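
As a purely illustrative aside (not part of the original call), the
following minimal Python sketch instantiates this standard setting:
tabular Q-learning on a small, made-up two-state MDP. The toy transition
and reward tables and all parameter values are assumptions chosen only
to make the MDP formulation concrete.

    import random

    # Hypothetical toy MDP: two states (0, 1) and two actions (0, 1).
    # P[s][a] is a list of (probability, next_state); R[s][a] is the reward.
    P = {0: {0: [(1.0, 0)], 1: [(0.8, 1), (0.2, 0)]},
         1: {0: [(1.0, 0)], 1: [(1.0, 1)]}}
    R = {0: {0: 0.0, 1: 0.0}, 1: {0: 0.0, 1: 1.0}}

    def step(s, a):
        # Sample the next state according to P and return it with the reward.
        r, cum = random.random(), 0.0
        for p, s2 in P[s][a]:
            cum += p
            if r <= cum:
                return s2, R[s][a]
        return P[s][a][-1][1], R[s][a]

    # Tabular Q-learning: learn a policy (mapping states to actions) that
    # maximizes the long-term (cumulative) reward described above.
    Q = {s: {a: 0.0 for a in (0, 1)} for s in (0, 1)}
    alpha, gamma, eps = 0.1, 0.95, 0.1
    s = 0
    for _ in range(20000):
        a = random.choice((0, 1)) if random.random() < eps else max(Q[s], key=Q[s].get)
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2].values()) - Q[s][a])
        s = s2

    print({s: max(Q[s], key=Q[s].get) for s in Q})   # greedy learned policy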

In recent years, different generalizations of the standard setting of
reinforcement learning have emerged; in particular, several attempts
have been made to relax the quite restrictive requirement for numeric
feedback and to learn from different types of more flexible training
information. Examples of generalized settings of that kind include
apprenticeship learning, inverse reinforcement learning, multi-objective 
reinforcement learning, and preference-based reinforcement learning.
Learning in these generalized frameworks can be considerably harder than 
learning in MDPs because rewards cannot be easily aggregated over 
different states. 
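
To make the contrast with numeric rewards concrete, here is a second
purely illustrative Python sketch, again an assumption of ours rather
than anything prescribed by the workshop: a learner that never observes
numeric rewards, only pairwise preferences between trajectories, and
fits per-state utilities with a simple Bradley-Terry-style logistic
model on those comparisons.

    import math, random

    # Hypothetical preference-based setting: a teacher states which of two
    # trajectories it prefers; the learner fits per-state utilities u.
    n_states = 4
    true_u = [0.0, 1.0, 2.0, 3.0]      # hidden utilities, unknown to the learner

    def trajectory(length=5):
        return [random.randrange(n_states) for _ in range(length)]

    def score(u, traj):
        return sum(u[s] for s in traj)

    def prefer(ta, tb):
        # Noiseless teacher: 1.0 if trajectory ta is preferred over tb.
        return 1.0 if score(true_u, ta) > score(true_u, tb) else 0.0

    u = [0.0] * n_states               # learned utilities
    lr = 0.05
    for _ in range(5000):
        ta, tb = trajectory(), trajectory()
        y = prefer(ta, tb)
        p = 1.0 / (1.0 + math.exp(score(u, tb) - score(u, ta)))  # P(ta preferred)
        g = y - p                      # gradient of the log-likelihood
        for s in ta:
            u[s] += lr * g
        for s in tb:
            u[s] -= lr * g

    print([round(x, 2) for x in u])    # recovers the ordering of true_u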


The most important goal of this workshop is to help unify and
streamline research on generalizations of standard reinforcement
learning, which, for the time being, seems to be pursued in a rather
disconnected manner. Indeed, many of the extensions and generalizations
discussed above still lack a sound theoretical foundation, let alone a
generally accepted underlying framework comparable to Markov Decision
Processes for conventional reinforcement learning. Moreover, many of
the commonalities shared by these generalizations have apparently not
yet been recognized or explored. A formalization in terms of preferences
may provide such a theoretical underpinning. Ideally, the workshop will
help participants identify common ground in their work, thereby moving
the field toward a theoretical foundation for reinforcement learning
with generalized feedback.

Apart from fostering theoretical developments of that kind, we are also
interested in identifying and exchanging applications and problems that
may serve as benchmarks for qualitative or preference-based
reinforcement learning, in the way that cart-pole balancing and the
mountain car serve as benchmarks for classical reinforcement learning.


Topics of interest include but are not limited to

  * novel frameworks for reinforcement learning beyond MDPs
  * algorithms for learning from preferences and non-numeric,
    qualitative, or structured feedback
  * theoretical results on the learnability of optimal policies,
    convergence of algorithms in qualitative settings, etc.
  * applications and benchmark problems for reinforcement learning in
    non-standard settings.


Please e-mail submissions in Springer LNCS format to both workshop chairs.
There is no strict page limit, but we encourage authors to stay within
the page limit of the main conference (16 pages). We particularly
encourage short papers (8 pages or fewer).

Should there be a sufficient number of high-quality submissions, we will
also consider a post-workshop publication, such as a special issue or a
book. We would like to emphasize, however, that the ambition of the
workshop is not to collect mature work ready for publication but to
provide a forum for exchange among researchers, with the opportunity to
discuss ongoing developments and work in progress.


Paper deadline: /June 28, 2013/
Notifications:  /July 19, 2013/
Final versions: /August 2, 2013/
Workshop date:  /September 23, 2013/


Workshop chairs:

  * Johannes Fürnkranz (TU Darmstadt)
  * Eyke Hüllermeier (Universität Marburg)