
Notice

AISB event Bulletin Item

CALL FOR PAPERS: Workshop on Reinforcement Learning with Generalized Feedback: Beyond Numeric Rewards, Sep 23rd 2013, Prague, CZECH REPUBLIC

http://www.ke.tu-darmstadt.de/events/PBRL-13/pbrl-13.html

Part of ECML/PKDD 2013

BACKGROUND

Reinforcement learning is traditionally formalized within the /Markov
Decision Process/ (MDP) framework: By taking actions in a stochastic and
possibly unknown environment, an agent moves between states in this
environment; moreover, after each action, it receives a numeric,
possibly delayed reward signal. The agent's learning task then consists
of developing a strategy that allows it to act optimally, that is, to
devise a policy (mapping states to actions) that maximizes its long-term
(cumulative) reward.
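For readers less familiar with this standard setting, the following minimal
sketch (purely illustrative and not part of the workshop material; the toy
chain environment and all parameter values are assumptions) shows tabular
Q-learning on a small MDP: the agent improves state-action value estimates
from numeric reward signals and derives a greedy policy from them.

    import random

    # Illustrative sketch only: tabular Q-learning on a toy 5-state chain.
    # The agent observes only numeric rewards, the standard MDP feedback
    # that this workshop aims to generalize.
    N_STATES = 5                    # states 0..4, state 4 is the rewarding goal
    ACTIONS = (-1, +1)              # move left / move right
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    def step(state, action):
        """Deterministic chain dynamics with a terminal goal state."""
        next_state = max(0, min(N_STATES - 1, state + action))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        return next_state, reward, next_state == N_STATES - 1

    def greedy(state, Q):
        """Greedy action with random tie-breaking."""
        best = max(Q[(state, a)] for a in ACTIONS)
        return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for episode in range(500):
        state, done = 0, False
        while not done:
            action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state, Q)
            next_state, reward, done = step(state, action)
            # Q-learning update: move the estimate toward reward + discounted best value
            target = reward + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (target - Q[(state, action)])
            state = next_state

    print({s: greedy(s, Q) for s in range(N_STATES - 1)})  # expect +1 ("move right") everywhere

Note how the numeric reward is indispensable here: the update aggregates it
across states through the discounted maximum, which is precisely the step
that becomes problematic once feedback is no longer numeric.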

In recent years, different generalizations of the standard setting of
reinforcement learning have emerged; in particular, several attempts
have been made to relax the quite restrictive requirement for numeric
feedback and to learn from different types of more flexible training
information. Examples of generalized settings of that kind include
apprenticeship learning, inverse reinforcement learning, multi-objective 
reinforcement learning, and preference-based reinforcement learning.
Learning in these generalized frameworks can be considerably harder than 
learning in MDPs because rewards cannot be easily aggregated over 
different states. 
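To make the contrast with numeric rewards concrete, the sketch below (again
purely illustrative; the one-parameter policy, the preference oracle, and all
names are assumptions, not taken from the call) performs a simple policy
search driven only by pairwise trajectory preferences, the kind of non-numeric
feedback studied in preference-based reinforcement learning.

    import random

    # Illustrative sketch only: policy search driven by pairwise trajectory
    # preferences instead of numeric rewards (all names here are hypothetical).

    def rollout(p_right, horizon=20):
        """Generate a trajectory of states under a one-parameter random policy."""
        state, trajectory = 0, []
        for _ in range(horizon):
            state += 1 if random.random() < p_right else -1
            trajectory.append(state)
        return trajectory

    def teacher_prefers(traj_a, traj_b):
        """Preference oracle: True if traj_a is preferred over traj_b.
        The hidden criterion is reaching higher states, but the learner
        never observes a numeric reward, only the outcome of the comparison."""
        return max(traj_a) > max(traj_b)

    # Simple hill climbing on the policy parameter, guided only by preferences
    param = 0.5
    for _ in range(200):
        candidate = min(1.0, max(0.0, param + random.uniform(-0.1, 0.1)))
        if teacher_prefers(rollout(candidate), rollout(param)):
            param = candidate       # keep whichever policy the teacher preferred

    print(round(param, 2))          # tends towards 1.0, i.e. "move right" often

The point of the sketch is that the learner never aggregates numeric rewards
over states; all information enters through the comparison, which is exactly
the kind of feedback the generalized settings above have to cope with.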


GOALS AND OBJECTIVES

The most important goal of this workshop is to help unify and
streamline research on generalizations of standard reinforcement
learning, which, for the time being, seems to be pursued in a rather
disconnected manner. Indeed, many of the extensions and generalizations
discussed above are still lacking a sound theoretical foundation, let
alone a generally accepted underlying framework comparable to Markov
Decision Processes for conventional reinforcement learning. Besides,
many of the commonalities shared by these generalizations have
apparently not been recognized or explored so far. A formalization in
terms of preferences may provide such a theoretical underpinning.
Ideally, the workshop will help the participants to identify some common
ground of their work, thereby helping the field move toward a
theoretical foundation of reinforcement learning with generalized feedback.

Apart from fostering theoretical developments of that kind, we are also
interested in identifying and exchanging interesting applications and
problems that may serve as benchmarks for qualitative or
preference-based reinforcement learning (such as cart-pole balancing or
the mountain car for classical reinforcement learning).


TOPICS OF INTEREST

Topics of interest include but are not limited to

  * novel frameworks for reinforcement learning beyond MDPs
  * algorithms for learning from preferences and non-numeric,
    qualitative, or structured feedback
  * theoretical results on the learnability of optimal policies,
    convergence of algorithms in qualitative settings, etc.
  * applications and benchmark problems for reinforcement learning in
    non-standard settings.


SUBMISSIONS

Please e-mail submissions in Springer LNCS format to both workshop chairs.
There is no strict page limit, but we encourage authors to stay within
the page limit of the main conference (16 pages). We particularly
encourage short papers (8 pages or fewer).

Should there be a sufficient number of high-quality submissions, we will
also consider a post-workshop publication, such as a special issue or a book.
We would like to emphasize, however, that the ambition of the workshop is not
to collect mature work ready for publication, but to provide a forum for
exchange among researchers, with the possibility to discuss ongoing
developments and work in progress.


IMPORTANT DATES

Paper deadline: /June 28, 2013/
Notifications:  /July 19, 2013/
Final versions: /August 2, 2013/
Workshop date:  /September 23, 2013/


WORKSHOP CHAIRS

  * Johannes Fürnkranz 
    (TU Darmstadt)
  * Eyke Hüllermeier 
    (Universität Marburg)