Notice

AISB event Bulletin Item

CALL FOR PAPERS: Workshop on Reinforcement Learning with Generalized Feedback: Beyond Numeric Rewards, September 23, 2013, Prague, Czech Republic

http://www.ke.tu-darmstadt.de/events/PBRL-13/pbrl-13.html

Part of ECML/PKDD 2013

BACKGROUND

Reinforcement learning is traditionally formalized within the Markov
Decision Process (MDP) framework: by taking actions in a stochastic and
possibly unknown environment, an agent moves between states of that
environment and, after each action, receives a numeric, possibly
delayed reward signal. The agent's learning task is to act optimally,
that is, to devise a policy (a mapping from states to actions) that
maximizes its long-term (cumulative) reward.
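
For concreteness, the following is a minimal sketch of tabular
Q-learning, one standard algorithm in this setting. The environment
interface (reset, step, actions) and all parameter values are
illustrative assumptions on our part, not part of this call.

    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
        # Tabular Q-learning: learns a policy from numeric rewards.
        # Assumed interface: env.reset() -> state,
        # env.step(action) -> (next_state, reward, done),
        # env.actions is a list of discrete actions.
        Q = defaultdict(float)  # Q[(state, action)] -> estimated return
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                # Epsilon-greedy action selection.
                if random.random() < epsilon:
                    action = random.choice(env.actions)
                else:
                    action = max(env.actions, key=lambda a: Q[(state, a)])
                next_state, reward, done = env.step(action)
                # Temporal-difference update: the numeric reward is
                # exactly the feedback signal this workshop generalizes.
                best_next = max(Q[(next_state, a)] for a in env.actions)
                Q[(state, action)] += alpha * (
                    reward + gamma * best_next - Q[(state, action)])
                state = next_state
        # Greedy policy derived from the learned Q-values.
        return lambda s: max(env.actions, key=lambda a: Q[(s, a)])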

In recent years, different generalizations of the standard setting of
reinforcement learning have emerged; in particular, several attempts
have been made to relax the quite restrictive requirement for numeric
feedback and to learn from different types of more flexible training
information. Examples of generalized settings of that kind include
apprenticeship learning, inverse reinforcement learning, multi-objective 
reinforcement learning, and preference-based reinforcement learning.
Learning in these generalized frameworks can be considerably harder than 
learning in MDPs because rewards cannot be easily aggregated over 
different states. 
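
To make the contrast concrete, the following is a toy sketch of
preference-based policy search, in which the only feedback is a
pairwise comparison of trajectories rather than a numeric reward. The
names sample_trajectory, prefer, and policies are hypothetical
placeholders, not an established API.

    import random

    def preference_based_search(policies, sample_trajectory, prefer,
                                rounds=100):
        # Toy preference-based reinforcement learning: no numeric
        # reward is ever observed. prefer(t1, t2) returns True if
        # trajectory t1 is preferred over t2 (e.g. by a human critic);
        # sample_trajectory(policy) rolls the policy out once.
        wins = {p: 0 for p in policies}
        for _ in range(rounds):
            p1, p2 = random.sample(policies, 2)
            t1, t2 = sample_trajectory(p1), sample_trajectory(p2)
            # Qualitative feedback: a comparison, not a return value.
            if prefer(t1, t2):
                wins[p1] += 1
            else:
                wins[p2] += 1
        # Return the policy that won the most pairwise comparisons.
        return max(policies, key=lambda p: wins[p])

Because such comparisons need not be induced by any underlying numeric
reward, they cannot in general be aggregated over states, which is one
source of the added difficulty noted above.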


GOALS AND OBJECTIVES

The most important goal of this workshop is to help unify and
streamline research on generalizations of standard reinforcement
learning, which, for the time being, seems to be pursued in a rather
disconnected manner. Indeed, many of the extensions and generalizations
discussed above still lack a sound theoretical foundation, let alone a
generally accepted underlying framework comparable to Markov Decision
Processes for conventional reinforcement learning. Moreover, many of
the commonalities shared by these generalizations have apparently not
yet been recognized or explored. A formalization in terms of
preferences may provide such a theoretical underpinning. Ideally, the
workshop will help participants identify common ground in their work,
thereby helping the field move toward a theoretical foundation for
reinforcement learning with generalized feedback.

Apart from fostering theoretical developments of that kind, we are
also interested in identifying and exchanging applications and
problems that may serve as benchmarks for qualitative or
preference-based reinforcement learning, in the way that cart-pole
balancing and the mountain car serve as benchmarks for classical
reinforcement learning.


TOPICS OF INTEREST

Topics of interest include, but are not limited to:

  * novel frameworks for reinforcement learning beyond MDPs
  * algorithms for learning from preferences and non-numeric,
    qualitative, or structured feedback
  * theoretical results on the learnability of optimal policies,
    convergence of algorithms in qualitative settings, etc.
  * applications and benchmark problems for reinforcement learning in
    non-standard settings.


SUBMISSIONS

Please e-mail submissions in Springer LNCS format to both workshop
chairs. There is no strict page limit, but we encourage authors to
stay within the page limit of the main conference (16 pages). We
particularly encourage short papers (8 pages or fewer).

Should there be a sufficient number of high-quality submissions, we
will also consider a post-workshop publication, such as a special
issue or a book. We would like to emphasize, however, that the
ambition of the workshop is not to collect mature work ready for
publication, but to provide a forum for exchange among researchers,
with the opportunity to discuss ongoing developments and work in
progress.


IMPORTANT DATES

Paper deadline: June 28, 2013
Notifications:  July 19, 2013
Final versions: August 2, 2013
Workshop date:  September 23, 2013


WORKSHOP CHAIRS

  * Johannes Fürnkranz
    (TU Darmstadt)
  * Eyke Hüllermeier
    (Universität Marburg)