AISB event Bulletin Item

CFP: Workshop on Evaluating Architectures for Intelligence

Dear Colleague,

We are happy to announce a call for submissions to the AAAI 2007 Workshop on

		Evaluating Architectures for Intelligence

Details are below. Please distribute to all interested parties.

Purpose and Scope

Cognitive architectures form an integral part of robots and
agents. Architectures structure and organize the knowledge and
algorithms used by the agents to select actions in dynamic
environments, plan and solve problems, learn, and coordinate with
others. Architectures enable intelligent behavior by agents, and serve
to integrate general capabilities expected of an intelligent agent
(e.g. planning and learning), to implement and test theories about
natural or synthetic agent cognition, and to explore
domain-independent mechanisms for intelligence.

As AI research has improved in formal and empirical rigor, traditional
evaluation methodologies for architectures have sometimes proved
insufficient. On the formal side, rigorous analysis has often proved
elusive; we seem to be missing the notation required for formally
proving properties of architectures. On the empirical side,
experiments which demonstrate generality are notoriously expensive to
perform, and are not sufficiently informative. And at a high level,
evaluation is difficult because the criteria are not well defined: Is
it generality? Ease of programmability? Compatibility with data from
biology and psychology? Applicability in real systems?

Recognizing that scientific progress depends on the ability to conduct
informative evaluation (by experiment or formal analysis), this
workshop will address the methodologies needed for evaluating
architectures. The focus is on evaluation methodology, rather than
specific architectures; there are many researchers investigating
architectures, but surprisingly little published work on evaluation
methodology. Thus the workshop's immediate goal is to generate
discussion of a wide spectrum of evaluation challenges and methods for
addressing them. The next step is to harness such discussions to
propose guidelines for the evaluation of architectures that would be
acceptable to the AI community and allow researchers to evaluate both
their own work and the progress of others. We believe such guidelines
will facilitate the collection of objective and reproducible evidence
of the depth and breadth of an architecture's support for cognition,
and its relationship to human or other natural cognition. We intend to
publish the results in a special issue of an international journal and
to archive presentation slides and explanatory material on an active
web site.

Key Issues for Discussion

The following key questions will be raised to motivate the workshop
discussion, with the goal of providing answers (or at least steps
towards answers) within the workshop:
o What are the underlying research hypotheses one explores with architectures?
o Which functions/characteristics turn an architecture into an
  architecture supporting intelligence?
o How are architectures to be compared in an informative manner?
o What evaluation methods are needed for different types of cognitive
  architectures?
o What are the criteria and scales of evaluation?
o How should we validate the design of a cognitive architecture?
o Are there any relevant formal methods? Can we prove properties of
  architectures?
o Can we develop a common ontology for describing architectures and/or
  the various sets of requirements against which they can be evaluated?
o How can data-sets and benchmarks (standardized tasks) be used to
  evaluate architectures? Are there useful case-studies?
o How can we determine what architectures to use for different tasks
  or environments? Are there any trade-offs involved?
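To make the benchmark question above concrete, here is a minimal, hypothetical sketch of how standardized tasks might yield comparable scores across architectures. The `Policy` interface, the toy tasks, and the scoring rule are illustrative assumptions only, not anything proposed by the workshop:

```python
# Hypothetical benchmark harness (illustrative assumption): run each
# "architecture" on a set of standardized tasks and tabulate scores.
from typing import Callable, Dict, List

# An architecture is reduced here to a policy: observation -> action.
Policy = Callable[[int], int]

def run_task(policy: Policy, targets: List[int]) -> float:
    """Score a policy by the fraction of observations it answers correctly."""
    hits = sum(1 for obs, target in enumerate(targets) if policy(obs) == target)
    return hits / len(targets)

def evaluate(architectures: Dict[str, Policy],
             tasks: Dict[str, List[int]]) -> Dict[str, Dict[str, float]]:
    """Return a score table: architecture name -> task name -> score in [0, 1]."""
    return {name: {task: run_task(policy, targets)
                   for task, targets in tasks.items()}
            for name, policy in architectures.items()}

# Two toy "architectures" and two toy tasks, purely for illustration.
archs: Dict[str, Policy] = {"echo": lambda obs: obs,
                            "constant": lambda obs: 0}
tasks: Dict[str, List[int]] = {"identity": [0, 1, 2, 3],
                               "all-zero": [0, 0, 0, 0]}
scores = evaluate(archs, tasks)
```

Even this toy harness exposes the trade-off raised above: each architecture wins on the task matching its bias, so the choice of task suite, not just the aggregate number, drives the comparison.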

Format and Submissions

The workshop will be composed of invited and contributed talks on
evaluation methodologies, interleaved with panels, and moderated
discussions. We seek submission of extended abstracts (2 pages) and
short position papers (4 pages) that discuss evaluation methodologies
for architectures. 

Submissions should clearly address architecture evaluation issues and
methods and explicitly relate to one or more of the questions posed
above. Submissions that discuss specific architectures are only
acceptable if they discuss evaluation case-studies. A selected group
of contributors will be invited to present their position, to
participate in panels, and/or to moderate group discussions.

Submissions, in AAAI format, should be emailed by April 15, 2007, to
Gal Kaminka and Catherina Burghart.

Important Dates
o Submission of extended abstracts:        April 15, 2007 
o Notification, selection of speakers:     May 7, 2007 
o Camera-ready copy of workshop material:  May 15, 2007 
o Workshop at AAAI 2007:                   July 22-23, 2007 

The workshop is co-chaired by Gal A. Kaminka (Bar Ilan University,
Israel) and Catherina R. Burghart (University of Karlsruhe,
Germany). The organizing committee additionally includes:

o Kevin Gluck, Air Force Research Laboratory, USA
o Pat Langley, Stanford University, USA
o Brian Logan, University of Nottingham, UK
o Ralf Mikut, Karlsruhe Institute for Technology, Germany
o Praveen Paritosh, Northwestern University, USA
o Bilge Say, Middle East Technical University, Turkey
o Robert Wray, Soar Technology, Inc., USA