
IEEE-CEC 2013

 

 

Special session: Evolution for agent control

 

 

 

Overview

 

The problem of automatically obtaining agent controllers, whether the agents are software agents or physical agents such as robots, has been tackled in computational intelligence with different strategies that try to avoid design “by hand”, and these strategies can be grouped into two points of view. On the one hand, systems with learning capability have been designed; on the other, evolutionary processes have been run to obtain complete systems that perform the required tasks, which, in the robotics case, gave rise to the evolutionary robotics research line. That line represents an attempt to minimize the designer’s intervention through the use of simulated evolution to obtain, in an automatic manner, the control system that defines the behaviours of the robot.

Both proposals have had partial success. Control systems with learning capability have been obtained in cases where training sets could be clearly defined, which is difficult in unstructured environments, where training examples generally appear in an unsorted way and amid large amounts of irrelevant information. With the second alternative, evolutionary systems have solved problems in simple environments; in complex ones, however, evolution can require huge processing times, in addition to lacking adaptability to fast changes. The combination of the two alternatives can incorporate the advantages of both. The aim of this special session is to present recent advances in the application of evolutionary methods and natural computing algorithms for agent control, together with their integration with different learning methods.
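
As a minimal, purely illustrative sketch of this kind of hybrid (it is not part of the call; the toy task, parameters, and function names below are hypothetical), the Python fragment evolves a real-valued genome while a hill-climbing “lifetime learning” phase refines each individual before its fitness is assessed. In the Baldwinian variant the learned changes only influence selection; in the Lamarckian variant they are also written back into the genome.

import random

GENES, POP, GENS, LEARN_STEPS = 10, 30, 50, 20
TARGET = [0.5] * GENES                     # toy task: match a target controller

def fitness(genome):
    # Higher is better: negative squared error to the toy target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def learn(genome, steps=LEARN_STEPS):
    # Lifetime learning: simple stochastic hill climbing on the phenotype.
    best = list(genome)
    for _ in range(steps):
        trial = [g + random.gauss(0, 0.05) for g in best]
        if fitness(trial) > fitness(best):
            best = trial
    return best

def evolve(lamarckian=False):
    pop = [[random.uniform(-1, 1) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENS):
        scored = []
        for genome in pop:
            learned = learn(genome)                    # learning shapes selection
            kept = learned if lamarckian else genome   # Lamarck: inherit what was learned
            scored.append((fitness(learned), kept))
        scored.sort(key=lambda s: s[0], reverse=True)
        parents = [g for _, g in scored[:POP // 2]]
        # Mutation-only reproduction keeps the sketch short.
        offspring = [[g + random.gauss(0, 0.1) for g in p]
                     for p in random.choices(parents, k=POP - len(parents))]
        pop = parents + offspring
    return max(pop, key=fitness)

if __name__ == "__main__":
    print("Baldwinian best fitness:", round(fitness(evolve()), 4))
    print("Lamarckian best fitness:", round(fitness(evolve(lamarckian=True)), 4))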

 

Topic areas include (but are not restricted to):

 

  Aspects of the combination of evolution and learning:

      Lamarckian processes and Baldwinian processes.

      Memetic algorithms.

      New proposals for hybridizing natural computing algorithms such as DE, PSO, ant algorithms, or artificial immune systems with heuristics or with different agent learning methods.

      Incorporation of reinforcement, supervised, or unsupervised learning, or a combination of these. Use of connectionist learning in ANN control, or symbolic learning methods such as Case-Based Reasoning or Inductive Logic Programming.

  Use of multi-objective solutions for agent control.

  Co-evolution of agent control with other aspects.

  Use in robot control: Evolutionary robotics.


Paper Submission:

Manuscripts should be prepared according to the standard format and page limit of regular papers specified in CEC2013 and submitted through the CEC2013 website: http://www.cec2013.org/. Special session papers will be treated in the same way as regular papers and included in the conference proceedings.

 

Important Dates:

 

Organizers:

 

José Santos

santos@udc.es

University of A Coruña, Spain


Fernando Montes

fmontes@uv.mx

Universidad Veracruzana, México


 

 

 

Program Committee:

Josh Bongard, University of Vermont (USA)

Angelo Cangelosi, University of Plymouth (UK)

Stéphane Doncieux, Institut des Systèmes Intelligents et de Robotique (France)

Nicolas Bredeche, Université Pierre et Marie Curie – UPMC (France)

Onofrio Gigliotta, University of Naples Federico II (Italy)

John Hallam, University of Southern Denmark (Denmark)

Inman Harvey, University of Sussex (UK)

Luis Felipe Marín Urías, Universidad Veracruzana (México)

Orazio Miglino, Università degli Studi di Napoli Federico II (Italy)

Fernando Montes, Universidad Veracruzana (México)

Andrew L. Nelson, Androtics LLC - Tucson (USA)

Stefano Nolfi, ISTC-CNR (Italy)

Carlos Alberto Ochoa Ortiz, Universidad Autónoma de Ciudad Juárez (México)

José Santos, University of A Coruña (Spain)