Organisms in the natural world display remarkable abilities of adaptation, decision-making, and control in uncertain, and often hostile, environments. The means by which they achieve this impressive performance are poorly understood, although there is little doubt that much of it is achieved through evolution and learning, occurring at the cellular, individual, and population levels. In contrast to natural systems, the performance of many engineered systems in such tasks is often brittle and inflexible.
Despite these limitations, recent years have witnessed impressive advances in addressing some of these shortcomings through learning in complex distributed artificial systems, where the autonomous structuring of rich adaptive representations plays a key role. My main current focus is on learning and control in such systems within the perception-action cycle, concentrating on reinforcement learning, on transferring knowledge between tasks and domains, and on effective and flexible exploration schemes that subserve both short-term and long-term goals. The long-term goal driving my research is the creation of a conceptual and mathematical framework that will contribute to understanding perception, learning, decision-making, and control in biological and artificial systems.