Modeling Adaptive Autonomous Agents

Pattie Maes

Summary by Yaron Revah

 

One category of Artificial Life research is concerned with modeling and building Adaptive Autonomous Agents. This paper gives an overview of this relatively new field, reflecting on the state of the art of the approach while discussing its current limitations and the open issues that remain to be studied and solved.

The first question to be asked is ‘what is an Adaptive Autonomous Agent?’. An agent is a system that tries to fulfill a set of goals in a complex, dynamic environment. An agent is called autonomous if it operates completely on its own, deciding by itself, based on the input coming from its sensors, what action to take in order to achieve its goals. An agent capable of improving over time at achieving its goals is said to be adaptive.

In light of this rather broad definition, the main problem in building such a system is coming up with an architecture that results in an agent demonstrating adaptive, robust and effective behavior. Just as the definition admits many different kinds of systems, the problem of building one is not well defined: there are no clear directions to explore in order to find a (sub)optimal architecture for an agent with a predefined set of tasks to accomplish or goals to achieve. Over the years, two more specific sub-problems have emerged:

  1. The problem of action selection – how can an agent decide what to do next so as to further the progress towards its multiple time-varying goals?
  2. The problem of learning from experience – how can an agent improve its performance over time based on its experience?
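The action-selection problem above can be made concrete with a small sketch: at each step the agent scores every available action against its multiple, weighted goals and executes the best one. Everything here (the one-dimensional world, the two goals and their weights) is an invented example for illustration, not taken from the paper.

```python
# Toy illustration of action selection: score each candidate action by the
# weighted sum of goal-satisfaction functions over its predicted outcome.
# The world, goals and weights below are hypothetical.

def select_action(state, goals, actions):
    """Pick the action whose resulting state best satisfies the goals."""
    def value(action):
        next_state = action(state)  # predicted effect of taking the action
        return sum(weight * satisfaction(next_state)
                   for satisfaction, weight in goals)
    return max(actions, key=value)

# A one-dimensional world: the agent's state is its position on a line.
actions = [lambda x: x + 1, lambda x: x - 1, lambda x: x]  # right, left, stay

# Two concurrent goals: reach position 5, but also keep away from position 0.
goals = [
    (lambda x: -abs(5 - x), 1.0),  # "reach 5": better the closer to 5
    (lambda x: min(x, 3), 0.5),    # "avoid 0": better up to distance 3
]

state = 2
for _ in range(5):
    state = select_action(state, goals, actions)(state)
print(state)  # → 5: the agent walks toward its dominant goal and stops there
```

Note that nothing sequences the behavior in advance; the trade-off between the two goals is re-evaluated at every step, which is what lets the goals vary over time.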

While researching the field of adaptive autonomous agents, two important insights have evolved that serve as ‘guiding principles’. The first is that looking at the problem of building an intelligent system in its own context can make the task of building it much easier. In other words, the environment a system operates in can help solve the problem of designing the system. A system usually lives in a well-defined world and has to cope with challenges typical of that world; using these characteristics and taking them into account can simplify the problem to be solved. For example, an agent operating in an office should be able to navigate without bumping into things. Once we take into account that the agent is situated in an office environment, the task of navigating in open space is reduced to navigating a closed, flat environment with only a few kinds of typical obstacles (desks, walls, doors, etc.). The agent might even follow people around the office, and thereby diminish the need to avoid obstacles by itself. In a similar manner, the society the agent is part of can help in designing a simple system that exploits this knowledge.

The second ‘guideline’ is that interaction dynamics among simple components can lead to the emergence of complex behavior. In many cases, simple components acting in response to events, or through some kind of feedback, are sufficient to produce the desired behavior. This concept is widely seen in nature, where the actions of many individuals following a very simple set of rules can lead to a society demonstrating complex behavior that is not seen at a smaller scale. Systems based on interaction dynamics are usually more robust and flexible. Since all the components interact to accomplish the task, the breakdown of one does not mean a total collapse of the system; instead, the system degrades gracefully. And as all the components work (interact) together, multiple action-selection paths are explored (perhaps indirectly) in parallel, so the system can adapt more quickly to environmental changes.
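The graceful-degradation point can be shown with a deliberately trivial sketch: many identical components each nudge a shared quantity toward a set-point, so no single component is essential and losing a few merely slows the system down. The scenario is invented for illustration and is not from the paper.

```python
# Graceful degradation in a decentralized system: each component applies the
# same tiny local corrective rule; the collective behavior (regulation toward
# a set-point) emerges from their combined effect. Hypothetical example.

def regulate(value, target, n_components, steps):
    """Run n_components, each nudging the shared value once per step."""
    for _ in range(steps):
        for _ in range(n_components):
            if value < target:
                value += 1   # each component's simple local rule
            elif value > target:
                value -= 1
    return value

full     = regulate(0, 200, n_components=10, steps=30)  # reaches the set-point
damaged  = regulate(0, 200, n_components=6,  steps=30)  # 4 components broken
repaired = regulate(0, 200, n_components=6,  steps=40)  # ...just needs more time
print(full, damaged, repaired)  # → 200 180 200
```

Losing four of ten components does not crash the system; it falls short within the same time budget (180 instead of 200) and still reaches the set-point when given more time, which is the graceful degradation described above.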

Many of the architectures for adaptive autonomous agents that have been proposed have characteristics in common. Systems built using these characteristics tend to be more adaptive and robust. Some of these characteristics are listed below:

  1. Task-Oriented Modules – an agent is seen as a set of competence modules, each one responsible for a small task-oriented competence, connected directly to its relevant sensors and actuators.
  2. Task-Specific Solutions – an agent has no general or task-independent functional modules, such as a general perception module. Each competence module does all the representation, computation, reasoning and execution needed for its particular competence.
  3. Decentralized Control Structure – agent architectures are highly distributed. All of the competence modules operate in parallel, and no single module is in control of the system or of any other module.
  4. Goal-Directed Activity is an Emergent Property – an agent’s activity is not modeled as a deliberative thinking process. Tasks are accomplished through behavior that emerges from interactions among competence modules, and between the agent and the environment.
  5. Role for Learning and Development – these are considered crucial aspects of an adaptive autonomous agent. Building an adaptive system that evolves from a not-so-successful one into a system that achieves its goals is often better than building a successful system that does not change in response to changes in the environment or the tasks.
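One loose way to read the characteristics above in code is to bundle each competence module with its own sensing test and action, and let modules compete for control via a simple activation score instead of being invoked by a central planner. This is an illustrative sketch only, not Maes's actual behavior-network algorithm; all names and the toy world are invented.

```python
# Sketch of a decentralized competence-module architecture: each module owns
# its sensing and acting; no module controls the system or any other module.
# Hypothetical names and world, for illustration only.

class CompetenceModule:
    def __init__(self, name, relevant, act, activation):
        self.name = name
        self.relevant = relevant      # reads only this module's sensors
        self.act = act                # drives only this module's actuators
        self.activation = activation  # how strongly the module wants control

def step(modules, world):
    """Every module checks its own sensors; the most activated relevant
    module fires. There is no central controller or fixed sequencing."""
    candidates = [m for m in modules if m.relevant(world)]
    if candidates:
        max(candidates, key=lambda m: m.activation).act(world)

world = {"obstacle_ahead": True, "position": 0}
modules = [
    CompetenceModule("avoid",
                     relevant=lambda w: w["obstacle_ahead"],
                     act=lambda w: w.update(obstacle_ahead=False),
                     activation=2.0),
    CompetenceModule("advance",
                     relevant=lambda w: not w["obstacle_ahead"],
                     act=lambda w: w.update(position=w["position"] + 1),
                     activation=1.0),
]
for _ in range(3):
    step(modules, world)
print(world)  # → {'obstacle_ahead': False, 'position': 2}
```

The apparently goal-directed sequence (first clear the obstacle, then advance) is never programmed as a sequence; it emerges from each module's local relevance test against the shared environment, which is the emergent goal-directed activity described in point 4.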

 

This paper presents a broad overview of the field of Adaptive Autonomous Agents. As an introduction for someone unacquainted with the subject, it is just fine; readers looking for a more thorough discussion may find it not detailed enough. The paper says nothing about the early days of the field, about problems researchers previously encountered and solved, or about directions that were explored and abandoned. Nor does the author give her view of the current trends in the field – which are expected to be successful and should be explored, and which are not. The paper implicitly presents a favorable opinion of the field, and the author seems to believe that agents will play a major role in the future of ALife and will integrate into our everyday life, performing routine tasks. Perhaps it is a little too early to take sides on whether research on adaptive autonomous agents is about to make a breakthrough, or whether it will be just another attempt to produce artificial intelligence by imitating nature in some ways. According to the paper there are some examples of working agents performing relatively complex tasks, but these are not yet the tasks one would really benefit from delegating to an agent.