Artificial Intelligence: From behavior to algorithms and intelligent agents


In this post, Sebastian Müller, a researcher at the Lamarr Institute for Machine Learning and Artificial Intelligence at the University of Bonn, provides a compact overview of the diverse field of Artificial Intelligence. He highlights the different perspectives within the field and connections with other research areas and explains the abstraction of behavior through the agent model.

Intelligence? Behavior vs. Thinking, Imitation vs. Optimization

Artificial intelligence (AI) deals with the algorithmization of cognitive functions—meaning the search for formal computational rules (algorithms) by which a computer can solve specific or abstract tasks that are assumed to require some form of intelligence.

Since there is no formal, universally valid definition of the term “intelligence,” AI research approaches the topic from multiple directions:

If we take an external perspective, the goal is to generate observable “intelligent” behavior. Motivation for this approach has existed since early in AI’s history: after the invention of the digital computer, it wasn’t long before Alan Turing, in 1950, suggested exploring whether a program exists that is indistinguishable from a human in a text-based conversation (chat). Originally named the “Imitation Game” by Turing himself, the associated experiment is now known as the “Turing Test.”

If we begin with considerations of how observable behavior can be generated, questions quickly arise regarding the internal processes that lead to this behavior: How do thinking, memory, or learning, for example, work?

Whether the goal is to generate behavior or to simulate internal processes, humans (or other animals) provide only one possible reference point. We often have an intuition about whether a specific action in a situation was “good” or “bad.” If this intuition can be formulated as a mathematical criterion, an algorithm can attempt to optimize its behavior accordingly. In this sense, behavior is also referred to as rational.

Artificial Intelligence as an interdisciplinary research field

Although the field of Artificial Intelligence employs methods from computer science, it is clear from the motivations mentioned above that its questions and goals overlap significantly with other disciplines. In fact, Artificial Intelligence is not only a part of computer science but also falls under the cognitive sciences. Cognitive sciences also include parts of neuroscience, psychology, philosophy, linguistics, and anthropology. All these disciplines contribute approaches that describe behavior or develop hypotheses about the functioning of internal processes. The AI research field ultimately attempts to algorithmize these approaches. Entire research fields have emerged from these connections.

The relationship between AI and other disciplines becomes evident in the example of the Turing Test: to pass it, an AI must process and generate language in real time (linguistics), reason logically (philosophy), and access general knowledge and acquire new information (neuroscience and psychology). Where the boundary between imitation and “real” behavior lies, and whether computers can even reach that boundary, are classic questions of debate (see, for example, the “Chinese Room” thought experiment).

The agent model

Let’s take stock: as a first step, Artificial Intelligence is concerned with abstracting “behavior.” The most fundamental concept for this is the agent model, which makes it possible to describe both simple and complex systems capable of acting.


In the agent model, the “agent” and its “environment” are first distinguished. The agent itself consists of three components:

  1. Sensors, through which the environment is perceived;
  2. A unit for processing sensor data;
  3. Actuators, with which an action can be performed in the environment.

A set of sensor data is referred to as a “percept.” The environment depends on the application and can be arbitrarily complex.
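The perceive–process–act loop described above can be sketched in a few lines of Python. This is only an illustration of the agent model, not any particular implementation; all names are made up for the example.

```python
# Minimal sketch of the agent model: sensors produce a percept,
# a processing unit maps the percept to an action, and an actuator
# carries that action out in the environment.
class Agent:
    def __init__(self, sensors, processor, actuators):
        self.sensors = sensors      # name -> callable that reads the environment
        self.processor = processor  # percept dict -> action name (or None)
        self.actuators = actuators  # action name -> callable that acts

    def step(self):
        # 1. Perceive: collect one reading from every sensor.
        percept = {name: read() for name, read in self.sensors.items()}
        # 2. Process: decide on an action.
        action = self.processor(percept)
        # 3. Act: trigger the matching actuator, if any.
        if action in self.actuators:
            self.actuators[action]()
        return action
```

A concrete agent is then obtained by plugging in real sensors (e.g. a microphone), a decision procedure, and actuators (e.g. a lamp switch).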

Hierarchies from simple to complex agents

There are various agent models that differ in the capabilities of their processing unit. Let’s look at the example of a smart home, using agent models that build on one another in capability.

First, we define the following components for the agent and its environment:

  • Sensors: microphone, thermometer, window (reports open/closed)
  • Actuators: speaker, heater, lamp, window (can be opened/closed)
  • Environment: house with inhabitants

Simple Reflex Agent: The agent has a rule table that assigns an action to each possible percept. When a percept is received, the appropriate rule is sought and applied. Percept: “Turn on the light.” Action: The light is turned on.
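A simple reflex agent is essentially a lookup table. The sketch below illustrates this; the rule strings and action names are invented for the example.

```python
# Sketch of a simple reflex agent: a fixed rule table maps each
# possible percept directly to an action.
RULES = {
    "turn on the light": "lamp_on",
    "turn off the light": "lamp_off",
    "turn on the heater": "heater_on",
}

def simple_reflex_agent(percept: str):
    """Look up the percept in the rule table; unknown percepts yield no action."""
    return RULES.get(percept.lower())
```

Note the limitation: the agent has no memory, so any percept not covered by a rule simply produces no action.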

Reflex Agent with Environmental Model: The agent also stores information about its environment and considers this when receiving a new percept. For example:

  • Percept: the window reports that it is now open. Internal update: window is open.
  • Percept: “Turn on the heater.” Action: close the window.
  • Percept: the window reports that it is closed. Internal update: window is closed. Action: turn on the heater.
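This window-and-heater sequence can be sketched as follows. The internal model consists of two variables: the believed window state and a pending heater request (percept strings are illustrative).

```python
# Sketch of a reflex agent with an environment model: it remembers the
# window state and defers the heater command until the window is closed.
class ModelBasedAgent:
    def __init__(self):
        self.window_open = False       # internal model of the environment
        self.heater_requested = False  # command deferred until conditions allow

    def step(self, percept: str):
        if percept == "window open":
            self.window_open = True
        elif percept == "window closed":
            self.window_open = False
            if self.heater_requested:
                self.heater_requested = False
                return "heater_on"     # now safe to carry out the deferred command
        elif percept == "turn on the heater":
            if self.window_open:
                self.heater_requested = True
                return "close_window"  # heating with the window open is wasteful
            return "heater_on"
        return None
```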

Utility-based Agent: With the utility function, the agent can determine how “well” it is performing its task. Example: the temperature should always be 23 degrees. A measure of quality could be derived from the difference between actual and target temperatures. Percept: Current temperature. Internal comparison: If the temperature is 23 degrees, no action is needed. If the temperature is below 23 degrees, the heater is turned on. If the temperature is too high, the window is opened.
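The thermostat example translates directly into a utility function. A minimal sketch, assuming the negative absolute deviation from the target as the quality measure:

```python
TARGET = 23.0  # target temperature from the example, in degrees Celsius

def utility(temp: float) -> float:
    """The smaller the deviation from the target, the higher the utility."""
    return -abs(temp - TARGET)

def thermostat_action(temp: float):
    """Pick the action that moves the temperature toward the target."""
    if temp < TARGET:
        return "heater_on"
    if temp > TARGET:
        return "open_window"
    return None  # already at the target: nothing to do
```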

There are also goal-based and learning agents. Goal-based agents with a utility function, for example, can develop a plan to optimize the utility function. The agent simulates different sequences of actions using its environmental model to search for a plan. Such an agent could find, using the utility function, that the window must be closed before turning on the heater to increase the temperature. The agent is a learning agent if it also has a component that saves this plan for future use rather than searching for it each time.
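The planning step described above can be sketched as a brute-force search: the agent simulates every action sequence up to a fixed length against a toy environment model and keeps the one with the highest resulting utility. The model here (heating raises the temperature by 2 degrees per step, but only while the window is closed) is an invented simplification for illustration.

```python
from itertools import product

ACTIONS = ["close_window", "heater_on", "open_window"]

def simulate(state, action):
    """Toy environment model: state is (temperature, window_open)."""
    temp, window_open = state
    if action == "close_window":
        return (temp, False)
    if action == "open_window":
        return (temp, True)
    # "heater_on": heating only raises the temperature while the window is closed
    return (temp, True) if window_open else (temp + 2.0, False)

def utility(state):
    return -abs(state[0] - 23.0)

def plan(state, depth=2):
    """Search all action sequences up to `depth`; return the best-scoring plan
    (empty if no sequence improves on doing nothing)."""
    best_seq, best_value = [], utility(state)
    for seq in product(ACTIONS, repeat=depth):
        s = state
        for action in seq:
            s = simulate(s, action)
        if utility(s) > best_value:
            best_seq, best_value = list(seq), utility(s)
    return best_seq
```

Starting from 21 degrees with the window open, this search discovers exactly the plan from the text: close the window first, then turn on the heater. A learning agent would additionally cache such a plan instead of searching for it anew every time.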

Developers of intelligent agents must first clearly define the environment in which the agent will act and the tasks it should fulfill; from these, the capabilities that need to be built into the agent can be derived. If you have a vacuum robot at home, consider which agent model might underlie it. How might it work for voice assistants? Sports and sleep trackers? Search engines? Chess computers? A simple timer?

Conclusion

We have seen how the question of artificially generated “intelligent” behavior led to questions about the functioning of individual cognitive abilities. The field of Artificial Intelligence collaborates with many other disciplines within the cognitive sciences to develop theories about these cognitive components and translate them into algorithms.

When we observe a system, or ask how we could generate similar behavior, we can try to apply the agent model. To do so, we answer the following questions: What does the “environment” look like? What information from the environment must the sensors make available? Which actuators are needed, and which cognitive abilities need to be combined? Keeping these questions in mind, we come to recognize that many AI applications already surround us in everyday life.

Sebastian Müller

Sebastian Müller focuses his research on trustworthy machine learning. He is currently working on two main projects: first, developing a user-centered, quantifiable metric for evaluating the quality of explanations; second, addressing word sense disambiguation through a hybrid approach. He is particularly interested in equipping machine learning models with a combination of abstract reasoning mechanisms and complex knowledge to create interpretable models capable of providing context-specific explanations.
