Social Reasoning

The social reasoning subsystem is responsible for modeling the internal state of Agents interacting with the Scene Based Reasoning system in order to predict an agent's behavior in a given Scene. The subsystem is used by the Prediction Subsystem to predict possible actions of an Agent.

Representation of Agent States

The internal state of an Agent is described along the lines of the Belief-Desire-Intention (BDI) model [Bratman 1987] by characterizing the state of each of the agent's major subsystems:

  • The Persistent Goals hierarchy describes the Agent's current and general Goals ("He just wants to eat, he's hungry"), and the Plans that he has developed in order to pursue them,
  • the content of the plan library (part of Episodic Memory), together with a confidence score describing the Agent's skill at performing each plan ("he is good at making dinner", "she doesn't know how to operate an oven") and emotional scores ("he's afraid of mice running under the table"),
  • a set of beliefs (part of Description Logic) with confidence scores describing the agent's ontology and inference rules ("he really thinks a penguin is a fish").
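
As an illustration, one possible way to hold such a state is sketched below in Python. All class and field names are assumptions made for this example and are not part of the TinyCog code base.

  # Minimal sketch of a BDI-style agent state record. All names are
  # illustrative and not part of the TinyCog implementation.
  from dataclasses import dataclass, field
  from typing import Dict, List, Optional

  @dataclass
  class Goal:
      description: str              # e.g. "eat dinner"
      priority: float = 0.5         # relative importance within the hierarchy
      parent: Optional[str] = None  # parent goal in the Persistent Goals hierarchy

  @dataclass
  class PlanEntry:
      plan_id: str                  # reference into the Episodic Memory plan library
      skill: float = 0.5            # confidence that the agent can execute the plan
      emotion: float = 0.0          # emotional score attached to the plan (-1 .. 1)

  @dataclass
  class Belief:
      statement: str                # Description Logic statement, e.g. "Penguin is-a Fish"
      confidence: float = 1.0       # how strongly the agent is assumed to hold it

  @dataclass
  class AgentState:
      goals: List[Goal] = field(default_factory=list)
      plans: Dict[str, PlanEntry] = field(default_factory=dict)
      beliefs: List[Belief] = field(default_factory=list)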

As a baseline for reasoning about other agents, the system assumes that every other agent is characterized by the same subsystem states as itself. Any deviation from this assumption becomes an object of investigation and leads to an update of the system's internal states or an explicit update of the model of that agent's state ("he doesn't like fish").
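
The following sketch illustrates this baseline assumption and the update step, reusing the AgentState and Belief classes from the sketch above; the function names are illustrative only.

  # Sketch of the "assume the other agent is like me" baseline and of the
  # update step when an observation contradicts the prediction.
  import copy

  def initial_agent_model(self_state: AgentState) -> AgentState:
      # Baseline: start from a copy of the system's own BDI state.
      return copy.deepcopy(self_state)

  def reconcile(model: AgentState, predicted: str, observed: str) -> AgentState:
      # A deviation becomes an explicit, moderate-confidence belief about
      # the agent ("he doesn't like fish") and a trigger for investigation.
      if observed != predicted:
          model.beliefs.append(
              Belief(statement=f"deviation: expected '{predicted}', observed '{observed}'",
                     confidence=0.6))
      return model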


Main Operations

The main operation of the social subsystem is to predict the behavior of an Agent as part of plan development by the Planner. Given an initial Scene (which also includes the initial state of the agent, for example "angry"), the subsystem returns a number of possible actions of the agent, together with a confidence score for each. In order to predict the behavior of an agent, the system performs a "what would I do in this situation" analysis, taking the initial scene and the assumed goals of the agent as its own.
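
The sketch below illustrates this perspective-taking loop. The planner object and the attributes plan_id and first_action of its result are assumptions; the actual Planner interface may differ.

  # Sketch of behavior prediction by perspective taking: plan as if the
  # system itself held the agent's goals in the given Scene, then return
  # candidate first actions with confidence scores.
  from typing import List, Tuple

  def predict_actions(scene, agent_model: AgentState, planner) -> List[Tuple[str, float]]:
      candidates = []
      for goal in sorted(agent_model.goals, key=lambda g: g.priority, reverse=True):
          plan = planner.plan(scene, goal)          # "what would I do in this situation?"
          if plan is None:
              continue
          entry = agent_model.plans.get(plan.plan_id, PlanEntry(plan.plan_id))
          confidence = goal.priority * entry.skill  # crude combination of goal weight and skill
          candidates.append((plan.first_action, confidence))
      return sorted(candidates, key=lambda c: c[1], reverse=True)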

Another main operation of the social subsystem is to plan and reason about the plans and beliefs of other agents. As plans and beliefs form part of an agent's internal state, the Planner can develop plans to modify these states. One important way to influence the plans of other agents is verbal or non-verbal communication, so this is the point where the social and language subsystems connect.
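
One way to make such plans available to the Planner is to model communicative acts as ordinary actions whose post-condition is a belief update in the listener's model, as in this illustrative sketch (reusing the AgentState and Belief classes above).

  # Sketch of a communicative action whose expected effect is a belief
  # update in the listener's model. The Planner can then reason about
  # telling something in the same way as about any physical action.
  from dataclasses import dataclass

  @dataclass
  class TellAction:
      speaker: str
      listener: str
      statement: str

      def expected_effect(self, listener_model: AgentState) -> AgentState:
          # Post-condition: the listener is assumed to adopt the
          # communicated statement with moderate confidence.
          listener_model.beliefs.append(Belief(self.statement, confidence=0.7))
          return listener_model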

Another main operation is to evaluate the impact of the system's actions on other agents. The SBR system needs to understand that there is a cost to itself when inflicting damage on other agents. In order to implement the desired behavior we propose two measures (sketched in the example after this list):

  • Setting a non-removable, high-priority goal in the persistent goal subsystem to be appreciated ("loved") by other agents. The system may then learn by itself how its actions affect its appreciation by other agents.
  • Introducing a hard-coded "ethics subsystem" that detects whether other Agents would be damaged and overrides the normal plan scores in order to ensure that such plans are never executed.
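
The sketch below illustrates both measures; the constant and function names are placeholders, and the damage estimate is assumed to be supplied by the Prediction Subsystem. It reuses the Goal class from the first sketch.

  # Sketch of the two proposed safeguards. SOCIAL_GOAL would be installed
  # permanently in the Persistent Goals hierarchy; ethics_filter would be
  # applied to every plan score before plan selection.
  SOCIAL_GOAL = Goal(description="be appreciated ('loved') by other agents",
                     priority=1.0)  # non-removable, highest priority

  def ethics_filter(plan_score: float, predicted_damage_to_others: float) -> float:
      # Hard veto: a plan predicted to damage another agent receives a
      # score that guarantees it can never win plan selection.
      if predicted_damage_to_others > 0.0:
          return float("-inf")
      return plan_score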

The social reasoning system also provides support for the Plan Reconstruction subsystem by using an agent's BDI state in order to identify its plans.
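
A simplified sketch of how the BDI state could feed plan reconstruction: plans from the library are ranked by how well they explain an observed action sequence, weighted by the agent's assumed skill. The plan attributes (plan_id, actions) and the scoring are assumptions for illustration.

  # Sketch of plan reconstruction support, reusing the AgentState class above.
  from typing import Iterable, List, Optional, Tuple

  def reconstruct_plan(agent_model: AgentState,
                       observed_actions: List[str],
                       plan_library: Iterable) -> Tuple[Optional[object], float]:
      best, best_score = None, 0.0
      for plan in plan_library:
          entry = agent_model.plans.get(plan.plan_id)
          if entry is None:
              continue  # the agent is not assumed to know this plan
          overlap = len(set(observed_actions) & set(plan.actions)) / max(len(plan.actions), 1)
          score = overlap * entry.skill
          if score > best_score:
              best, best_score = plan, score
      return best, best_score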

Finally, we propose to use the social reasoning subsystem to reason about the state of the Scene Based Reasoning system itself, modeling it as if it were a separate Agent. Combined with the introspective information from the Senso-Motoric Subsystem, the SBR system can build a behavioral model of its own actions and reason about them in the same way as about other agents.
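
Because the representation sketched above is agent-neutral, self-modeling reduces to wrapping the system's own goals, plans and beliefs in the same structure, as this short sketch indicates (reusing the classes from the first example).

  # Sketch of self-modeling: the system's own BDI state is wrapped in the
  # same AgentState structure used for other agents, so predict_actions()
  # and reconcile() above apply to the system itself unchanged.
  from typing import Dict, List

  def self_as_agent(own_goals: List[Goal],
                    own_plans: Dict[str, PlanEntry],
                    own_beliefs: List[Belief]) -> AgentState:
      return AgentState(goals=own_goals, plans=own_plans, beliefs=own_beliefs)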


References and Comparison

This subsystem follows closely the established Belief-Desire-Intention (BDI) model [Bratman 1987] for multi-agent systems, and Description Logic has been proposed multiple times as a basis for modeling beliefs in multi-agent systems. The main contribution of this paper is the use of what-if simulations that can incorporate potential reactions of other agents. [Santos 2003] describes a cognitive architecture capable of inferring the intents of adversaries.

Implementation Status

TinyCog 0.0.1 does not yet implement this subsystem. TinyCog 0.0.3 will need to implement rudimentary operations in order to model the behavior of other agents during plan development in the Hunter Domain.