Intelligence

Screen Based Reasoning avoids taking a stand on the nature of "intelligence" or "consciousness". Instead, the SBR approach is to compile a list of "competencies" that any general intelligence should exhibit and to show how to implement each competency using TinyCog.

Quantifying the Implementation of "Competencies"

We propose the following measures to quantify the implementation of a competency (a sketch of how these measures might be recorded follows the list):

  • Possibility: Possible, unknown or impossible:
    Is it theoretically possible to implement a competency using the system? A "possible" implementation means that there exists an architectural blueprint for an implementation, with all required data structures and subsystems available. An implementation is "impossible" if there is no detailed description of how the competency could be implemented, or if the description requires elements (concepts, data structures, algorithms, subsystems, ...) that are not available.
  • Degree: Full, partial or prototypical:
    A full implementation would perform the competency at a level comparable to humans, while a prototypical implementation would be limited to a small subset of the functionality.
  • Scalability: Real-world, limited or "toy domain":
    Determines whether the implementation scales to real-world problems or whether the domain size is limited by algorithmic complexity. Scalability has been one of the key issues of GOFAI, so any claim of scalability beyond toy domains requires a detailed explanation.
  • Learning facts and rules: Reinforcement, supervised or hand-coding:
    Determines how the facts, rules and other data required for the competency can be acquired. True "general intelligence" needs to be able to learn most facts and rules by itself, based on a limited amount of manually prepared "bootstrapping" data for training.
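
To make these measures concrete, they could be recorded per competency in a small data structure. The following Python sketch is purely illustrative: the class and field names are ours and are not part of TinyCog.

    from dataclasses import dataclass
    from enum import Enum

    class Possibility(Enum):
        POSSIBLE = "possible"        # an architectural blueprint exists
        UNKNOWN = "unknown"
        IMPOSSIBLE = "impossible"    # required elements are not available

    class Degree(Enum):
        FULL = "full"                # comparable to human performance
        PARTIAL = "partial"
        PROTOTYPICAL = "prototypical"

    class Scalability(Enum):
        REAL_WORLD = "real-world"
        LIMITED = "limited"
        TOY_DOMAIN = "toy domain"    # limited by algorithmic complexity

    class Learning(Enum):
        REINFORCEMENT = "reinforcement"
        SUPERVISED = "supervised"
        HAND_CODED = "hand-coding"

    @dataclass
    class CompetencyAssessment:
        """One row of the proposed competency scorecard."""
        competency: str
        possibility: Possibility
        degree: Degree
        scalability: Scalability
        learning: Learning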

Competencies According to Goertzel's Survey of AGI Systems

Following [Goertzel 2014a] (an "early draft, for comment-solicitation only" paper), any true AGI should implement certain basic competencies:

  • Perception - Perceive the outer world via multiple sensors. Perception requires the classification of objects both in terms of types and sub-types as well as in terms of states, attributes and components.
  • Actuation - Perform actions with the body, manipulate objects and use tools.
  • Memory - Store short- and long-term information about actions (episodic memory), facts, beliefs and procedures.
  • Learning - Acquire and classify information from the outer world using imitation, reinforcement, verbal instruction and experimentation.
  • Reasoning - Deduction, induction, abduction, causal reasoning, physical reasoning and action reasoning.
  • Planning - Develop and execute plans on physical, mental and social levels, both tactical and strategic.
  • Attention - Focus the system's processing power on certain areas of the inner and outer world.
  • Modeling Self and Other - Create physical, mental, emotional and social models of agents, including the subject.
  • Social Interaction - Represent and reason about the states of other agents.
  • Communication - Use verbal and non-verbal communication to achieve goals.
  • Quantitative - Use numbers for counting objects and for quantifying and comparing attribute values.
  • Building/Creation - Build physical, logical and social groupings of objects.
  • Control - Maintain a list of goals, "urges" and emotions, and optimize the achievement of goals (renamed from Goertzel's "Motivation" and "Emotions").

The authors add the following competencies to this list; a worked scorecard example follows:

  • Introspection - Expose the system's inner perception, reasoning, planning and attention processes as objects for the AGI.
  • Social Planning - Develop and execute plans that include other agents, or participate in the plans of other agents.
  • Negation - Perform reasoning based on negative conditions and phrases.
  • Meta Reasoning - Reason about the system's own reasoning process.
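
As a worked example, the CompetencyAssessment record sketched earlier could be filled in for individual competencies from these lists. The values below are hypothetical placeholders, not published TinyCog results.

    # Hypothetical scorecard entries; the values are illustrative, not measured.
    competencies = [
        CompetencyAssessment("Planning", Possibility.POSSIBLE,
                             Degree.PROTOTYPICAL, Scalability.TOY_DOMAIN,
                             Learning.HAND_CODED),
        CompetencyAssessment("Introspection", Possibility.UNKNOWN,
                             Degree.PROTOTYPICAL, Scalability.LIMITED,
                             Learning.HAND_CODED),
    ]

    for a in competencies:
        print(f"{a.competency}: {a.possibility.value}, {a.degree.value}, "
              f"{a.scalability.value}, {a.learning.value}")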

Competencies According to Metzinger

Thomas Metzinger describes a "Self-Model Theory of Subjectivity" in his books "The Ego Tunnel" and "Being No One". He proposes detailed views about the inner workings of the human mind. Following his argumentation, we can extract a number of specific requirements for creating a "transparent tunnel" through which the "phenomenal self" can perceive and process information about the outer and inner worlds.

One-World Problem - The Unity of Consciousness

Metzinger asks how a GI can "form a coherent whole" of experience from heterogeneous and possibly conflicting sensory input. Reformulating his problem statement, we can derive the following requirements (a sketch of the resulting predict-observe-compare loop follows the list):

  • Describe changes in the outer world in terms of cause-and-effect. Identify the agents behind the causes.
  • When performing actions, create expectations how the world should change and track these changes.
  • Identify differences between the "understood" parts of the outer world and the actual sensor data, and analyze them. This process may trigger escalation to higher processing levels in order to learn something new and revise the world model.
  • Zoom in/out capabilities: The "attention focus" allows the subject to inspect sensor data at various levels.
  • Meta Experience (experiencing the experiencing process): "It hurts" vs. "I feel pain". The second phrase refers to the perception process.
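
The second and third requirements can be read as a predict-observe-compare loop: every action posts an expectation about the next sensor state, and unexplained differences are escalated to higher processing levels. The sketch below is our own illustration of that loop; none of the names are TinyCog APIs.

    from dataclasses import dataclass

    @dataclass
    class Expectation:
        """Predicted change in the world after performing an action."""
        action: str
        predicted: dict          # attribute -> expected value

    def compare(expectation: Expectation, observed: dict) -> dict:
        """Return the attributes whose observed value differs from the prediction."""
        return {k: (v, observed.get(k))
                for k, v in expectation.predicted.items()
                if observed.get(k) != v}

    def escalate(action: str, mismatches: dict) -> None:
        print(f"after '{action}', unexplained changes: {mismatches}")

    def perceive_act_cycle(expectation: Expectation, observed: dict) -> None:
        mismatches = compare(expectation, observed)
        if mismatches:
            # Escalate: the world model failed to predict these changes,
            # so a higher processing level may learn something new.
            escalate(expectation.action, mismatches)

    # Example: pushing a cup is expected to move it, not to break it.
    e = Expectation("push cup", {"cup.position": "edge", "cup.intact": True})
    perceive_act_cycle(e, {"cup.position": "floor", "cup.intact": False})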

Now Problem - The Appearance of a Lived Moment

Metzinger asks how a GI can perceive a "moment" when the individual information-processing steps take measurable time. Even though this problem is more philosophical than relevant to an AGI, we derive one requirement, sketched in code below:

  • AGI planners need to work with actions of finite duration and explicitly or implicitly operate on a timeline, so that the AGI can track "the current moment" on this timeline.
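
A minimal reading of this requirement: planner actions carry start times and durations on a shared timeline, and "the current moment" is simply a moving index into it. The Python sketch below is illustrative only; the names are ours.

    from dataclasses import dataclass

    @dataclass
    class TimedAction:
        name: str
        start: float       # seconds on the shared timeline
        duration: float

        @property
        def end(self) -> float:
            return self.start + self.duration

    def current_actions(plan: list, now: float) -> list:
        """Names of the actions overlapping 'the current moment'."""
        return [a.name for a in plan if a.start <= now < a.end]

    plan = [TimedAction("walk to river", start=0.0, duration=30.0),
            TimedAction("fill water skin", start=30.0, duration=10.0)]
    print(current_actions(plan, now=32.5))    # -> ['fill water skin']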

Who Problem - What is the entity that has conscious experience?

Metzinger explains the subjective first-person experience with a number of processes related to the "self-model" of the subject. He doesn't explicitly detail this "self-model". However, the following requirements can be derived:

  • Physical self-model - Metzinger refers in various sections to the "volume" identified with the subject's body, and to the physical self-model as part of the episodic memory. In addition, the physical self-model needs to include the actions that the body can and cannot execute.
  • Mental self-model - Certain processing levels of the brain are accessible to introspection, while others are apparently hidden from the attention focus in order to maintain the "transparency" of the ego tunnel.
  • Social self-model - Defines the subject relative to other agents in various groups.

In addition, the subject is able to (a combined self-model sketch follows this list):

  • Zoom in and out of the various processing steps of the perception pyramids
  • Observe emotions
  • Observe plans developed to accomplish a goal
  • Observe reasoning processes
  • Observe planning processes
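
The three self-model components and the introspection capabilities above could be grouped into a single structure. The following is a speculative sketch of ours, not a description of TinyCog's internals; only processing levels marked as introspectable are visible to the attention focus, which keeps the rest of the ego tunnel "transparent".

    from dataclasses import dataclass, field

    @dataclass
    class PhysicalSelfModel:
        body_volume: dict           # spatial extent identified with the body
        executable_actions: set     # actions the body can perform
        impossible_actions: set     # actions the body cannot perform

    @dataclass
    class MentalSelfModel:
        # Only some processing levels are open to introspection; the rest
        # stay hidden to preserve the "transparency" of the ego tunnel.
        introspectable_levels: set
        hidden_levels: set

    @dataclass
    class SocialSelfModel:
        groups: dict = field(default_factory=dict)   # group -> role of the subject

    @dataclass
    class SelfModel:
        physical: PhysicalSelfModel
        mental: MentalSelfModel
        social: SocialSelfModel

        def observable(self, process: str) -> bool:
            """Can the attention focus zoom in on this processing step?"""
            return process in self.mental.introspectable_levels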

Implementation Status

TinyCog 0.0.1 implements some GOFAI (classical) Towers of Hanoi examples, demonstrating an integration between a Planner and Description Logic, but little more. No traces of intelligence yet.

TinyCog 0.0.3 will implement the Hunter Domain including basic plan learning, plan reasoning and social reasoning. This domain is supposed to demonstrate several novel features of intelligence, but doesn't yet include an explicit "self-model".

Later versions of TinyCog will extend the self-model, probably at the level of social interaction, in order to provide the concepts necessary to talk about the subject and other agents.