TinyCog is a minimalist open-source implementation of the Scene Based Reasoning (SBR) Cognitive Architecture in the tradition of SOAR and ACT-R. SBR divides its model of Artificial General Intelligence (AGI) into two parts: 1) building on existing artificial intelligence (AI) formalisms, it defines a number of high-level operations on Plans and Scenes, and 2) it proposes a mapping from the elements of the Self-Model Theory of Subjectivity onto Plans, Scenes and the operations on these data structures.
- 1 Purpose of TinyCog
- 2 Cognitive Architectures
- 3 What's New in TinyCog?
- 4 Core Elements of TinyCog
- 5 Joining / Collaboration
- 6 Comparison and References
- 7 Implementation Status
Purpose of TinyCog
As the name "tiny" indicates, TinyCog aims to be a minimalist implementation of a cognitive architecture. TinyCog is meant to be fun to try out and play around with, and (relatively) easy to learn. It should still be suitable for validating certain claims of the Scene Based Reasoning theory and for demonstrating what the theory is capable of.
For these reasons we have decided to skip the entire Perception Pyramid and to limit TinyCog to a simulation Testbed built using Scenes. We aim to provide a number of working examples of gradually increasing complexity.
Cognitive Architectures
A "cognitive architecture" is a model of the human mind based on techniques and formalisms from artificial intelligence. Cognitive architectures usually cover a variety of areas (here with references to TinyCog Subsystems):
- Perception (computer vision, 3D Reconstruction, deep learning, Spatial Relations)
- Execution (robotics, control theory, collision detection and avoidance)
- Planning (classical Planners, Physics Simulation, psychological and sociological planning, multi-agent systems)
- Learning (machine learning, data mining, decision trees)
- Language (speech recognition, Natural Language Processing, hidden Markov models, syntactical models, language generation)
- Reasoning (semantic networks, Description Logic)
Cognitive architectures try to cover all of these areas with a single basic operating principle, usually related to some kind of [production systems].
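To make the "production system" idea concrete, here is a minimal forward-chaining sketch in Python: rules fire against a working memory of facts until nothing new can be derived. This is purely illustrative; the rule format and names are invented for this example and are not taken from TinyCog or any particular architecture.

```python
def run_productions(facts, rules):
    """Apply (condition, conclusion) rules until a fixed point is reached.

    facts: a set of atomic facts (the "working memory").
    rules: a list of (condition_set, conclusion) pairs; a rule fires
           when its whole condition set is present in working memory.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            # Fire the rule if its conditions hold and it adds something new.
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical toy rule base for illustration only.
rules = [
    ({"bird"}, "has_wings"),
    ({"has_wings"}, "can_fly"),
]

print(run_productions({"bird"}, rules))
# → {'bird', 'has_wings', 'can_fly'} (set order may vary)
```

Real production systems add conflict resolution (which rule fires first) and variable matching, but the match-fire loop above is the basic operating principle the text refers to.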
What's New in TinyCog?
Scalability and Learnability
As described further below, Scenes combine the GOFAI legacy of "planner states" and "semantic networks"/Description Logics. This is useful for applying a wide range of well-understood formalisms that solve important theoretical problems. However, many of these formalisms suffer from two fundamental issues:
- Scalability - they only work in small "toy" domains, and
- Lack of Learnability - rules need to be hand-coded by humans in order to make these formalisms work.
Scene Based Reasoning was developed to address these two issues and to stay closer to "mental images" than other cognitive architectures.
In order to achieve these goals, SBR does not rely on a "unified production system" type of core engine to implement all Subsystems. Instead, SBR only defines the Scene and Plan data structures and allows each subsystem to choose its own specific formalisms. Each subsystem is thus free to choose "best of breed" algorithms, or even multiple algorithms for specific problems; for example, separate Planner implementations may handle "spatial planning" vs. "social planning". Many of these "best of breed" algorithms today are based on machine learning and related technologies, which have shown far better Scalability and Learnability than GOFAI algorithms.
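The "pluggable algorithm" idea can be sketched as a simple registry: each subsystem registers one or more interchangeable implementations and selects one per problem type. The registry, decorator and planner names below are hypothetical illustrations, not TinyCog's actual API.

```python
# Hypothetical registry mapping a problem kind to a planner implementation.
PLANNERS = {}

def register_planner(kind):
    """Decorator that registers a planner function under a problem kind."""
    def wrap(fn):
        PLANNERS[kind] = fn
        return fn
    return wrap

@register_planner("spatial")
def spatial_planner(goal):
    # A stand-in for a geometric / spatial planning algorithm.
    return ["move_to(%s)" % goal]

@register_planner("social")
def social_planner(goal):
    # A stand-in for a multi-agent / social planning algorithm.
    return ["ask(%s)" % goal]

def plan(kind, goal):
    """Dispatch to whichever "best of breed" planner handles this kind."""
    return PLANNERS[kind](goal)

print(plan("spatial", "door"))   # → ['move_to(door)']
print(plan("social", "help"))    # → ['ask(help)']
```

The point of the design is that swapping in a better spatial planner (say, a learned one) requires no change to the Scene and Plan data structures that the rest of the system depends on.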
Following this "pluggable algorithm" idea, TinyCog will provide at least one very simple implementation of each algorithm in order to reduce learning complexity.
Proximity to Psychological Models
Scene Based Reasoning has been strongly influenced by introspective insights obtained during Buddhist meditation. This heritage shows in the construction of the Scene concept, which is designed to be as close as possible to a "mental image" and to be capable of modeling both the physical and social aspects of a Self-Model. Experiments will confirm or falsify whether this claim holds.
Core Elements of TinyCog
Scenes (World Model)
A Scene combines a number of concepts from different fields:
- "3D scene" from computer vision (=outer world scene),
- "Mental Image" from psychology (=inner world scene),
- "Scene Graph" from computer gaming,
- "Planner State" from classical STRIPS style planning,
- "Frame" or "semantic network" from good ol' fashioned Artificial Intelligence,
- "ABox" from Description Logic and
- a series of object parameters to be handled by statistical AI.
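One way to read the list above: a Scene is a set of typed objects plus relations between them, queryable like a planner state or a Description Logic ABox. The Python class below is a minimal sketch of that reading; all names are illustrative and do not correspond to TinyCog's actual SWI-Prolog terms.

```python
class Scene:
    """A toy scene: typed objects plus relation triples between them."""

    def __init__(self):
        self.objects = {}       # name -> type (frame-style typing)
        self.relations = set()  # (relation, subject, object) triples (ABox-style)

    def add_object(self, name, obj_type):
        self.objects[name] = obj_type

    def assert_rel(self, rel, a, b):
        """Assert a spatial or semantic relation between two objects."""
        self.relations.add((rel, a, b))

    def holds(self, rel, a, b):
        """Query the scene, like checking a planner-state predicate."""
        return (rel, a, b) in self.relations

# A tiny "outer world" scene.
kitchen = Scene()
kitchen.add_object("cup", "container")
kitchen.add_object("table", "furniture")
kitchen.assert_rel("on", "cup", "table")

print(kitchen.holds("on", "cup", "table"))   # → True
```

The object parameters mentioned in the last bullet (sizes, positions, probabilities) would attach to the entries in `objects`; they are omitted here to keep the sketch small.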
Plans consist of Scenes connected by Actions. Plans are developed by a Planner in order to achieve a Goal. TinyCog currently implements one "classical" planner; later versions will feature additional planners capable of handling task decomposition, uncertain events and social interactions. All planners can work in a Sandbox, simulating the effect of actions on internal scenes before executing them.
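The planner-plus-sandbox idea can be sketched as a breadth-first search: actions are simulated on copies of the state (the "sandbox"), and the plan returned is the action sequence connecting the start scene to a goal scene. This is a toy sketch of classical planning, not TinyCog's Prolog planner; the domain and action names are invented.

```python
from collections import deque

def find_plan(start, goal, actions):
    """Breadth-first search for an action sequence from start to goal.

    actions: name -> function(state) -> successor state, or None if the
             action is not applicable in that state.
    """
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if state == goal:
            return steps
        for name, apply_action in actions.items():
            nxt = apply_action(state)   # simulate the action in the sandbox
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + [name]))
    return None  # no plan reaches the goal scene

# Invented toy domain: move a block from table A to table B via the gripper.
actions = {
    "pick_up":  lambda s: "holding" if s == "on_table_a" else None,
    "put_down": lambda s: "on_table_b" if s == "holding" else None,
}

print(find_plan("on_table_a", "on_table_b", actions))
# → ['pick_up', 'put_down']
```

Here each string stands in for a whole Scene; in SBR the states would be full scenes and the successor function would be the physics/effects model of each Action.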
The SBR/TinyCog Control Room consists of a portfolio of Persistent Goals (goals/plans annotated with Cost and Reward) and an Attention Focus for controlling the SBR cameras, other sensors and Introspection. Data from the sensors are interpreted in the context of the Persistent Goals portfolio and used to track the Execution of Plans.
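A minimal reading of the goal portfolio: each persistent goal carries a cost and a reward, and the attention focus attends to the goal with the best net value. The field names and the simple reward-minus-cost rule below are assumptions for illustration; SBR does not prescribe this exact policy.

```python
# Hypothetical goal portfolio; "reward" and "cost" fields are assumed.
goals = [
    {"name": "recharge",  "reward": 10.0, "cost": 2.0},
    {"name": "explore",   "reward": 6.0,  "cost": 1.0},
    {"name": "tidy_room", "reward": 4.0,  "cost": 3.0},
]

def focus(portfolio):
    """Pick the goal with the highest reward minus cost to attend to."""
    return max(portfolio, key=lambda g: g["reward"] - g["cost"])

print(focus(goals)["name"])   # → recharge
```

In the full architecture this choice would also steer the sensors: incoming data is interpreted relative to whichever goal currently holds the attention focus.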
SBR aims quite high and tries to model and explain a number of higher mental faculties, including Attention (we avoid the term Consciousness...). TinyCog starts low and simple and should be easily accessible to AI students.
Joining / Collaboration
If you are interested, please post to the [Forum], just to let us know. The language of the forum is English, but don't worry about mistakes if you are not a native speaker. Just tell us what you would like to do with TinyCog and how much effort you want to invest.
Comparison and References
- ICARUS [Langley 2005], because of the shared focus on physical agents. SBR Scenes look quite similar to "planner states" in ICARUS, at least on the level of Spatial Relations.
Please see the Comparison page for more details.
Implementation Status
The current TinyCog V0.0.1 contains a SWI-Prolog implementation of:
- Basic scene data-structures and management
- A minimal Description Logic system integrated with scenes
- A "Towers of Hanoi" test for scene management
- A "Towers of Hanoi" test for DL + scenes (still buggy)
- A Hunter Domain prey-and-predator environment, somewhat similar to a strategy game (under development)