Aspects of Artificial General Intelligence – AGI 13 Interview With Ben Goertzel
* Probability theory as a key tool in building advanced, rational AGI systems
* Mind-body integration as key to human-like intelligence
* The importance of integrative design and dynamics in AGI systems
* The systems theory of mind, reflecting the integration of mind, body and society
* Deep learning algorithms as a particular reflection of the system-theoretic principle of hierarchical pattern composition
* Lojban as a potential tool for easing human-AGI communication in the early stages
* OpenCog AGI architecture as an exemplification of systems-based AI
- Probability ensues from Assumptions of Approximate Consistency
– Cox’s Theorem – any way of reasoning about uncertainty (satisfying certain consistency criteria) must be probabilistic reasoning – i.e. consistent reasoning about uncertainty requires probabilistic reasoning
– In the real world, we can’t reason consistently all the time; we are not unbounded rational agents. So we have to settle for being roughly consistent in our reasoning (probably approximately consistent).
– Probably approximately consistent plausibility implies probably approximate probability. Intuitively, if full consistency implies probability theory, then rough consistency implies rough probability theory.
– Probability theory is great for Big Data
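A toy illustration of the kind of consistency Cox's theorem formalizes: with invented numbers, the product rule yields the same joint probability whichever way it is factored.

```python
# Toy illustration of probabilistic consistency: factoring the joint
# probability either way, P(A)P(B|A) or P(B)P(A|B), gives the same answer.
# The joint distribution below is invented purely for illustration.

p = {  # joint distribution over two binary events A, B
    (True, True): 0.30, (True, False): 0.20,
    (False, True): 0.10, (False, False): 0.40,
}

p_a = p[(True, True)] + p[(True, False)]   # P(A) = 0.5
p_b = p[(True, True)] + p[(False, True)]   # P(B) = 0.4
p_b_given_a = p[(True, True)] / p_a        # P(B|A) = 0.6
p_a_given_b = p[(True, True)] / p_b        # P(A|B) = 0.75

# Both factorizations recover the same joint probability P(A,B) = 0.3.
assert abs(p_a * p_b_given_a - p_b * p_a_given_b) < 1e-12
```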
- Parts of the NARS system are being used in OpenCog – term logic with truth values that embody both strength and confidence – but grounded in probability theory rather than in non-axiomatic logic.
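A minimal sketch of what strength-and-confidence truth values might look like, assuming a simple evidence-count model; the constant `K` and the revision rule here are illustrative assumptions, not OpenCog's exact implementation.

```python
# Sketch of a truth value carrying both strength (a probability estimate)
# and confidence (derived from an evidence count). K and the revision rule
# are illustrative assumptions, not OpenCog's actual parameters.

K = 10.0  # hypothetical constant controlling how fast confidence grows


class TruthValue:
    def __init__(self, strength: float, count: float):
        self.strength = strength  # estimated probability in [0, 1]
        self.count = count        # amount of evidence observed

    @property
    def confidence(self) -> float:
        # Confidence approaches 1 as evidence accumulates.
        return self.count / (self.count + K)

    def revise(self, other: "TruthValue") -> "TruthValue":
        # Combine two estimates, weighting each by its evidence count.
        total = self.count + other.count
        s = (self.strength * self.count + other.strength * other.count) / total
        return TruthValue(s, total)


a = TruthValue(0.9, 10)  # strong evidence that a relation holds
b = TruthValue(0.5, 2)   # weaker, more uncertain evidence
c = a.revise(b)          # revised estimate pools both bodies of evidence
```

More evidence raises confidence even when it barely shifts strength, which is the point of carrying both numbers rather than a single probability.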
- Lojban – perhaps a way that humans can speak with AGI
- OpenCog – more goal oriented than the human mind
- The brain is not the only seat of our identity – it is connected to various other bodily subsystems, e.g. the endocrine system.
– If we were to emulate a human brain or upload a version of ourselves, perhaps we would need to connect the brain emulation to several simpler subsystems representing important bodily subsystems.
- OpenCog – an integrative approach to AGI – there are plenty of reasonably capable AI subsystems, each solving a piece of the problem. OpenCog’s vision is to combine these subsystems so they work synergistically, playing to each other’s strengths and offsetting each other’s weaknesses. http://opencog.org
– There may be a philosophical/conceptual reason why an integrative system makes sense: the hierarchical nature of the world that humans operate in. Many of the patterns we see in the world have some kind of hierarchy – patterns scaffolded on patterns, patterns within patterns.
– The Deep Learning approach to AI leverages the insight that the world consists of hierarchical compositions of patterns by building spatiotemporal hierarchies of pattern-recognition elements. The AGI problem is much wider than Deep Learning or HTM alone, though, so such hierarchies can be plugged into an OpenCog-like integrated approach as one component.
– The parts of a real-world self-organising complex system need to interact with and regulate each other – the parts are not completely modular or uncoupled. Hormones and chemical messages are sent throughout the body to help regulate it, and parts of the brain have to interact richly with other parts of the brain.
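The idea of hierarchical pattern composition mentioned above can be sketched as a toy two-layer recognizer; the patterns and input below are invented purely for illustration.

```python
# Toy two-layer pattern hierarchy: a lower layer labels local chunks of
# input, and a higher layer recognizes patterns over those labels.
# Pattern names and data are invented for illustration.

def layer1(bits):
    # Label each non-overlapping pair of bits as a low-level feature.
    features = {(0, 0): "flat", (1, 1): "flat", (0, 1): "rise", (1, 0): "fall"}
    return [features[(bits[i], bits[i + 1])] for i in range(0, len(bits) - 1, 2)]


def layer2(labels):
    # Recognize a higher-level pattern composed of layer-1 features.
    if labels == ["rise", "fall"]:
        return "pulse"
    return "unknown"


signal = [0, 1, 1, 0]
result = layer2(layer1(signal))  # "pulse": a rise followed by a fall
```

Deep learning and HTM systems learn such layered feature detectors from data rather than hand-coding them, but the compositional structure is the same.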
– Book: Building Better Minds : http://wiki.opencog.org/w/Building_Better_Minds also see interview : http://www.youtube.com/watch?v=L6LMZ3MyFCM