Machines That Dream (AI): A Brief Introduction to AGI

The creation of artificial general intelligence (AGI) is a long-standing dream.

Today’s artificial intelligence (AI) is not there yet. The current approach is to implement algorithms based on insights from human programmers and engineers, so a great deal of effort goes into designing new learning algorithms and information-processing systems. The hope is that the right set of algorithms will eventually be found, yielding a machine that can teach itself all the way to AGI.

One potential problem is that this effort, based on human-developed algorithms, may not be enough to generate AGI. The reason is simple: a human engineer is unlikely to understand the complex processes of the brain and mind well enough to write a computer program, in C++ for example, that would result in a generally intelligent AI.

There is abundant evidence of our limited ability to understand the engineering details of the brain. For example, a biologist cannot infer what changes in an animal’s behavior will be caused by a single nucleotide change in its DNA. The interactions among the genes themselves, and between the genes and their environment, are too complex to be grasped with such precision by a human mind. Similar evidence comes from the mathematical theory of dynamical systems. From chaos theory we know that there are mathematical systems consisting of only a few equations whose behavior a human cannot predict; the only way to find out how the equations will behave is to run them in a computer simulation. Often these incomprehensible systems are tiny: a minimum of three equations for continuous systems, while a single discrete equation, such as the logistic map, can already be chaotic.
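The logistic map makes this point concrete. A minimal sketch (the parameter value r = 4 and the starting points are illustrative choices) showing that a single one-line equation already defies prediction: two trajectories that start a billionth apart soon diverge completely, so simulation is the only way to know where either one ends up.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n). At r = 4 the map is chaotic.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting points that differ by one part in a billion...
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)

# ...stay together at first, then drift apart until they are unrelated.
for step in (1, 10, 50):
    print(step, abs(a[step] - b[step]))
```

The tiny initial difference is amplified roughly exponentially at each step, which is exactly what makes the system's long-term behavior incomprehensible without running it.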

What, then, are our chances of understanding the brain, which probably involves interacting equations on the order of thousands, if not millions or billions? How can we understand the brain in enough engineering detail? AGI created through human insight into how the brain works may simply be unlikely because of the underlying complexity.

Today’s efforts in AI do not offer a meaningful alternative to human-created learning algorithms. The only known alternative would be to use raw computing power to test randomly generated learning equations and select among them with a fitness function, just as natural evolution did. This approach is computationally infeasible.
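To see the shape of this evolutionary alternative, here is a hedged toy sketch (my own illustration, not a method from the article): a candidate "learning rule" is reduced to just two numbers, a learning rate and a momentum term, mutated at random and kept only when a fitness function improves. Real learning rules would occupy an astronomically larger search space, which is exactly why the approach does not scale.

```python
import random

random.seed(0)

def fitness(lr, momentum):
    """Higher is better: how close the candidate rule drives w
    to the optimum of f(w) = (w - 3)^2 within 20 update steps."""
    w, v = 0.0, 0.0
    for _ in range(20):
        grad = 2.0 * (w - 3.0)
        v = momentum * v - lr * grad   # momentum-style update
        w += v
    return -abs(w - 3.0)

# Evolution-like loop: mutate the best rule found so far, keep improvements.
best = (0.01, 0.0)
best_fit = fitness(*best)
for _ in range(200):
    cand = (abs(best[0] + random.gauss(0.0, 0.05)),
            min(0.99, abs(best[1] + random.gauss(0.0, 0.05))))
    f = fitness(*cand)
    if f > best_fit:
        best, best_fit = cand, f

print(best, best_fit)
```

Even this two-dimensional search needs hundreds of fitness evaluations; evolving whole learning algorithms rather than two scalars is what makes the brute-force route infeasible.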

Therefore, there seems to be no option but to employ human engineers to think up new algorithms. The result is a large number of solutions, but for very specific problems. New general algorithms, ones that could bring us closer to AGI, do not seem to emerge easily from such efforts. Some of the best general algorithms used today (e.g., deep learning) date largely from the 1980s.

So what can we do? Is there an alternative, or are we stuck with specialized AI? The answer is: yes, there is an alternative, in the form of AI-Kindergarten. AI-Kindergarten is a method for developing AGI that takes a novel approach to the problem (Nikolić 2015a).

First, AI-Kindergarten has little to do with the development of new algorithms by human engineers. In fact, only a few relatively simple algorithms are needed to operate AI-Kindergarten. AI-Kindergarten is more about giving intelligent agents different levels of organization at which they can learn, and thus create their own algorithms. The human-created algorithms operate at much lower levels of organization than in traditional AI. These simple algorithms underlie the agents’ ability to create (or “learn”) more complex learning algorithms that humans themselves could not possibly understand. These more complex algorithms, in turn, produce intelligent behavior at a human level.

For this, a theory of the organization of biological systems was needed that was more general than any theory so far, so that it would apply equally to different levels of organization within living systems (cell, organ, organism) and to non-living adaptive (AI) systems. This theory is called practopoiesis (Nikolić 2015b). It fundamentally describes the operation of a hierarchy of cybernetic controllers, as it is built on two fundamental results of cybernetics: the law of requisite variety (Ashby 1947) and the good regulator theorem (Conant and Ashby 1970). But this was not enough.

Practopoiesis only provided the basic structure of adaptive systems. It was also necessary to specify how many levels of organization were needed and what the function of each level was. It turned out that to create AGI we need more levels of organization than current theories of the brain or of AI have imagined. That is, adaptive agents that mimic biological intelligence need to operate at three levels of organization (see the tri-traversal theory of mind in Nikolić 2015b). This implies that for an AGI it is not enough to have one advanced learning algorithm, or even several such algorithms. An AGI needs to rely on a set of algorithms that allow it to ‘learn’ new learning algorithms, and this has to be done on the fly. In effect, this requires conceiving an AGI-capable agent with one more adaptive level of organization than previously thought.
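The idea of nested adaptive levels can be pictured with a deliberately minimal sketch. This is my own illustrative assumption, not the mechanism from Nikolić (2015b): the slowest level tunes the learning rule itself (the "machine genes"), the middle level uses that rule to adapt a weight, and the fastest level merely produces behavior.

```python
def behave(w, x):
    """Level 3 (fastest): produce behavior from the current weight."""
    return w * x

def loss(w, data):
    """How far behavior is from the desired behavior."""
    return sum((target - behave(w, x)) ** 2 for x, target in data)

def learn(w, lr, data):
    """Level 2: adapt the weight using the current learning rule (here,
    a single meta-parameter lr in an error-correction update)."""
    for x, target in data:
        w += lr * (target - behave(w, x)) * x
    return w

def meta_learn(lr, data, trials=30):
    """Level 1 (slowest): adapt the learning rule itself by keeping
    whichever nearby variant of lr yields better learning."""
    for _ in range(trials):
        for cand in (lr * 0.8, lr * 1.25):
            if loss(learn(0.0, cand, data), data) < loss(learn(0.0, lr, data), data):
                lr = cand
    return lr

# Desired behavior: double the input.
data = [(x, 2.0 * x) for x in (0.5, 1.0, 1.5)]
lr = meta_learn(0.01, data)   # the rule is learned, not hand-tuned
w = learn(0.0, lr, data)      # the rule then learns the weight
print(lr, w, behave(w, 2.0))
```

The point of the sketch is only structural: the top level never touches the weight directly, it only shapes the rule by which the level below learns, mirroring the claim that an AGI must learn its learning algorithms rather than receive them ready-made.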

The real implementation problem arises from the fact that these learning-to-learn algorithms are also incomprehensible to human engineers and scientists. They correspond to the plethora of plasticity mechanisms that are encoded in our genes and drive the development of our brains and all our instincts. It is practically impossible even to list such rules, let alone understand the principles of their operation. To solve that problem, AI-Kindergarten (Nikolić 2015a) was invented as a method, understandable to a human mind, for providing an AGI with its most fundamental learning-to-learn algorithms.

Second, AI-Kindergarten is not about autonomously self-developing AI. A popular science-fiction meme holds that simply giving an intelligent AI access to the Internet is enough: the AI can then download all the necessary information on its own, learn, and develop autonomously; just wait until the machine spits out a super-smart agent. By contrast, AI-Kindergarten requires substantial human input and supervision throughout the AGI development process.

However, this input is not a form of direct engineering. It is a different kind of human contribution, related to sharing our intuition and demonstrating our own abilities to deal with the world, and also drawing on our scientific knowledge of biology and psychology.

AI-Kindergarten: What does it take to build a truly intelligent machine?

AI-Kindergarten takes advantage of the fact that biological evolution has already performed a vast number of experiments before arriving at the rules that build our brains and guide our behavior. AI-Kindergarten is about extracting this existing knowledge from biological systems and implementing it in machines.

To do that, AI-Kindergarten uses input from human trainers. Where human engineering fails to specify learning rules for the machine, human intuition can specify what kind of behavior the machine should produce, and the machine is then left to find the appropriate learning-to-learn rules. We need to tell machines which behavior is desirable in which situations. This is provided during interactions with the AI, much as in a real kindergarten, where teachers interact with children.

But AI-Kindergarten requires something more. While our children learn only at the developmental level of their brains, an AGI needs to learn at a lower organizational level as well, that is, at the level of “machine genes”. To achieve that, AI-Kindergarten must combine ontogeny with phylogeny (i.e., the development of an individual with the development of the species). For that, data from biology and psychology are needed to structure the stages of AI development. In this way, existing scientific knowledge about the brain and behavior plays a much larger role in AI-Kindergarten than in classic AI, where engineers are supposed to assimilate that knowledge and apply it in self-invented algorithms.

It would be incorrect to think that AI-Kindergarten is not computationally intensive. On the contrary, heavy computation cannot be avoided on the way to AGI. These computations are mainly needed to integrate the knowledge obtained from humans. The process of integrating knowledge within AI-Kindergarten corresponds to what biology has already invented by endowing us with the ability to sleep and dream. Just as our dreams are needed to internally integrate the knowledge we have acquired throughout the day, the AI developed in AI-Kindergarten needs to integrate the knowledge acquired through interactions with humans. Consequently, without intense dreaming, AGI cannot develop.
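The dream-like integration stage can be loosely pictured with ordinary experience replay. This is a hedged analogy of my own, not the article's actual mechanism: experiences collected online during a "waking" phase are re-presented offline, in shuffled order, until they are consolidated into one consistent model.

```python
import random

random.seed(1)

# "Waking" phase: noisy observations of the rule y = 3x, gathered one by one.
experiences = [(x, 3.0 * x + random.gauss(0.0, 0.1)) for x in range(1, 21)]

# "Dreaming" phase: replay stored experiences in random order, integrating
# each one into a single weight with a small consolidation step.
w = 0.0
for _ in range(200):
    x, y = random.choice(experiences)
    w += 0.001 * (y - w * x) * x

print(w)  # the consolidated model approximates the underlying rule
```

The offline replay loop is where the computational cost concentrates, echoing the claim that most of the heavy computation goes into integrating knowledge rather than into collecting it.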

Finally, thanks to the continuous interaction with humans and feedback throughout all stages of AI development, the resulting AI remains safe, in the sense of performing exactly the type of behavior that its creators require. The motives, instincts, and interests of the resulting AI are carefully crafted and shaped through this process to match the needs of humans. There is a concern that AI could surprise us with some sort of unwanted behavior and go rogue (Bostrom 2014).

AGI developed in AI-Kindergarten cannot do that. Just as the selective breeding of dogs makes them reliably gentle and friendly toward humans, a superhuman intelligence produced by AI-Kindergarten has imprinted into its “machine genes” the basic instinct not to harm humans. AI-Kindergarten, by its very nature, produces safe AI.


Ashby, W. R. (1947) Principles of the self-organizing dynamical system. Journal of General Psychology 37: 125–128.

Bostrom, N. (2014) Superintelligence: Paths, dangers, strategies. Oxford University Press.

Conant, R. C. & Ashby, W. R. (1970) Every good regulator of a system must be a model of that system. International Journal of Systems Science 1(2): 89–97.

Nikolić, D. (2015a) AI-Kindergarten: A method for developing biological-like artificial intelligence. (Patent pending)

Nikolić, D. (2015b) Practopoiesis: Or how life fosters a mind. Journal of Theoretical Biology 373: 40–61.

Guest post by Danko Nikolić: neuroscience, machine learning, artificial intelligence, and data science; executive and keynote speaker. Article originally published at
