Machines are being made more intelligent through a variety of data-analysis methods. The focus of these efforts is shifting increasingly from raw performance capability towards creating the kind of flexibility that the human brain achieves. Artificial neural networks play a major role in this shift.
Not all forms of Artificial Intelligence are the same – the systems differ in how they map their knowledge. A distinction is made primarily between two methods: neural networks and symbolic AI.
Knowledge represented by symbols
Conventional AI is mainly about the logical analysis and planning of tasks. Symbolic, or rule-based, AI is the original method, developed back in the 1950s. It attempts to simulate human intelligence by processing abstract symbols with the aid of formal logic. This means that facts, events or actions are represented by concrete, unambiguous symbols. Based on these symbols, operations can be defined, such as the programming paradigm “if X, then Y, otherwise Z”. The knowledge – that is to say, the sum of all symbols – is stored in large databases against which the inputs are cross-checked. These databases must be “fed” in advance by humans. Classic applications of symbolic AI include, for example, text processing and voice recognition. Probably the most famous example of symbolic AI is Deep Blue, IBM’s chess computer, which beat then world champion Garry Kasparov in 1997.
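The “if X, then Y” paradigm can be made concrete with a minimal sketch of a rule-based system. The facts and rule names below are purely illustrative, not drawn from any real system such as Deep Blue; the sketch only shows the general idea of forward-chaining over symbols:

```python
# Facts are explicit, unambiguous symbols; knowledge is a set of
# hand-written if-then rules fed to the system in advance by humans.
facts = {"has_fur", "barks"}

# Each rule: if all premises are known facts, conclude the consequent.
rules = [
    ({"has_fur"}, "is_mammal"),
    ({"is_mammal", "barks"}, "is_dog"),
]

def infer(facts, rules):
    """Forward-chain: apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer(facts, rules))  # includes "is_mammal" and "is_dog"
```

Note that the system can only ever conclude what its hand-written rules allow – exactly the rigidity the next paragraph describes.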
As computer performance steadily increases, symbolic AI can solve ever more complex problems. However, it works on the basis of fixed rules. For a machine to operate beyond tightly constrained bounds, it needs much more flexible AI capable of handling uncertainty and processing new experiences.
Advancing knowledge autonomously with neurons
That flexibility is offered by artificial neural networks, which are currently the focus of research activity. They simulate the functionality of the human brain. As in nature, artificial neural networks are made up of nodes, known as neurons or units. These receive information from their environment or from other neurons and relay it in modified form to other units or back to the environment (as output). There are three different kinds of unit:
Input units receive various kinds of information from the outside world. This may be measurement data or image information, for example. The data, such as a photo of an animal, is then analysed across multiple layers by hidden units. At the end of the process, output units present the result to the outside world: “The photo shows a dog.” The analysis is based on the edges by which the individual neurons are interconnected. The strength of the connection between two neurons is expressed by a weight: the greater the weight, the more one unit influences another. The knowledge of a neural network is thus stored in its weights. Learning normally occurs through a change in weights; how and when a weight changes is defined in learning rules. So before a neural network can be used in practice, it must first be trained using those learning rules. After that, neural networks are able to apply their learning algorithm to learn independently and grow autonomously. That is what makes neural AI a highly dynamic, adaptable system capable of mastering challenges at which symbolic AI fails.
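The ideas above – weighted connections, and learning as weight change governed by a rule – can be sketched with the smallest possible network: a single neuron (a perceptron). The threshold activation, the learning rate, and the logical-AND training data are illustrative choices, not part of the article:

```python
def predict(weights, bias, inputs):
    """The neuron's output: a weighted sum of its inputs.
    The weights are where the network's knowledge is stored."""
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    """Learning rule: nudge each weight whenever the prediction is wrong
    (the classic perceptron rule, one simple example of a learning rule)."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in zip(samples, labels):
            error = label - predict(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# The network learns logical AND purely from examples, without any
# hand-written rules: the knowledge ends up encoded in the weights.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train(samples, labels)
print([predict(w, b, s) for s in samples])  # → [0, 0, 0, 1]
```

Nothing here tells the neuron what “AND” means; the behaviour emerges from repeated weight adjustments – which is precisely the contrast with a symbolic system’s fixed rules.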
Cognitive processes as the basis of a new AI
Another new form of Artificial Intelligence has been developed by computer scientists at the University of Tübingen: their “Brain Control” computer program simulates a 2D world and virtual figures – or agents – that act, cooperate and learn autonomously within it. The aim of the simulation is to translate state-of-the-art cognitive science theories into a model and to research new variants of AI. Brain Control has not made use of neural networks to date, but nor does it adhere to the conventional symbolic-AI paradigm. The core theoretical idea underlying the program originates from a cognitive psychology theory according to which cognitive processes are essentially predictive and based on so-called events. According to the theory, events – such as a movement to grip a pen – and sequences of events – such as packing up at the end of the working day – form the building blocks of cognition, by which interactions, and sequences of interactions, with the world are selected and controlled in a goal-oriented way. Brain Control mirrors this hypothesis: the figures plan and decide by simulating events and their sequencing, and are thus able to carry out quite complex sequences of actions. In this way, the virtual figures can even act collaboratively: first, one figure places another on a platform so that the second figure can clear the way; then both of them are able to advance. Modelling cognitive systems as in Brain Control is still an ambitious undertaking, but its aim is to deliver improved AI over the long term.