AI – the Learning Robot

Artificial intelligence enables robots to perform tasks autonomously and find their way around unfamiliar environments. Ever more powerful algorithms and ultra-high-performance microprocessors are allowing machines to learn faster and faster.

The term artificial intelligence (AI) has been in existence for over 60 years. During that time, research has been conducted into systems and methodologies capable of simulating the mechanisms of intelligent human behaviour. That might sound simple, but it has to date posed major challenges to scientists, because many tasks that most people would not even associate with “intelligence” have long caused computers serious problems: understanding human speech, identifying objects in pictures, or manoeuvring a robotic vehicle around unfamiliar terrain. Recently, however, artificial intelligence has been making giant strides and is increasingly becoming a driver of economic growth. All major technology companies – all the key players in Silicon Valley – have AI departments. “Advances in artificial intelligence will allow robots to watch, learn and improve their capabilities,” said Kiyonori Inaba, Board Member, Executive Managing Officer and General Manager of Fanuc.

Simulating the human brain

Findings from brain research, in particular, have driven advances in artificial intelligence. Software algorithms and microelectronics combine to create neural networks modelled on the human brain. Depending on what information a network captures, and how it evaluates it, a quite specific “information architecture” is created: the “memory”. The neural network is subject to continuous change as it is expanded or remodelled by new information. The technological foundations for state-of-the-art neural networks were laid back in the 1980s, but only now do computers exist that are powerful enough to simulate networks with many “hidden layers”.
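
To make the idea of a layered network concrete, here is a minimal sketch in Python/NumPy of an input flowing through two “hidden layers”. The layer sizes, weights and activation function are illustrative assumptions, not details of any system mentioned in this article.

```python
import numpy as np

def relu(x):
    # Simple activation function: negative signals are suppressed
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Layer sizes: 4 inputs, two hidden layers of 16 units, 1 output
sizes = [4, 16, 16, 1]
# Random weights for illustration; in a trained network these values
# would encode the "memory" described above
weights = [rng.normal(0.0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Pass an input through every layer of the network."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)               # hidden layers
    return x @ weights[-1] + biases[-1]   # linear output layer

print(forward(rng.normal(size=4)))
```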

Becoming ever better by learning

“Deep Learning” is the modern-day term used to describe this information architecture. The concept involves software systems that are capable of reprogramming themselves based on experimentation, with the behaviours that most reliably lead to a desired result ultimately emerging as the “winners”. Many well-known applications, such as the Siri and Cortana voice recognition systems, are essentially based on Deep Learning software. “Deep Learning will greatly reduce the time-consuming programming of robot behaviour,” asserts Kiyonori Inaba. His company Fanuc has integrated AI into its “Intelligent Edge Link and Drive” platform for fog computing (also referred to as edge computing). The integrated AI enables connected robots to “teach” each other, so as to perform their tasks more quickly and efficiently: whereas one robot would otherwise take eight hours to acquire the necessary “knowledge”, eight robots take just one hour.
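
Fanuc has not published the details of how its robots pool what they learn, but the arithmetic behind the eight-hours-versus-one-hour claim can be illustrated with a toy sketch in Python/NumPy: several simulated robots try grip strategies by trial and error and feed their results into a single shared pool, so eight robots each need only an eighth of the attempts. The strategies and success rates are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_SUCCESS = np.array([0.2, 0.5, 0.8])   # hidden success rate of each grip strategy

def pooled_training(n_robots, trials_per_robot):
    """Each robot tries strategies at random; all results go into one
    shared pool, so every robot benefits from the others' experiments."""
    successes = np.zeros(3)
    attempts = np.zeros(3)
    for _ in range(n_robots * trials_per_robot):
        strategy = rng.integers(3)
        successes[strategy] += rng.random() < TRUE_SUCCESS[strategy]
        attempts[strategy] += 1
    return successes / np.maximum(attempts, 1)   # estimated success rates

print(pooled_training(1, 800))   # one robot, 800 trials ("eight hours")
print(pooled_training(8, 100))   # eight robots, 100 trials each ("one hour")
```

Both runs accumulate the same 800 pooled trials, so the learned estimates are equally good; only the time each individual robot spends experimenting differs.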

New algorithms for faster learning success

Ever-improving algorithms are continually enhancing the ability of machines to learn. As one example, the Mitsubishi Electric Corporation recently launched a quick-training algorithm for Deep Learning, incorporating the so-called inference functions required to identify, recognise and predict unknown facts on the basis of known ones. The new algorithm is designed to aid the implementation of Deep Learning in vehicles, industrial robots and other machines by dramatically reducing the memory usage and computing time taken up by training: it cuts training times, computing costs and memory requirements to around a thirtieth of those of conventional AI systems.

Special chips for Deep Learning

To obtain the extremely high computing power required to create a Deep Learning system, current solutions mostly rely on so-called GPU computing, in which the computing power of a graphics processing unit (GPU) is combined with that of the CPU. CPUs are specially designed for serial processing. By contrast, GPUs have thousands of smaller, more efficient processor units for parallel data processing. Consequently, GPU computing enables serial code segments to run on the CPU while parallel segments – such as the training of deep neural networks – are processed on the GPU. The result is a dramatic improvement in performance. But the development of Deep Learning processors is by no means at an end: the “Eyeriss” processor developed at the Massachusetts Institute of Technology (MIT), for example, surpasses the performance capability of GPUs by a factor of ten. Whereas large numbers of cores in a GPU share a single large memory bank, Eyeriss features a dedicated memory for each core, and each core is capable of communicating with its immediate neighbours. This means data does not always have to be routed through the main memory, so the system works much faster. Vivienne Sze, one of the researchers on the Eyeriss project, comments: “Deep Learning is useful for many applications, such as object recognition, speech or facial recognition.”
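
The division of labour described above – serial control flow on the CPU, parallel number-crunching on the GPU – amounts to only a few lines of code in practice. Here is a minimal sketch using PyTorch; the framework, toy model and random training data are assumptions made for the example, not part of the article, and the script falls back to the CPU if no CUDA-capable GPU is present.

```python
import torch

# Parallel work (the big matrix operations of training) goes to the GPU
# if one is available; serial control flow stays on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy deep network and some random training data
model = torch.nn.Sequential(
    torch.nn.Linear(256, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
).to(device)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()
x = torch.randn(1024, 256, device=device)
y = torch.randint(0, 10, (1024,), device=device)

for step in range(100):          # the loop itself runs serially on the CPU
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)  # forward pass: parallel work on the GPU
    loss.backward()              # backward pass: likewise on the GPU
    optimiser.step()
```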
