Self-learning robots thanks to AI

In the future, these mechanical helpers will be able to grip different objects without being programmed to do so and to move autonomously through complex environments. Robots with artificial intelligence are the future.

While robots used to require complicated programming by experts, future systems will be able to teach themselves how to carry out their tasks. This will allow robots to adapt autonomously to changing circumstances and to optimise their own behaviour.

Intuitive cooperation

One example of this is the BionicCobot concept from Festo: the robot is connected to IT systems from the field of Artificial Intelligence that can understand and interpret spoken requests, so operator and robot can collaborate intuitively. The learning system can also process and link images from connected camera systems with positioning data and other information from devices in the working environment. The result is a semantic map that grows continually thanks to machine learning. The system then distributes tasks logically among the robots and other tools in order to support people optimally in their work.
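
To make the idea of a growing semantic map more concrete, here is a minimal Python sketch of how observations from cameras and other devices might be merged into one shared model and used to dispatch tasks. All class and field names are invented for illustration; this is not Festo's actual software.

```python
# Hypothetical sketch of a "semantic map" that grows with each observation.
# Names and structures are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class Observation:
    """One piece of sensor input: a detected object and where it was seen."""
    label: str       # e.g. "screwdriver", from an image classifier
    position: tuple  # (x, y, z) from camera/positioning data
    source: str      # which device reported it


@dataclass
class SemanticMap:
    """Accumulates observations; in a real system, machine learning
    would refine the labels and positions over time."""
    objects: dict = field(default_factory=dict)

    def update(self, obs: Observation) -> None:
        # Merge the new observation into the map, keyed by object label.
        self.objects[obs.label] = obs

    def assign_task(self, task: str, needed_object: str) -> str:
        # Naive task dispatch: route the robot to the object if it is known.
        obs = self.objects.get(needed_object)
        if obs is None:
            return f"ask operator: where is the {needed_object}?"
        return f"send robot to {obs.position} for task '{task}'"


smap = SemanticMap()
smap.update(Observation("screwdriver", (0.4, 0.1, 0.0), "camera_1"))
print(smap.assign_task("tighten housing", "screwdriver"))
```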

Learning to grip like a baby

A special challenge in the world of robotics is the ability to grip different objects: how can an object be held, and what force is needed to grip it? Robotic hands developed at Bielefeld University address this problem by familiarising themselves autonomously with unknown objects. The new system operates without knowing the properties of objects, such as fruit or tools, in advance. "Our system learns through trial and self-discovery – like babies when they are exploring new objects," says Professor Helge Ritter, the neuroinformatics scientist heading up the project. On the basis of Artificial Intelligence, the system learns how everyday objects such as fruit, crockery or even soft toys can vary in colour and shape, and what is important when it comes to gripping them: a banana can be grasped, for instance, while a button has to be pressed. "The system uses the properties it learns to recognise these options and develops an interaction and recognition-based model for itself," says Ritter. The gripping system is part of fundamental research; the results should benefit future self-learning robots both in the home and in industry.
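
The trial-and-self-discovery loop Ritter describes can be illustrated with a toy example. The sketch below treats each candidate interaction (grasp, pinch, press) as an option whose success rate is learned by repeated attempts; it is a stand-in for the idea, not the Bielefeld system's actual method.

```python
# Toy trial-and-error learner: discover which interaction suits an object.
# A real system would learn such a model per object class, from rich
# sensor data rather than a hard-coded affordance table.
import random

strategies = ["grasp", "pinch", "press"]  # candidate interactions
successes = {s: 0 for s in strategies}
attempts = {s: 0 for s in strategies}


def try_grip(obj: str, strategy: str) -> bool:
    # Stand-in for the real world: a banana affords grasping, a button
    # affords pressing. The learner cannot see this table directly.
    affordances = {"banana": "grasp", "button": "press"}
    return strategy == affordances[obj]


def choose(eps: float = 0.2) -> str:
    # Epsilon-greedy: mostly exploit the best-known strategy, sometimes explore.
    if random.random() < eps or not any(attempts.values()):
        return random.choice(strategies)
    return max(strategies, key=lambda s: successes[s] / max(attempts[s], 1))


for _ in range(200):  # self-discovery loop on a single object
    s = choose()
    attempts[s] += 1
    if try_grip("banana", s):
        successes[s] += 1

print({s: f"{successes[s]}/{attempts[s]}" for s in strategies})
```

After a few hundred attempts the success counts concentrate on "grasp", mirroring how repeated interaction lets the system discover what works for a given object.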

Sharing experiences between robots

The Japanese company Fanuc is also working on reducing the training effort for gripping tasks through deep learning. The company demonstrated this at the Hannover Messe in 2017 using a so-called bin-picking cell: two robots equipped with 3D camera sensors stand at a bin of parts that they have to retrieve without specifically being taught how to do so. Each robot saves the experience it gains in an internal cloud layer referred to as the fog. Once stored there, this information is also available to other robots. If four robots are working at the bin, for example, each benefits from the "experiences" of the others and the bin is emptied more quickly as a result. The learning curve shows that after 1,000 attempts a robot achieves a success rate of 60 per cent; after 5,000 attempts it can already pick up 90 per cent of all the parts, without a single line of program code having been written.
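
The effect of pooling experience can be sketched in a few lines: each robot writes its attempts to a shared store standing in for the fog layer, and every robot draws on what the others have recorded. The class names and the toy success criterion here are assumptions for illustration, not Fanuc's actual architecture.

```python
# Sketch of shared experience across robots via a common "fog" store.
import random


class FogStore:
    """Shared memory that all robots in the cell can read and write."""

    def __init__(self):
        self.experiences = []  # (grasp_angle, success) pairs

    def record(self, angle: float, success: bool) -> None:
        self.experiences.append((angle, success))

    def best_angle(self) -> float | None:
        good = [a for a, ok in self.experiences if ok]
        return sum(good) / len(good) if good else None


def attempt(fog: FogStore) -> bool:
    # Exploit pooled experience if any exists, otherwise try a random angle.
    hint = fog.best_angle()
    angle = hint + random.gauss(0, 5) if hint is not None else random.uniform(0, 180)
    success = abs(angle - 90) < 15  # toy world: roughly 90 degrees works
    fog.record(angle, success)
    return success


fog = FogStore()
# Four robots sharing one store improve faster than one robot alone,
# because each benefits from every attempt the others have recorded.
results = [attempt(fog) for _robot in range(4) for _ in range(250)]
print(f"success rate: {sum(results) / len(results):.0%}")
```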

Continually improving movements

Mobile systems, however, represent the supreme discipline in the field of robotics: robots that can move independently in complex environments continue to pose major challenges for research and development. For a robot to act autonomously, it has to perceive its own motion and its environment through sensors, process the sensor data and compute new action commands to execute. The result of these actions is then monitored in turn by the sensors. This continuous feedback allows the robot to balance itself or walk, for example.
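
This sense-process-act cycle can be reduced to a few lines of code. The following sketch balances a single "tilt" value with a proportional controller as a stand-in for the real feedback loop, which would run at high frequency across many sensors and joints.

```python
# Minimal feedback loop: perceive -> process -> act -> monitor again.
def read_sensor(state: float) -> float:
    return state  # in reality: noisy IMU/encoder measurements


def compute_command(error: float, gain: float = 0.5) -> float:
    return -gain * error  # simple proportional controller


tilt = 10.0  # robot starts 10 degrees off balance
for step in range(10):
    measurement = read_sensor(tilt)          # 1. perceive motion/environment
    command = compute_command(measurement)   # 2. process data, compute action
    tilt += command                          # 3. execute; sensors re-measure
    print(f"step {step}: tilt = {tilt:.2f}")
```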

Stefan Schaal and his team at the Autonomous Motion Department of the Max Planck Institute for Intelligent Systems in Tübingen have developed a "continuous motion optimisation and control technology" that enables robots to see what they are manipulating. This technology uses a new algorithm that continually optimises the robot's motions and improves its hand-eye coordination. Robots can therefore shape their behaviour according to the environment and adapt to the unpredictable interactions that arise in human-robot cooperation.
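
One way to picture "continuous motion optimisation" is a controller that re-optimises its trajectory on every tick using the latest camera feedback, so the motion adapts when the target moves. The sketch below takes one gradient-descent step per tick on a simple quadratic cost; it illustrates the principle only and is not the Max Planck or Lula Robotics algorithm.

```python
# Continuous re-optimisation toward a target that can move mid-motion.
def camera_target(t: int) -> float:
    return 5.0 if t < 10 else 8.0  # target jumps mid-motion (perturbation)


hand = 0.0
for t in range(20):
    target = camera_target(t)   # fresh vision feedback on every tick
    gradient = hand - target    # gradient of cost 0.5 * (hand - target)**2
    hand -= 0.3 * gradient      # one optimisation step per control tick
print(f"final hand position: {hand:.2f} (target ended at 8.0)")
```

Because the optimisation never stops, the hand simply tracks the new target when it jumps, which is the life-like adaptivity described in the quote below.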

The American robotics specialist Lula Robotics is now continuing to develop the technology and plans to integrate it fully into existing robotics platforms. "Our system continuously optimises its behaviour and reacts to changes, giving a life-like quality that promotes close human-robot collaboration," explains Nathan Ratliff, co-inventor and CEO of Lula Robotics. "Today, we are concentrating on collaborative man-machine interaction in the area of industrial manufacturing and assembly, but the technology might even form the basis for robots used in the home or in healthcare in the future."

(Picture Credit: Festo)
