No highly automated driving without AI

Artificial intelligence will play a decisive role in many developments in the mobility of the future. Thanks to new microprocessor technologies, a huge amount of computing power is now available, helping to drive the automation of mobility solutions forward.

Artificial intelligence is at the heart of many intelligent applications that make mobility safer and more secure, more convenient, more efficient and more sparing of resources. The basis for this is chips that allow the artificial intelligence to run not only in a large data centre but also directly at the site of action.

“Without fast chips there is no networking, no automation and no autonomous driving,” says Frank Petznick, head of Continental’s Driver Assistance Systems business unit. The company is working on a new chip architecture for real-time object detection based on artificial intelligence. One area in which these processors of the future are to be used is Continental’s high-performance vehicle computers.

Here they will take over the fast processing of sensor data for automated and autonomous driving. Figuratively speaking, the new, highly specialised processors will act as a highly economical data turbo: they will enable the vehicle computers to rapidly perceive the vehicle’s surroundings and will provide the basis for the functions of automated and autonomous driving – all while consuming very little energy. Edge AI chips currently available offer an efficiency in the order of 1 to 100 tera operations per second per watt (TOPS/W), with fast GPUs or ASICs handling the calculations. Research is under way into solutions that achieve an efficiency of 10,000 TOPS per watt for inference.
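
To put these efficiency figures into perspective, the short sketch below runs the arithmetic for a hypothetical 100-TOPS perception workload; the workload figure and the efficiency values are illustrative assumptions, not Continental specifications.

```python
# Illustrative back-of-the-envelope calculation (assumed figures, not chip specs):
# how much power an inference workload draws at a given chip efficiency.

def power_draw_watts(workload_tops: float, efficiency_tops_per_watt: float) -> float:
    """Power needed to sustain a workload (in TOPS) at a given efficiency (TOPS/W)."""
    return workload_tops / efficiency_tops_per_watt

# Hypothetical perception workload of 100 TOPS:
workload = 100.0
for eff in (1, 100, 10_000):  # today's low end, today's high end, research target
    print(f"{eff:>6} TOPS/W -> {power_draw_watts(workload, eff):.3f} W")
# 1 TOPS/W -> 100 W, 100 TOPS/W -> 1 W, 10,000 TOPS/W -> 0.01 W
```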

Chips that work like a human brain

Imec at the Holst Centre, an independent research centre specialising in microelectronics, is developing new types of chips that are set to offer even more performance at even lower energy consumption. “It is about the physical calculations inside the chip: these are based on how neurons in the brain interact with each other,” says Federico Corradi, Senior Neuromorphic Researcher at the imec research centre.

The microchips mimic how neurons in the brain work together, exchange information, make predictions and recognise patterns. These neural networks are referred to as “spiking” networks and represent the most heavily bio-inspired (“third”) generation of artificial neural networks. The chip comes with numerous advantages: it consumes around 100 times less energy than conventional applications and works with virtually no latency, enabling an almost instantaneous decision-making process.
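
As a rough illustration of the principle, the sketch below implements a single leaky integrate-and-fire neuron, a common abstraction in spiking networks; the parameters are arbitrary illustration values and the code is not based on imec’s chip design.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a common abstraction used in
# spiking neural networks. Parameters are arbitrary illustration values, not
# taken from imec's neuromorphic hardware.

def simulate_lif(input_current, threshold=1.0, leak=0.9, reset_value=0.0):
    """Return the spike train produced by a stream of input currents.

    The membrane potential integrates incoming current, leaks over time,
    and emits a spike (1) whenever it crosses the threshold, then resets.
    Information is carried by the timing of spikes, not by dense activations.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current   # integrate with leak
        if potential >= threshold:               # threshold crossing -> spike
            spikes.append(1)
            potential = reset_value              # reset after firing
        else:
            spikes.append(0)
    return spikes

# A weak input never fires; a stronger one fires sparsely - the neuron
# (and hence the chip) only does work when there is something to signal.
print(simulate_lif([0.1] * 10))   # all zeros
print(simulate_lif([0.4] * 10))   # occasional spikes once the potential builds up
```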

Safety

For all the computing power available, the greatest challenge when using AI in autonomous vehicles is making the vehicle recognise and interpret situations reliably so that it can take the right decisions. And as if that were not challenging enough, AI systems also have to withstand deliberate attacks that target the AI and aim to disrupt it and interfere with safety-critical functions. Such attacks can take the form of paint applied to the road to mislead the navigation system, or stickers placed on stop signs to prevent them from being detected. “In automated driving, it is possible for even just small deviations to prevent the system from detecting the environment properly,” says Richard, Section Head for Vehicle and Mobility at the TÜV Group.
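
The danger of such small deviations can be illustrated with a toy example: for a simple linear detector, a tiny, targeted change to the input is enough to flip the decision. The weights, features and perturbation below are invented for illustration and do not model any production perception system.

```python
# Toy illustration of why "small deviations" matter: with a simple linear
# classifier, a small, targeted change to the input can flip the decision.
# Generic adversarial-perturbation sketch with invented numbers only.

w = [0.9, -0.6, 0.4, 0.8]        # hypothetical learned weights of a "stop sign" detector
b = -1.0
x = [0.7, 0.2, 0.5, 0.6]         # clean input features -> detected

def score(features):
    """Positive score means 'stop sign present'."""
    return sum(wi * xi for wi, xi in zip(w, features)) + b

eps = 0.2                         # small perturbation budget per feature
# Nudge each feature slightly against the sign of its weight (FGSM-style step).
x_attacked = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(score(x), score(x_attacked))          # roughly 0.19 vs. -0.35
print(score(x) > 0, score(x_attacked) > 0)  # True  False -> detection suppressed
```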

Even bad weather can bring about these kinds of deviations. “For instance, weather such as rain, fog, or snow may cause an AV to detect itself in the wrong lane before a turn, or to stop too late at an intersection because of imprecise positioning,” explains Yasin Almalioglu from the Department of Computer Science at the University of Oxford.

To overcome this problem, Almalioglu and his colleagues have developed a new kind of self-supervised deep-learning model for ego-motion estimation – a key component of an autonomous vehicle’s driving system that estimates the car’s position relative to the objects it observes. The model combines highly detailed information from visual sensors (which can be degraded by adverse weather) with data from weather-independent sources (such as radar), so that the strengths of both can be exploited in different weather conditions.
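
The general idea behind this kind of fusion can be sketched as a confidence-weighted blend of the two estimates; the weighting scheme below is a simplified, hypothetical stand-in for the Oxford group’s model, which is a learned deep network rather than a hand-written rule.

```python
# Simplified illustration of the fusion idea: weight a camera-based motion
# estimate against a weather-independent radar estimate according to how much
# the visual data can be trusted. Hypothetical weighting, not the actual model.

from dataclasses import dataclass

@dataclass
class MotionEstimate:
    forward_m: float   # forward translation since the last frame (metres)
    yaw_deg: float     # change in heading (degrees)

def fuse_egomotion(camera: MotionEstimate,
                   radar: MotionEstimate,
                   visual_confidence: float) -> MotionEstimate:
    """Blend two ego-motion estimates.

    visual_confidence in [0, 1]: near 1 in clear conditions (trust the detailed
    camera estimate), near 0 in rain, fog or snow (fall back on radar).
    """
    w = max(0.0, min(1.0, visual_confidence))
    return MotionEstimate(
        forward_m=w * camera.forward_m + (1 - w) * radar.forward_m,
        yaw_deg=w * camera.yaw_deg + (1 - w) * radar.yaw_deg,
    )

# Clear weather: the result follows the camera; heavy fog: it follows the radar.
cam, rad = MotionEstimate(1.02, 0.8), MotionEstimate(0.97, 0.5)
print(fuse_egomotion(cam, rad, visual_confidence=0.9))
print(fuse_egomotion(cam, rad, visual_confidence=0.1))
```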

Social competency

The AI system of a highly automated car not only has to detect its environment, it also has to predict how other road users will act. To do this, it must identify which passers-by could become relevant so that it can capture and interpret their behaviour. The Fraunhofer Institute for Optronics, System Technologies and Image Exploitation (IOSB) in Karlsruhe has developed a prototype system that uses artificial intelligence to do exactly that.

“We have recently developed a research prototype that estimates whether a pedestrian wants to cross the road and analyses their gestures, which forms the basis for the interaction,” explains Manuel Martin from the Fraunhofer IOSB. The system consists of a stereo camera, which can “see” spatially and thus determine the exact position of passers-by, and an AI algorithm that detects the positions of their limbs and draws conclusions from them.
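
A very rough sketch of such a pipeline might look like the following; the keypoint names, thresholds and decision rule are invented for illustration and do not reflect the Fraunhofer IOSB prototype, which relies on learned models rather than hand-written rules.

```python
# Toy sketch of the pipeline described above: 3D body keypoints from a stereo
# camera feed a simple rule that guesses crossing intent. Keypoint layout,
# thresholds and rule are invented for illustration only.

from typing import Dict, Tuple

Keypoint = Tuple[float, float, float]  # (x, y, z) in metres, vehicle coordinates
# Assumption: x > 0 means to the right of the lane edge; negative lateral
# velocity means the pedestrian is moving towards the lane.

def wants_to_cross(keypoints: Dict[str, Keypoint],
                   lateral_velocity_mps: float) -> bool:
    """Heuristic crossing-intent guess from body pose and movement.

    A pedestrian close to the kerb, heading towards the road and moving
    laterally towards the driving corridor is flagged as likely to cross.
    """
    head = keypoints["head"]
    left_foot, right_foot = keypoints["left_foot"], keypoints["right_foot"]

    near_kerb = min(abs(left_foot[0]), abs(right_foot[0])) < 1.5    # metres from lane edge
    heading_towards_road = head[0] * lateral_velocity_mps < 0       # moving towards the lane
    moving = abs(lateral_velocity_mps) > 0.3                        # walking, not standing still

    return near_kerb and heading_towards_road and moving

# Example: pedestrian 1.2 m to the right of the lane edge, stepping towards it.
pose = {"head": (1.1, 1.7, 8.0), "left_foot": (1.2, 0.0, 8.0), "right_foot": (1.3, 0.0, 8.0)}
print(wants_to_cross(pose, lateral_velocity_mps=-0.6))  # True
```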
