
Today’s automated cars can already react faster than their human drivers. This is made possible by extremely powerful on-board computers that collate the data from all sensor systems, evaluate it within fractions of a second and initiate the appropriate response.

If a child runs into the road, a human driver takes 1.6 seconds on average to depress the brake pedal. Highly automated vehicles equipped with radar or lidar sensors and a camera system already react in 0.5 seconds.
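To put these figures in perspective, the distance covered before braking even begins is simply speed multiplied by reaction time. A minimal Python sketch (the speed of 50 km/h is chosen purely for illustration):

    def reaction_distance(speed_kmh: float, reaction_time_s: float) -> float:
        """Distance in metres covered at constant speed during the reaction time."""
        return speed_kmh / 3.6 * reaction_time_s

    for label, t in [("human driver", 1.6), ("automated vehicle", 0.5)]:
        d = reaction_distance(50, t)
        print(f"{label}: {t} s at 50 km/h -> {d:.1f} m before braking")
    # human driver: 1.6 s at 50 km/h -> 22.2 m before braking
    # automated vehicle: 0.5 s at 50 km/h -> 6.9 m before braking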

Real-time environmental perception

This is only possible if the data is processed on board the vehicle itself, in powerful embedded image-processing computers that continuously generate a complete picture of the surrounding traffic situation in real time. After all, a fully automated vehicle generates between 30 and 40 terabytes of data per eight hours of driving, the equivalent of around 3,500 4K movies. Neither the current nor a future 5G network architecture can cope with this kind of volume. In addition, cars with automated-driving functions simply cannot afford to wait for information stored and processed in the cloud.
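A rough calculation shows why offloading this to a network is unrealistic. Assuming the midpoint of the quoted range, 35 terabytes per eight-hour shift, a single vehicle would need roughly the following sustained uplink:

    # Assumes 35 TB per 8-hour shift, the midpoint of the 30-40 TB quoted above.
    bytes_total = 35e12
    seconds = 8 * 3600
    rate = bytes_total / seconds
    print(f"{rate / 1e9:.2f} GB/s sustained")        # 1.22 GB/s
    print(f"{rate * 8 / 1e9:.1f} Gbit/s sustained")  # 9.7 Gbit/s

Close to 10 Gbit/s of sustained throughput per car, around the clock, is far beyond what a shared mobile network can guarantee.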

AI is a crucial module

According to Robert Bielby, Senior Director for Automotive System Architectures in Micron’s Embedded Business Unit, high-performance computers featuring artificial intelligence with deep neural-network algorithms will enable autonomous cars to drive better than vehicles controlled by humans. “You’ve got a host of different sensors that work together to see the entire environment in 360 degrees, 24/7, at a greater distance and with higher accuracy than humans can,” Bielby says. “Combined with the extreme compute performance that can be deployed in a car today, you have a situation where it is possible for cars to do a far better job of driving down the road with greater safety than we can.”

Micron Technology offers a broad portfolio of volatile and non-volatile memory products for automotive applications. They are used, for example, in a computing platform specially developed for autonomous driving and built around high-performance DRAM technology.

Hundreds of trillions of computing operations per second

Among other things, the AI platform delivers the computing power needed by Daimler’s highly automated vehicles. Artificial intelligence is an important building block in the ECU networks of fully automated and driverless vehicles, each of which consists of multiple individual control units. Overall, the ECU network in Daimler vehicles achieves a computing capacity of hundreds of trillions of operations per second, equating to the performance of at least six interconnected, state-of-the-art computer workstations.

This is where the information from the various environmental sensors, based on radar, video, lidar and ultrasound technology, comes together. The ECU network compiles the data from all environmental sensors, the sensor fusion evaluates it within milliseconds, and the vehicle’s travel path is planned on this basis. That is comparable to the speed of a pain stimulus in humans, which takes between 20 and 500 milliseconds to reach the brain.
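As a purely illustrative sketch of the association step behind such sensor fusion, the toy Python example below matches radar and camera detections by bearing angle. The class names and the matching rule are invented for this example; real ECU software works with calibrated 3D geometry and tracking filters.

    from dataclasses import dataclass

    @dataclass
    class RadarDetection:
        bearing_deg: float
        range_m: float

    @dataclass
    class CameraDetection:
        bearing_deg: float
        label: str

    def fuse(radar: list[RadarDetection], camera: list[CameraDetection],
             max_angle_diff: float = 2.0) -> list[dict]:
        """Merge radar ranges with camera labels into one obstacle list."""
        fused = []
        for r in radar:
            # Pick the camera detection closest in bearing, if close enough.
            match = min(camera, default=None,
                        key=lambda c: abs(c.bearing_deg - r.bearing_deg))
            if match and abs(match.bearing_deg - r.bearing_deg) <= max_angle_diff:
                fused.append({"label": match.label, "range_m": r.range_m,
                              "bearing_deg": r.bearing_deg})
        return fused

    print(fuse([RadarDetection(1.5, 12.0)], [CameraDetection(1.0, "pedestrian")]))
    # [{'label': 'pedestrian', 'range_m': 12.0, 'bearing_deg': 1.5}]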

Braking in under 10 milliseconds

Research into even faster systems is nevertheless under way. Although an automated car reacts within 500 milliseconds, at a speed of 50 km/h it still travels seven metres without braking. With this in mind, the Fraunhofer Institute for Reliability and Microintegration (IZM) is working on a camera-radar module that can register changes in the traffic situation much more rapidly. The module, which is roughly the size of a mobile phone, will have a response time of under 10 milliseconds.

Integrated signal processing makes this possible. Data from the radar system and the stereo camera are processed and filtered directly in (or on) the module: irrelevant information is detected and not passed along. The data from the camera and radar are merged through sensor fusion, and, underpinned by neural networks, their content is evaluated using machine learning. The system subsequently sends the vehicle no status information, only instructions on how to react. As such, the vehicle’s bus line remains free for important signals, such as those triggered by a child that suddenly runs into the road.
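This pattern of sending reactions rather than raw status can be sketched as follows; the hazard model and message format here are placeholders, not the IZM implementation:

    def classify_hazard(frame) -> float:
        """Placeholder for the on-module neural network; returns a hazard score."""
        return frame.get("pedestrian_score", 0.0)

    def process_frame(frame, bus_send, threshold: float = 0.9) -> None:
        score = classify_hazard(frame)
        if score >= threshold:
            bus_send({"command": "BRAKE", "score": round(score, 2)})
        # Below the threshold nothing is sent: irrelevant data never
        # reaches the vehicle bus, keeping it free for critical signals.

    process_frame({"pedestrian_score": 0.97}, bus_send=print)
    # {'command': 'BRAKE', 'score': 0.97}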

“This integrated signal processing shortens the response time enormously,” says Christian Tschoban, Group Leader in RF & Smart Sensor Systems at the IZM. Once finished, the system is intended to be 50 times faster than conventional sensor systems and 160 times as fast as a human being. A car would then travel only a further 15 centimetres before the system kicks in and sends a signal to brake, which could prevent many accidents in urban traffic.
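These factors can be cross-checked against the reaction times quoted earlier:

    # 1.6 s human, 0.5 s conventional sensor system, 0.010 s IZM module.
    module_s, system_s, human_s = 0.010, 0.5, 1.6
    v = 50 / 3.6  # 50 km/h in m/s
    print(f"vs. sensor systems: {system_s / module_s:.0f}x faster")   # 50x
    print(f"vs. humans:         {human_s / module_s:.0f}x faster")    # 160x
    print(f"distance in 10 ms:  {v * module_s * 100:.0f} cm")
    # ~14 cm, consistent with the roughly 15 cm quoted above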

One eye fixed firmly on the surroundings

Multiple fisheye-stereo cameras

An alliance of companies and institutes, including MicroSys Electronics GmbH, TheImagingSource, Myestro, NewTec and the Institute for Laser Technology in Medicine and Measurement Technique (ILM) in Ulm, has developed a sensor system with multiple fisheye-stereo cameras that companies from all manner of markets can deploy in highly automated vehicles without needing to build up specific expertise in environmental recognition themselves.

The system comprises several multi-stereo sensor systems, each of which features four quadrilaterally arranged cameras in addition to a dedicated laser system. Cameras from neighbouring multi-stereo systems, which are further apart, together form additional stereo pairs for longer ranges, while fisheye lenses keep the required number of multi-stereo cameras low. This configuration, patented by Myestro, enables both close- and long-range obstacles to be measured simultaneously. To compensate for vibrations in the vehicle body that would otherwise prevent usable image recordings, Myestro has developed its “RubberStereo” technology, which detects and offsets vibrations in real time by comparing the image data from each pair of cameras. The system runs on an embedded computer from MicroSys, which in turn is built around a processor developed especially for automotive-vision applications. The platform combines the signal-processing and computing functions for environmental detection with a very compact form factor.
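The range benefit of pairing cameras across modules follows from the standard pinhole-stereo relation Z = f * B / d: at the smallest disparity d the matcher can still resolve, the maximum usable depth Z grows linearly with the baseline B. A small sketch, where the focal length and disparity limit are assumed example values, not the alliance’s actual parameters:

    f_px = 800.0    # focal length in pixels (assumed)
    d_min_px = 0.5  # smallest reliably measurable disparity (assumed)

    for baseline_m in (0.10, 0.50):  # within one module vs. across modules
        z_max = f_px * baseline_m / d_min_px
        print(f"baseline {baseline_m:.2f} m -> max usable depth ~{z_max:.0f} m")
    # baseline 0.10 m -> max usable depth ~160 m
    # baseline 0.50 m -> max usable depth ~800 m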
