Sensor fusion combines data from different sensors to build an increasingly accurate picture of the environment. To deliver results faster and to stem the flood of data, the sensors themselves are becoming intelligent too.
Systems with Artificial Intelligence need data. The more data, the better the results. This data can either originate in databases or be recorded by sensors. Sensors measure vibrations, currents and temperatures on machines, for example, and thus provide an AI system with the information it needs to predict when maintenance is due. Others, integrated in wearables, record a person's pulse, blood pressure and perhaps blood sugar in order to draw conclusions about their state of health.
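To make the predictive-maintenance idea concrete, here is a minimal sketch of how vibration readings might be turned into a maintenance alert. The function name, window size and 3-sigma threshold are illustrative assumptions, not details from any specific system:

```python
import numpy as np

def maintenance_alert(vibration, window=100, n_sigma=3.0):
    """Flag when recent vibration drifts beyond a learned baseline.

    `vibration` is a 1-D array of vibration readings; the window size
    and the 3-sigma threshold are illustrative assumptions.
    """
    baseline = vibration[:window]          # assume the machine starts healthy
    mu, sigma = baseline.mean(), baseline.std()
    recent = vibration[-window:].mean()    # average of the latest readings
    return recent > mu + n_sigma * sigma   # True -> schedule maintenance

# Usage: simulated sensor data with a growing fault signature
rng = np.random.default_rng(0)
healthy = rng.normal(1.0, 0.05, 900)
worn = rng.normal(1.4, 0.08, 100)          # bearing wear raises vibration
print(maintenance_alert(np.concatenate([healthy, worn])))  # True
```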
Sensor technology has gained considerable momentum in recent years from areas such as mobile robotics and autonomous driving: for vehicles to move autonomously through an environment, they have to recognise their surroundings and determine their precise position. To do this, they are equipped with a wide array of sensors: ultrasound sensors detect obstacles at short range, for example when parking. Radar sensors measure the position and speed of objects at greater distances. Lidar sensors (light detection and ranging) scan the environment with invisible laser light and deliver a precise 3D image. Camera systems record important optical information such as the colour and contour of an object, and can even measure distance from the travel time of a light pulse.
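That last distance measurement follows the standard time-of-flight relation: the pulse travels to the object and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name is an assumption for illustration):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance from the round-trip travel time of a light pulse.

    The pulse travels to the object and back, hence the division by two.
    """
    return C * round_trip_seconds / 2.0

# A pulse returning after 200 ns corresponds to roughly 30 m
print(f"{tof_distance(200e-9):.1f} m")  # ~30.0 m
```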
More information is needed
Attention today is no longer focused solely on an object's position; information such as orientation, size, colour and texture is becoming increasingly important as well. Different sensors have to work together to capture this information reliably, because every sensor system offers specific advantages. It is only by fusing the information from the different sensors – a process known as sensor fusion – that a precise, complete and reliable image of the surroundings emerges. A simple example is the motion sensing in smartphones, among other devices: only by combining an accelerometer, a magnetometer and a gyroscope can these systems measure the direction and speed of a movement.
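As an illustration of the smartphone example, here is a sketch of a complementary filter, one of the simplest fusion schemes for inertial sensors: the gyroscope reacts quickly but drifts, the accelerometer is noisy but drift-free, and blending the two yields a stable tilt estimate. The function name, the 0.98 blend factor and the restriction to a single pitch angle are illustrative assumptions; real smartphone fusion also folds in the magnetometer for heading:

```python
import math

def complementary_filter(gyro_rates, accel_samples, dt=0.01, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into a pitch estimate.

    Each step blends the integrated gyroscope rate (fast but drifting)
    with the tilt implied by gravity in the accelerometer (noisy but
    drift-free): 98 % gyro, 2 % accelerometer.
    """
    angle = 0.0
    for rate, (ax, az) in zip(gyro_rates, accel_samples):
        accel_angle = math.atan2(ax, az)          # gravity gives absolute tilt
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
    return angle

# Usage: a stationary device tilted 0.1 rad; the gyro reads a small bias
gyro = [0.002] * 500                               # rad/s, pure drift
accel = [(math.sin(0.1), math.cos(0.1))] * 500     # gravity components
print(f"{complementary_filter(gyro, accel):.3f} rad")  # converges near 0.1
```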
Sensors are also becoming intelligent
Modern sensor systems not only deliver data for AI, they can also use it: such sensors pre-process the measurement data themselves and thus ease the burden on the central processing unit. The start-up AEye, for example, has developed an innovative hybrid sensor that combines a camera, solid-state lidar and chips running AI algorithms. It overlays the lidar's 3D point cloud with the camera's 2D pixels and thus delivers a colour 3D image of the environment. AI algorithms then filter the relevant information out of the vehicle's environment and evaluate it. The system is not only 10 to 20 times more precise and three times faster than individual lidar sensors; it also reduces the flood of data sent to central processing units.
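How such an overlay of a 3D point cloud with 2D camera pixels can work is sketched below, assuming a simple pinhole camera model with known intrinsics. The function and variable names are assumptions for illustration; AEye's actual pipeline is not public, and a real system would also calibrate the lidar-to-camera extrinsics and time synchronisation:

```python
import numpy as np

def colourise_point_cloud(points, image, K):
    """Overlay a lidar point cloud with camera pixels (pinhole model).

    `points` is an (N, 3) array already in the camera frame, `image` an
    (H, W, 3) RGB array, `K` the 3x3 camera intrinsic matrix.
    """
    pts = points[points[:, 2] > 0]                 # keep points ahead of camera
    uv = (K @ pts.T).T                             # project into pixel space
    uv = uv[:, :2] / uv[:, 2:3]                    # perspective divide
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    h, w = image.shape[:2]
    ok = (0 <= u) & (u < w) & (0 <= v) & (v < h)   # inside image bounds
    colours = image[v[ok], u[ok]]                  # sample pixel colour
    return np.hstack([pts[ok], colours])           # (M, 6): x, y, z, r, g, b

# Usage: one point 10 m ahead, a flat grey 480x640 image, 500 px focal length
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
cloud = np.array([[0.0, 0.0, 10.0]])
img = np.full((480, 640, 3), 128, dtype=np.uint8)
print(colourise_point_cloud(cloud, img, K))        # [[0. 0. 10. 128. 128. 128.]]
```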
Sensors supply a variety of information to the AI system
- Vibration
- Currents
- Temperature
- Position
- Size
- Colour
- Texture
- and much more…