Gesture control is a human-machine interface (HMI) technology that detects and interprets human body movements, allowing users to interact with devices without direct physical contact. Because it builds on a natural form of communication, the technology is spreading into an increasing number of fields.
Thumbs up, waving, the open hand as a stop sign – gestures are a natural form of communication for humans. Thanks to significant advancements in sensor technology and artificial intelligence in recent years, it is now possible to control machines and devices through gestures.
The breakthrough came with the introduction of Nintendo’s Wii console in 2006 and Microsoft’s Kinect motion control in 2010. Both solutions were developed for the gaming market – and entertainment electronics still dominate the gesture control market today. According to market analysts from Grand View Research, the segment had a revenue share of 59.4 percent in 2022.
However, other industries are also discovering the benefits of gesture control for operating devices and machinery: for example, both the automotive industry and healthcare sector have placed great emphasis on adopting gesture recognition. This technology makes it easy and intuitive for users to interact with computers and other devices. The COVID-19 pandemic has further focused attention on gesture control, as it enables contactless and thus hygienic operation.
Gesture recognition market in 2031: 88.3 billion US dollars
In 2021, the market volume was 13.9 billion US dollars; the expected average annual growth rate is therefore 20.6 percent.
Source: Allied Market Research
Control via Wearables
Various technologies are used to detect user movements. One option is special wearables, such as bracelets or rings, equipped with motion sensors that capture the rotation rate or acceleration of the wrist. An intelligent algorithm recognises which gesture has been performed and issues the corresponding command.
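To make this concrete, the following sketch shows how such a wearable algorithm might map raw motion readings to a gesture. All sensor names, thresholds and gesture labels are illustrative assumptions, not a real wearable API – a production system would use a trained model rather than fixed rules.

```python
def classify_gesture(gyro_z_samples, accel_x_samples):
    """Toy rule-based classifier: distinguish a fast 'wrist flick'
    (high peak rotation rate) from a 'swipe' (sustained lateral
    acceleration). Thresholds are invented for illustration."""
    peak_rotation = max(abs(s) for s in gyro_z_samples)  # deg/s
    mean_accel = sum(abs(s) for s in accel_x_samples) / len(accel_x_samples)  # m/s^2
    if peak_rotation > 200:   # brief, fast wrist rotation
        return "flick"
    if mean_accel > 3.0:      # sustained linear motion of the arm
        return "swipe"
    return "none"

# A burst of rotation-rate readings suggesting a wrist flick
print(classify_gesture([10, 150, 320, 180, 20], [0.1, 0.3, 0.2, 0.1, 0.1]))  # prints flick
```

In practice the "intelligent algorithm" mentioned above replaces these hand-tuned thresholds with a learned model, but the input-to-command pipeline is the same.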
Camera-based Solutions
Another approach is camera-based systems. In principle, 2D cameras can capture and interpret movements. However, the algorithms used have difficulty correctly distinguishing movements in front of the screen, because the precise capture of distance – the third dimension – is missing. For this reason, 3D cameras or image sensors are increasingly being used for gesture control. They have become more affordable in recent years and can be integrated into almost any device due to their small size. These systems complement 2D image data with depth information, mostly obtained through Time-of-Flight technology, which measures the travel time of a light pulse reflected by an object to determine its distance from the camera. Today’s image sensors can detect not only general hand movements but even the movements of each individual finger.
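The Time-of-Flight principle described above reduces to a simple relation: the light pulse travels to the object and back, so the one-way distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the example timing value is illustrative):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the reflecting object: the pulse covers the
    distance twice (there and back), hence d = c * t / 2."""
    return C * round_trip_time_s / 2.0

# A round trip of ~6.67 nanoseconds corresponds to roughly one metre
print(round(tof_distance(6.67e-9), 3))  # prints 1.0
```

The tiny time scales involved are why ToF imaging only became practical once sensor electronics could resolve picosecond-to-nanosecond intervals per pixel.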
Detection via Thermal Imaging
However, camera-based systems require adequate lighting to reliably recognise gestures. This problem does not affect infrared sensors: they detect the infrared radiation emitted by the human body (passive sensors) or emit infrared radiation themselves as active sensors and capture the reflection. The corresponding algorithms then analyse the patterns and movements of this radiation. The sensors can also generate a depth image. Thus, various gestures can be recognised depending on predefined movement patterns and algorithms. Nevertheless, systems based on infrared sensors tend to be more suitable for simple gestures. Since they are relatively cost-effective, they are used in many industrial, consumer and automotive applications.
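The pattern analysis mentioned above can be illustrated with a toy example: track the position of the warm spot (a hand) across successive frames of a tiny one-dimensional thermal array and classify the direction of movement. Array size, temperatures and thresholds are invented for illustration; real infrared gesture systems work on 2D frames with far more sophisticated matching.

```python
def warm_centroid(frame, threshold=30.0):
    """Weighted position of pixels above a temperature threshold (deg C)."""
    hot = [(i, t) for i, t in enumerate(frame) if t > threshold]
    if not hot:
        return None
    total = sum(t for _, t in hot)
    return sum(i * t for i, t in hot) / total

def classify_swipe(frames):
    """Classify a gesture from how the warm spot moves over time."""
    positions = [p for p in (warm_centroid(f) for f in frames) if p is not None]
    if len(positions) < 2:
        return "none"
    shift = positions[-1] - positions[0]
    if shift > 1.0:
        return "swipe_right"
    if shift < -1.0:
        return "swipe_left"
    return "none"

# A hand moving left to right across an 8-pixel thermal array
frames = [
    [34, 22, 22, 22, 22, 22, 22, 22],
    [22, 22, 34, 22, 22, 22, 22, 22],
    [22, 22, 22, 22, 34, 22, 22, 22],
    [22, 22, 22, 22, 22, 22, 34, 22],
]
print(classify_swipe(frames))  # prints swipe_right
```

The simplicity of this matching is also why infrared-based systems, as noted above, tend to suit simple gestures rather than fine-grained finger tracking.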
Radar – Robust and Precise
Unaffected by lighting conditions, resistant to contaminants, and with high resolution, radar is increasingly conquering the field of gesture control. Even the smallest movements can be detected by a radar device, with the latest systems offering a resolution of just one millimetre. Radar sensors measure the speed, direction of movement, distance and angular position in real time to detect changes in the position of objects. This makes it possible to track and depict movements of persons or specific motion patterns. And for those who associate radar with the large rotating antennas on ships – the radar sensors needed for gesture recognition fit on a microchip.
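The speed measurement mentioned above relies on the Doppler effect: motion towards or away from the sensor shifts the frequency of the reflected signal, and the radial velocity follows from that shift. A minimal sketch of the relation; the 60 GHz carrier is chosen as a typical short-range radar band, purely for illustration.

```python
C = 299_792_458.0  # speed of light in m/s

def radial_velocity(doppler_shift_hz: float, carrier_hz: float = 60e9) -> float:
    """Radial velocity of the target from the Doppler shift:
    v = f_d * c / (2 * f_c). The factor 2 appears because the
    motion shifts the signal on the way out and on the way back."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# A 400 Hz Doppler shift at 60 GHz corresponds to about 1 m/s
print(round(radial_velocity(400.0), 3))  # prints 0.999
```

The high carrier frequency is what makes millimetre-scale resolution possible: even very slow finger movements produce a measurable frequency shift.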
AI and Edge Computing
No matter which technology is used for gesture control, one challenge remains: everyone performs gestures in a different manner. This means that the systems must be able to recognise numerous interpretations of a gesture. Artificial intelligence and machine learning processes are highly useful in this regard: through complex signal evaluations, gestures can be clearly identified and classified. To process sensor data in real time and achieve the fast response times necessary for device operation, machine learning algorithms are increasingly being executed locally on the chip, close to the sensor itself – typically referred to as the “edge.”
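To illustrate the kind of lightweight model that can run on an edge chip, the sketch below classifies a gesture by finding the nearest of a few precomputed class centroids – small enough to fit on a microcontroller. The feature names, centroid values and gesture labels are all invented assumptions; real systems typically use quantised neural networks trained on many users to cover the variation in how gestures are performed.

```python
import math

# Centroids learned offline and deployed to the device. Illustrative
# features: e.g. normalised peak rotation rate and mean acceleration.
CENTROIDS = {
    "swipe": (0.8, 0.2),
    "circle": (0.4, 0.9),
    "none": (0.1, 0.1),
}

def classify(features):
    """Return the gesture whose centroid lies closest to the
    observed feature vector (nearest-centroid classification)."""
    return min(CENTROIDS, key=lambda g: math.dist(features, CENTROIDS[g]))

print(classify((0.75, 0.25)))  # prints swipe
```

Running the model this close to the sensor avoids the round trip to a server, which is what keeps response times low enough for fluid device operation.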