What are Human-Machine Interfaces?

With edge computing, data processing and storage take place directly in the devices on site. As a result, in many cases a Human-Machine Interface is also required in or on the edge device. Not only does this allow the recorded data and computation results to be displayed; the edge devices can be controlled through it too. Thanks to increasingly powerful processors and AI, such interfaces can today render complex graphics and even accept instructions by means of gestures or voice.

What are Human-Machine Interfaces?

Everyone uses a Human-Machine Interface (HMI) several times a day, from the button on the coffee machine to the control knob on the dryer. Touch displays in particular have enjoyed widespread popularity for many years: used universally in smartphones, they are now increasingly found in other devices too, from home appliances to cars. Simply touching the screen triggers actions and allows programs to be controlled.

Operation via graphical interfaces

Edge devices can also require such user interfaces; ultimately, a smartphone is exactly that. As processors become more powerful and affordable, these devices can now be equipped with complex graphical user interfaces. Combined with colour touch displays, such interfaces make the device convenient to use, and they present the data collected by the device, or the results of its data processing, clearly and vividly. All major semiconductor manufacturers now offer microcontrollers or SoCs (systems-on-chip) for such graphics applications.
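To make this concrete, here is a minimal sketch of such a touch interface using the open-source LVGL library, which is widely used for microcontroller GUIs. It is only an illustration, not a reference to any specific product mentioned above: it assumes LVGL's v8 API and that lv_init() plus the display and touch drivers have already been set up by the board's port layer; the function and label names are invented for the example.

```c
#include "lvgl.h"

/* Called when the user taps the button on the touch display. */
static void power_btn_cb(lv_event_t *e)
{
    lv_obj_t *status = lv_event_get_user_data(e);
    lv_label_set_text(status, "Device: ON");   /* update the read-out */
}

/* Builds a tiny HMI: a status read-out and one touch button.
 * Assumes lv_init() and the display/input drivers are already registered. */
void create_hmi(void)
{
    /* A label showing the device state, centred near the top. */
    lv_obj_t *status = lv_label_create(lv_scr_act());
    lv_label_set_text(status, "Device: OFF");
    lv_obj_align(status, LV_ALIGN_TOP_MID, 0, 20);

    /* A touch button that switches the device on. */
    lv_obj_t *btn = lv_btn_create(lv_scr_act());
    lv_obj_center(btn);
    lv_obj_add_event_cb(btn, power_btn_cb, LV_EVENT_CLICKED, status);

    lv_obj_t *btn_label = lv_label_create(btn);
    lv_label_set_text(btn_label, "Power");
    lv_obj_center(btn_label);
}
```

A few dozen lines like these, plus a periodic call to lv_timer_handler() in the main loop, are enough for a responsive graphical HMI on a mid-range microcontroller.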

A new class of embedded HMI is currently emerging, in which compact operating software is loaded onto an intelligent edge device such as a smart meter, an intelligent drive, a special controller or another component. In this case, the data and control elements are not shown on a separate display but are instead viewed with the aid of a smartphone or tablet.
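The article does not spell out how such operating software reaches the phone; one common pattern, sketched below under that assumption, is for the device to expose a tiny embedded web server that any smartphone browser on the same network can open. The read_meter_kwh() function, the page contents and port 8080 are all hypothetical placeholders.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Hypothetical reading; a real smart meter would query its metrology chip. */
static double read_meter_kwh(void) { return 1234.5; }

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);            /* the phone's browser connects here */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 4);

    for (;;) {
        int cli = accept(srv, NULL, NULL);

        /* Render the current reading into a minimal HTML page. */
        char page[256], resp[512];
        snprintf(page, sizeof page,
                 "<html><body><h1>Smart meter</h1>"
                 "<p>Consumption: %.1f kWh</p></body></html>",
                 read_meter_kwh());
        snprintf(resp, sizeof resp,
                 "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n"
                 "Content-Length: %zu\r\nConnection: close\r\n\r\n%s",
                 strlen(page), page);

        write(cli, resp, strlen(resp));
        close(cli);
    }
}
```

The appeal of this pattern is that the edge device needs no display hardware at all: the smartphone the user already carries becomes the HMI.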

Voice Interface

Many homes already have a digital assistant, although voice recognition with systems such as Alexa and Siri is still carried out in the cloud: the corresponding analysis processes are highly complex, generally require AI systems and need correspondingly large amounts of processing power. The disadvantage of processing voice commands in the cloud, however, is the delay between the spoken command and the response, even if this delay is minimal. Interaction between human and edge device should be as intuitive, natural and user-friendly as possible, which is why voice recognition will migrate to edge devices in future, where the necessary hardware resources are available thanks to new AI chips and memory. Minimising latency in this way makes voice control feel much more like natural interaction with the device.

But there are other reasons too for moving voice recognition to the edge. “The next generation of voice interfaces will process data locally, because this is the best way to build a trusted, transparent and intimate relationship between humans and devices”, said Joseph Dureau, CTO of the French start-up Snips, with conviction.

The company has developed a voice assistant that runs entirely on the respective device; in other words, it requires no Internet connection and does not collect or process user data in the cloud. This is primarily to rule out any possibility of conversations being eavesdropped on and to ensure that privacy is protected: a private-by-design solution, as Dureau describes it. It also makes the device independent of a cloud connection and reduces the data stream.

Manufacturers offer turn-key solutions

Chip manufacturers themselves have since started to offer special hardware designs with a small form factor and fully integrated software, ready for production. Such turn-key solutions minimise time-to-market, risk and development effort. They enable OEMs to add voice control to their “smart home” and “smart appliance” products without Wi-Fi or cloud connectivity.

However, the user experience suffers if voice recognition first has to be activated by a wake-up command in order to save energy. The semiconductor industry offers solutions here as well: the first processors developed specifically to run deep-learning algorithms for voice interfaces already exist. These chips are around 100 times as efficient as traditional CPU and DSP architectures, so the voice control of an edge device can remain “awake” at all times and an activation command is no longer necessary.
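To see what such always-on chips make unnecessary, the sketch below shows the conventional wake-word gating loop in schematic form: a cheap detector runs continuously, and the power-hungry full recogniser is only switched on after the wake word. The frame size and the three hook functions are hypothetical stand-ins for a codec driver and vendor speech libraries, not any particular product's API.

```c
#include <stdbool.h>
#include <stdint.h>

#define FRAME_SAMPLES 160           /* 10 ms of audio at 16 kHz */

/* Placeholder hooks -- on real hardware these would call the codec
 * driver and the vendor's keyword-spotting / ASR libraries. */
static void read_audio_frame(int16_t f[FRAME_SAMPLES]) { (void)f; }
static bool wake_word_detected(const int16_t f[FRAME_SAMPLES]) { (void)f; return false; }
static bool command_finished(const int16_t f[FRAME_SAMPLES]) { (void)f; return true; }

int main(void)
{
    int16_t frame[FRAME_SAMPLES];
    bool listening = false;         /* full recogniser stays off until woken */

    for (;;) {
        read_audio_frame(frame);

        if (!listening) {
            /* Only the cheap wake-word detector runs: low power draw. */
            listening = wake_word_detected(frame);
        } else {
            /* Full on-device recognition until the command completes,
             * then drop back to the low-power listening state. */
            if (command_finished(frame))
                listening = false;
        }
    }
}
```

With processors efficient enough to run the full recogniser on every frame, the if/else gating above, and the awkward wake-up command it implements, simply disappears.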

“Always-on intelligent assistants that reside within smartphones and voice-first devices can consume a great deal of power”, said Dina Abdelrazik, Senior Analyst at Parks Associates. “Maximizing battery life on these devices continues to be a challenge for manufacturers. An effective avenue to achieve low power consumption is to focus on efficiencies around components such as the processor, driver or the chip. In doing so, manufacturers have the opportunity to significantly reduce the amount of power required to enable voice processing functionalities.”

Using gestures to control edge devices

While voice-control systems are becoming ever more natural, the development of human-machine interfaces for edge devices is not standing still. Thanks to 3D depth cameras and sensors, applications will in future also be controlled using gestures, head movements or even facial expressions.

Google, for example, has already been working for a number of years within the framework of the Soli project on detecting movements, gestures and objects in free space without cameras or conventional sensors. Instead, movements are captured by a tiny radar chip, which can track them at high speed and with sub-millimetre precision. The chip is so small and energy-efficient that it can be integrated even in very small edge devices. Because the project uses the radar frequency band, approval by the relevant US authorities was outstanding for some time, but Google received it at the beginning of 2019. It could therefore soon be possible to control edge devices by casually pointing a finger.
