Edge Computing helps AI take off

Nowadays, edge devices can run AI applications thanks to increasingly powerful hardware, so data no longer has to be transferred to the cloud. This makes real-time AI applications a reality, reduces costs and brings advantages in terms of data security.

Today, Artificial Intelligence (AI) technology is used in a wide range of applications, where it opens up new possibilities for creating added value. AI is used to predictively analyse the behaviour of users on social networks, enabling ads matching their needs or interests to be shown. Even facial- and voice-recognition features on smartphones would not work without artificial intelligence. In industrial applications, AI helps make maintenance more effective by predicting machine failures before they occur. Yet according to a white paper by investment bank Bryan, Garnier & Co., 99 per cent of AI-related semiconductor hardware was still centralised in the cloud as recently as 2017.

The difference between training and inference

One of Artificial Intelligence’s most important functions is Machine Learning, with which IT systems can use existing data pools to detect patterns and rules and come up with solutions. Machine Learning is a two-stage process. In the training phase, the system is initially “taught” to identify patterns in a large data set. The training phase is a long-running task that requires a large amount of computing power. After this phase, the Machine-Learning system can apply the final, trained model to analyse and categorise new data and derive a result. This step – known as inference – requires much less computing power.
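To make the two phases concrete, the following minimal sketch uses Python with scikit-learn (the article names no specific framework; the library and data set here are illustrative assumptions). The training step is the expensive one; inference merely applies the finished model to new data.

# Minimal sketch of the two Machine-Learning phases; scikit-learn and the
# digits data set are illustrative choices, not taken from the article.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Training phase: compute-intensive, performed once on a large data set
X, y = load_digits(return_X_y=True)
X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)          # the system is "taught" the patterns

# Inference phase: cheap, applies the trained model to new data
predictions = model.predict(X_new)   # categorise previously unseen samples
print(predictions[:10])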

Cloud infrastructure cannot handle AI requirements alone

Most inference and training steps today are executed in the cloud. In the case of a virtual assistant, for example, the user’s command is sent to a data centre, analysed there with the appropriate algorithms and sent back to the device with the appropriate response. Until now, the cloud has remained the most efficient way to exploit the benefits of powerful, cutting-edge hardware and software. Yet the increasing number of AI applications is threatening to overload current cloud infrastructure.

For instance, if every Android device in the world were to execute a voice-recognition command every three minutes, Google would need to make twice as much computing power available as it currently does. “In other words, the world’s largest computing infrastructure would have to double in size,” explains Jem Davies, Vice President, Fellow and General Manager of the ARM Machine Learning Group. “Also, demands for seamless user experiences mean people won’t accept the latency inherent in performing ML processing in the cloud.”
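The round trip described above can be pictured as a simple request/response exchange between device and data centre. The sketch below is purely illustrative: the endpoint URL and response field are invented, not a real assistant API.

# Illustrative cloud inference round trip: the device ships the raw input to
# a data centre and waits for the analysed result. The URL and the "reply"
# field are hypothetical placeholders.
import requests

with open("command.wav", "rb") as f:
    audio_command = f.read()

response = requests.post(
    "https://api.example.com/v1/assistant",   # hypothetical endpoint
    data=audio_command,
    headers={"Content-Type": "audio/wav"},
    timeout=5,
)

# The device itself performs no AI work; it only renders the cloud's answer.
print(response.json()["reply"])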

The number of AI edge devices is set to boom

As a result, inference tasks are increasingly being relocated to the edge, which enables the data to be processed on the device without needing to be transferred anywhere else. “The act of transferring data is inherently costly. In business-critical use cases where latency and accuracy are key and constant connectivity is lacking, applications can’t be fulfilled. Locating AI inference processing at the edge also means that companies don’t have to share private or sensitive data with cloud providers, something that is problematic in the healthcare and consumer sectors,” explains Jack Vernon, Industry Analyst at ABI Research.
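On-device inference along these lines might look as follows. The sketch assumes a pre-trained TensorFlow Lite model file (model.tflite); TensorFlow Lite is one common runtime for edge inference, though the article itself names no specific toolchain.

# Minimal on-device inference sketch with TensorFlow Lite: the input data
# never leaves the device. "model.tflite" is an assumed pre-trained model.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input shaped to whatever the model expects
sample = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()                                  # inference runs locally
result = interpreter.get_tensor(output_details[0]["index"])
print(result)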

According to market researchers at Tractica, annual shipments of edge devices with integrated AI will therefore increase from 161.4 million units in 2018 to 2.6 billion units in 2025. Smartphones will account for the largest share of these volumes, followed by smart speakers and laptops.

Smartphones making strides

Smartphones are a good example of the sheer variety of possible applications for AI in edge devices. For instance, AI enables the camera of the Huawei P smart 2019 to detect 22 different subject types and 500 scenarios in order to optimise settings and take the perfect photo. In the Samsung Galaxy S10 5G, on the other hand, AI automatically adapts the battery, processor performance, memory usage and device temperature to the user’s behaviour.

Gartner also names a few other potential applications, including user authentication. In the future, smartphones might be able to record and learn a user’s behaviour, such as their gait or the way they scroll and type on the touch screen, without any passwords or active authentication being required.
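In principle, such behavioural authentication could be implemented as an anomaly detector trained on the owner’s interaction patterns, as in the sketch below; the features and values are invented for illustration only.

# Illustrative behavioural authentication: an anomaly detector learns the
# owner's typing/scrolling profile and flags sessions that deviate from it.
# Feature choice and all numbers are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend per-session features: [keystroke interval, swipe speed, tap pressure]
owner_sessions = rng.normal(loc=[0.18, 1.2, 0.6], scale=0.02, size=(200, 3))

detector = IsolationForest(random_state=0).fit(owner_sessions)

new_session = np.array([[0.35, 0.4, 0.9]])         # unfamiliar behaviour
is_owner = detector.predict(new_session)[0] == 1   # 1 = inlier, -1 = outlier
print("authenticated" if is_owner else "request explicit authentication")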

Emotion sensors and affective computing open up further possibilities: smartphones could detect, analyse and process people’s emotional states and moods and then react to them.

Affective computing interprets and simulates human emotions. It is an interdisciplinary approach incorporating IT, psychology and cognitive science that is concerned with the interaction between humans and machines or computers.

For instance, vehicle manufacturers might use the front camera of a smartphone to interpret a driver’s mental state or assess signs of fatigue in order to improve safety. Voice control and augmented reality are other potential applications for AI in smartphones.

CK Lu, Research Director at Gartner, is certain of one thing: “Future AI capabilities will allow smartphones to learn, plan and solve problems for users. This isn’t just about making the smartphone smarter, but augmenting people by reducing their cognitive load. However, AI capabilities on smartphones are still in very early stages.”

 
