Thanks to new developments in chip technology, even small wearables such as fitness bracelets now have AI on board. The latest top-of-the-range smartphones are already learning to understand their users better through neural networks, and are delivering significantly higher performance.
Mobile devices such as smartphones and wearables are becoming ever more important in people's everyday lives. “Smartphones have fundamentally changed our lives over the last 10 years. They have become the universal tool for accessing communications, content and services,” says Martin Börner, Deputy President of the industry association Bitkom.
Mobile devices are gaining AI capabilities
Now mobile devices are coming onto the market with Artificial Intelligence capable of analysing the recorded data even more effectively and providing users with more precisely targeted recommendations to enhance their health or fitness. The trend is towards edge computing, in which the data remains on the device and is not – or is only partially – uploaded to the cloud for analysis. That offers a number of benefits: firstly, it reduces the load on cloud computing systems and transmission networks. Secondly, latency drops, so users receive their analysis results faster. And thirdly – a key factor in medical applications especially – personal data stays secure on the mobile device. “AI used to rely on powerful cloud computing capabilities for data analysis and algorithms, but with the advancement of chips and the development of edge computing platforms, field devices and gateways have been entitled basic AI abilities, which allow them to assist in the initial data screening and analysis, immediate response to requirements, etc.,” states Jimmy Liu, an analyst with Trendforce.
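The edge-computing pattern described above can be sketched in a few lines. In this hypothetical example, raw heart-rate samples are screened on the device itself: the user can be alerted immediately with no network round trip, and only a compact summary (not the raw stream) leaves the device. All function names and thresholds are illustrative assumptions, not any vendor's actual implementation.

```python
# Hypothetical sketch of on-device data screening in edge computing.
from statistics import mean

def screen_on_device(heart_rate_samples, resting_low=50, resting_high=100):
    """Run the initial screening locally; return only what must leave the device."""
    avg = mean(heart_rate_samples)
    anomalies = [s for s in heart_rate_samples if not resting_low <= s <= resting_high]
    # Immediate local response: alert the user without any cloud round trip.
    local_alert = len(anomalies) > len(heart_rate_samples) // 2
    # Only a compact summary, not the raw sensor stream, is sent to the cloud.
    cloud_payload = {"avg_hr": round(avg, 1), "anomaly_count": len(anomalies)}
    return local_alert, cloud_payload

alert, payload = screen_on_device([62, 64, 61, 140, 63])
```

The key design point is the asymmetry: the full-resolution data stays local, while the cloud receives only aggregates, which is exactly what reduces load, latency and privacy exposure.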
More efficiency, performance and speed
For Huawei, too, this on-device AI is a response to existing AI issues such as latency, stability and data protection. In late 2017, the company launched two smartphone models – the Mate 10 and Mate 10 Pro – that it claims are the first in the world to feature an artificially intelligent chipset with a dedicated Neural Processing Unit (NPU). This enables the phones to learn the habits of their users. The mobile AI computing platform identifies the phone's most efficient operating mode, optimises its performance, and generally delivers improved efficiency and performance at faster speeds. But the main way in which Huawei is utilising AI is in real-time scene and object recognition, enabling users to shoot perfect photos.
Facial recognition on a smartphone
Apple has also fitted out its new iPhone X with a special chip for on-device AI. The neural architecture of the A11 Bionic chip features a dual-core design and executes up to 600 billion operations per second for real-time processing. The A11 neural architecture was designed for special machine learning algorithms, and enables Face ID, Animoji and other functions. This makes it possible, for example, to unlock the phone by facial recognition. The feature, named Face ID, projects more than 30,000 invisible infrared dots onto the user's face. The infrared image and the dot pattern are pushed through neural networks in order to create a mathematical model of the user's face before the data is sent to the Secure Enclave to confirm a match, while machine learning is applied to track physical changes in the person's appearance over time. All the stored facial data is protected by the Secure Enclave to an ultra-high security level. Also, the entire processing is carried out on the device and not in the cloud in order to preserve users' privacy. Face ID only unlocks the iPhone X when the user looks at it, with highly trained neural networks preventing any manipulation using photographs or masks.
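The matching step described above can be illustrated in miniature. This is not Apple's actual algorithm, only a generic sketch of the underlying idea: a network maps the infrared capture to a numeric vector (the "mathematical model" of the face), and unlocking comes down to comparing that vector against the enrolled template with a similarity threshold. The embeddings and threshold here are invented for illustration.

```python
# Generic sketch of face-embedding matching (not Apple's implementation).
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def face_matches(probe, enrolled_template, threshold=0.9):
    """Unlock only if the probe embedding is close enough to the stored template."""
    return cosine_similarity(probe, enrolled_template) >= threshold

enrolled  = [0.12, 0.87, 0.45, 0.33]   # template kept in secure storage
same_user = [0.11, 0.85, 0.47, 0.31]   # slight day-to-day variation still matches
stranger  = [0.90, 0.10, 0.05, 0.70]   # a different face falls below the threshold
```

Tracking appearance changes over time then amounts to cautiously updating the enrolled template after successful matches.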
The right camera mode every time
“The smartphone market has evolved significantly over the past decade,” stresses Hwang Jeong-Hwan, the President of LG Mobile Communications Company: “LG customers expect our phones to excel in four core technologies – audio, battery, camera and display.” As a result, LG has also started to develop specialised and intuitive AI-based solutions for the features most commonly used on smartphones. The first result is the LG V30S ThinQ smartphone with integrated Artificial Intelligence. The device's AI camera analyses subjects in the picture and recommends the ideal shooting mode – depending, for instance, on whether it is a portrait, food, a pet, or a landscape. Each mode enhances the subject's distinctive characteristics, taking account of factors like the viewing angle, colour, reflections, lighting, and degree of saturation. The Voice AI allows users to run applications and customise settings simply by using voice commands. Combined with Google Assistant, searching through menu options becomes superfluous and certain functions can be selected directly. But LG wants to go further than simply equipping new smartphone models with AI. Depending on the hardware and other factors, LG plans to bring important AI functions to some existing smartphones via over-the-air updates in the future.
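Conceptually, such an AI camera splits into two stages: a classifier labels the scene, and a lookup maps that label to recommended mode settings. The sketch below shows only the second stage; the labels and settings are illustrative assumptions, not LG's actual modes or parameters.

```python
# Illustrative mapping from a scene-classifier label to a shooting mode.
SCENE_TO_MODE = {
    "portrait":  {"mode": "Portrait",  "saturation": 1.0, "bokeh": True},
    "food":      {"mode": "Food",      "saturation": 1.3, "bokeh": False},
    "pet":       {"mode": "Pet",       "saturation": 1.1, "bokeh": False},
    "landscape": {"mode": "Landscape", "saturation": 1.2, "bokeh": False},
}

def recommend_mode(scene_label):
    """Fall back to a neutral Auto mode for labels the classifier can't place."""
    return SCENE_TO_MODE.get(
        scene_label, {"mode": "Auto", "saturation": 1.0, "bokeh": False}
    )
```

Keeping the mapping separate from the classifier is also what makes over-the-air updates practical: new scene categories can be added without retraining everything on the phone.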
Every third wearable with AI
It is expected that AI wearables will give the stagnant wearables sector a much-needed boost. One in three wearables in 2017 operated with AI, according to market analysts at Counterpoint. According to Research Associate Parv Sharma: “Wearables haven’t seen the expected momentum so far because they have struggled on the lines of a stronger human computer interaction. However, the integration of Artificial Intelligence into the wearables will change how we interact with or use wearables. AI will not only enhance the user experience to drive higher usage of wearables, but will also make wearables smarter and intelligent to help us achieve more.” The analysts expect particularly high growth in the hearables category – with devices such as the Apple AirPods or innovative products from less well-known brands like the Dash made by Bragi.
Wearables getting to know their users
Other wearables use AI, too. Machine learning offers far greater predictive potential in monitoring vital health signs. A company called Supa, for example, has developed clothing with integrated sensors that capture a wide range of biometric data in the background and provide personalised information on the user’s environment. AI enables Supa clothing to continually learn more about the user and so, for example, better understand their behaviour when exercising. Supa Founder and CEO Sabine Seymour claims that in 20 or 30 years, wearables of this kind will be able to explain why the user has contracted cancer, for example – whether as a result of a genetic defect, due to environmental causes, or because of nutritional habits.
PIQ likewise combines its sports assistant Gaia with AI. It intelligently captures and analyses movements using specific motion-capture algorithms. Thanks to AI, Gaia detects its users’ movements ever more accurately, enabling it to provide personalised advice in order to optimise their training.
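Motion analysis of the kind PIQ describes typically starts by cutting the sensor stream into fixed-size windows and computing a simple feature per window. The sketch below uses signal energy to separate rest from active movement; the window size, threshold and labels are illustrative assumptions, not PIQ's proprietary motion-capture algorithms.

```python
# Minimal sketch of window-based motion analysis on accelerometer samples.
def windows(samples, size):
    """Split the stream into non-overlapping windows of `size` samples."""
    return [samples[i:i + size] for i in range(0, len(samples) - size + 1, size)]

def classify_window(window, energy_threshold=4.0):
    """Mean signal energy separates rest from active movement."""
    energy = sum(s * s for s in window) / len(window)
    return "active" if energy >= energy_threshold else "rest"

stream = [0.1, 0.2, 0.1, 0.0, 3.1, 2.8, 3.0, 2.9]
labels = [classify_window(w) for w in windows(stream, 4)]
```

Real systems replace the energy feature with learned classifiers, which is how accuracy improves as more of a user's movement data is seen.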
More safety on the bus
Intelligent wearable devices not only come in handy during sport and exercise, but also in more serious applications. For instance, NEC and Odakyu City Bus are partnering to test a wearable that collects biological information from drivers, with the aim of making bus operation safer. In the pilot project, a wristband measures vital signs such as pulse rate, temperature, skin moisture and body movements while the bus is being driven. The data is then sent via a smartphone for analysis to an IoT platform based on NEC’s latest Artificial Intelligence technologies. This is intended to visualise, monitor, and evaluate a wide range of health factors – for example, the driver’s level of fatigue or changes in their physical condition which they may not be able to detect on their own.
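A monitoring pipeline of this sort can be pictured as a set of rules over the wristband's readings, producing flags for the platform to visualise. This is a hypothetical sketch; the thresholds, parameter names and the fatigue heuristic are invented for illustration and are not NEC's actual analytics.

```python
# Hypothetical rule-based flags over a driver's wristband readings.
def assess_driver(pulse_bpm, temperature_c, movement_index):
    """Return a list of flags for the monitoring platform to display."""
    flags = []
    if pulse_bpm < 50 or pulse_bpm > 110:
        flags.append("pulse out of range")
    if temperature_c > 37.8:
        flags.append("elevated temperature")
    if movement_index < 0.2:  # unusually little movement may indicate fatigue
        flags.append("possible fatigue")
    return flags
```

The point of routing this through an AI platform rather than fixed rules is that thresholds can be personalised to each driver's baseline over time.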
Ayata Intelligence has developed an exciting solution: its Vishruti wearable smart eyewear helps people with visual impairments to find their way around their environment. To do so, it is fitted with a camera and a special, energy-efficient chip running image-recognition and deep-learning processes. This enables it to recognise objects and people’s faces. The system features a voice guidance feature, telling the user when a car is approaching, for example, where a door is, or the name of the person in front of them.
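The voice-guidance step described above amounts to translating each recognised object into a short spoken phrase. The sketch below shows that translation layer only; the detection tuple format and the phrases are hypothetical, not Vishruti's actual interface.

```python
# Hypothetical mapping from a detection to a spoken announcement.
def announcement(detection):
    """detection = (kind, name, position); returns the phrase to speak aloud."""
    kind, name, position = detection
    if kind == "face":
        return f"{name} is in front of you"
    if kind == "car":
        return f"A car is approaching from the {position}"
    if kind == "door":
        return f"There is a door on your {position}"
    return f"{name} detected"
```

Running the recognition on an efficient on-device chip, as the article notes, keeps the loop from camera frame to announcement fast enough to be useful while walking.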
Developments of this kind suggest that, in the years ahead, smartphones and wearables will have an ever greater influence on our lives, becoming guides and advisors in many different ways.