AI and Machine Learning

Artificial intelligence (AI), including machine learning (ML) and deep learning (DL), is poised to become a transformational force in healthcare. The various stakeholders in the ecosystem all stand to benefit from ML-driven tools. From anatomical geometric measurements to cancer detection, radiology, surgery, drug discovery and genomics, the possibilities are endless. ML can lead to increased operational efficiency, better patient outcomes and significant cost reduction.
Opportunities for machine learning in healthcare
There is a broad spectrum of ways that ML can be used to solve critical healthcare problems. For example, digital pathology, radiology, dermatology, vascular diagnostics and ophthalmology all use standard image-processing techniques.
Chest X-rays are the most common radiological procedure, with over two billion scans performed worldwide every year – around 5.5 million scans a day. Such a huge volume of scans imposes a heavy load on radiologists and taxes the efficiency of the workflow. Methods involving ML, deep neural networks (DNNs) and convolutional neural networks (CNNs) often outperform radiologists in speed and accuracy, although the expertise of a radiologist remains of paramount importance. Under stressful conditions and fast decision-making, however, the human error rate can be as high as 30%. Aiding the decision-making process with ML methods can improve the quality of the result and serve radiologists and other specialists as an additional tool.
Machine learning on the test bench
Validation of ML now comes from multiple, highly reliable sources. In a study by the Stanford ML Group, a 121-layer CNN was trained to detect pneumonia better than four radiologists. In multiple other studies by the National Institutes of Health, DNN models achieved better accuracy in early detection – from identifying malignant pulmonary nodules to diagnosing lung cancer – than multiple radiologists’ diagnoses combined.
Many procedures within radiology, pathology, dermatology, vascular diagnostics and ophthalmology involve large images requiring complex image processing, and the ML workflow itself is compute- and memory-intensive. The predominant computation is linear algebra over a multitude of parameters. This results in billions of multiply-accumulate (MAC) operations and hundreds of megabytes of parameter data, and it demands a wide range of operators and a highly distributed memory subsystem. Performing accurate image inference for tissue detection or classification with traditional computational methods on PCs and GPUs is therefore inefficient, and healthcare companies are looking for alternative techniques to address this problem.
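To get a feel for the orders of magnitude involved, the sketch below estimates MAC and weight counts for a few convolutional layers; the layer shapes are illustrative guesses, not taken from any specific medical imaging model.

```python
# A rough back-of-the-envelope sketch of where the billions of MACs come from.
# Layer shapes below are illustrative, not from any particular medical model.

def conv_layer_cost(h, w, c_in, c_out, k):
    """MACs and weights of one k x k convolution over an h x w x c_in input."""
    macs = h * w * c_out * c_in * k * k       # one MAC per output pixel per weight
    params = c_out * c_in * k * k             # weight count (biases ignored)
    return macs, params

# a 512 x 512 single-channel scan passed through three widening 3x3 layers
layers = [(512, 512, 1, 64, 3), (256, 256, 64, 128, 3), (128, 128, 128, 256, 3)]
total_macs = sum(conv_layer_cost(*layer)[0] for layer in layers)
total_params = sum(conv_layer_cost(*layer)[1] for layer in layers)
print(f"{total_macs / 1e9:.1f} billion MACs, {total_params / 1e6:.2f} million weights")
```

A full network stacks dozens of such layers plus fully connected ones, which is where the hundreds of megabytes of parameter data accumulate.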
Improved efficiency with ACAP devices
Xilinx technology offers a heterogeneous and highly distributed architecture to solve this problem for healthcare companies. The Xilinx Versal Adaptive Compute Acceleration Platform (ACAP) is a family of system-on-chips (SoCs) featuring adaptable field-programmable gate array (FPGA) logic, integrated digital signal processors (DSPs), accelerators for deep learning and SIMD VLIW engines with highly distributed local memory. This multi-processor architecture is known for its ability to perform massively parallel signal processing of high-speed data in close to real time.
Versal ACAP has multi-terabit-per-second Network-on-Chip (NoC) interconnect capability and an advanced AI Engine containing hundreds of tightly integrated VLIW SIMD processors. As a result, computing capacity can be pushed beyond 100 tera operations per second (TOPS).
These device capabilities dramatically improve the efficiency with which complex healthcare ML algorithms are solved, and help to significantly accelerate healthcare applications at the edge, all with fewer resources, less cost and lower power. With Versal ACAP devices, support for recurrent networks can be inherent thanks to the nature of the architecture and its supporting libraries.
Xilinx has an innovative ecosystem for algorithm and application developers. Unified software platforms – such as Vitis for application development and Vitis AI for optimising and deploying accelerated ML inference – mean developers can use advanced devices such as ACAPs in their projects.
Healthcare and medical-device workflows are undergoing major changes. In the future, medical workflows will be “big data” enterprises with significantly higher requirements for computational power, security and patient safety. Distributed, non-linear, parallel and heterogeneous computing platforms are key to solving and managing this complexity. Xilinx devices like Versal and the Vitis software platform are ideal for delivering the optimised AI architectures of the future.
Discover more about Xilinx: www.xilinx.com.
Progress in technology 2021

Humanity has made phenomenal progress thanks to technology. There are more researchers and developers today than ever before. With their passion for technology, they toil to create solutions for the challenges we will face in the future.
The market for AI is growing rapidly
The predicted average annual growth rate of the global market for artificial intelligence is 46.2 per cent. By 2025, the market volume is estimated to be USD 390.9 billion.
(Source: Research and Markets)
AI’s performance increases
In under two years, the time required to train a large image-classification system on cloud infrastructure fell from about three hours in October 2017 to about 88 seconds in July 2019. During the same period, the cost of training such a system fell similarly.
(Source: Stanford University, “The AI Index 2019 Annual Report”)
Masses of semiconductors
2018 – the year in which more than 1 trillion semiconductors were sold for the first time.
(Source: Semiconductor Industry Association, SIA)
The number of people without access to electricity is decreasing
In the year 2010 there were 1.2 billion people worldwide without access to electricity, however, the number dropped to 789 million in 2018. Because renewable energy solutions have played a crucial role in growing the number of people with access to electricity. In 2018, more than 136 million people had a basic off-grid supply of electricity from renewable energy.
(Source: International Renewable Energy Agency, IRENA)
Jobs in renewables
42 million jobs in 2050: Jobs in renewables will reach 42 million globally by 2050, four times their current level, through the increased focus of investment on renewables. Energy-efficiency measures would create 21 million additional jobs, and system flexibility another 15 million.
(Source: International Renewable Energy Agency, IRENA)
Research is ongoing
Over USD 1.8 trillion: Investment in research and development in the ten leading countries in 2019.
(Source: Statista)
Internet users worldwide
Today 4.1 billion people use the Internet. That amounts to 53.6 per cent of the world’s population. In 2005, this figure stood at around 16 per cent.
(Source: International Telecommunication Union)
People’s health continues to improve
Life expectancy at birth increased worldwide from 46 years in 1950 to 67 years in 2010 and, most recently, to 73.2 years in 2020.
(Sources: The Millennium Project, UN)
Interview with Dr Simon Haddadin, CEO of Franka Emika

Many innovative technologies like artificial intelligence or machine learning end up on a screen, as Dr Simon Haddadin explains. But he and his team at Franka Emika develop robots that get technology off the drawing board and into the thick of things in the real world. In Haddadin’s words, this has an “impact” not only in a technical sense, but also in social, societal and economic terms.
Despite qualifying as a medical practitioner, he found this new field so exciting that he hung up his stethoscope in 2016 to found Franka Emika together with his brother Sami, hoping that the company’s robots would make a difference in the world. Since starting production in 2018, Franka Emika has sold around 3,000 robots for all kinds of applications.
Although Sami Haddadin is the expert when it comes to algorithms and artificial intelligence, his brother Simon was able to bring something else to the table with his medical expertise, because developing robots at Franka Emika is really about understanding human capabilities and transferring them to technology.
Simon Haddadin demonstrated one particularly impressive example of their robots’ capabilities. The SR-NOCS (Swab Robot for Naso- and Oropharyngeal COVID-19 Screening) conducts high-precision, fully autonomous nose and throat swabs on humans to test for COVID-19. The system has already shown off its capabilities in practical settings and been ordered by surgeries. Yet it took one or two sleepless nights and a healthy dose of passion to get this far, as Dr Haddadin emphasises.
Is the SR-NOCS a typical example of your company’s products?
Dr Simon Haddadin: No, it actually isn’t. We view our robotics solution as a hardware and software platform, similar to Apple with its iPhone and App Store. In other words, other parties can take our platform and launch their own completely new solutions based on our system. We were our own customers in a sense when it came to the SR-NOCS – we not only supplied the platform, but also a system solution for a specific application.
Apart from this platform concept, what is so special about your robots?
S.H.: We founded the company with the vision of opening robotics up to anyone and everyone, which is why we gave our robots new abilities. First and foremost among these is a sense of touch, which was realised by equipping each robot with over 100 sensors and lending it a sense of mechanical flexibility rather than rigidity. It can contract and relax muscles the same way that a human can. This enables our robots to work at close quarters with humans without any barriers or other safety guards.
Finally, our robots are as easy to operate as a smartphone. And they also come at just a fraction of the cost of previous models destined for industrial applications.
With your background in medicine, how did you end up managing a robotics company?
S.H.: My brother and I founded the company together. I actually used to be the one with an affinity for technology and would build and program computers at home. My brother, on the other hand, wanted to be a marine biologist. However, neither of us gave enough thought to our respective futures. In the end, it was our mother who enrolled us on our courses. Electrical and computer engineering for my brother, and medicine for me.
In other words, the exact opposite of your interests…
S.H.: That’s true.
My brother went on to develop an algorithm that gave robots a sense of touch, but nobody believed it would work. I told him back then that you have to back everything up with statistics in medicine. That was around ten years ago at Christmas. Instead of eating dinner together on Christmas Day, we went to the lab at the German Aerospace Center, where he was working at the time, and conducted crash tests. We wanted to establish where the boundary was between danger and safety for humans. This also gave rise to my thesis in the field of biomechanics.
At the time, we both saw how it was possible to conduct a lot of research without actually setting foot in the real world. That was the motivation for us to found our own company.
What is it that fascinates you about robotics?
S.H.: The impact that I have at Franka Emika is entirely different to being just a drop in the ocean as a doctor. In medicine, you learn a great deal about what people are made of, but only a little about how they actually work. This is different in advanced robotics as we understand it. For me, the most exciting thing is that you have to acquire an understanding of human abilities and then put them into a machine. And, of course, it’s really not hard to get excited about all the possibilities that this opens up.
Ultimately, many other cutting-edge technologies like machine learning or artificial intelligence end up on a screen. However, the most important human trait is the ability to interact with the real world, even in total chaos: a person has no idea what is heading their way, yet they can still get their bearings and interact with their surroundings. For me, this intervention in the real world is what makes robotics so exciting. Our robots should help people to put their abilities to good use even more simply and effectively.
You’ve mentioned “impact” – what exactly do you mean by that?
S.H.: This can essentially be divided into two points. Firstly, I am fortunate enough to have an extremely ambitious and talented team here. This team enables us to bring a project like SR-NOCS to fruition within a very tight turnaround time. Innovation is our most valuable asset, and we give our colleagues a very long leash. In this way, they can actually put their ideas into practice. This is what makes our innovations and technological breakthroughs possible in the first place.
The second point concerns real-world applications: our systems are primarily used in the “3C” industry: computers, communication devices and consumer electronics. Yet the year before last saw the closure of the last computer factory in Europe. Although we want to bring about a digital revolution in Europe, we can’t manufacture a single computer here… This is a consequence of the different standards that apply in countries half the world away: after seeing conditions in factories there, you have to admit that they come pretty close to modern-day slavery. This is the only way our smartphones can be as cheap as they are.
Therefore, one “impact” is the fact that we can banish working conditions like this with our robots.
The other impact is that we want to help Europe achieve economic and technological independence again. Nowadays, we are merely consumers of information technology and have ceased to be suppliers. Our vision is to make manufacturing in such fields economically viable in Europe once more. For example, our own production facilities are located in the Allgäu region of Germany, where our robots are manufactured by other robots for the most part. This allows us to manufacture economically right on our doorstep.
Another aim is to make mechatronic systems like autonomous driving or even autonomous flying possible in the long term – we still boast a lot of expertise in these fields in Europe. If at all possible, we want to ensure that these industries don’t go the way of the IT sector, with production for such technologies migrating away from Europe.
We try to do “our bit” to prevent this.
But your motto is “robotics for everyone” – so not just for factories?
S.H.: That’s correct, although we need industry to achieve economies of scale. You first need a market in which you can sell a certain number of units in order to bring prices down further. Although we’ve already made a quantum leap in terms of the cost of robots, they are still too expensive for private use. We therefore need to increase the scale further to reduce costs. In doing so, we can reach a cost range that would also be acceptable for household appliances. This is what the goal must be.
It’s important that robots are not perceived as toys, but as home assistants. Of course, that’s still a few years off, but we are already laying a lot of foundations in development. This not only involves our stationary robots; we are also building service robots already – essentially mobile robotic assistants. Our aim is for people to use robots for assistance at home during the “third age” and “fourth age” of their lives, whether that be for loading the dishwasher, preparing meals or dispensing pills. Robots can also assume an important role as a means of communication.
In the future, people will be able to communicate with each other haptically using robots, as opposed to just by telephone or videoconferencing. The third area of application is medical assistance: in other words, robots might remind someone to take their pills, perform simple tests like taking a temperature or measuring blood pressure and, if necessary, call for an ambulance.
When do you think that such systems will be widely available?
S.H.: In certain applications, they already are. I estimate that it will be five to ten years before this type of thing is widely available.
I think that industrial applications are absolutely essential, although of course I hope that robots will be used much more in everyday situations at some point. For this to be a reality, however, a few things still need to happen in terms of the technology. Appropriate regulations also need to be amended, such as those concerning how people deal with these types of “learning” systems in their day-to-day lives.
So which aspects of the technology need to be refined?
S.H.: For one thing, there is still some work to be done on how machines communicate among themselves. We don’t think that today’s Internet is suitable for this. It’s too centralised – we really need a new type of network, ideally a decentralised one. There is also still a need for development in terms of real-time applications. For machines, “real time” actually means 1,000 signals per second – a figure more than a thousand times higher than the definition of the same term in an IT or Internet context. Many systems are simply not designed for this. In addition, hardware production needs to keep being scaled up in order for the components to get even cheaper.
Robotics is just one of many trending technologies at the moment, however. Which of the “disruptive” technologies like IoT, AI or edge computing will change our society the most?
S.H.: It’s not easy for me to deliver an impartial verdict here – that goes without saying. The great thing about robotics is that it transcends many kinds of technology. It’s actually about taking all of the technologies you just mentioned and merging them. In doing so, you can bring many of these on-trend technologies into the real world.
You have technology on the one hand. But you also need people who have a certain passion for technology. Yet you might get the impression that young people today tend to eschew new technologies. As a young company, what is your own experience of this?
S.H.: Young people today are – thankfully – preoccupied with extremely relevant topics like climate change. That much is a given. However, at the end of the day, I really do think that technology can help us in many ways to solve the major problems that our society faces.
That’s why we want to show young people what kind of doors technology can open. In 2017, we were awarded the Deutscher Zukunftspreis, which came with an endowment of EUR 250,000. With that prize money, we established a foundation that aims to introduce children and young people to technology as early as possible.
We are also a patron of the Munich round of the “Jugend forscht” youth science competition. In all of these activities, we see that there are still plenty of young people who are interested in technology and who also see the kind of difference it can make.
How important is passion when it comes to developing and using new technologies? Is passion absolutely essential, or might it even be a hindrance under certain circumstances?
S.H.: It’s probably a mixture of the two… Passion makes you keep plugging away in the face of all adversity. If you do new things, you will inevitably face a lot of opposition. This can take the form of competitors, who clearly have no desire for new kids on the block to appear. Then there are regulatory matters – in many senses, the world just isn’t ready for new things.
Naturally, the development of the technology itself is sometimes also difficult and drawn-out. Because all too often, this means long nights and short days when you don’t see your family much. Without a certain passion, it is impossible for someone to defy circumstances like these in the longer term.
“Passion ensures that you carry on despite all the setbacks.”
Dr Simon Haddadin, CEO and Co-founder of Franka Emika
On the other hand, you sometimes need to keep a cool head and not get too hung up on things. After all, after a certain point, you simply can’t sink your teeth into absolutely everything. You eventually need to maintain a certain distance to actually make some money. But were it not for passion, the towel might have been thrown in a long time ago.
SR-NOCS – a medical product with a light touch?
This is where Franka Emika’s robot scores highly with its refined sense of touch. It takes samples so carefully and safely that it has already been approved as a Class I medical product, allowing it to be used in hospitals.
To test a patient, the robot arm first extends a disinfected plastic support through an opening in the test station’s screen. The patient must then position their nose and mouth and confirm that they are ready for the sample to be taken by pressing a pedal. Only then will the robot extend a swab from the support into their nose and mouth, respectively. The robot packages the sample in a tube, removes the plastic support and disinfects the gripper arm – all fully automatically.
Patients tested in this way were impressed by the robotic solution and would be happy to be tested by the SR-NOCS robot again at any time.
Get more information about the company Franka Emika: www.franka.de
What is AIfES?

The concurrent development and design of chips and software has enabled researchers to realise chips that are not only uncommonly small and energy-efficient, but also boast powerful AI capabilities extending right through to training. One possible application scenario might be nano drones that can navigate their way through a room independently.
To the untrained eye, the chips do not look any different from the ones found in any ordinary electronic device. Yet the chips developed at the Massachusetts Institute of Technology (MIT) and named Eyeriss and Navion are a revelation, and might just be the key to the future of artificial intelligence. Using these chips, even the most diminutive IoT devices could be equipped with powerful “smart” abilities of the kind that only gigantic data centres have been able to provide until now.
Energy efficiency is vital
Professor Vivienne Sze from the MIT Department of Electrical Engineering and Computer Science (EECS) – and a member of the development team – is keen to emphasise that the real opportunity offered by these chips lies not in their impressive capability for deep learning, but much more in their energy efficiency. The chips need to master computationally intensive algorithms while making do with the energy available on the IoT devices themselves, because this is the only way that AI can find widespread application at the “edge”. The Eyeriss chip is 10 to 1,000 times more energy-efficient than current hardware.
A symbiosis of software and hardware
In Professor Sze’s lab, research is also underway to determine how software ought to be designed to fully harness the power of computer chips. The low-power chip Navion was developed at MIT to answer this question. To name one potential application, this chip could be used to navigate a drone using 3D maps with previously unthinkable efficiency. Above all, such a drone can be minute – no larger than a bee.
The concurrent development of the AI software and hardware was crucial in this instance. It enabled the researchers to build Navion, a chip just 20 square millimetres in size that requires only 24 milliwatts of power – around a thousandth of the energy consumed by a light bulb. With this tiny amount of power, the chip can process up to 171 camera images per second and perform inertial measurements at the same time – all in real time – and use this data to calculate its position in the room. It might even be conceivable for the chip to be incorporated into a small pill that could gather and evaluate data from inside the human body once swallowed.
The chip achieves this level of efficiency through a variety of measures. For one thing, it minimises the volume of data – camera images and inertial measurements – that is stored on the chip at any given point in time. The development team was also able to physically reduce the bottleneck between the location where the data is stored and the location where it is analysed, not to mention coming up with clever schemes for re-using data. The way that data flows through the chip has also been optimised, and certain computation steps are skipped entirely, such as multiplications by zero, which always produce zero.
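The zero-skipping measure in particular is easy to illustrate. The Python loop below is purely conceptual – in chips like Eyeriss and Navion the skipping happens in silicon – but it shows the arithmetic being avoided.

```python
import numpy as np

# Conceptual sketch of zero-skipping: after a ReLU, many activations are
# exactly zero, and 0 * w is always 0, so those multiply-accumulates can be
# skipped. This software loop only illustrates the work being avoided.

def dot_with_zero_skipping(activations, weights):
    total, macs_done = 0.0, 0
    for a, w in zip(activations, weights):
        if a == 0.0:                 # skip: result is known to be zero
            continue
        total += a * w
        macs_done += 1
    return total, macs_done

np.random.seed(0)
acts = np.maximum(np.random.randn(1000), 0.0)   # ReLU output: roughly half zeros
wts = np.random.randn(1000)
_, macs_done = dot_with_zero_skipping(acts, wts)
print(f"performed {macs_done} of 1000 multiply-accumulates")
```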
A basis for self-learning, miniaturised electronics
Research into how AI can be integrated more effectively into edge devices is also being conducted at other institutes. A team of researchers at the Fraunhofer Institute for Microelectronic Circuits and Systems (IMS), for instance, has developed a kind of artificial intelligence for microcontrollers and sensors that comprises a fully configurable artificial neural network.
This solution – called AIfES – is a platform-independent machine-learning library with which self-learning, miniaturised electronics can be realised that do not require any connection to cloud infrastructure or powerful computers. The library constitutes a fully configurable artificial neural network, which can also generate appropriately deep networks for deep learning if needed. The source code has been reduced to a minimum, meaning that the AI can even be trained on the microcontroller itself – a training phase that, until now, has only been possible in data centres. AIfES is not concerned with processing large data volumes; instead, only the strictly necessary data is transferred in order to set up very small neural networks.
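AIfES itself is a C library, so the following is only a conceptual Python translation of the idea it embodies: a neural network so small that both inference and training fit on a modest device. The network size, learning rate and XOR task are illustrative choices, not Fraunhofer’s code.

```python
import math, random

# Conceptual sketch: a tiny fully connected network (2 inputs, 4 hidden
# units, 1 output) trained with plain stochastic gradient descent, the kind
# of workload small enough to run on a microcontroller.

random.seed(1)
H = 4
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # 2 inputs + bias
W2 = [random.uniform(-1, 1) for _ in range(H + 1)]                  # H hidden + bias
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sig(sum(w * v for w, v in zip(row, x + [1.0]))) for row in W1]
    y = sig(sum(w * v for w, v in zip(W2, h + [1.0])))
    return h, y

data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0), ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
for _ in range(20000):
    x, t = random.choice(data)
    h, y = forward(x)
    d_out = (y - t) * y * (1 - y)            # output delta (sigmoid derivative)
    for j in range(H):                       # hidden deltas use the old W2 values
        d_h = d_out * W2[j] * h[j] * (1 - h[j])
        for i, v in enumerate(x + [1.0]):
            W1[j][i] -= 0.5 * d_h * v
    for j, v in enumerate(h + [1.0]):
        W2[j] -= 0.5 * d_out * v

print([round(forward(x)[1]) for x, _ in data])   # typically [0, 1, 1, 0]
```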
The team of researchers has already produced several demonstrators, including one on a cheap 8-bit microcontroller for detecting hand-written numbers. An additional demonstrator can detect complex gestures and numbers written in the air. For this application, scientists at the IMS developed a detection system comprising a microcontroller and an absolute orientation sensor. To begin with, multiple people are required to write the numbers from zero to nine several times over. The neural network detects these written patterns, learns them and autonomously identifies them in the next step.
Studies conducted at the research institutes provide an outlook for how AI software and hardware will continue to develop symbiotically in the future and open up complex AI functions in IoT and edge devices in the process.
Edge Computing helps AI take off

Nowadays, edge devices can run AI applications thanks to increasingly powerful hardware, so there is no longer any need to transfer data to the cloud. This makes real-time AI applications a reality, reduces costs and brings advantages in terms of data security.
Today, Artificial Intelligence (AI) technology is used in a wide range of applications, where it opens up new possibilities for creating added value. AI is used to predictively analyse the behaviour of users on social networks, which enables ads matching their needs or interests to be shown. Even facial- and voice-recognition features on smartphones would not work without artificial intelligence. In industrial applications, AI helps make maintenance more effective by predicting machine failures before they even occur. Yet according to a white paper by investment bank Bryan, Garnier & Co., 99 per cent of AI-related semiconductor hardware was still centralised in the cloud as recently as 2017.
The difference between training and inference
One of Artificial Intelligence’s most important functions is Machine Learning, with which IT systems can use existing data pools to detect patterns and rules and come up with solutions. Machine Learning is a two-stage process. In the training phase, the system is initially “taught” to identify patterns in a large data set. Training is a long-running task that requires a large amount of computing power. After this phase, the Machine-Learning system can apply the final, trained model to analyse and categorise new data and derive a result. This step – known as inference – requires much less computing power.
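The division of labour can be made concrete in a few lines. The scikit-learn model below is a toy stand-in: fit() is the compute-hungry training phase, while predict() is the comparatively cheap inference step that could run on an edge device.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative dataset and model only; the point is where the cost falls.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X, y)                  # training: iterates over the whole data pool

new_sample = X[:1]               # inference: essentially one dot product
print("predicted class:", model.predict(new_sample)[0])
```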
Cloud infrastructure cannot handle AI requirements alone
Most inference and training steps today are executed in the cloud. In the case of a virtual assistant, for example, the user’s command is sent to a data centre, analysed there with the appropriate algorithms and sent back to the device with the appropriate response. Until now, the cloud has been the most efficient way to exploit the benefits of powerful, cutting-edge hardware and software. Yet the increasing number of AI applications is threatening to overload current cloud infrastructure.
For instance, if every Android device in the world were to execute a voice-recognition command every three minutes, Google would need to make twice as much computing power available as it currently does. “In other words, the world’s largest computing infrastructure would have to double in size,” explains Jem Davies, Vice President, Fellow and General Manager of the ARM Machine Learning Group. “Also, demands for seamless user experiences mean people won’t accept the latency inherent in performing ML processing in the cloud.”
The number of AI edge devices is set to boom
Inference tasks are increasingly being relocated to the edge as a result, which enables the data to be processed where it originates without needing to be transferred anywhere else. “The act of transferring data is inherently costly. In business-critical use cases where latency and accuracy are key and constant connectivity is lacking, applications can’t be fulfilled. Locating AI inference processing at the edge also means that companies don’t have to share private or sensitive data with cloud providers – something that is problematic in the healthcare and consumer sectors,” explains Jack Vernon, Industry Analyst at ABI Research.
According to market researchers at Tractica, this means that annual deliveries of edge devices with integrated AI will increase from 161.4 million in 2018 to 2.6 billion in 2025. Smartphones will account for the largest share of these unit quantities, followed by smart speakers and laptops.
Smartphones making strides
The smartphone is a good example of the sheer variety of possible applications for AI in edge devices. To name one example, AI enables the camera of the Huawei P smart 2019 to detect 22 different subject types and 500 scenarios in order to optimise settings and take the perfect photo. In the Samsung Galaxy S10 5G, on the other hand, AI automatically adapts the battery, processor performance, memory usage and device temperature to the user’s behaviour.
Gartner also names a few other potential applications, including user authentication. In the future, smartphones might be able to record and learn a user’s behaviour – such as patterns when walking or when scrolling and typing on the touch screen – all without any passwords or active authentication being required.
Emotion sensors and affective computing make a lot possible for smartphones: detecting, analysing and processing people’s emotional states and moods before reacting to them. For instance, vehicle manufacturers might use the front camera of a smartphone to interpret a driver’s mental state or assess signs of fatigue in order to improve safety. Voice control and augmented reality are other potential applications for AI in smartphones.
CK Lu, Research Director at Gartner is certain of one thing: “Future AI capabilities will allow smartphones to learn, plan and solve problems for users. This isn’t just about making the smartphone smarter, but augmenting people by reducing their cognitive load. However, AI capabilities on smartphones are still in very early stages.”
Edge solutions: Consumer sector

Edge solutions in the consumer sector are possible thanks to increasingly powerful embedded technologies, which make it possible to equip even everyday devices such as stoves or televisions with edge intelligence. This not only means greater ease of use but also leads to improved operation of the devices. Smart applications with local data processing can also provide added security in the home, for example, or allow simple payments using a smartphone.
Smart devices with their own intelligence are increasingly also conquering the everyday world of the consumer, be it in the form of wearables, in home appliances or with assistance systems that make life simpler and safer for the elderly. There are already some edge solutions in the consumer sector.
Miele, for example, has launched a solution on the market called Motionreact, which the oven uses to anticipate what the user wants to do next. The oven alerts you to the end of the programme by emitting an acoustic signal; as the user approaches the device, the acoustic signal falls silent and the light in the oven chamber switches on at the same time. Alternatively, the appliance and oven-chamber light switch on when approached and the main menu appears on the display. From a technical perspective, the system operates via infrared sensors in the appliance panel, which respond to movements at a distance of between approximately 20 and 40 centimetres in front of the appliance.
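As a rough illustration of this kind of proximity logic, a hypothetical sketch might look as follows; the states and actions are stand-ins, not Miele’s actual implementation.

```python
# Hypothetical proximity-triggered appliance logic; purely illustrative.

class Oven:
    def __init__(self):
        self.signal_sounding = True      # programme has just finished
        self.standby = False

    def stop_signal(self):
        self.signal_sounding = False
        print("acoustic signal off")

    def chamber_light_on(self):
        print("oven chamber light on")

    def wake_with_menu(self):
        self.standby = False
        print("main menu shown on display")

APPROACH_CM = 40  # infrared sensors respond roughly 20-40 cm from the panel

def on_distance_reading(oven, distance_cm):
    if distance_cm > APPROACH_CM:
        return
    if oven.signal_sounding:             # user approaches after programme end
        oven.stop_signal()
        oven.chamber_light_on()
    elif oven.standby:                   # user approaches an idle appliance
        oven.chamber_light_on()
        oven.wake_with_menu()

oven = Oven()
on_distance_reading(oven, distance_cm=35)   # -> signal off, light on
```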
Edge solutions in the consumer sector
Artificial Intelligence capabilities are being integrated increasingly into consumer devices. Not only do they allow greater ease of use, operability is also improved.
An example of this is the new generation of top-of-the-range TVs from LG Electronics. They have intelligent processors which, thanks to the integrated AI, improve the visual quality. Using deep-learning algorithms, the TVs analyse the quality of the signal source and accordingly choose the most suitable interpolation method for optimum picture playback. Additionally, the processor performs a dynamic fine calibration of the tone-mapping curve for HDR content in accordance with the ambient light. The picture brightness is optimised dynamically based on values for how the human eye perceives images under different lighting conditions. Even in the darkest scenes – and even in rooms with high ambient brightness – high-contrast, detailed pictures with excellent colour depth can be reproduced. The room brightness is captured by means of an ambient light sensor in the TV.
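The principle of ambient-light-dependent tone mapping can be caricatured in a few lines; the curve below is an illustrative toy, not LG’s algorithm.

```python
import numpy as np

# Toy ambient-light-dependent tone curve: the brighter the room, the lower
# the gamma, lifting dark tones so shadow detail stays visible.

def tone_map(pixels, ambient_lux):
    gamma = 1.0 - 0.4 * min(ambient_lux, 400) / 400   # 1.0 (dark) .. 0.6 (bright)
    return np.clip(pixels, 0.0, 1.0) ** gamma

frame = np.linspace(0.0, 1.0, 5)            # a toy gradient standing in for an image
print(tone_map(frame, ambient_lux=20))      # dark room: curve nearly unchanged
print(tone_map(frame, ambient_lux=350))     # bright room: shadows lifted
```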
Smart devices are also of tremendous assistance when it comes to making life safer and more comfortable for the elderly, as the example of neviscura from nevisQ shows. The discreet sensor system integrated in the apartment’s skirting boards allows falls, for example, to be detected automatically and without any additional equipment attached to the body. The nurse-call system then informs the nursing staff in real time.
The data from the infrared sensors is recorded and analysed in a base station with smart functions. Local data processing immediately detects critical situations in the room, with the base station acting as an interface to the call system at the same time. In the future, the AI sensor system will also use activity analyses to detect whether a person’s condition is changing, and thus prevent critical situations.
Banking sector – one of the largest users of edge technologies
The IoT with its array of consumer wearables along with Edge Computing are also altering life outside of the home. This is especially true of how banks conduct their business.
According to a study by ResearchAndMarkets, the financial and banking sector is actually one of the biggest users of Edge Computing worldwide. Growing acceptance of digital and mobile banking initiatives and of payment using wearables – such as the Apple Watch, Fitbit devices or the smartphone – is significantly increasing the demand for Edge-Computing solutions. That’s because banking networks need to be as secure and reliable as possible in order to benefit from the advantages offered by IoT technologies, yet IoT devices themselves are difficult to secure. To achieve the security level required for banking applications, advanced cryptographic algorithms are needed. However, these CPU-intensive operations are extremely complex to implement on IoT devices.
The use of security agents at the edge is therefore recommended. This could be, for example, a router or a base station installed in the vicinity of the user to process security algorithms and encrypt data travelling to and from IoT devices. Customers can therefore complete banking transactions securely even with simpler wearables. But as the use of IoT devices for processing banking transactions and payments continues to grow, so too does the need to store and process data in edge data centres. Because they are closer to the user, these micro data centres allow data to be processed closer to the source, thus reducing the response times of the system (latency) and also the costs of data transfer. Paying via smartphone is therefore just as fast as, or even faster than, using cash from a wallet.
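What such a security agent might do can be sketched with authenticated encryption (AES-GCM) from the Python cryptography package; the key handling and payload below are illustrative assumptions, not a real banking protocol.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Sketch: an edge "security agent" encrypts a payment message on behalf of a
# constrained wearable before it leaves the local network. Real systems add
# key management, certificates and transport security on top.

key = AESGCM.generate_key(bit_length=256)   # assumed shared with the backend
agent = AESGCM(key)

payload = b'{"account": "...", "amount_eur": 4.20}'
nonce = os.urandom(12)                       # must be unique per message
ciphertext = agent.encrypt(nonce, payload, b"wearable-42")  # AAD binds device ID

# the backend decrypts and verifies integrity in a single step
assert agent.decrypt(nonce, ciphertext, b"wearable-42") == payload
```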
Building automation

Equipped with edge systems and AI, modern buildings not only boast unprecedented sustainability, but also provide occupants with greater convenience and comfort.
Building requirements are becoming ever more stringent. Buildings are not only supposed to offer comfort, convenience and safety to those who live and work in them, but also to consume less and less energy and incur minimal operating costs.
In order to achieve these targets, buildings are being kitted out with an ever-wider array of technology. Building systems such as lighting, air-conditioning, heating and shading systems or monitoring technology are being interlinked via centralised building-automation systems.
Even if buildings such as these are already referred to as “smart”, they haven’t yet fully earned that moniker. After all, they still lack the capability to predict and communicate that is inherent in truly intelligent systems. Edge solutions can change that. Admittedly, it would even be possible to control all of these functions via the cloud, since true real-time capability is only required by a select few building systems. Yet there is another good argument for the use of Edge Computing in this context: the systems and sensors in a building gather a vast amount of data, which relates not only to technical components but also to users.
For reasons of data protection, it is therefore an advantage if the majority of the data processing required takes place in edge nodes. In this way, data remains private – and access is independent of the availability of the respective cloud connection. Nonetheless, a hybrid approach (i.e. the combination of edge- and cloud-based computing) will yield the best results in building automation. For example, weather information from the cloud could be used to control the air conditioning, or the building in question could be compared with others to identify potential for improvement.
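A minimal sketch of this hybrid pattern, assuming a hypothetical fetch_forecast() placeholder for a cloud weather service and a deliberately simplistic control rule:

```python
import random

# Hybrid edge/cloud sketch: the control decision runs locally at the edge,
# while the cloud forecast is only an optional hint. fetch_forecast() is a
# hypothetical stand-in, not a real API.

def fetch_forecast():
    if random.random() < 0.3:                # simulate an unreachable cloud
        raise ConnectionError
    return {"outdoor_temp_c": 31.0}

def cooling_setpoint(room_temp_c):
    setpoint = 24.0                          # local rule: always available
    try:
        forecast = fetch_forecast()
        if forecast["outdoor_temp_c"] > 30:  # cloud hint: pre-cool on hot days
            setpoint -= 1.0
    except ConnectionError:
        pass                                 # the edge keeps working offline
    return setpoint

print(cooling_setpoint(room_temp_c=26.5))
```

The point of the design is that the local rule always works; the cloud merely refines it when reachable.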
Bloomberg’s new headquarters
Only by pursuing such approaches can the most sustainable buildings become a reality. These include the new headquarters of media company Bloomberg, one of the most sustainable buildings in the world. In comparison to a typical office block, the building in London consumes approximately 73 per cent less water, while energy consumption and the associated CO₂ emissions are 35 per cent lower. Innovative energy, lighting, water and ventilation systems account for the majority of the energy savings. Many of these solutions are unique and enable the building to recover waste heat, react to changes in the surrounding area and adapt to the way it is used by people.
Norman Foster, founder and Executive Chairman of Foster + Partners, outlines the key features of the building designed by his firm: “The deep-plan interior spaces are naturally ventilated through a ‘breathing’ façade. A top-lit atrium edged with a spiralling ramp at the heart of the building ensures a connected and healthy environment.” In moderate ambient temperatures, the striking external bronze blades open and close, meaning that the building can operate in a “breathable” natural ventilation mode. This reduces energy consumption considerably. Smart CO₂-sensor controls allow air to be distributed according to the approximate number of people present in each zone of the building at any specific point in time. The ability to adapt the flow of air dynamically to occupancy hours and patterns will save around 600 to 750 megawatt-hours of electricity each year and therefore reduce annual CO₂ emissions by around 300 metric tons.
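The occupancy-driven air distribution can be illustrated with a toy control rule; the thresholds below are generic assumptions, not Bloomberg’s actual control parameters.

```python
# Toy demand-controlled ventilation: airflow per zone scales with measured
# CO2, which is a proxy for the number of occupants. Thresholds are generic.

def airflow_m3h(co2_ppm, base=100, max_flow=1000):
    # ~420 ppm = outdoor air, ~1200 ppm = fully occupied zone
    occupancy = min(max((co2_ppm - 420) / (1200 - 420), 0.0), 1.0)
    return base + occupancy * (max_flow - base)

for zone, ppm in {"lobby": 450, "meeting room": 1100, "empty wing": 420}.items():
    print(f"{zone}: {airflow_m3h(ppm):.0f} m3/h")
```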
Building automation in cube berlin
One other smart commercial building is the cube berlin. The main goal of the concept drawn up by architects 3XN and brought to fruition by CA Immo is not its spectacular aesthetic form, but rather its artificial intelligence – the so-called “brain” of the building.
CA Immo commissioned start-up Thing Technologies to program the AI. They produced a system that intelligently interlinks all technical systems, sensors and planning, operating and user data, not to mention exerting optimised control over the processes in the building. The “brain” learns from data about operation, users and the environment, then uses this information to come up with suggestions for improvement. For example, unused areas will in future require neither heating and cooling nor ventilation and light; the control system will recognise this and shut off the relevant systems in those areas. Additionally, tenants in cube berlin can use a specially developed app to control aspects such as the room climate, access controls, the parcel station and much more besides. Users and their needs are at the forefront of the development process.
As such, smart buildings are an entirely new type of commercial property that puts users and their needs first. Thanks to edge computing and AI, this enables an interaction between humans and buildings that was previously impossible.
What is an Adamm?

Adamm, Orb and co.: thanks to edge technologies, wearables today can not only record vital data but also analyse this information instantly, and smart assistants support employees in healthcare in an incredible variety of ways. Central structures in patient healthcare are reaching their limits when it comes to mastering the challenges in the healthcare sector, due to a lack of trained personnel, chronic diseases and a general need for greater efficiency. Intelligence is therefore increasingly migrating to the edge.
The healthcare sector is facing enormous challenges, because costs are exploding worldwide. Market researchers at Deloitte expect global health spending to rise to 10 trillion US dollars in 2022, up from 7.7 trillion US dollars in 2017. Chronic diseases are arising with increasing frequency: according to the World Health Organisation (WHO), 13 million people worldwide die each year before reaching the age of 70 from cardiovascular diseases, chronic respiratory diseases, diabetes and cancer. What’s more, it is becoming increasingly difficult to find the right personnel to offer medical services on a comprehensive basis.
Wearables analyse vital parameters
One solution for overcoming these challenges is to use IoT edge devices and their underlying computer architectures. Wearable edge devices, for example, can collect, store and analyse critical patient data without having to be in constant contact with a network infrastructure. Such medical products help diagnoses to be made quickly and easily, all without the patient necessarily having to attend a medical practice or a hospital. In addition, the information collected can be sent at regular intervals to central servers in the cloud, where it is checked by the attending physician or stored for long-term diagnoses.
Warning of asthma attacks with Adamm
One example is Adamm – a wearable intelligent asthma monitor that detects the symptoms of an asthma attack before it happens. The wearer can therefore take action before the situation deteriorates. The sensors in the wearable detect the patient’s individual symptoms. They monitor the cough rate, breathing pattern, heartbeat, temperature and other relevant data.
The asthma monitor uses algorithms to learn the patient’s “normal condition”, so its ability to detect when an attack is imminent improves continually over time. All of the data is processed on the device itself. Whenever the data deviates from the patient’s individual norm, the wearable vibrates to inform the patient about the deviation. At the same time, Adamm can send an SMS to a previously nominated nurse or person of trust. The device does not depend on the computing power of a smartphone and therefore offers true autonomy; however, it can send data as needed to an app or a web portal.
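Conceptually, learning a “normal condition” and flagging deviations can be as simple as a running baseline with a z-score alarm, as sketched below – an illustration of the principle, not Adamm’s actual algorithm.

```python
import numpy as np

# Toy on-device anomaly detection: learn a baseline of a vital parameter
# (here, hourly cough counts) and flag readings far outside the norm.

class BaselineMonitor:
    def __init__(self, threshold=3.0):
        self.samples, self.threshold = [], threshold

    def update(self, value):
        self.samples.append(value)
        if len(self.samples) < 30:
            return False                      # still learning the baseline
        mu = np.mean(self.samples[:-1])
        sigma = np.std(self.samples[:-1])
        return sigma > 0 and abs(value - mu) / sigma > self.threshold

monitor = BaselineMonitor()
rng = np.random.default_rng(0)
for hour in range(48):
    coughs = rng.poisson(4)                  # normal: roughly 4 coughs per hour
    if hour == 40:
        coughs = 20                          # simulated symptom spike
    if monitor.update(coughs):
        print(f"hour {hour}: deviation detected -> vibrate and notify")
```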
Support for emergency call centres
AI-based edge devices not only offer assistance in terms of monitoring patient data: the Danish company Corti, for example, has developed a system which supports dispatchers in emergency call centres. Orb is a real-time decision-making system which uses AI technology to identify important patterns in incoming emergency calls. It warns the dispatcher of events that require the swiftest possible response, such as a cardiac arrest.
The device is simply placed on the dispatcher’s table for this purpose and connects to the audio stream of the telephone in order to monitor emergency calls. It is not explicitly preprogrammed with sample events; instead, the AI algorithm learns to identify key factors by listening to large numbers of calls. The system also considers non-verbal sounds, which can supply important information. Edge Computing not only offers the advantage of a very fast response, as Corti co-founder and CTO Lars Maaløe stresses: “Efficiency is crucial for edge devices – particularly in an emergency setting. And Edge Computing has the significant benefit of allowing the Orb to function continuously, even when the internet connection is interrupted.”
By listening in on the phone, Orb should reduce the number of undetected cardiac arrests by more than 50 per cent and establish within 50 seconds whether one has occurred. This represents an important gain in time, since the chance of the casualty surviving drops by 10 per cent for every minute between collapse and the start of resuscitation. Dispatchers can therefore urgently do with some extra assistance in order to detect a cardiac arrest quickly and efficiently.
Enhanced MRI images
Edge technology also helps to improve services for patients in hospital. For example, edge computing allows faster recording times, improved image quality and fewer discrepancies in magnetic resonance imaging machines from GE Healthcare. The MRI machines have embedded high-performance graphics processors for this purpose. Together with AIRx, an AI-based automated workflow tool for performing MRI brain scans, they enable automatic identification of anatomical structures. The system then independently determines the slice positions and angles of the scans for neurological examinations. This leads to fewer recording errors and significantly reduces the time the patient spends in the MRI machine.
AIRx is based on Edison, a General Electric platform, which is intended to accelerate the development and roll-out of AI technology. Edison can integrate and assimilate data from different sources, apply advanced analyses and AI to transform data and generate appropriate findings on this basis. “AI is fundamental to achieving precision health and must be pervasively available from the cloud to the edge and directly on medical devices”, stresses Dr. Jason Polzin, General Manager of MR Applications, GE Healthcare. “Real-time, critical-care use cases demand AI at the edge.”
From pure fiction to a real market opportunity

Intelligent machines and self-teaching computers will open up exciting prospects for the electronics industry.
The idea of thinking, or even feeling, machines was long merely a vision of science-fiction authors. But thanks to rapid developments in semiconductors and new ideas for the programming of self-teaching algorithms, Artificial Intelligence (AI) is today a real market, opening up exciting prospects for businesses.
According to management consultants McKinsey, the global market for AI-based services, software and hardware is set to grow by as much as 25 per cent a year, and is projected to be worth USD 130 billion by 2025. Investment in AI is therefore booming, as the survey “Artificial Intelligence: the next digital frontier” by the McKinsey Global Institute affirms. It reports that, in the last year, businesses – primarily major tech corporations such as Google and Amazon – spent as much as USD 27 billion on in-house research and development in the field of intelligent robots and self-teaching computers. A further USD 12 billion was invested in AI in 2016 externally – that is, by private equity companies, by risk capital funds, or through mergers and acquisitions. This amounted in total to some USD 39 billion – triple the volume seen in 2013. Most current external investment (about 60 per cent) is flowing into machine learning (totalling as much as USD 7 billion). Other major areas of investment include image recognition (USD 2.5 to 3.5 billion) and voice recognition (USD 600 to 900 million).
Intelligent machines and self-teaching computers are opening up new market opportunities in the electronics industry. Market analyst TrendForce predicts that global revenues from chip sales will increase by 3.1 per cent a year between 2018 and 2022. It is not just the demand for processors that is rising, however; AI applications are also driving new solutions in electronics fields such as sensor technology, hardware accelerators and digital storage media. Market research organisation MarketsandMarkets, for example, forecasts that this segment will grow from USD 2.35 billion in 2017 to USD 9.68 billion by 2023 – among other reasons as a result of big data, the Internet of Things and applications relating to Artificial Intelligence. The creation of AI-based services is also increasing demand for higher-performance network infrastructures, data centres and server systems.
AI is thus a vital market for the electronics industry as well. With our semiconductor solutions, experienced experts and extensive partner network, we will be glad to help you develop exciting new products.
Sensors as a basis for AI

Sensor fusion allows increasingly accurate images of the environment to be developed by fusing data from different sensors. To achieve faster results and reduce the flood of data, the sensors themselves are becoming intelligent too.
Systems with Artificial Intelligence need data. The more data, the better the results. This data can either originate in databases – or it can be recorded using sensors. Sensors measure vibrations, currents and temperatures on machines, for example, and thus provide an AI system with information for predicting when maintenance is due. Others – integrated in wearables – record pulse, blood pressure and perhaps blood sugar levels in people in order to draw conclusions regarding the state of health.
Sensor technology has gained considerable momentum in recent years from areas such as mobile robotics and autonomous driving: for vehicles to move autonomously through an environment, they have to recognise the surroundings and be able to determine their precise position. To do this, they are equipped with the widest array of sensors: ultrasound sensors record obstacles at a short distance, for example when parking. Radar sensors measure the position and speed of objects at a greater distance. Lidar sensors (light detection and ranging) use invisible laser light to scan the environment and deliver a precise 3D image. Camera systems record important optical information such as the colour and contour of an object and can even measure distance via the travel time of a light pulse.
More information is needed
Attention today is no longer focused solely on the position of an object; information such as orientation, size, colour and texture is also becoming increasingly important. Various sensors have to work together to ensure this information is captured reliably, because every sensor system offers specific advantages. It is only by fusing the information from the different sensors – in a process known as sensor fusion – that a precise, complete and reliable image of the surroundings is generated. A simple example of this is the motion sensor found in smartphones, among other devices: only by combining an accelerometer, a magnetometer and a gyroscope can such sensors measure the direction and speed of a movement.
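A classic minimal example of such fusion is the complementary filter, which blends a fast-but-drifting gyroscope with a noisy-but-drift-free accelerometer. The sketch below uses synthetic readings and illustrative constants.

```python
import math

# Complementary filter: integrate the gyro for short-term accuracy, pull the
# estimate towards the accelerometer angle to cancel long-term drift.

def fuse(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle, dt = 0.0, 0.01
for step in range(500):                       # 5 seconds of synthetic samples
    t = step * dt
    true_angle = 30 * math.sin(t)             # device slowly tilting (degrees)
    gyro_rate = 30 * math.cos(t) + 0.5        # rate reading with 0.5 deg/s bias
    accel_angle = true_angle + (1 if step % 2 else -1)   # +/-1 deg noise
    angle = fuse(angle, gyro_rate, accel_angle, dt)

print(f"fused estimate: {angle:.1f} deg, truth: {true_angle:.1f} deg")
```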
Sensors are also becoming intelligent
Not only can modern sensor systems deliver data for AI, they can also use it: such sensors can therefore pre-process the measurement data and thus ease the burden on the central processor unit. The AEye start-up developed an innovative hybrid sensor, for example, which combines camera, solid-state lidar and chips with AI algorithms. It overlays the 3D pixel cloud of the lidar with the camera’s 2D pixels and thus delivers a 3D image of the environment in colour. The relevant information is then filtered from the vehicle’s environment using AI algorithms and evaluated. The system is not only more precise by a factor of 10 to 20 and three times faster than individual lidar sensors, it also reduces the flood of data to central processor units.
Sensors supply a variety of information to the AI system
- Vibration
- Currents
- Temperature
- Position
- Size
- Colour
- Texture
- and much more…
AI smarter than humans?

What began in the 1950s with a conference has grown into a key technology. It is already influencing our lives today, and as the intelligence of machines increases in the future that influence is bound to spread much more. But is AI smarter than humans?
Smart home assistants order products online on demand. Chatbots talk to customers with no human intermediary. Self-driving cars transport the occupants safely to their destination, while the driver is engrossed in a newspaper. All of those are applications that are already encountered today in our everyday lives – and they all have something in common: they would not be possible without Artificial Intelligence.
Artificial Intelligence is a key technology which, in the years ahead, will have a major impact not only on our day-to-day lives but also on the competitiveness of the economy as a whole. “Artificial Intelligence has enormous potential for improving our lives – in the healthcare sector, in education, or in public administration, for example. It offers major opportunities to businesses, and has already attained an astonishingly high degree of acceptance among the public at large,” says Achim Berg, president of industry association Bitkom.
How everything began
Developments in this technology began as far back as the 1950s. The term Artificial Intelligence was actually coined even before the technology truly existed, by computer scientist John McCarthy at a conference at Dartmouth College in the USA in 1956. The US government became aware of AI and saw potential advantages from it in the Cold War, so it provided McCarthy and his colleague Marvin Minsky with the financial resources necessary to advance the new technology. By 1970, Minsky was convinced: “In from three to eight years, we will have a machine with the general intelligence of an average human being.” But that was to prove excessively optimistic. Scientists around the world made little progress in advancing AI, so governments began cutting funding. A kind of AI winter closed in. It was only in 1980 that efforts to develop intelligent machines picked up pace again. They culminated in a spectacular battle: in 1997, IBM’s supercomputer Deep Blue defeated chess world champion Garry Kasparov.
Bots are better gamers
AI began to advance rapidly from then on. Staying with games of a kind, the progress being made was demonstrated by the victory of a bot developed by OpenAI against professional players in the multiplayer game Dota 2 – one of the most complex of all video games. What was so special about this triumph was that the bot taught itself the game in just four months: by continuous trial and error over huge numbers of rounds played against itself, the bot discovered what it needed to do to win. The bot was only set up to play one-to-one games, however – normally two teams of five play against each other. Creating a team of five bots is the OpenAI developers’ next objective. OpenAI is a non-profit research institute co-founded by Elon Musk with the stated aim of developing safe AI for the good of all humanity.
Intelligence doubled in two years
So, is AI already as clever as a person today? To find that out, researchers headed by Feng Liu at the Chinese Academy of Sciences in Beijing devised a test which measures the intelligence of machines and compares it to human intelligence. The test focused on digital assistants such as Siri and Cortana. It found that the cleverest helper of all is the Google Assistant. With a score of 47.28 points, its intelligence is ranked just below that of a six-year-old human (55.5 points). That is impressive in itself, but what is much more impressive is the rate at which the Google Assistant is becoming more intelligent. When Feng Liu first conducted the test back in 2014, the Google Assistant scored just 26.4 points – meaning it almost doubled its intelligence in two years. If the system keeps on learning at that rate, it won't be long before Minsky's vision, expressed back in 1970, of a machine with the intelligence of a human adult becomes reality.
Simulating a human
Surprisingly, despite the long history of the development of intelligent machines, there is still no scientifically recognised definition of AI today. The term is generally used to describe systems which simulate human intelligence and behaviour. The most fitting explanation comes from renowned MIT professor Marvin Minsky, who defined AI as “the science of making machines do those things that would be considered intelligent if they were done by people”.
Artificial Intelligence | Start-ups

A look at the start-up scene reveals the diversity of the areas in which AI can be used. These young companies are developing products for industries as varied as healthcare, robotics, finance, education, sports, safety, and many more. We present a small selection of interesting start-ups here.
Connected Cars for Everyone
With Chris, the start-up German Autolabs provides a digital assistant designed specifically for motorists: it gives them easy and convenient access to their smartphone via smart speech recognition and gesture control, even while driving. Chris can be integrated into any vehicle, regardless of model and year of manufacture. The aim is to make connected-car technology available to all through the combination of flexible, scalable assistant software and hardware for retrofitting.
Make Your Own Voice Assistant
Snips is developing a new voice platform for hardware manufacturers. The service, based on Artificial Intelligence, is intended to allow developers to embed voice assistance services on any device. At the same time, a consumer version is due to be provided over the Internet, running on Raspberry Pi-powered devices. Privacy is at the forefront of this, with the system sending no data to the cloud and operating completely offline.
Realistic Simulation for Autonomous Driving Systems
Automotive Artificial Intelligence offers a virtual 3D platform which realistically imitates the driving environment of cars. It is intended as a means of testing software for fully automated driving and exploring the systems' limits. Self-learning agents provide the necessary realism in the virtual platform: aggressive drivers turn up just as often as overcautious ones, and the other (simulated) vehicles in the traffic make arbitrary lane changes as well as unforeseeable braking manoeuvres.
Feeding Pets More Intelligently
With SmartShop Beta, Petnet provides a digital marketplace that uses Artificial Intelligence to guide dog and cat owners towards food suited to their pet's breed and specific needs. The start-up has also developed the Petnet SmartFeeder, which automatically supplies pets with individual portions. The system sends notifications for successful feeds and when food levels are low. An automatic repeat order can also be set up in the SmartShop.
Smart Water Bottle
Bellabeat has already successfully brought health trackers in the form of women's jewellery to market. Building on this, the start-up has developed Spring, an intelligent water bottle. Using sensors, its system records how much water the user drinks, how active she is, how much she sleeps and how sensitive she is to stress. An app then uses special AI algorithms to analyse the user's individual hydration needs and recommend a suitable fluid intake.
Drone for the Danger Zone
Hivemind Nova is a quadrocopter for law enforcement, first-responder and security applications. The drone learns from experience how to negotiate restricted areas or hazardous environments. Without a pilot remote-controlling it, the drone autonomously explores dangerous buildings, tunnels and other structures before people enter them, transmitting live HD video and a map of the building layout to the user. Hivemind Nova learns and continuously improves over time: the more it is used, the more capable it becomes.
Detecting Wear Ahead of Time
Konux combines smart sensors with analysis based on Artificial Intelligence. The solution is used on railways to monitor sets of points, for example. Field data, already pre-processed by the sensors, is transmitted wirelessly to an analysis platform and combined with other data sources such as timetables, meteorological data and maintenance logs. The data is then analysed using machine-learning algorithms to detect operational anomalies and critical wear in advance.
Greater Success with Job Posts
Textio is an advanced writing platform for creating highly effective job advertisements. By analysing the hiring outcomes of more than 10 million job listings per month, Textio predicts the impact of a job post and gives instructions in real time as to how the text could be improved. To do this, the company uses a highly sophisticated predictive engine and makes it usable for anyone – no training or IT integration needed.
AI pioneer Minsky: Temporarily dead?

The brain functions like a machine, or so goes the theory of Marvin Minsky, one of the most important pioneers of Artificial Intelligence. In other words, the brain can be recreated – and made immortal by backing up its consciousness onto a computer.
Could our entire life simply be a computer simulation, like the Matrix in the Hollywood blockbuster of the same name? According to Marvin Minsky, this is entirely conceivable: "It's perfectly possible that we are the production of some very powerful complicated programs running on some big computer somewhere else. And there's really no way to distinguish that from what we call reality." Such thoughts were typical of the mathematician, cognition researcher, computer engineer and great pioneer of Artificial Intelligence. Minsky combined science and philosophy like scarcely anyone else and questioned conventional views – but always with a strong sense of humour:
“No computer has ever been designed that is ever aware of what it’s doing; but most of the time, we aren’t either.”
Born in 1927 in New York, Minsky studied mathematics at Harvard University and received a PhD in mathematics from Princeton University. He was scarcely 20 years old when he began to take an interest in the topic of intelligence: "Genetics seemed to be pretty interesting because nobody knew yet how it worked," Minsky recalled in an article that appeared in the "New Yorker" in 1981. "But I wasn't sure that it was profound. The problems of physics seemed profound and solvable. It might have been nice to do physics. But the problem of intelligence seemed hopelessly profound. I can't remember considering anything else worth doing."
Great intelligence is the sum of many non-intelligent parts
Back then, as a youthful scientist, he laid the foundation stone for a revolutionary theory, which he expanded on during his time at the Massachusetts Institute of Technology (MIT) and which finally led to him becoming a pioneer in Artificial Intelligence: Minsky held the view that the brain works like a machine and can therefore basically be replicated in a machine too. “The brain happens to be a meat machine,” according to one of his frequently quoted statements. “You can build a mind from many little parts, each mindless by itself.” Marvin Minsky was convinced that consciousness can be broken down into many small parts. His aim was to identify such components of the mind and understand them. Minsky’s view that the brain is built up from the interactions of many simple parts called “agents” is the basis of today’s neural networks.
Together with his Princeton colleague John McCarthy, he continued to develop the theory and gave the new scientific discipline a name at the Dartmouth Conference in 1956: Artificial Intelligence. Some three years later, McCarthy and Minsky founded the MIT Artificial Intelligence Laboratory – the world's most important research centre for Artificial Intelligence ever since. Many of the ideas developed there were later seized on in Silicon Valley and translated into commercial applications.
Responsible for halting research
Interestingly, the father of Artificial Intelligence was himself responsible for research into the area being halted for many years: Minsky had experimented with neural networks in the 1960s, but renounced them in his book "Perceptrons". Together with his co-author Seymour Papert, he highlighted the limitations of these networks – and thus brought research in this area to a standstill for decades. Most of those limitations have since been overcome, and neural networks are a core AI technology in the present day.
However, research into AI was far from the only field that occupied Marvin Minsky. His Artificial Intelligence Laboratory is regarded as the birthplace of the idea that digital information should be freely available – a notion from which the open-source philosophy later emerged. The institute contributed to the development of the Internet, too. Minsky also had an interest in robotics, computer vision and microscopy – his inventions in these areas are still used today.
Problems of mankind could be resolved
Minsky viewed current developments in AI quite critically, as he felt they were not focused enough on creating true intelligence. In contrast to the alarmist warnings of some experts that intelligent machines will take control in the not-too-distant future, Minsky most recently advocated a more philosophical view: machines that master real thinking could demonstrate ways to solve some of the most significant problems facing mankind. Death may also have been at the back of his mind in this respect: he predicted that people could make themselves immortal by transferring their consciousness from the brain onto chips. "We will be immortal in this sense," according to Minsky. When a person grows old, they simply make a backup copy of their knowledge and experience on a computer. "I think, in 100 years, people will be able to do that."
AI pioneer Minsky only temporarily dead
Marvin Minsky died in January 2016 at the age of 88 – although perhaps only temporarily: shortly before his death, he was one of the signatories of the Scientists' Open Letter on Cryonics, which advocates the deep-freezing of human bodies at death for thawing at a future date when the technology exists to bring them back to life. He was also a member of the Scientific Advisory Board of the cryonics company Alcor. It is therefore entirely possible that Minsky's brain is waiting, shock-frozen, to be brought back to life at some time in the future as a backup on a computer.
A faster route to intelligent IoT products

Developers working on products with integrated AI for the Internet of Things need time and major resources: such a product typically requires up to 24 months to become marketable. With a pre-industrialised software platform, Octonion now plans to reduce this time to only six months.
Smaller companies that want to realise products for the Internet of Things often lack the resources for electronics and software development. In addition, it takes lots of time to put together the building blocks required: connectivity, AI, sensor integration, etc. For example, the time to market for typical IoT projects is between 18 and 24 months – a very long time in the fast-paced world of the Internet.
A new solution from Octonion can help. The Swiss firm has developed a software platform for interconnecting objects and devices and equipping them with AI functions.
With this complete solution, IoT projects with integrated AI can be realised within six to eight months.
From the device to the cloud
Octonion provides a true end-to-end software solution, from an embedded device layer to cloud-based services. It includes Gaia, a highly intelligent, autonomous software framework for decision-making that uses modern machine-learning methods for pattern recognition. The system can be used for a wide range of applications in a variety of sectors. What's more, it guarantees that the data generated by the IoT device belongs to the customer alone, who is also the sole project operator.
Reduce costs and development time
The result is a complete IoT system with Artificial Intelligence that provides a full solution from the IoT devices or sensors and a gateway through to the cloud. Since the individual platform levels are device-independent and compatible with all hosting solutions, customers can realise virtually any application. Developers select the functional modules they require on each level and can thus adjust the platform to their individual requirements – ideal conditions for developing and operating proprietary IoT solutions quickly and easily, and for bringing them to market in only six months.
AI better than the doctor?

Cognitive computer assistants are helping clinicians to make diagnostic and therapeutic decisions. They evaluate medical data much faster while delivering at least the same level of precision. It is hardly surprising, therefore, that applications with Artificial Intelligence are being used ever more frequently.
Hospitals and doctors’ surgeries have to deal with huge volumes of data: X-ray images, test results, laboratory data, digital patient records, OR reports, and much more. To date, they have mostly been handled separately. But now the trend is towards bringing everything into a single unified software framework. This data integration is not only enabling faster processing of medical data and creating the basis for more efficient interworking between the various disciplines. It is also promising to deliver added value. New, self-learning computing algorithms will be able to detect hidden patterns in the data and provide clinicians with valuable assistance in their diagnostic and therapeutic decision-making.
Better diagnosis thanks to Artificial Intelligence: 30 times faster than a doctor with an error rate of 1%.
Source: PwC
Analysing tissue faster and more accurately
“Artificial Intelligence and robotics offer enormous benefits for our day-to-day work,” asserts Prof. Dr Michael Forsting, Director of the Diagnostic Radiology Clinic of the University Hospital in Essen. The clinic has used a self-learning algorithm to train a system in lung fibrosis. After just a few learning cycles, the computer was making better diagnoses than a doctor: “Artificial Intelligence is helping us to diagnose rare illnesses more effectively, for example. The reasons are that – unlike humans – computers do not forget what they have once learned, and they are better than the human eye at comparing patterns.”
Especially in the processing of image data, cognitive computer assistants are proving helpful in relieving clinicians of protracted, monotonous and recurring tasks, such as accurately tracing the outlines of an organ on a CT scan. The assistants are also capable of filtering information from medical image data that a clinician would struggle to identify on-screen.
Artificial Intelligence diagnosis – Better than the doctor
These systems are now even surpassing humans, as a study at the University of Nijmegen in the Netherlands demonstrates: the researchers assembled two groups to test the detection of cancerous tissue. One comprised 32 developer teams using dedicated AI software solutions; the other comprised twelve pathologists. The AI developers were provided in advance with 270 CT scans, of which 110 indicated dangerous nodes and 160 showed healthy tissue, to aid them in training their systems. The result: the best AI system attained virtually 100 per cent detection accuracy and additionally colour-highlighted the critical locations. It was also much faster than a pathologist, who took 30 hours to detect the infected samples with corresponding precision. Most notably, the clinicians working under time pressure overlooked metastases less than 2 millimetres in size. Only seven of the 32 AI systems were better than the pathologists, however.
The systems involved in the test are not just research projects; some are already in use – in fibrosis research at the Charité hospital in Berlin, for example, which is using the Cognitive Workbench from a company called ExB to automate the highly complex analysis of tissue samples for the early detection of pathological changes. The Cognitive Workbench is a proprietary, cloud-based platform that enables users to create and train their own AI-capable analyses of complex unstructured and structured data sources in text and image form. Ramin Assadollahi, CEO and founder of ExB, states: "In addition to diagnosing hepatic fibrosis, we can bring our high-quality deep-learning processes to bear in the early detection of melanoma and colorectal cancers."
Cost savings for the healthcare system
According to PwC, AI applications in breast cancer diagnoses mean that mammography results are analysed 30 times faster than by a clinician – with an error rate of just one per cent. There are prospects for huge progress, not only in diagnostics. In a pilot study, Artificial Intelligence was able to predict with greater than 70 per cent accuracy how a patient would respond to two conventional chemotherapy procedures. In view of the prevalence of breast cancer, the PwC survey reports that the use of AI could deliver huge cost savings for the healthcare system. It estimates that over the next 10 years, cumulative savings of EUR 74 billion might be made.
Digital assistants for patients
AI is also benefiting patients in very concrete ways, helping them overcome a range of difficulties in their everyday lives, such as visual impairment, loss of hearing or motor diseases. The "Seeing AI" app, for example, helps the visually impaired to perceive their surroundings. The app recognises objects, people, text or even cash on a photo that the user takes on his or her smartphone. The AI-based algorithm identifies the content of the image and describes it in a sentence which is read out to the user. Other examples include smart devices such as the "Emma Watch", which intelligently compensates for the tremors typical of Parkinson's disease patients. Microsoft developer Haiyan Zhang developed the smart watch for graphic designer Emma Lawton, who herself suffers from Parkinson's. More Parkinson's patients are to be provided with similar models in future.
Chips driving Artificial Intelligence

From the graphics processing unit through neuromorphic chips to the quantum computer – the development of Artificial Intelligence chips is supporting many new advances.
AI-supported applications must keep pace with rapidly growing data volumes and often have to respond simultaneously in real time. The classic CPUs that you will find in every computer quickly reach their limits in this area because they process tasks sequentially. Significant improvements in performance, particularly in the context of deep learning, would be possible if the individual processes could be executed in parallel.
Hardware for parallel computing processes
A few years ago, the AI sector focused its attention on the graphics processing unit (GPU), a chip that had actually been developed for an entirely different purpose. It offers a massively parallel architecture which can perform computing tasks concurrently using many smaller yet still efficient computing units. This is exactly what is required for deep learning. Manufacturers of graphics processing units are now building GPUs specifically for AI applications. A server with just one of these high-performance GPUs has a throughput 40 times greater than that of a pure CPU server.
However, even GPUs are now proving too slow for some AI companies. This in turn is having a significant impact on the semiconductor market. Traditional semiconductor manufacturers are now being joined by buyers and users of semiconductors – such as Microsoft, Amazon and even Google – who are themselves becoming semiconductor manufacturers (along with companies who want to produce chips to their own specifications). For example, Alphabet, the parent company behind Google, has developed its own Application-Specific Integrated Circuit (ASIC), which is specifically tailored to the requirements of machine learning. The second generation of this tensor processing unit (TPU) from Alphabet offers 180 teraflops of processing power, while the latest GPU from Nvidia offers 120 teraflops. Flops (Floating Point Operations Per Second) indicate how many simple mathematical calculations, such as addition or multiplication, a computer can perform per second.
Different performance requirements
Flops are not the only benchmark for the processing power of a chip. With AI processors, a distinction is made between performance in the training phase, which requires parallel computing processes, and performance in the application phase, which involves putting what has been learned into practice – known as inference. Here the focus is on deducing new knowledge from an existing database through inference. “In contrast to the massively parallel training component of AI that occurs in the data centre, inferencing is generally a sequential calculation that we believe will be mostly conducted on edge devices such as smartphones and Internet of Things, or IoT, products,” says Abhinav Davuluri, analyst at Morningstar, a leading provider of independent investment research. Unlike cloud computing, edge computing involves decentralised data processing at the “edge” of the network. AI technologies are playing an increasingly important role here, as intelligent edge devices such as robots or autonomous vehicles do not have to transfer data to the cloud before analysis. Instead, they can acquire the data directly on site – saving the time and energy required for transferring data to the data centre and back again.
Solutions for edge computing
For these edge computing applications, another new chip variant – Field-Programmable Gate Array (FPGA) – is currently establishing itself alongside CPUs, GPUs and ASICs. This is an integrated circuit, into which a logical circuit can be loaded after manufacturing. Unlike processors, FPGAs are truly parallel in nature thanks to their multiple programmable logic blocks, which mean that different processing operations are not assigned to the same resource. Each individual processing task is assigned to a dedicated area on a chip and can thus be performed autonomously. Although they do not quite match the processing power of a GPU in the training process, they rank higher than graphics processing units when it comes to inference. Above all, they consume less energy than GPUs, which is particularly important for applications on small, mobile devices. Tests have shown that FPGAs can detect more frames per second and watt than GPUs or CPUs, for example. “We think FPGAs offer the most promise for inference, as they can be upgraded while in the field and could provide low latencies if located at the edge alongside a CPU,” says Morningstar analyst Davuluri.
More start-ups are developing Artificial Intelligence chips
More and more company founders – and investors – are recognising the opportunities offered by AI chips. At least 45 start-ups are currently working on corresponding semiconductor solutions, while at least five of these have received more than USD 100 million from investors. According to market researchers at CB Insights, venture capitalists invested more than USD 1.5 billion in chip start-ups in 2017 – double the amount that was invested just two years ago. British firm Graphcore has developed the Intelligence Processing Unit (IPU), a new technology for accelerating machine learning and Artificial Intelligence (AI) applications. The AI platform of US company Mythic performs hybrid digital/analogue calculations in flash arrays. The inference phase can therefore take place directly within the memory, where the “knowledge” of the neural network is stored, offering benefits in terms of performance and accuracy. China is one of the most active countries when it comes to Artificial Intelligence chip start-ups. The value of Cambricon Technologies alone is currently estimated at USD 1 billion. The start-up has developed a neural network processor chip for smartphones, for instance.
New chip architectures for even better performance of Artificial Intelligence
Neuromorphic chips are emerging as the next phase in chip development. Their architecture mimics the way the human brain works in terms of learning and comprehension. A key feature of these chips is the removal of the separation between the processor unit and the data memory. Launched in 2017, neuromorphic test chips with over 100,000 neurons and more than 100 million synapses can unite training and inference on one chip. When in use, they should be able to learn autonomously at a rate a million times faster than third-generation neural networks. At the same time, they are highly energy-efficient.
Quantum computers represent a quantum leap for AI systems in the truest sense of the word. The big players in the IT sector, such as Google, IBM and Microsoft, as well as countries, intelligence services and even car manufacturers, are investing in this technology. These computers are based on the principles of quantum mechanics. A quantum computer can, in principle, perform each calculation step on all possible states simultaneously. This means that it delivers exceptional processing power for the parallel processing of commands and has the potential to compute at a much higher speed than conventional computers. Although the technology may still be in its infancy, the race for faster and more reliable quantum processors is already well underway.
Ethics and principles of AI

Artificial Intelligence is only as good as the data it is based on. Unless that data takes all factors and all population groups into account, faulty and biased decisions may result. But how do ethics and principles fare in current applications of Artificial Intelligence?
"The field of Artificial Intelligence is developing rapidly, and promises to help address some of the biggest challenges we face as a society," says Kate Crawford, Co-Founder of the AI Now Institute. "But we urgently need more research into the real-world implications of the adoption of AI in our most sensitive social institutions. People are already being affected by these systems, be it while at school, looking for a job, reading news online, or interacting with the courts." It is for this very reason that the AI Now Institute was launched in late 2017 at New York University – the first university research institute dedicated to the social impact of Artificial Intelligence. It aims to expand the scope of AI research to include experts from fields such as law, healthcare and the occupational and social sciences. According to Meredith Whittaker, another Co-Founder of AI Now, "AI will require a much wider range of expertise than simply technical training. Just as you wouldn't trust a judge to build a deep neural network, we should stop assuming that an engineering degree is sufficient to make complex decisions in domains like criminal justice. We need experts at the table from fields like law, healthcare, education, economics, and beyond."
Safe and just AI requires a much broader spectrum of expertise than mere technical know-how.
AI Systems with Prejudices are a Reality
“We’re at a major inflection point in the development and implementation of AI systems,” Kate Crawford states. “If not managed properly, these systems could also have far-reaching social consequences that may be hard to foresee and difficult to reverse. We simply can’t afford to wait and see how AI will affect different populations.” With this in mind, the AI Now Institute is looking to develop methods to measure and understand the impacts of AI on society.
It is already apparent today that unsophisticated or biased AI systems are very real and have consequences – as shown, in one instance, by a team of journalists and technicians at ProPublica, a non-profit newsroom for investigative journalism. They tested an algorithm used by courts and law enforcement agencies in the United States to predict reoffending among criminals, and found that it was measurably biased against African Americans. Such prejudice-laden decisions come about when the data that the AI is based on and works with is not neutral. If it reflects social disparities, for instance, the evaluation is also skewed. If, for example, only data for men is used as the basis for an analysis process, women may be put at a disadvantage.
It is also dangerous if the AI systems have not been taught all the relevant criteria. For instance, the Medical Center at the University of Pittsburgh noted that a major risk factor for severe complications was missing from an AI system for initially assessing pneumonia patients. And there are many other relevant areas in which AI systems are currently in use without having been put through testing and evaluation for bias and inaccuracy.
Checks Needed
As a result, the AI Now Institute used its 2017 research report to call on all important public institutions to stop using "black-box" AI immediately. "When we talk about the risks involved with AI, there is a tendency to focus on the distant future," says Meredith Whittaker. "But these systems are already being rolled out in critical institutions. We're truly worried that the examples uncovered so far are just the tip of the iceberg. It's imperative that we stop using black-box algorithms in core institutions until we have methods for ensuring basic safety and fairness."
Autonomous driving thanks to AI

In just a few years, every new vehicle will be fitted with electronic driving assistants. They will process information from both inside the car and from its surrounding environment to control comfort and assistance systems.
"We are teaching cars to negotiate road traffic autonomously," reports Dr Volkmar Denner, Chairman of the Board of Bosch. "Automated driving makes the roads safer. Artificial Intelligence is the key. Cars are becoming smart," he asserts. As part of those efforts, the company is currently developing an on-board vehicle computer featuring AI. It will enable self-driving cars to find their way around even complex environments, including traffic scenarios that are new to them.
Transferring knowledge by updates
The on-board AI computer knows what pedestrians and cyclists look like. In addition to this so-called object recognition, AI also helps self-driving vehicles to assess the situation around them. They know, for example, that an indicating car is more likely to be changing lane than one that is not indicating. This means self-driving cars with AI can detect and assess complex traffic scenarios, such as a vehicle ahead turning, and apply the information to adapt their own driving. The computer stores the knowledge gathered while driving in artificial neural networks. Experts then check the accuracy of that knowledge in the laboratory. Following further testing on the road, the artificially created knowledge structures can be downloaded to any number of other on-board AI computers by means of updates.
Assistants recognising speech, gestures and faces
Bosch is also intending to collaborate with US technology company Nvidia on the design of the central vehicle computer. Nvidia will supply Bosch with a chip holding the algorithms for the vehicle’s movement created through machine learning. As Nvidia founder Jensen Huang points out, on-board AI in cars will not only be used for automated driving: “In just a few years, every new vehicle will have AI assistants for speech, gesture and facial recognition, or augmented reality.” In fact, the chip manufacturer has also been working with Volkswagen on the development of an intelligent driving assistant for the electric microvan I.D.Buzz. It will process sensor data from both inside the car and from its surrounding environment to control comfort and assistance systems. These systems will be able to accumulate new capabilities in the course of further developments in autonomous driving. Thanks to deep learning, the car of the future will learn to assess situations precisely and analyse the behaviour of other road users.
3D recognition using 2D cameras
Key to automated driving is creating the most exact map possible of the surrounding environment. The latest camera systems also use AI to do that. A project team at Audi Electronics Venture, for example, has developed a mono camera which uses AI to generate a high-precision three-dimensional model of the surroundings. The sensor is a standard, commercially available front camera. It captures the area in front of the car to an angle of about 120 degrees, taking 15 frames per second at a 1.3-megapixel resolution. The images are then processed in a neural network, which is also where so-called semantic segmentation takes place. In this process, each pixel is assigned one of 13 object classes, enabling the system to recognise and distinguish other cars, trucks, buildings, road markings, people and traffic signs. The system also uses neural networks to gather distance information, visualised by so-called ISO lines – virtual delimiters which define a constant distance. This combination of semantic segmentation and depth perception creates a precise 3D model of the real environment. The neural network was pre-trained through so-called unsupervised learning, having been fed large numbers of videos of road scenarios captured by a stereo camera. It subsequently learned autonomously to understand the rules by which it generates 3D data from the mono camera's images.
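To illustrate just the final step of this principle – not Audi's actual implementation – the following minimal Python sketch assumes the network has already produced a score per pixel for each of the 13 classes and assigns every pixel the class with the highest score; all values are invented:

```python
# Minimal sketch of the last step of semantic segmentation (illustrative only).
import numpy as np

HEIGHT, WIDTH, NUM_CLASSES = 240, 320, 13    # invented, illustrative dimensions

# Stand-in for the neural network's output: one score map per object class.
scores = np.random.rand(HEIGHT, WIDTH, NUM_CLASSES)

# Per-pixel argmax yields the segmentation map: an integer class index
# (car, truck, building, road marking, person, traffic sign, ...) per pixel.
segmentation = scores.argmax(axis=-1)
print(segmentation.shape)                    # (240, 320): one class label per pixel
```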
Mitsubishi Electric has also developed a camera system that uses AI. It will warn drivers of future mirrorless vehicles of potential hazards and help avoid accidents, especially when changing lane. The system uses a new computing model for visual recognition that copies human vision. It does not capture a detailed view of the scene as a whole, but instead focuses rapidly on specific areas of interest within the field of vision. The relatively simple visual recognition algorithms used by the AI conserve the system resources of the on-board computer. The system is nevertheless able to distinguish between object types such as pedestrians, cars and motorcycles. Compared to conventional camera-based systems, the technology will be able to significantly extend the maximum object recognition range from the current approximately 30 metres to 100 metres. It will also be able to improve the accuracy of object recognition from 14 per cent to 81 per cent.
AI is becoming a competitive factor
As intelligent assistance systems are being implemented ever more frequently, AI is becoming a key competitive factor for car manufacturers. That is true with regard to the use of AI for autonomous driving as well as in the development of state-of-the-art mobility concepts based on AI. According to McKinsey, almost 70 per cent of customers are already willing to switch manufacturer today in order to gain better assisted and autonomous driving features. The advice from Dominik Wee, Partner at McKinsey’s Munich office, is: “Premium manufacturers, in particular, need to demonstrate to their highly demanding customers that they are technology leaders in AI-based applications as in other areas – for example in voice-based interaction with the vehicle, or in finding a parking space.”
(Picture Credit: Volkswagen AG)
The benefit of Artificial Intelligence

The technology landscape is changing ever more rapidly; development lead times for new products are getting shorter, as are the intervals at which innovations come onto the market. These are trends to which an electronics distributor especially needs to respond, in the view of William Amelio, CEO of Avnet, the parent company of EBV. Amelio, who has led Avnet since the summer of 2016, has therefore launched an extensive programme to transform the whole business. It incorporates some 450 individual projects aimed at enhancing relationships with customers, helping to meet their needs even more closely than before, and offering them even more services. "We are the first electronics distributor to offer genuine end-to-end solutions, carrying a product idea from the initial prototypes through to mass production. In order to achieve that, we had to make significant changes to our business," says Amelio. His goal is to transform Avnet into an "agent of innovation". Artificial Intelligence is a key component in those efforts – as a market segment, but above all for the company itself, as Amelio explains in this interview.
What influence do you think Artificial Intelligence (AI) will have in the future on technology and our lives in general?
William Amelio: Futurists are saying that AI will have a bigger impact on society than the industrial and digital revolutions. We're starting to see more concrete examples of what that might be like as the technologies needed to power AI systems are now becoming more affordable and open-sourced. Artificial Intelligence is already enabling more personalised medicine and treatment plans in healthcare. The vehicles on our roads are increasingly autonomous. Facial, voice and fingerprint recognition are becoming more commonplace as well. Nonetheless, AI is providing input to us and we're still making the ultimate decision. Over time, these applications will be equipped to make more decisions on our behalf, in theory helping us devote more time to higher-level thinking.
As Artificial Intelligence evolves to other applications over the next few years, it will begin to have exponential impact, especially in areas such as employment. Enabling farmers to maximise the efficiency of their fields or automating repetitive office management tasks will dramatically influence how we manage our work. In turn, AI will create new public policy challenges. Though opinions vary on the scale and timing of its impact, AI does have the capability and potential to help us solve many of the complex problems that plague our society.
What role will AI technology play specifically for Avnet as an electronics distributor?
W. A.: AI will help us optimise our operations, increase customer and employee satisfaction and unearth new market opportunities. Today, we’re exploring using database models for personalised pricing, automating payments and both managing and anticipating customer needs. We’re piloting a project to automate repetitive tasks that don’t add value to our bottom line or to our employees’ happiness. AI can also help us make predictions and deliver more personalised offerings and services to customers, suppliers and to our employee base.
We’ll also be able to speed up and automate processes by offloading some decisions to machines. In particular, I think supply chains will go through a complete metamorphosis resulting from a combination of emerging technologies, including AI. Much of AI’s early promise comes down to better decision-making. This is also the area where AI will begin to significantly impact corporate leadership and culture.
“To get there, we’ll need to shape a new generation of leaders who understand how to work with AI.”
What influence is AI technology having on your products?
W. A.: AI is certainly beginning to create demand for new technologies, which is opening the door for new market opportunities. For example, our new Avnet MiniZed Zynq SoC platform improves the performance of sound capture for AI applications. It leverages technologies from Xilinx and Aaware to provide faster, safer far-field voice interfaces. Among our major suppliers, we're seeing AI drive both hardware and software products, including both FPGA kits and custom chips to hardcode AI neural networks. Many companies are also designing AI-specific chips. This reflects a larger trend of moving some intelligence to the edge instead of housing it all in the cloud, which solves latency and cost issues with the magnitude of processing power that these applications require. Not only are venture capitalists backing start-ups in this area, but large technology powerhouses such as Intel, Google, Apple and Microsoft are getting in on custom chips, too. Many of them are already on our linecard.
Will AI bring any other changes beyond that for Avnet?
W. A.: I mentioned earlier that AI will significantly impact corporate culture and leadership. This is because it will change how we work, how we make decisions and how we structure our business models. To get there, we’ll need to shape a new generation of leaders who understand how to work with AI. This means introducing a new set of skills and evolving some of our existing ones to truly understand the new advantages that AI introduces.
For example, AI systems can help anticipate employee satisfaction and balance workloads accordingly. We can also gain insight from customer surveys more quickly and regularly because we won’t need to go through such laborious data mining. But are the corporate systems and talent needed to enable HR and customer service departments to operate this way available today? Probably not.
AI can do a lot for us, but first we need to learn how to work with it. The way we do business is going to look very different in 10 years, and each of us is going to need to embark on a personal journey of change and continuous learning along the way.
Avnet is now using AI itself – on its “Ask Avnet” platform. Can you tell us briefly what “Ask Avnet” is and how your customers benefit from it?
W. A.: Ask Avnet is designed to deliver a better customer experience by leveraging a combination of AI and human expertise to create an intelligent assistant. Our customers benefit because it can help them address a wide variety of questions without having to jump through customer service hoops. Ask Avnet can move customers through our various digital properties seamlessly, too. Customers can still enjoy the same experience on each site while Ask Avnet keeps them connected to the resources available within our broader ecosystem. We’re already seeing promising results, such as reduced errors. As Ask Avnet learns over time, it will help us deliver a more scalable, efficient and pleasant customer experience.
More importantly, Ask Avnet is designed to understand the context of our customers’ needs and tailor its responses accordingly. It makes personalisation possible, and this adds significant value that will only grow with time. Because it can contextually understand which stage of the product development journey our customers are in, Ask Avnet can proactively identify information that they might need but are not necessarily on the lookout for, such as product maturity stage or anticipated lead time. It continuously learns from new queries and experiences over time, continually delivering the latest insights as needs, technologies and markets evolve.
“AI does have the capability and potential to help us solve many of the complex problems that plague our society.”
“Ask Avnet” also utilises the know-how of the hackster.io and element14 platforms. How important are those 2016 acquisitions to the “Ask Avnet” objective of shortening time to market for your customers?
W. A.: Ask Avnet is another way for customers to access the wealth of information available through the Avnet ecosystem, of which these communities are one important piece. Ultimately, it extends the mission of our communities while introducing more proactivity and personalisation. When you’re in our communities, you’re on an exploratory journey. When you’re using Ask Avnet, you have an AI-powered guide that brings you the information you need.
By combining Ask Avnet with our online communities, we’re helping shorten our customers’ time to market by making the resources they need to solve their challenges more readily available, easy to access and relevant.
The beta version of “Ask Avnet” went online in July 2017. What has been your customers’ experience with it so far?
W. A.: The customer experience continues to improve because the intelligent assistant learns from every query. Customers are finding greater value in the tool, as both usage and customer satisfaction are increasing. It’s able to hold more detailed and longer conversations as the kinds of questions that Ask Avnet is able to address have expanded significantly. It’s also now able to probe further into queries.
For example, at launch, Ask Avnet would respond to a query with a list of recommendations. Today, Ask Avnet would respond with more relevant questions to help clarify your specs and narrow down options before providing recommendations. It can also include contextually relevant information, such as how-to projects from our communities, price and stock information or lead times. As it learns, Ask Avnet is providing more information and holding more high-quality conversations.
Will there be more projects with Avnet processes using AI in the future?
W. A.: Without a doubt. We’re currently focused on those that create the highest possible value for stakeholders, including both front-office and back-office projects. We’re looking at demand management, supply chain optimisation and are continuing work to enhance our customers’ experience with Ask Avnet and other projects. The technology is really applicable anywhere where there’s an opportunity to improve efficiency, reduce boredom and help our employees create more value.
How will AI influence our lives in the future?
W. A.: Just when you think innovation is waning, a new trend like AI takes hold. It's clear to me that the economic and social value AI has to offer is just at the beginning of its "S-curve". Whichever argument sways you, we all can agree that AI is fundamentally going to change the nature of how we live and work. This means that we need to explore new business models, hiring practices and skill sets. Start-ups, makers, tech giants and old-line companies are all in the game. Competition will drive new and innovative AI ideas and applications, and I'm excited to see the next chapter in this story.
From initial sketch to mass production
Avnet supports its customers through every phase of the product life cycle – from initial idea to design, from prototype to production. As one of the world's largest distributors of electronic components and embedded solutions, the company offers a comprehensive portfolio of design and supply chain services in addition to electronic building blocks. Its acquisition of the online communities Hackster.io and Element14 in 2016 furthermore shows how Avnet is building bridges between makers and manufacturers. Hackster.io is committed to helping fledgling companies develop hardware for IoT designs. The network engages with some 90 technology partners and includes close to 200,000 engineers, makers and hobbyists. Element14 is an engineering community with more than 430,000 members. By acquiring both platforms, the company has taken an important step towards its goal of helping customers get their ideas to market first. In this respect, Avnet can call on a closely-knit global network of leading technology companies, and on a history dating back almost a century.
Avnet has its headquarters in Phoenix, Arizona. The company was founded in 1921 by Charles Avnet, starting out as a small retail store in Manhattan specialising in the sale of components for radios. Today Avnet has a workforce of more than 15,000 employees and is represented in 125 countries in North America, Europe and Asia.
Pattern recognition through AI

One of the greatest strengths of Artificial Intelligence systems is their ability to find rules or patterns in big data, pictures, sounds and much more.
Many functions of intelligent information systems are based on methods of pattern recognition: support for diagnoses in medicine, voice recognition in assistance systems and translation tools, object detection in camera images and videos, or forecasting of stock prices. All of these applications involve identifying patterns – or rules – in large volumes of data. It is immaterial whether that data is information stored in a database, pixels in an image or the operating data of a machine. Identifying such patterns was either not possible at all with classic computer systems or required lengthy calculation times of up to several days.
Classifying data in seconds
Developments in the area of neural networks and machine learning have led to the emergence of solutions in which even complex input data can be matched against trained features and classified within minutes or even seconds. A distinction is made between two fundamental methods: supervised and unsupervised classification.
With supervised classification, the pattern-recognition system is "fed" training data in which each example is labelled with the correct result. The correct response must therefore be available during the training phase, and the algorithm has to learn the mapping between input and output. This form of supervised pattern recognition is used in machine vision for object detection or facial recognition, for example.
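As a minimal sketch of this training phase – illustrative only, and not tied to any system mentioned here – the following Python example trains a small neural network on scikit-learn's bundled, labelled digit images:

```python
# Supervised classification: learn the input-to-output mapping from labelled examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                                   # 8x8 images labelled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# Training phase: the algorithm fills the gap between input (pixel values)
# and output (the correct, labelled digit).
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# The trained model then classifies previously unseen images in fractions of a second.
print("accuracy on unseen images:", round(model.score(X_test, y_test), 3))
```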
In the case of unsupervised learning, the training data is not labelled, which means the possible results are not known. The pattern-recognition algorithm therefore cannot be trained by providing it with the results it is meant to arrive at. Instead, algorithms are used that explore the structure of the data and derive meaningful information from it. To stay with the example of machine vision: techniques of unsupervised pattern recognition are used for object detection, among other things. Unsupervised methods are also fundamental to data mining – detecting content in large data volumes on the basis of the structures that emerge.
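An equally minimal sketch of the unsupervised case needs no labels at all. Here, k-means clustering – one of many possible algorithms, chosen purely for illustration – discovers groups in invented, unlabelled two-dimensional data:

```python
# Unsupervised learning: find structure in data without knowing the "right" answers.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Unlabelled data: three hidden groups of 2-D points (e.g. sensor readings).
data = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(5.0, 5.0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(0.0, 5.0), scale=0.5, size=(100, 2)),
])

# The algorithm explores the structure of the data and assigns each
# point to one of three discovered clusters.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)
print(labels[:10])   # cluster index per data point, learned without any labels
```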
Finding structures in big data
A number of different methods are used in this type of big data analysis. One example is association pattern analysis, in which a set of training data is searched for combinations of individual facts or events that occur together in the data significantly more or less often than chance would suggest. Another example is what is known as sequential pattern mining, where a set of training data is searched for time-ordered sequences that occur conspicuously often or rarely in succession. The result of these mining methods is a collection of patterns or rules, which can be applied to future data sets to discover whether one or more of the rules occur in them. The rules can be integrated into operative software programs in order to develop early-warning concepts, for example, or to predict when maintenance is due. A toy example of the association idea is sketched below.
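This deliberately tiny sketch counts how often pairs of events occur together in a set of hypothetical machine event logs and flags conspicuously frequent pairs as rule candidates; the event names and the support threshold are invented for illustration:

```python
# Toy association pattern analysis: flag event pairs that occur together often.
from collections import Counter
from itertools import combinations

# Hypothetical training data: each set is one recorded machine event log.
logs = [
    {"overheat", "vibration", "shutdown"},
    {"overheat", "vibration"},
    {"vibration", "shutdown"},
    {"overheat", "vibration", "shutdown"},
    {"low_oil"},
]

pair_counts = Counter()
for log in logs:
    pair_counts.update(combinations(sorted(log), 2))

# Pairs occurring in at least 40% of the logs become candidate rules that
# could later be applied to new data, e.g. for early-warning concepts.
min_support = 0.4 * len(logs)
for pair, count in sorted(pair_counts.items()):
    if count >= min_support:
        print(f"rule candidate: {pair[0]} + {pair[1]} (support {count}/{len(logs)})")
```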
AI in production

Artificial Intelligence boosts quality and productivity in manufacturing industry, and helps people in their work.
Artificial Intelligence can become a driver of growth for industry: according to a survey by management consultants McKinsey, early and concerted deployment of intelligent robots and self-learning computers could make GDP in Germany alone as much as 4 per cent, or EUR 160 billion, higher by 2030 than it would be without AI. "AI promises to deliver benefits not only economically, but also in terms of business management: it enables employees to leave repetitive or hazardous tasks to computers and robots, and focus themselves on value-adding and interesting work," says Harald Bauer, Senior Partner at McKinsey's Frankfurt office.
AI and machine learning are opening up opportunities in connection with Industry 4.0 in particular, because enabling a production operation to organise itself autonomously and respond flexibly takes huge volumes of data. Computer systems autonomously identify structures, patterns and laws in this flood of data, enabling companies to derive new knowledge from empirical data. In this way, trends and anomalies can be detected in real time, while the system is running.
Manufacturing industry offers many starting points where AI can boost competitive edge. At the moment, 70 % of all collected production data is not used – but AI can change that. (Source: obs / A.T. Kearney)
Proactive intervention before the machine breaks down
Predictive maintenance in particular offers genuine potential for rationalisation. Large numbers of sensors capture readings on the state of a machine or production line, such as vibration, voltage, temperature and pressure data, and transfer them to an analysis system. Analysing the data enables predictions to be made: when are the systems likely to fail? When is the optimum time to carry out maintenance? This can reduce, or even rule out, the risk of breakdowns. McKinsey estimates that plant operators can improve their capacity utilisation by as much as 20 per cent by planning and executing their maintenance predictively. The toy sketch below illustrates the underlying idea.
Plant availability increases by up to 20 % while maintenance costs decrease by up to 10 %.
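The following sketch reduces the predictive idea to its core, assuming invented vibration readings and an invented critical threshold: fit a simple wear trend to the sensor data and extrapolate when the threshold will be crossed. Real systems use far more sophisticated models.

```python
# Toy predictive maintenance: extrapolate a vibration trend to a failure threshold.
import numpy as np

hours = np.arange(0.0, 200.0, 10.0)                    # time of each sensor reading
noise = np.random.default_rng(1).normal(0.0, 0.02, hours.size)
vibration = 1.0 + 0.004 * hours + noise                # slowly rising wear signal

slope, intercept = np.polyfit(hours, vibration, 1)     # fitted linear wear trend
CRITICAL = 2.5                                         # assumed failure level

if slope > 0:
    hours_to_failure = (CRITICAL - vibration[-1]) / slope
    print(f"maintenance due in roughly {hours_to_failure:.0f} operating hours")
```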
As one example, German start-up Sensosurf integrates force sensors directly into machine components that have no intrinsic intelligence of their own, such as flanged and pedestal bearings, linear guides and threaded rods. “We are working in areas where there has been little or no information available to date,” says Dr. Cord Winkelmann, CEO of Bremen-based company Sensosurf. The data obtained in this way is interpreted with the aid of custom machine-learning algorithms. As a result, specific irregularities can be detected and breakdowns prevented, for example.
Listening to make sure the machine is running smoothly
The intelligent Predisound system from Israeli company 3DSignals does not measure the deformation or vibration of a component, but instead records the acoustic sounds of a machine. Experienced machine operators and maintenance staff can tell from the sound of a machine whether it is running smoothly; in the ideal case, they can even predict impending breakdowns. Such detection is of course not entirely reliable – and it ties up personnel. Predisound aims to solve both issues. The system consists of large numbers of ultrasonic sensors installed in the machines being monitored. The sensors record the full sound spectrum during the machine's operation and transmit the data to a centralised software program based on a neural network. By applying so-called deep learning, the software gradually recognises ever more precisely which variations in the sound pattern might be critical. This means that anomalies which would be indiscernible to a human can be detected. Based on predictive-analytics algorithms, the probability of failure and the time to failure of individual machine components can be predicted, and the maintenance engineer is automatically notified before any damage occurs that might cause a machine to shut down. As a consequence, fixed inspection intervals are no longer necessary.
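A greatly simplified sketch of the principle – the real system uses deep neural networks rather than this naive baseline comparison – might learn the typical sound spectrum of a healthy machine and flag recordings that deviate conspicuously; all numbers here are invented:

```python
# Simplified acoustic anomaly detection via deviation from a learned baseline.
import numpy as np

rng = np.random.default_rng(0)
healthy = rng.normal(1.0, 0.05, size=(50, 128))        # 50 reference spectra, 128 bands

baseline = healthy.mean(axis=0)                        # the "normal" sound signature
tolerance = 4 * np.abs(healthy - baseline).mean()      # allowance derived from the data

def is_anomalous(spectrum):
    """Flag a spectrum whose average deviation from the baseline is too large."""
    return float(np.abs(spectrum - baseline).mean()) > tolerance

new_recording = baseline + rng.normal(0.0, 0.3, size=128)   # a degraded machine
print("anomaly detected:", is_anomalous(new_recording))     # True
```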
In certain operations, productivity increases by up to 20 % due to the AI-based interaction of humans and robots. (Source: McKinsey)
More effective quality control
Another key area of application for AI alongside machine maintenance is industrial image processing. Automatic pattern recognition by means of cameras and sensors enables faults and their causes to be detected more quickly. That is a great aid to quality control. Bosch’s APAS inspector is one example: it uses learning image processing to automatically detect whether or not the material surface of a production component conforms to specifications. The operator teaches the machine once what non-conformity it can tolerate, and when a component has to be taken offline. Artificial Intelligence then enables the machine to autonomously apply the patterns it has learned to all subsequent quality inspections.
Robots learning independently thanks to Artificial Intelligence
Thanks to AI, industrial robots are also increasingly becoming partners to factory staff, working with them hand-in-hand. One example of a collaborative robot of this kind is the Panda from Franka Emika. It is a lightweight robot arm capable of highly delicate movements. The medium-term aim of developer Sami Haddadin is to turn Panda into a learning robot that no longer has to be programmed. The human controller specifies a task to perform, and Panda tries out for itself the best way to perform it. The key feature is that once Panda has found the most efficient method, it relays the information to other robots via the cloud. This means the production company does not have to invest in costly and complex programming.
“We are just at the beginning of an exciting development,” asserts Matthias Breunig, a Partner at McKinsey’s Hamburg office. “Key to the beneficial application of AI is an open debate as to how and where humans and machines can work together advantageously.”
The human face of AI chatbots

Software systems that communicate with people via text or speech are becoming increasingly sophisticated: they recognise the other party’s mood and respond as an avatar with suitable gestures and facial expressions.
For British electrical retailer Dixons Carphone, the shopping experience for nine out of ten customers begins online. Two thirds of customers access the virtual retail shop via mobile devices such as smartphones in order to check product information and compare prices. All are greeted by Cami, the company's AI chatbot. Cami is designed to learn from the chats so that it can anticipate the needs of customers, match those needs with current prices and stocks, and thus answer all questions quickly and precisely. The e-commerce retailer is recording significant growth in sales as a result, and creating scope for its human sales employees, who can now offer valuable customer services in the time they have saved.
Communication is becoming increasingly natural thanks to Artificial Intelligence
Chatbots – software programs that can communicate with people via text or speech – are now enabling increasingly natural communication with users: they use Artificial Intelligence, especially natural-language-processing (NLP) technologies, to understand and generate speech. Natural-language understanding interprets what the user is saying and assigns it to a pattern stored in the computer. Natural-language generation creates a natural voice response that can be understood by the user.
If a chatbot is integrated fully automatically, it handles the interaction with the customer by itself. During the conversation, the bot selects the responses it gives to the customer, or it can decide to forward the dialogue to a human agent if it does not understand the question. A semi-automatic chatbot, in contrast, does not act fully autonomously but rather suggests answers to a human agent. The agent can then select the most suitable response from those offered, revise it, or take over the dialogue from that point. The advantage of this is that the chatbot can continue to learn from the interaction while already helping the agent to respond faster and more purposefully. At the same time, there is less of a risk that valuable customer contacts will be lost as a result of errors by the bot. “The evolution of chatbots is only just beginning,” says Wolfgang Reinhardt, CEO of optimise-it, one of the leading providers of live chat and messaging services in Europe. “In the coming months and years, they will become even more intelligent and sophisticated. What is important in this respect is that the bots are incorporated and networked into the communication structure of the company so as to offer the customer the best possible user and service experience.”
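The decision logic between fully automatic and semi-automatic operation can be captured in a few lines. The snippet below is a deliberately crude sketch – real chatbots use trained NLP models rather than string similarity, and the intents, answers and confidence threshold here are invented:

```python
from difflib import SequenceMatcher

# Invented intents and canned answers for the sketch
FAQ = {
    "opening hours": "We are open Monday to Saturday, 9 am to 8 pm.",
    "return policy": "You can return any item within 30 days.",
    "order status":  "Please enter your order number and I will check it.",
}

def best_matches(question, k=2):
    """Rank known intents by (crude) string similarity to the question."""
    scored = [(SequenceMatcher(None, question.lower(), intent).ratio(), intent)
              for intent in FAQ]
    return sorted(scored, reverse=True)[:k]

def handle(question, auto_threshold=0.6):
    """Fully automatic above the confidence threshold, semi-automatic below it."""
    matches = best_matches(question)
    confidence, intent = matches[0]
    if confidence >= auto_threshold:
        return "BOT: " + FAQ[intent]
    # Low confidence: don't guess -- suggest answers to a human agent instead
    suggested = ", ".join(intent for _, intent in matches)
    return f"HAND OVER TO AGENT (bot suggests: {suggested})"

print(handle("what are your opening hours"))
print(handle("my parcel seems to be lost, what now?"))
```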
AI chatbots in a new guise
Chatbots are also getting a new look for this purpose: as avatars, they emulate humans graphically too. Cologne-based company Charamel and the German Research Centre for Artificial Intelligence (DFKI) have been working in the area of virtual avatars for some time. Charamel's VuppetMaster avatar platform allows users to integrate interactive avatars into web applications and other platforms without having to install additional software. The aim is to develop a next generation of multimodal voice-response systems that can make more extensive use of facial expressions, gestures and body language on the output side in order to enable a natural conversation flow. Prof. Wolfgang Wahlster, Chairman of the Executive Board of the DFKI: “That will make chatbots credible virtual characters with their own personalities. This new form of digital communication will make customer dialogues and recommendation, advice and tutoring systems in the Internet of Services even more efficient and appealing. These personalised virtual service providers will make it even easier for people to use the world of smart services, turning it into a very personal experience.”
Face-to-face
“At the end of the day, as human beings, the most emotionally engaged conversations we have are face-to-face conversations,” reckons Greg Cross, Chief Business Officer at Soul Machines. “When we communicate face-to-face, we open up an entire series of new non-verbal communications channels.” Many bots are already very good at understanding what someone is saying. In future, however, it will be more important to also be able to evaluate how someone says something. The more than 20 facial expressions a person uses help in this respect. For example, winking after a sentence indicates that the statement should not be taken too seriously. Recognising this is something that machines and chatbots should now also be able to do. That this already works impressively is demonstrated by the life-like avatar from Soul Machines. Its computer engine uses neural networks to mimic the human nervous system. When the user starts the system, a camera begins to read and interpret their facial expressions. At the same time, a microphone records the request. The request is interpreted by IBM's Artificial Intelligence solution Watson, and a suitable response is given. In parallel, the Soul Machines engine recognises the emotions in the facial expression and tone of voice so that it can better understand how someone is interacting with the system.
Because people are interacting with more and more machines in their everyday lives, giving AI a human face is becoming increasingly important for Greg Cross: “We see the human face as being an absolutely critical part of the human-machine interaction in the future.”
Drones controlled by AI

Drones controlled by Artificial Intelligence can already deliver similar performance today to those controlled by humans. Even in an urban setting they are capable of navigating safely.
Congested streets, rising emission levels and the lack of parking all combine to make urban logistics an increasingly greater challenge. Powered by e-commerce, the package market is growing by seven to ten per cent annually in mature markets such as the United States or Germany. This will see the volume double in Germany by 2025, with around five billion packages mailed each year. “While deliveries to consumers previously made up about 40 per cent, more than half of all packages are now delivered to private households. Timely delivery is in ever greater demand,” says Jürgen Schröder, a McKinsey Senior Partner and expert in logistics and postal services. “New technologies like autonomous driving and drone delivery still need to be developed further. They present opportunities to reduce costs and simplify delivery. We expect that by 2025, it will be possible to deliver around 80 per cent of packages by automated means.”
Package-carrying drones, as first put forward by Amazon in 2013, were initially laughed off as a crazy idea. Today, a large number of companies are experimenting with delivery by drone. One example is Mercedes-Benz with its Vans & Drones concept, in which the drone does not deliver the package directly to the customer, but to a commercial vehicle instead. In the summer of 2017, the company carried out autonomous drone missions in an urban environment for the first time, in Zurich. In the course of the pilot project, customers could order selected products on the Swiss online marketplace siroop. These weighed a maximum of two kilograms and were suitable for transport by drone; the range of products included coffee and electronics. The customers received their goods the same day. The retailer loaded the drones immediately after receiving the order on its own premises. The drones then flew to one of two Mercedes vans used in the project, which featured an integrated drone landing platform. The vans stopped at one of four predetermined “rendezvous points” in the Zurich metropolitan area. At these points, the mail carriers received the products and delivered them to the customers, while the drone returned to the retailer. Overall, some 100 flights were successfully completed without any incidents across the urban area. “We believe that drone-based logistics networks will fundamentally change the way we access products on a daily basis,” says Andreas Raptopoulos, Founder and CEO of Matternet, the manufacturer of the drones used in the test.
Reliably dodging obstacles thanks to Artificial Intelligence
An essential element of such applications is a drone that can fly safely between buildings or in a dense street network, where cyclists and pedestrians can suddenly cross its path. Researchers at the University of Zurich and the NCCR Robotics research centre have developed an intelligent solution for this purpose. Instead of relying on sophisticated sensor systems, the drone developed by the Swiss researchers uses a standard smartphone camera and a very powerful AI algorithm called DroNet. “DroNet recognises static and dynamic obstacles and can slow down to avoid crashing into them. With this algorithm, we have taken a step forward towards integrating autonomously navigating drones into our everyday life,” explains Davide Scaramuzza, Professor for Robotics and Perception at the University of Zurich. For each input image, the algorithm generates two outputs: one for navigation, to fly around obstacles, and one for the likelihood of collisions, to detect dangerous situations and make it possible to respond. In order to gain enough data to train the algorithm, information was collected from cars and bicycles travelling in urban environments in accordance with the traffic rules. By imitating them, the drone automatically learned to respect the safety rules, for example how to follow the street without crossing into the oncoming lane, or how to stop when obstacles like pedestrians, construction works or other vehicles block the way. Having been trained in this way, the drone is not only capable of navigating roads, but also of finding its way around in completely different environments from those it was trained for – such as multi-storey car parks or office corridors.
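The two-headed structure of DroNet – one shared network producing both a steering command and a collision probability – can be sketched as follows. The weights and sizes below are random placeholders rather than the trained network, which in reality is a convolutional net operating on camera frames; only the shape of the forward pass and the speed modulation reflect the published idea.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for DroNet's trained convolutional layers: one shared feature
# extractor feeding two separate output heads (weights here are random).
W_shared = rng.normal(0, 0.1, (64, 32))
w_steer  = rng.normal(0, 0.1, 32)   # head 1: steering angle (regression)
w_coll   = rng.normal(0, 0.1, 32)   # head 2: collision probability

def dronet_step(frame_features):
    """One forward pass: a single camera frame in, two decisions out."""
    h = np.maximum(0, frame_features @ W_shared)   # shared ReLU features
    steering = np.tanh(h @ w_steer)                # in [-1, 1]
    p_collision = 1 / (1 + np.exp(-(h @ w_coll)))  # in (0, 1)
    return steering, p_collision

def control(frame_features, max_speed=8.0):
    """Slow down as the collision probability rises; stop when it is high."""
    steering, p_coll = dronet_step(frame_features)
    speed = 0.0 if p_coll > 0.9 else max_speed * (1 - p_coll)
    return steering, speed

frame = rng.normal(size=64)    # stand-in for the features of one camera frame
print(control(frame))
```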
Drones controlled by Artificial Intelligence are winning the race
Just how sophisticated drones controlled by Artificial Intelligence are today was demonstrated in a race organised by NASA's Jet Propulsion Laboratory (JPL), when world-class drone pilot Ken Loo took on Artificial Intelligence in a timed trial. “We pitted our algorithms against a human, who flies a lot more by feel,” said Rob Reid of JPL, the project's task manager. Compared to Loo, the drones flew more cautiously but consistently. The drones needed around 3 seconds longer for the course, but kept their lap times constant at speeds of up to 64 kilometres per hour, while the human pilot's times varied greatly and he was already exhausted after a few laps.
Neural networks simulating the brain

Machines are being made more intelligent based on a variety of data-analysis methods. The focus of these efforts is shifting increasingly from mere performance capability towards creating the kind of flexibility that the human brain achieves. Artificial neural networks are playing a big role.
Not all forms of Artificial Intelligence are the same – there are different approaches to how the systems map their knowledge. A distinction is made primarily between two methods: neural networks and symbolic AI.
Knowledge represented by symbols
Conventional AI is mainly about logical analysis and planning of tasks. Symbolic, or rule-based, AI is the original method, developed back in the 1950s. It attempts to simulate human intelligence by processing abstract symbols and with the aid of formal logic. This means that facts, events or actions are represented by concrete and unambiguous symbols. Based on these symbols, mathematical operations can be defined, such as the programming paradigm “if X, then Y, otherwise Z”. The knowledge – that is to say, the sum of all symbols – is stored in large databases against which the inputs are cross-checked. The databases must be “fed” in advance by humans. Classic applications of symbolic AI include text processing and voice recognition. Probably the most famous example of symbolic AI is Deep Blue, IBM's chess computer, which beat then world champion Garry Kasparov in 1997.
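The “if X, then Y, otherwise Z” paradigm is easy to make concrete. The following toy rule base is purely illustrative: all of its “knowledge” sits in explicit, human-authored conditions, which is exactly what distinguishes symbolic AI from the learned weights of a neural network.

```python
# A miniature rule-based (symbolic) system: its entire "knowledge" consists
# of explicit, human-authored rules -- nothing is learned from data.
RULES = [
    (lambda f: f["temperature"] > 90, "shut down the machine"),
    (lambda f: f["pressure"] > 8,     "open the relief valve"),
    (lambda f: True,                  "continue normal operation"),  # the "otherwise Z"
]

def decide(facts):
    """'If X, then Y, otherwise Z': the first matching rule wins."""
    for condition, action in RULES:
        if condition(facts):
            return action

print(decide({"temperature": 95, "pressure": 5}))   # shut down the machine
print(decide({"temperature": 60, "pressure": 5}))   # continue normal operation
```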
As computer performance increases steadily, symbolic AI is able to solve ever more complex problems. It works on the basis of fixed rules, however. For a machine to operate beyond tightly constrained bounds, it needs much more flexible AI capable of handling uncertainty and processing new experiences.
Advancing knowledge autonomously through neurons
That flexibility is offered by artificial neural networks, which are currently the focus of research activity. They simulate the functionality of the human brain. As in nature, artificial neural networks are made up of nodes, known as neurons or units. They receive information from their environment or from other neurons and relay it in modified form to other units or back to the environment (as output). There are three different kinds of unit:
Input units receive various kinds of information from the outside world, for example measurement data or image information. The data, such as a photo of an animal, is analysed across multiple layers by hidden units. At the end of the process, output units present the result to the outside world: “The photo shows a dog.” The analysis is based on the edges by which the individual neurons are interconnected. The strength of the connection between two neurons is expressed by a weight. The greater the weight, the more one unit influences another. The knowledge of a neural network is thus stored in its weights. Learning normally occurs through a change in weights; how and when a weight changes is defined in learning rules. So before a neural network can be used in practice, it must first be trained using those learning rules. Neural networks are then able to apply their learning algorithm to learn independently and grow autonomously. That is what makes neural AI a highly dynamic, adaptable system capable of mastering challenges at which symbolic AI fails.
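Units, weights and a learning rule are all it takes to build a working miniature of such a network. The sketch below – a tiny two-layer network trained by plain gradient descent, the most common learning rule – learns the XOR function, a classic task that no single-layer mapping can solve; the layer sizes, learning rate and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny network: 2 input units, 3 hidden units, 1 output unit.
# All of its "knowledge" lives in these two weight matrices.
W1 = rng.normal(0, 1, (2, 3))
W2 = rng.normal(0, 1, (3, 1))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR: needs the hidden layer

# Learning rule: plain gradient descent -- each weight is nudged in
# proportion to its contribution to the output error.
for step in range(20000):              # may need more, depending on the random start
    hidden = sigmoid(X @ W1)           # hidden units
    output = sigmoid(hidden @ W2)      # output unit
    delta_out = (output - y) * output * (1 - output)
    delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ delta_out         # the weight changes ARE the learning
    W1 -= X.T @ delta_hid

print(np.round(sigmoid(sigmoid(X @ W1) @ W2).ravel(), 2))   # approaches [0, 1, 1, 0]
```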
Cognitive processes as the basis of a new AI
Another new form of Artificial Intelligence has been developed by computer scientists at the University of Tübingen: their “Brain Control” computer program simulates a 2D world and virtual figures – or agents – that act, cooperate and learn autonomously within it. The aim of the simulation is to translate state-of-the-art cognitive-science theories into a model and research new variants of AI. Brain Control has not made use of neural networks to date, but nor does it adhere to the conventional AI paradigm. The core theoretical idea underlying the program originates from a cognitive-psychology theory, according to which cognitive processes are essentially predictive and based on so-called events. According to the theory, events – such as a movement to grip a pen – and sequences of events – such as packing up at the end of the working day – form the building blocks of cognition, by which interactions, and sequences of interactions, with the world are selected and controlled in a goal-oriented way. This hypothesis is mirrored by Brain Control: the figures plan and decide by simulating events and their sequencing, and are thus able to carry out quite complex sequences of actions. In this way, the virtual figures can even act collaboratively: first, one figure places another figure on a platform so that the second figure can clear the way, then both of them are able to advance. The modelling of cognitive systems such as in Brain Control is still an ambitious undertaking, but its aim is to deliver improved AI over the long term.
The challenges of powering Artificial Intelligence

Making a smart, connected world possible depends on energy-efficient data centres.
The invention of computers changed the world due to their ability to retain and share information, but up until recently, they lacked the capability to emulate a human brain and autonomously learn in order to perform tasks or make decisions.
To come close to the processing power of a human brain, an AI system must perform around 40 thousand trillion operations per second (or 40 PetaFLOPS). A typical server farm with this level of AI computing power would consume nearly 6 MW of power, whereas the human brain by comparison requires the calorific equivalent of only 20 W of power to perform the same tasks. Some of AI’s most advanced learning systems are currently consuming power at up to 15 MW – levels that would power a small European town of around 1500 homes for an entire day.
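The efficiency gap quoted above is worth making explicit. Using only the figures from the text – 40 PetaFLOPS, a 6 MW server farm and a 20 W brain – a few lines of arithmetic show a gap of around five orders of magnitude:

```python
# All figures taken from the text above: ~40 PetaFLOPS of AI computing,
# a ~6 MW server farm, and a ~20 W human brain.
brain_ops   = 40e15     # operations per second (40 PetaFLOPS)
brain_power = 20        # watts
farm_power  = 6e6       # watts

brain_eff = brain_ops / brain_power   # ops per second per watt
farm_eff  = brain_ops / farm_power

print(f"brain: {brain_eff:.1e} ops/s per W")    # 2.0e+15
print(f"farm:  {farm_eff:.1e} ops/s per W")     # 6.7e+09
print(f"the brain is ~{brain_eff / farm_eff:,.0f} times more efficient")  # ~300,000
```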
AI's neural networks learn through exposure to large numbers of differentiated examples, similar to human learning. Typically, thousands of images are processed through Graphics Processing Units (GPUs) set up in parallel in order for the network to compare and learn as quickly as possible.
AI computing is also dependent on so-called edge devices, including cameras, sensors, data collectors and actuators, to receive input information and output movement or actions in the physical world. Consumer and manufacturing trends such as the Internet of Things (IoT) have also led to the proliferation of AI-enabled devices in homes and factories, thereby also requiring increased data and energy consumption.
Delivering and managing megawatts of power is made all the more challenging by the pressure of rising energy prices. Additionally, every watt of energy dissipated in the data centres requires more cooling, increasing energy costs further.
Miniaturisation is central to improving processing power, but smaller sizes with increased power density reduce the surface area available for dissipating heat. Thermal management is therefore one of the most significant challenges in designing power for this new generation of AI supercomputers.
Reducing CO2 emissions
Estimates suggest that there will be over 50 billion cloud-connected sensors and IoT devices by 2020. The combined effect that these devices and the data centres powering Artificial Intelligence will have on global power consumption and global warming points to the need for collective action to make power supplies for server racks, edge devices and IoT devices much more energy-efficient.
In addition to investments in renewable energy production and attempts to move away from the use of petrol and diesel vehicles, European countries will need to place significant focus on energy efficiency in their efforts to cut carbon emissions. The European Commission implemented the Code of Conduct for Energy Efficiency in Data Centres in 2008 as a voluntary initiative to help all stakeholders improve energy efficiency, but data centres are still on course to consume as much as 104 TWh in Europe alone by 2020 – almost double the 56 TWh consumed in 2007.
According to a 2017 study on data centre energy consumption, the Information and Communication Technology (ICT) sector generates up to 2 per cent of the world’s total carbon dioxide emissions – a percentage on par with global emissions from the aviation sector. Data centres make up 14 per cent of this ICT footprint.
However, another report states that ICT-enabled solutions such as energy-efficient technologies could reduce the EU's total carbon emissions by over 1.5 gigatonnes (Gt) of CO2e (carbon dioxide equivalent) by 2030. This would be a vast saving, almost equivalent to 37 per cent of the EU's total carbon emissions in 2012.
Analogue vs. digital controllers
AI will no doubt have a significant impact on human society in the future. However, the repetitive algorithms of AI require a significant change to computing architectures and the processors themselves. As a result, powering these new AI systems will remain a persistent challenge.
Clearly, power solution sophistication must increase and, as a result, power management products have now emerged with advanced digital control techniques, replacing legacy analogue-based solutions.
Digital control has been shown to increase overall system flexibility and adaptability when designing high-end power solutions. A digital approach allows controllers to be customised without costly and time-consuming silicon spins and simplifies designing and building the scalable power solutions required for AI. Even with all of the included functionality and precision delivery of power, digital solutions are now price-competitive with the analogue solutions they will replace.
Making the power solutions for AI applications of the future as efficient as possible is a relatively easy and attainable way in which the ICT sector can contribute to reducing global carbon emissions.
Clayton Cornell, Technical Editor, Infineon Technologies AG
Better speech recognition thanks to Deep Learning

Digital assistants are becoming increasingly sophisticated at recognising speech thanks to deep-learning methods. And owing to their AI ability, they are even capable of predicting what their users want.
“Tea, Earl Grey, hot” – every Star Trek fan is familiar with the words Captain Picard uses to order his favourite drink from the replicator. Using speech to control computers and spaceships is an unwavering element of most science-fiction films. Attempts have been made for many years to control machines through speech: the first speech-recognition software for computers was presented to the public by IBM in 1984. Some ten years later, it was developed for the PC and thus for the mass market. Microsoft first used speech recognition in an operating system in 2007, with Windows Vista.
Apple was responsible for the breakthrough on the mass market in 2011, when it launched its speech-recognition software assistant Siri for the iPhone 4s. Siri now shares the market with a number of similar solutions: Amazon's Alexa, Cortana from Microsoft and Google's Assistant. Common to all these systems is that the speech input is not processed locally on the mobile device, but on servers at the company: the voice message is sent to a data centre and converted there from spoken to written language. This allows the actual assistant system to recognise commands and questions and respond accordingly. An answer is generated and sent back to the mobile device – sometimes as a data record and sometimes as a finished sound file. Since fast mobile Internet connections are needed for this purpose, speech recognition is benefiting from the current trends towards cloud computing and ever-faster mobile networks.
The error rate of speech-recognition systems has decreased significantly from 27% in 1997 to only about 6% in 2016!
Enhanced-quality speech recognition thanks to Deep Learning and Artificial Intelligence
Speech-recognition systems have benefited primarily from Artificial Intelligence in recent times. Self-learning algorithms ensure that machine understanding of speech is improving all the time: according to a 2017 study by McKinsey, the error rate of computer-based speech recognition fell from 27 per cent in 1997 to 6 per cent in 2016. Thanks to deep learning, the systems are getting increasingly better at recognising and learning the speaking patterns, dialects and accents of users.
Nuance – whose voice technology is incidentally behind Apple's Siri – was also able to increase the precision of its Dragon speech-recognition solution, launched in 2017, by up to 10 per cent in comparison with the predecessor version. The software consistently uses deep learning and neural networks: on the one hand at the level of the language model, where the frequency of words and their typical combinations are recorded; on the other hand at the level of the acoustic model, where the phonemes – the smallest spoken units of speech – are modelled. “Deep-learning methods normally require access to a comprehensive range of data and complex hardware in the data centre in order to train the neural networks,” explains Nils Lenke, Senior Director Corporate Research at Nuance Communications. “At Nuance, however, we managed to bring this training directly to the Mac. Dragon uses the specific speech data of the user and is therefore continuously learning. This allows us to increase the precision significantly.”
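What “recording the frequency of words and their typical combinations” means can be illustrated with a minimal bigram model. This is a generic textbook sketch, not Nuance's language model – the corpus and sentences are invented – but it shows how a language model can rank two acoustically similar transcriptions:

```python
from collections import Counter, defaultdict

# The "language model level": count words and their typical combinations
corpus = ("please send the report please send the invoice "
          "please check the report").split()

bigrams = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1

def score(sentence):
    """Probability of a word sequence under the bigram model."""
    words, p = sentence.split(), 1.0
    for prev, word in zip(words, words[1:]):
        total = sum(bigrams[prev].values())
        p *= bigrams[prev][word] / total if total else 0.0
    return p

# The acoustic model might find both transcriptions equally plausible;
# the language model prefers the combination it has seen before.
print(score("please send the report"))   # ~0.44
print(score("please sent the report"))   # 0.0 (unseen combination)
```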
Predictive assistants
AI not only improves speech recognition, however, but also the quality of the services offered by digital assistants such as Alexa, Siri and others. The reason is that, thanks to their ability to learn, the systems can deal with topics predictively and make recommendations. Microsoft's Cortana uses a notebook for this purpose – like a human assistant – in which it notes down the interests and preferences of the user, frequently visited locations, or rest periods when the user prefers not to be disturbed. For example, if the user asks about weather and traffic conditions every day before leaving for work, the system can offer the information unprompted after several iterations, without the user needing to ask actively.
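The notebook mechanism can be reduced to a simple frequency heuristic, sketched hypothetically below – Cortana's actual implementation is not public, and the three-repetition rule and hour-of-day matching are invented for the illustration:

```python
from collections import Counter
from datetime import datetime

class ProactiveAssistant:
    """Notebook-style assistant: once a query has recurred at the same time
    of day often enough, offer the answer before being asked."""

    def __init__(self, min_repeats=3):
        self.notebook = Counter()
        self.min_repeats = min_repeats

    def observe(self, query, when):
        self.notebook[(query, when.hour)] += 1

    def suggestions(self, now):
        """What the user probably wants right now, unprompted."""
        return [query for (query, hour), n in self.notebook.items()
                if hour == now.hour and n >= self.min_repeats]

assistant = ProactiveAssistant()
for day in range(1, 5):                                   # four mornings in a row
    assistant.observe("weather and traffic", datetime(2018, 3, day, 7, 30))

print(assistant.suggestions(datetime(2018, 3, 5, 7, 45)))
# ['weather and traffic'] -- offered without the user asking
```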
Voice control of IoT devices
Digital assistants become especially exciting when they are networked with the Internet of Things, since they can then be used to control a whole host of different electronic devices. According to market researchers at IHS Markit, digital assistants will be supported by more than 5 billion consumer devices as early as 2018, with a further 3 billion to be added by 2021. Even today, for example, the smart home can be operated by voice commands using digital assistants.
In the US, Ford has also been integrating the Alexa voice assistant into its vehicles since the start of 2017 – thus incorporating the Amazon App into the car for the first time. Drivers can therefore enjoy audio books at the wheel, shop in the Amazon universe, search for local destinations, transfer these directly to the navigation system, and much more. “Ford and Amazon share the vision that everyone should be able to access and operate their favourite mobile devices and services using their own voice,” explains Don Butler, Executive Director of Ford Connected Vehicle and Services. “Soon our customers will be able to start their cars from home and operate their connected homes when on the go – we are thus making their lives easier step by step.”
And something else that is sure to please Star Trek fans: thanks to Alexa, you can now also order your hot drink with a voice command. Coffee supplier Tchibo has launched a capsule machine onto the market, for example, which can be connected to Alexa via WLAN. As a result you can order your morning coffee from the comfort of your bed: “Coffee, pronto!”
Artificial Intelligence is moving into smartphones and wearables

Thanks to new developments in chip technology, even small wearables such as fitness bracelets have AI on board. The latest top-of-the-range smartphones are already learning to understand their users better through neural networks, and are delivering significantly higher performance.
Mobile devices such as smartphones and wearables are becoming ever more important in people's everyday lives. “Smartphones have fundamentally changed our lives over the last 10 years. They have become the universal tool for accessing communications, content and services,” says Martin Börner, Deputy President of the industry association Bitkom.
Mobile devices are gaining AI capabilities
Now mobile devices are coming onto the market with Artificial Intelligence capable of analysing the recorded data even better, and providing users with more closely targeted recommendations to enhance their health or fitness. The trend is towards edge computing. In this, the data remains in the device, and is not – or is only in part – uploaded to the cloud for analysis. That provides a number of benefits: firstly, it reduces the load on cloud computing systems and transfer media. Secondly, latency is reduced; users receive their analysis results faster. And thirdly – a key factor in medical applications especially – personal data is kept secure on the mobile device. “AI used to rely on powerful cloud computing capabilities for data analysis and algorithms, but with the advancement of chips and the development of edge computing platforms, field devices and gateways have been entitled basic AI abilities, which allow them to assist in the initial data screening and analysis, immediate response to requirements, etc.,” states Jimmy Liu, an analyst with Trendforce.
More efficiency, performance and speed
For Huawei, too, this on-device AI is a response to existing AI issues such as latency, stability and data protection. In late 2017, the company launched two smartphone models – the Mate 10 and Mate 10 Pro – that it claims are the first in the world to feature an artificially intelligent chipset with a dedicated Neural Processing Unit (NPU). This enables the phones to learn the habits of their users. The mobile AI computing platform identifies the phone's most efficient operating mode, optimises its performance, and generally delivers improved efficiency and performance at faster speeds. But the main way in which Huawei is utilising AI is in real-time scene and object recognition, enabling users to shoot perfect photos.
Facial recognition on a smartphone
Apple has also fitted out its new iPhone X with a special chip for on-device AI. The neural architecture of the A11 Bionic chip features a dual-core design and executes up to 600 billion operations per second for real-time processing. The A11 neural architecture was designed for special machine-learning algorithms, and enables Face ID, Animoji and other functions. This makes it possible, for example, to unlock the phone by facial recognition. The feature, named Face ID, projects more than 30,000 invisible infrared dots onto the user's face. The infrared image and the dot pattern are pushed through neural networks in order to create a mathematical model of the user's face before the data is sent to the Secure Enclave to confirm a match, while machine learning is applied to track physical changes in the person's appearance over time. All the stored facial data is protected by the Secure Enclave to an ultra-high security level, and the entire processing is carried out on the device rather than in the cloud in order to preserve users' privacy. Face ID only unlocks the iPhone X when the user looks at it, with highly trained neural networks preventing any manipulation using photographs or masks.
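The matching step – turn a capture into a compact mathematical model, then compare it against the enrolled one – can be sketched abstractly. The code below is in no way Apple's algorithm: the "embedding" is a stand-in for the real neural network, and the cosine-similarity threshold is invented. It only illustrates why a stored template plus a similarity test is all the unlock decision needs.

```python
import numpy as np

THRESHOLD = 0.8   # minimum cosine similarity to count as a match (invented)

def embed(depth_map):
    """Stand-in for the neural network that turns the infrared dot pattern
    into a compact mathematical model (an embedding) of the face."""
    v = np.asarray(depth_map, dtype=float).ravel()
    return v / np.linalg.norm(v)

def unlock(depth_map, enrolled):
    """Compare a new capture against the enrolled model. On the real device
    this comparison happens inside the Secure Enclave."""
    return float(embed(depth_map) @ enrolled) >= THRESHOLD

rng = np.random.default_rng(7)
owner = rng.normal(size=(30, 30))          # stand-in for the ~30,000 IR dots
enrolled = embed(owner)

print(unlock(owner + rng.normal(0, 0.1, (30, 30)), enrolled))   # same face: True
print(unlock(rng.normal(size=(30, 30)), enrolled))              # other face: False
```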
The Right Camera Mode Every Time
“The smartphone market has evolved significantly over the past decade,” stresses Hwang Jeong-Hwan, the President of LG Mobile Communications Company: “LG customers expect our phones to excel in four core technologies – audio, battery, camera and display.” As a result, LG has also started to develop specialised and intuitive AI-based solutions for the features most commonly used on smartphones. The first result is the LG V30S ThinQ smartphone with integrated Artificial Intelligence. The device's AI camera analyses subjects in the picture and recommends the ideal shooting mode – depending, for instance, on whether it is a portrait, food, a pet, or a landscape. Each mode helps to improve the subject's special characteristics, taking account of factors like the viewing angle, colour, reflections, lighting, and degree of saturation. The Voice AI allows users to run applications and customise settings by simply using voice commands. Combined with Google Assistant, searching via menu options becomes superfluous and certain functions can be selected directly. But LG wants to go further than simply equipping new smartphone models with AI. Depending on the hardware and other factors, LG is due to give some smartphones important AI functions via over-the-air updates in the future.
Every third wearable with AI
It is expected that AI wearables will give the stagnant wearables sector a much-needed boost. One in three wearables in 2017 operated with AI, according to market analysts at Counterpoint. According to Research Associate Parv Sharma: “Wearables haven't seen the expected momentum so far because they have struggled on the lines of a stronger human computer interaction. However, the integration of Artificial Intelligence into the wearables will change how we interact with or use wearables. AI will not only enhance the user experience to drive higher usage of wearables, but will also make wearables smarter and intelligent to help us achieve more.” The analysts expect particularly high growth in the hearables category – with devices such as the Apple AirPod or innovative products from less well-known brands like the Dash made by Bragi.
Wearables getting to know their users
Other wearables use AI, too. Machine learning offers far greater predictive potential in monitoring vital health signs. A company called Supa, for example, has developed clothing with integrated sensors. They capture a wide range of biometric data in the background, and provide personalised information on the user’s environment. AI enables Supa clothing to continually learn more about the user and so, for example, better understand their behaviour when exercising. Supa Founder and CEO Sabine Seymour claims that in 20 or 30 years, wearables of such a kind will be able to explain why the user has contracted cancer, for example – whether as a result of a genetic defect, due to environmental causes, or because of nutritional habits.
PIQ likewise combines its sports assistant Gaia with AI. It intelligently captures and analyses movements using specific motion-capture algorithms. Thanks to AI, Gaia detects its users’ movements ever more accurately, enabling it to provide personalised advice in order to optimise their training.
More Safety on the Bus
Intelligent wearable devices not only come in handy during sport and exercise, but also in more serious applications. For instance, NEC and Odakyu City Bus are partnering to test a wearable that collects biological information from drivers. The aim is to improve safety in the operation of the bus. In the pilot project, a wristband measures vital signs such as the pulse, temperature, moisture and body movements while the bus is being driven. The data is then sent off for analysis via a smartphone to an IoT platform, which is based on NEC’s latest Artificial Intelligence technologies. This is intended to visualise, monitor, and evaluate a wide range of health factors – for example, the driver’s levels of fatigue or changes in their physical condition which they may not be able to detect on their own.
Ayata Intelligence has developed an exciting solution: its Vishruti wearable smart eyewear helps people with visual impairments to find their way around their environment. To do so, it is fitted with a camera and a special, energy-efficient chip providing image recognition and deep-learning processes. This enables it to recognise objects and people's faces. The system features a voice-guidance feature, telling the user when a car is approaching, for example, where a door is, or the name of the person in front of them.
Developments of this kind deliver the prospect that, in the years ahead, smartphones and wearables will continue to have an increasing influence on our lives, becoming guides and advisors in many different ways.
Retail: “Anyone who fails to adopt AI will die!”

Applications of Artificial Intelligence are not just to be found in e-commerce. In high-street shops, too, self-learning algorithms are helping to balance supply and demand more closely, and to understand customers better.
The retail sector involves a complex interaction between customers, manufacturers, logistics service providers and online platforms. To gain a competitive edge, retailers need to gauge their customers’ needs optimally, and fulfil them as efficiently and closely as possible. That means retailers have to make the right choices to find the ideal mix of partners. Self-learning algorithms and AI are opening up new dimensions in process optimisation, personalisation and decision-making accuracy.
Artificial Intelligence enables retailers to respond better to their customers’ needs and, for example, optimise their ordering and delivery processes.
Artificial Intelligence is a question of necessity
Prof. Dr Michael Feindt, founder of Blue Yonder: “Anyone who fails to adopt AI will die! But those who open themselves up to the new technology and make smart use of it will have every chance of achieving sustained success in the retail sector. For retailers, digital transformation through AI is not a question of choice, but of necessity. Only those who change and adopt the new AI technologies will survive.” One way that Blue Yonder is responding to that necessity is with a machine learning solution which optimises sales throughout the season based on automated pricing and discounting. The system measures the correlation between price changes and demand trends at each physical outlet and through each channel. Based on the results, the solution automatically sets prices to increase turnover or profit throughout the selling cycle, including the application of discounted pricing and running sale campaigns as appropriate. It analyses both historical and current sales and product master data, and enables hundreds of prices to be validated and optimised each day. Using such systems, retailers can meet consumers’ rising expectations and maximise their profits at the same time. According to Blue Yonder, this means profit can be improved by 6 per cent, sales turnover increased by 15 per cent, and stocks cut by 15 per cent.
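The core of such automated pricing can be illustrated with a minimal model. The sketch below is not Blue Yonder's system – the sales history, the constant-elasticity demand curve and the unit cost are all invented assumptions – but it shows the mechanism: estimate how demand responds to price from historical data, then search for the price that maximises expected profit.

```python
import numpy as np

# Invented sales history for one article at one outlet: price vs units sold
prices = np.array([9.99, 11.99, 13.99, 15.99, 17.99])
sales  = np.array([520, 330, 230, 165, 120])

# Fit a constant-elasticity demand curve: log(sales) = intercept + e * log(price)
elasticity, intercept = np.polyfit(np.log(prices), np.log(sales), 1)

def expected_profit(price, unit_cost=8.0):
    demand = np.exp(intercept) * price ** elasticity
    return (price - unit_cost) * demand

# Search a plausible price range for the profit-maximising point
candidates = np.linspace(9.0, 20.0, 111)
best = candidates[np.argmax([expected_profit(p) for p in candidates])]
print(f"estimated elasticity {elasticity:.2f}, suggested price {best:.2f}")
```

A production system would validate hundreds of such prices per day against live sales data, as described above, rather than fitting a single static curve.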
Optimising processes in retail with AI
“Artificial Intelligence enables retailers to respond better to their customers’ needs and, for example, optimise their ordering and delivery processes,” says Stephan Tromp, Chief Executive Director of HDE, the German Retail Association. For example, retailers can use their suppliers’ data to measure performance and optimise processes. Combined with the data from their outlets and warehouses, they can also balance supply and demand more closely. For instance, intelligent forecasting systems learn from past orders, create buyer groups, and analyse seasonal effects. From their findings, they can forecast product sales volumes and ideally know before the consumer what he or she is going to order next. This means retailers can tailor their websites to the relevant product groups, trigger purchasing, top up stocks accordingly, and ultimately cut shipping lead times. As a result, bottlenecks in the supply of specific products can be predicted, and retailers can quickly identify which supplier is currently able to deliver top-up stocks of the required merchandise most quickly.
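Such a forecasting system can be caricatured in a few lines. The sketch below uses an invented order history and a deliberately simple model – average demand per weekday – whereas production systems additionally model buyer groups, promotions and many more seasonal effects:

```python
import numpy as np

# Invented order history: units sold per day over six weeks, with a
# recurring Mon..Sun pattern (weekend peak) plus noise.
rng = np.random.default_rng(5)
weekly_pattern = np.array([80, 85, 90, 95, 120, 200, 150])
history = np.tile(weekly_pattern, 6) + rng.normal(0, 10, 42)

def forecast_next_week(history, season=7):
    """Seasonal average: expected demand for each weekday next week."""
    by_weekday = history.reshape(-1, season)     # one row per past week
    return by_weekday.mean(axis=0)

def reorder_quantity(history, stock_on_hand, safety_factor=1.2):
    """Top up stock so that forecast demand plus a safety margin is covered."""
    need = forecast_next_week(history).sum() * safety_factor
    return max(0.0, need - stock_on_hand)

print(np.round(forecast_next_week(history)))
print("units to reorder:", round(reorder_quantity(history, stock_on_hand=500)))
```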
Keeping track of customers’ movements
AI not only has applications in retailers’ back-office operations, however. In the physical shops, too, deep-learning functions are helping to gauge customers’ behaviour. A company called Retailnext, for example, has launched an all-in-one IoT sensor which monitors customers’ movements when in the outlet: collecting their goods, trying on clothing, and walking around the shop. All those movements are monitored by a camera, and analysed directly in the unit with the aid of deep-learning functions. The data is then uploaded to the cloud in real time, so companies can gather valuable information on all the branches in their chain. “It’s precisely those projects that enable retailers to develop a deeper understanding of in-store shopping behaviours and allow them to produce differentiated in-store shopping experiences,” asserts Arun Nair, Co-Founder and Technical Director of Retailnext. “The more retailers know about what’s happening in store, the better.”
AI – the Learning Robot

Artificial intelligence enables robots to perform tasks autonomously and find their way around unfamiliar environments. Ever more powerful algorithms and ultra-high-performance microprocessors are allowing machines to learn faster and faster.
The term artificial intelligence (AI) has been in existence for over 60 years. During that time, research has been conducted into systems and methodologies capable of simulating the mechanisms of intelligent human behaviour. It might sound quite simple, but it has to date posed major challenges to scientists, because many tasks that most people would not even associate with “intelligence” have in the past caused computers serious problems: understanding human speech, identifying objects in pictures, or manoeuvring a robotic vehicle around unfamiliar terrain. Recently, however, artificial intelligence has been making giant strides, and is increasingly becoming a driver of economic growth. All major technology companies – all the key players in Silicon Valley – have AI departments. “Advances in artificial intelligence will allow robots to watch, learn and improve their capabilities,” said Kiyonori Inaba, Board Member, Executive Managing Officer and General Manager of Fanuc.
Simulating the human brain
Findings from brain research, in particular, have driven advances in artificial intelligence. Software algorithms and micro-electronics combine to create neural networks modelled on the human brain. Depending on what information such a network captures, and how it evaluates it, a quite specific “information architecture” is created: the “memory”. The neural network is subject to continuous change as it is expanded or remodelled by new information. The technological foundations for state-of-the-art neural networks were laid back in the 1980s, but it is only now that computers exist that are powerful enough to simulate networks with many “hidden layers”.
Becoming ever better by learning
“Deep Learning” is the modern-day term used to describe this information architecture. The concept involves software systems which are capable of reprogramming themselves based on experimentation, with the behaviours that most reliably lead to a desired result ultimately emerging as the “winners”. Many well-known applications, such as the Siri and Cortana voice recognition systems, are essentially based on Deep Learning software. “Deep Learning will greatly reduce the time-consuming programming of robot behaviour,” asserts Kiyonori Inaba. His company Fanuc has integrated AI into its “Intelligent Edge Link and Drive” platform for fog computing (also referred to as edge computing). The integrated AI enables connected robots to “teach” each other, so as to perform their tasks more quickly and efficiently: whereas one robot would otherwise take eight hours to acquire the necessary “knowledge”, eight robots take just one hour.
New algorithms for faster learning success
Ever-improving algorithms are continually enhancing the ability of machines to learn. As one example, the Mitsubishi Electric Corporation recently launched a quick-training algorithm for Deep Learning, incorporating so-called inference functions which are required in order to identify, recognise and predict unknown facts based on known facts. The new algorithm is designed to aid the implementation of Deep Learning in vehicles, industrial robots and other machines by dramatically reducing the memory usage and computing time taken up by training. The algorithm shortens training times and cuts computing costs and memory requirements to around a thirtieth of those for conventional AI systems.
Special chips for Deep Learning
To obtain the extremely high computing power required to create a Deep Learning system, current solutions mostly involve so-called GPU computing, in which the computing power of a graphics processing unit (GPU) and the CPU are combined. CPUs are specially designed for serial processing. By contrast, GPUs have thousands of smaller, more efficient processor units for parallel data processing. Consequently, GPU computing enables serial code segments to run on the CPU while parallel segments – such as the training of deep neural networks – are processed on the GPU. The result is a dramatic improvement in performance. But the development of Deep Learning processors is by no means at an end: the “Eyeriss” processor developed at the Massachusetts Institute of Technology (MIT), for example, surpasses the performance capability of GPUs by a factor of ten. Whereas large numbers of cores in a GPU share a single large memory bank, Eyeriss features a dedicated memory for each core. Each core is capable of communicating with its immediate neighbours, so data does not always have to be routed through the main memory and the system works much faster. Vivienne Sze, one of the researchers on the Eyeriss project, comments: “Deep Learning is useful for many applications, such as object recognition, speech or facial recognition.”
Artificial Intelligence in autonomous vehicles

Artificial intelligence is a crucial technology for autonomous vehicles. Adaptive control systems make it possible to process the immense data sets delivered by the sensors monitoring the vehicle's surroundings, and then to work out which actions should be taken.
For a vehicle to drive autonomously, it is not enough to simply equip it with a large number of sensors for detecting the immediate surroundings. It must also be able to handle the huge volumes of data, and to do so in real time. This overburdens conventional computer systems. The solution comes from electronics and software that provide the means for imitating the functions of the human brain. Artificial intelligence (AI), cognitive computing and machine learning are terms used to describe different aspects of these types of modern computer systems. “In essence, it is all about emulating, supporting and expanding human perception, intelligence and thinking using computers and special software,” says Dr Mathias Weber, IT Services Section Head at the German digital industry association Bitkom.
Growing Demand
Nowadays, artificial intelligence is used as standard; for instance, it is embedded in digital assistants like Siri, Cortana and Echo. The basic assumption of AI is that human intelligence results from a variety of calculations. This allows artificial intelligence itself to be created by different means. There are now systems whose main purpose is to detect patterns and take appropriate actions accordingly. In addition, there are variants known as knowledge-based AI systems. These attempt to solve problems using the knowledge stored in a database. In turn, other systems use methods derived from probability theory to respond appropriately to given patterns. “An artificial-intelligence system continuously learns from experience and by its ability to discern and recognise its surroundings,” says Luca De Ambroggi, Principal Automotive and Semiconductor Analyst at IHS Technology. “It learns, as human beings do, from real sounds, images, and other sensory inputs. The system recognises the car’s environment and evaluates the contextual implications for the moving car.” In terms of AI systems built into infotainment and driver assistance systems alone, IHS expects sales to increase to 122 million units by 2025. By comparison, the 2015 figure was only 7 million.
New Processors for Artificial Intelligence
The roll-out of artificial intelligence also has direct impacts on processor technology: conventional computational cores, CPUs, are being replaced with new architectures. Graphics processing units (GPUs) have thus been viewed as a crucial technology for AI for several years. CPU architectures perform tasks as a consecutive series, whereas GPUs – with their numerous small and efficient computer units – process tasks in parallel, making them much faster where large volumes of data are concerned. The new chips’ control algorithms already contain elements of neural networks, which are used in self-learning machines. A neural network of this type consists of artificial neurons and is based on the human brain in terms of its workings and structure. This enables a neural network to make highly realistic calculations.
Tyres with AI
In 2016, tyre manufacturer Goodyear introduced the concept of a spherical tyre featuring artificial intelligence. With the aid of a bionic “outer skin” containing a sensor network, along with a weather-reactive tread, the tyre can act on the information it collects by directly implementing it in the driving experience. It connects and combines information, processing it immediately via its neural network, which uses self-learning algorithms. This allows the Eagle 360 Urban to make the correct decision every time in standard traffic situations. Its artificial intelligence helps it to learn from previous experiences, enabling it to continuously optimise its performance. Consequently, the tyre adds grooves in wet conditions and retightens when dry.
Adaptive Control Systems
Like human beings, cognitive computing systems can integrate information from their immediate surroundings – though rather than eyes, ears and other senses, they use sensors such as cameras, microphones or measuring instruments for this purpose. The new processor architectures give vehicles the ability to evaluate these huge data volumes, and to constantly improve and expand these evaluations. This machine learning is seen as a key technology on the road to artificial intelligence. Machine learning also includes deep learning, which interprets signals not by relying on mathematical rules, but rather on knowledge gained from experience. In this case, the software systems change their programming through their own experimentation – the behaviour that leads most reliably to a desired result “wins”.
Several automotive suppliers are now offering control systems pre-equipped with deep-learning capabilities. Contemporary electronic control units (ECUs) in vehicles generally consist of various processing units, each of which controls a system or a specific function. The computing power of these units will no longer be adequate for autonomous driving. AI-based control units, on the other hand, centralise the control function. All information from the various data sources of an autonomous vehicle – including from infrastructure or from other road users – is gathered here and processed on a high-performance AI computing platform. In this way, the control system comes to “understand” the full 360-degree environment surrounding the vehicle in real time. It knows what is happening around the vehicle and can use this to deduce actions. Nvidia CEO Jensen Huang and his company are partnering with various automotive manufacturers to develop control systems of this type. He is certain of one thing: “Artificial intelligence is the essential tool for solving the incredibly demanding challenge of autonomous driving.”