Faster, more reliable, more flexible with the Edge. Edge Computing will continue to grow rapidly in the future. The participants in the TQ expert discussion all agreed on that. However, there are still several obstacles to overcome – for instance, with regard to standardisation or the energy consumption of Edge processors.
There are many definitions of Edge Computing, and attitudes towards it differ just as widely. “Whoever you talk to will give you a different definition of the edge,” says Dr Felix Winterstein, co-founder and CEO of Xelera. However, Thorsten Milsmann, Director Digital & IoT at Hewlett Packard Enterprise (HPE), finds a definition that everyone at the table can agree on: “The edge is everything that is located outside the data centre.” As a result, the participants at the expert discussion define the term far more quickly than an expert panel at the Industrial Internet Consortium: “The longest expert discussion that I took part in regarding what Edge Computing actually is lasted two-and-a-half hours. The most expedient approach is probably to follow the definitions from the ‘Linux Foundation’s Open Glossary of Edge Computing’ project,” adds Dr Alexander Willner, Head of the IIoT Center at the Fraunhofer Institute for Open Communication Systems, referring to his involvement in the last IIC conference in Europe.
More sensors, more data
In actual fact, Edge Computing is not really anything new, as Dr Johannes Kreuzer, CEO of Cosinuss, emphasises. His company develops sensor technology that measures vital parameters in the ear of the patient. “As early as my doctoral thesis, I wrote about why data analyses have to be performed directly on the patient. Back then, it was simply because we couldn’t transfer the data viably. But in those days, it wasn’t known as Edge Computing.” The reasons for analysing the data on site were the same then as they are today: a continuous transfer would consume too much energy, and the volume of data would be far too large.
“The data volume is growing mainly on the application side. So we will see a massive growth in the upstream volume in the coming years.”
Dr Felix Winterstein, CEO of Xelera Technologies
“Nowadays, companies are installing more and more sensor technology to monitor the systems and control them more precisely,” explains Ulrich Schmidt, Manager of the High-End Processing segment at EBV. “The data volume increases massively in the process, which in turn calls for pre-processing of the data before it is sent to the cloud.” By processing data on the edge device, you can also gain a certain amount of independence from the Internet or the data connection to the cloud – which can be an important argument, for instance in medical technology, as Johannes Kreuzer explains: “In this field, the devices simply have to work reliably, so it is safest to run the data processing and the controls directly on the patient who is wearing the edge device.”
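The kind of pre-processing Ulrich Schmidt describes can be sketched in a few lines. This is a minimal, hypothetical scheme, not one discussed at the roundtable: instead of streaming every raw reading to the cloud, the edge device sends only per-window summaries, with the window size and the choice of statistics as illustrative assumptions.

```python
from statistics import mean

def summarise_window(samples, window=100):
    """Aggregate raw samples into per-window summaries: instead of
    streaming every reading, send only min/mean/max per window."""
    summaries = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        summaries.append({"min": min(chunk), "mean": mean(chunk), "max": max(chunk)})
    return summaries

# 1,000 raw readings shrink to 10 summary records: a 100x reduction
# in the upstream volume sent to the cloud.
raw = [20.0 + 0.01 * i for i in range(1000)]
print(len(summarise_window(raw)))  # 10
```

The same idea scales from simple statistics to any local analysis whose result is far smaller than its input.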
Real-time applications are becoming more common
“What’s more, the proportion of data that has to be processed in real time is rising in parallel to the volume,” adds Dr Felix Winterstein. Aurel Buda, Head of Product Management Factory Automation Systems, confirms this from the point of view of the users: “The industry is constantly aiming to produce more quickly, reliably and flexibly. That is no longer possible with the classic control processes.” He cites automobile manufacturing as an example, where you quickly find several thousand robot cells working away, each fitted with several hundred sensors generating data every ten milliseconds. If this data were processed in the cloud, with latencies of several seconds, the system simply would not work. According to Mr Buda, these types of processes have to be controlled with decisions taken within 100 milliseconds. Meanwhile, other applications can involve even more extreme requirements – Dr Winterstein gives an example: “In the case of augmented-reality applications with suitable goggles, the data has to be processed within 20 or 30 milliseconds, because otherwise the user begins to feel sick – it’s a simple case of motion sickness.” The computing power to process the data in this volume and at this speed in edge devices is already available today. “Even though you can never claim that we have enough computing power,” declares Felix Winterstein, “we already have a wide range of powerful chips these days, not least to implement AI applications at the edge.”
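A back-of-the-envelope calculation shows why Aurel Buda's example rules out the cloud. All figures here are assumptions filled in from the rough magnitudes he gives ("several thousand" cells, "several hundred" sensors, one reading every ten milliseconds):

```python
# Illustrative figures for the automobile-manufacturing example.
cells = 2000              # assumed: "several thousand robot cells"
sensors_per_cell = 300    # assumed: "several hundred sensors" per cell
sample_period_s = 0.010   # one reading every ten milliseconds

readings_per_second = cells * sensors_per_cell / sample_period_s
print(f"{readings_per_second:,.0f} readings per second")  # 60,000,000 readings per second

# A control decision is needed within 100 ms; a cloud round trip of
# several seconds misses that deadline by more than an order of magnitude.
deadline_s, cloud_round_trip_s = 0.100, 2.0
print(cloud_round_trip_s <= deadline_s)  # False
```

Even with generous assumptions, the arithmetic lands in the tens of millions of readings per second, which is why the decision logic has to sit next to the sensors.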
“Like any others, edge devices always need to be kept up to date. Admin tools can be useful there, like the ones used in data centres.”
Thorsten Milsmann, Director Digital and IoT at Hewlett Packard Enterprise
Although AI is nowadays often mentioned in relation to Edge Computing, the two do not necessarily have to go hand in hand, as Johannes Kreuzer emphasises: “Although there are plenty of overlaps, the data in edge applications is often just analysed with a relatively simple algorithm – which has nothing to do with AI.” The participants at the expert discussion agree that a carefully selected threshold value is often more useful than a neural network which has to be trained first – even though the possibilities of AI are genuinely exciting and development is still at an early stage. Even so, “with the computing power available to us today, we can run computing models on site that wouldn’t have been conceivable in the past – and in some cases even in real time,” says Thorsten Milsmann. The learning phase here is still carried out at the data centre, as Ulrich Schmidt stresses: “But the inference, that is, the application of the learned models, is increasingly being carried out on the edge devices.”
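The contrast Johannes Kreuzer draws can be made concrete. In this minimal sketch, every value and weight is hypothetical: a hand-picked threshold needs no training at all, while even the smallest learned model performs its inference as a weighted computation whose parameters would have been fitted back in the data centre.

```python
def threshold_alarm(value, limit=75.0):
    """A carefully chosen threshold: no training needed,
    trivial to run on the smallest microcontroller."""
    return value > limit

def tiny_inference(features, weights=(0.8, 0.2), bias=-60.0):
    """The inference step of a (deliberately tiny) learned model:
    the weights stand in for parameters trained in the data centre."""
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return score > 0

print(threshold_alarm(80.0))         # True
print(tiny_inference((80.0, 10.0)))  # 0.8*80 + 0.2*10 - 60 = 6.0 > 0 -> True
```

Both fit comfortably on an edge device; the difference is that only the second one needs a learning phase somewhere else first.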
Seeking energy-savvy controllers
The expert discussion participants see a far greater challenge than the level of computing power available – namely, the amount of energy required. That is a recurring issue for Johannes Kreuzer with his sensors: “Our controllers are just two by two millimetres in size, but they are equipped with Bluetooth and a floating-point unit. The computing power is no problem – the challenge for us is the electricity consumption. For mobile applications, it makes a crucial difference whether the device can run for eight hours or 24 before it needs recharging.” The important factor here is that the system must be able to work with a standard cooling element despite the constantly increasing computing power – a fan would quickly drive the energy consumption up and would introduce a component into the system that requires maintenance, as Ulrich Schmidt explains: “One solution would be dedicated hardware blocks that are each designed for specific applications. By moving away from the general-purpose approach, you can reduce the loss of power.”
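Kreuzer's eight-versus-24-hours point comes down to simple energy budgeting. The figures below are illustrative assumptions, not Cosinuss specifications: processing data locally lets the radio be duty-cycled, and the average current draw, not the peak, determines the runtime.

```python
battery_mAh = 40.0   # assumed capacity of a small wearable cell
active_mA = 5.0      # assumed draw while sampling and transmitting over Bluetooth
sleep_mA = 0.05      # assumed draw while the radio is duty-cycled off

def runtime_hours(duty_cycle):
    """Battery life follows from the time-averaged current draw."""
    avg_mA = duty_cycle * active_mA + (1 - duty_cycle) * sleep_mA
    return battery_mAh / avg_mA

print(round(runtime_hours(1.00), 1))  # 8.0  - streaming every reading
print(round(runtime_hours(0.10), 1))  # 73.4 - processing locally, radio on 10% of the time
```

The order-of-magnitude difference between the two figures is exactly the kind of gap that decides whether a medical wearable is practical.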
“There are more and more devices that also generate more data – so it is only possible to process the exorbitant increase in data volume with Edge Computing.”
Dr Johannes Kreuzer, co-founder & CEO of Cosinuss
Cybersecurity and the Edge
However, Dr Alexander Willner explains that latency requirements and exploding data volumes are not the only drivers behind Edge Computing: “In Europe, and particularly in Germany, we often don’t want our data to be distributed in the cloud – bearing the GDPR in mind.” In this case, the issue of data security involves far more than “just having your hard drive in your own cabinet,” as Thorsten Milsmann puts it. “As soon as there is a wireless connection or a device is connected via IP, you immediately need a comprehensive security concept.” For Aurel Buda, securing the data connection itself is already perfectly feasible today: “In terms of communications, we already have encryption. The usability of certificate management could perhaps still be improved, but that already gives us secure communications per se.” On the other hand, he considers it far more problematic when existing systems are made intelligent and are connected to the IoT – for instance, when existing industrial systems are modernised to enable condition monitoring or digital maintenance. “In industrial automation networks, there are very few technologies that are seriously secure.” After all, in the past they were not networked and not connected to the Internet, and so they were intrinsically secure. However, Mr Buda does not think that it would be affordable to implement secure communications with encryption for every connected sensor – and it might not even be necessary: “Who is really interested in a sensor value that is communicated every 100 milliseconds and hardly changes at all? In contrast, it is important to ensure that the value really is the correct one from the sensor – that it hasn’t been corrupted.”
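Aurel Buda's distinction between confidentiality and integrity can be illustrated with a message authentication code. In this minimal sketch, key distribution and transport are omitted and the pre-shared key is a stand-in: the sensor value travels in the clear, yet the receiver can still detect whether it has been corrupted or forged.

```python
import hmac
import hashlib

KEY = b"per-device-secret"  # hypothetical pre-shared key

def tag(value_bytes):
    """Compute an HMAC-SHA256 tag over a sensor reading."""
    return hmac.new(KEY, value_bytes, hashlib.sha256).hexdigest()

reading = b"23.7"
mac = tag(reading)

# The receiver recomputes the tag and compares in constant time.
print(hmac.compare_digest(mac, tag(b"23.7")))  # True  - value intact
print(hmac.compare_digest(mac, tag(b"99.9")))  # False - value was tampered with
```

Authenticating readings this way is far cheaper than encrypting every 100-millisecond update, which matches Buda's point: the value itself may be uninteresting, but its correctness matters.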
“We are rising to the challenge of bringing data from the field to the cloud with minimal effort.”
Aurel Buda, Head of Product Management Factory Automation Systems at Turck
The semiconductor industry already supplies suitable solutions for that, as Ulrich Schmidt explains: “Secure elements are integrated into the solutions from the outset, and can then be used to set up a secured data connection which guarantees that only authenticated devices communicate with each other.” This also makes it possible to load updates for the edge devices “over the air” and to upgrade them to the latest version of the relevant firmware. In conjunction with intrusion detection and behaviour-oriented security systems, this process could also provide security for edge applications in the industrial environment. “However, we first need to teach the systems what unusual behaviour looks like, and what is right and what is wrong,” stresses Thorsten Milsmann. “After all, the field devices used in automation and their protocols are not yet widely known in the world of IT.”
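The over-the-air update flow Ulrich Schmidt describes hinges on the device verifying what it receives before flashing it. Here is a simplified sketch of the receiving side; in practice the device would verify a signature anchored in the secure element rather than a bare digest, and the flashing step is device-specific and omitted.

```python
import hashlib

def apply_update(image: bytes, expected_sha256: str) -> bool:
    """Apply a firmware image only if its digest matches the one
    delivered over the authenticated channel (assumed scheme)."""
    if hashlib.sha256(image).hexdigest() != expected_sha256:
        return False  # reject corrupted or unauthorised image
    # flash_firmware(image)  # hypothetical device-specific step
    return True

fw = b"\x7fELF...firmware-v2"          # placeholder image bytes
good = hashlib.sha256(fw).hexdigest()  # digest from the update server

print(apply_update(fw, good))          # True  - image accepted
print(apply_update(fw + b"X", good))   # False - image rejected
```

Combined with the authenticated connection the secure element establishes, this keeps a fleet of edge devices on the latest firmware without opening a path for tampered images.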
“Dedicated hardware blocks tailor-made for specific applications reduce the required processing power and so, the energy consumption.”
Ulrich Schmidt, Segment Manager High-End Processing at EBV Elektronik
Data must be transferred from the field level to the cloud
Calls for standards
“One problem, however, is that we have a totally heterogeneous landscape with the chips,” adds Dr Felix Winterstein. “That means it’s no longer possible to write a program, compile it, and have it simply run on every computer. We also have thousands of nodes, of small hardware units, in a distributed system that are definitely not all the same. Even so, the application needs to work on all of these different platforms.” The CEO at Xelera sees a role model in the telecommunications sector, showing how it is possible to overcome this challenge. A federation of independent networks there allows everyone to make a connection with their smartphone, anywhere in the world. “One of the elements that allows that is the ETSI,” says Alexander Willner. The European Telecommunications Standards Institute is responsible for developing pan-European norms, standards and specifications in the field of telecommunications. “This is one of the few standards that we have in the area of Edge Computing, although it is insignificant in the industrial context and will probably not be adopted in the future.”
“In Edge Computing, we have to divide and manage the resources – so we need new management approaches.”
Dr Alexander Willner, Head of Industrial Internet of Things Center, Fraunhofer FOKUS
The specialists at the expert discussion are reasonably hopeful about open-source solutions: “Open-source technologies make it possible to map entire landscapes or development environments that would dramatically lower the entry hurdles for the development of edge applications,” believes Aurel Buda, for instance. Here, Thorsten Milsmann sees a great opportunity to standardise edge applications further and to facilitate their implementation: “With an open standard, it’s not only possible for more operators to use the solution, but also for more people to make a contribution – so you have leverage for distributing edge solutions more widely.” Alexander Willner could even imagine an entire ecosystem of open solutions: “That could start with a RISC-V processor in the edge device, moving on to Linux operating systems and edge middleware, and even going as far as every application.” But Johannes Kreuzer remains sceptical: “On the one hand, every component in the system changes too quickly. And on the other, you simply need specialised chips, such as FPGAs or DSPs, for many applications.” That makes it extremely difficult to implement a consistent, open solution. Thorsten Milsmann therefore stresses that edge solutions will continue to be team efforts in the future: “We need partners to build up the infrastructure and for the necessary software stacks, security experts, integrators on site and energy experts – no-one can do it all on their own any more.” That could become an attractive market for service providers and system integrators.
Non-technical challenges as well
This might also be an answer to those who fear that digitalisation will destroy jobs. “With AI applications at the edge in particular, we can automate many tasks more fully than we have managed with automation technology,” says Dr Alexander Willner, admitting certain concerns. He is therefore in favour of an unconditional basic income, but also believes that many of today’s jobs could, in future, be replaced by new tasks. At the same time, the experts agree that this type of automation will be crucial if Europe is to remain competitive as a location to do business, and that AI will even create more jobs than are lost. So Alexander Willner comes to a Socratic conclusion on the subject of Edge Computing: “It will certainly be an exciting, entirely non-technical challenge.”