Smart Factories deliver quadrillions of bytes of data a year from machine tools and production machinery – so-called “Big Data”, which has to be consolidated into “Smart Data” in order to identify potential for optimisation and gain a competitive edge.
Prof. Wolfgang Wahlster, CEO of the German Research Centre for Artificial Intelligence DFKI, comments: “Smart Data is used for preventive maintenance, to optimise efficiency, and to achieve the optimal operating point. It can deliver savings of as much as 30 percent on material, energy, cost and labour, as well as helping to protect the environment. In most cases, however, that added value can only be realised if the data is evaluated in real time, enabling the output from Smart Data analysis to feed directly into process control – ‘Smart Data analytics in the loop’, so to speak. We developed such systems in the world’s first Smart Factory for Industry 4.0 (…), and we are now trialling them in upgraded plants, such as those for beer bottling, drug packaging and valve manufacture.”
Intelligent data loggers collect and analyse data
One way of processing the enormous volumes of data in quasi-real time is through distributed intelligence. That means making individual components or modules intelligent and autonomous, so that they are able to decide for themselves what information is valuable. Rather than equipping every sensor and actuator on the line with such intelligence, data loggers can be installed to take on that role.
Data loggers are processor-controlled memory units that cyclically receive and store the data from one or more sensors. They consist of a programmable microprocessor, a storage medium such as a hard disk drive or flash memory, at least one interface for communicating with higher-level systems, and one or more channels for connecting the data sources. State-of-the-art data loggers are able not only to collect and store data, but also to process and analyse it at high speed.
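To make this concrete, here is a minimal sketch in Python of how such a data logger might work. The sensor channels, dead-band threshold and uplink callback are hypothetical stand-ins for real hardware and a real network interface; the point is simply that raw values stay in local storage while only significant changes are forwarded upstream.

```python
import random
import time
from collections import deque

class DataLogger:
    """Minimal sketch of an intelligent data logger: it cyclically samples
    one or more sensor channels, stores the raw values locally, and forwards
    only readings that change significantly (a simple dead-band filter) to a
    higher-level system via an uplink callback."""

    def __init__(self, channels, uplink, deadband=0.5, buffer_size=10_000):
        self.channels = channels          # dict: channel name -> read function
        self.uplink = uplink              # callable standing in for the network interface
        self.deadband = deadband          # minimum change considered worth reporting
        self.buffer = deque(maxlen=buffer_size)  # local storage (flash/RAM stand-in)
        self.last_sent = {}               # last value forwarded per channel

    def cycle(self):
        """One acquisition cycle: read every channel, store, filter, forward."""
        timestamp = time.time()
        for name, read in self.channels.items():
            value = read()
            self.buffer.append((timestamp, name, value))    # always log locally
            previous = self.last_sent.get(name)
            if previous is None or abs(value - previous) >= self.deadband:
                self.uplink(name, timestamp, value)         # only "valuable" data leaves the device
                self.last_sent[name] = value

if __name__ == "__main__":
    # Simulated sensors and a print-based uplink stand in for real hardware.
    channels = {
        "spindle_temp_C": lambda: 40 + random.gauss(0, 1),
        "vibration_mm_s": lambda: 2 + random.gauss(0, 0.3),
    }
    logger = DataLogger(channels, uplink=lambda n, t, v: print(f"-> sent {n}={v:.2f}"))
    for _ in range(20):
        logger.cycle()
        time.sleep(0.05)
    print(f"{len(logger.buffer)} raw readings held locally")
```

In a real installation the uplink would publish to a plant network or message broker rather than printing to the console.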
In-memory computing as the basis for fast analysis
In the Industry 4.0 concept, this pre-filtered data is then routed over a network into a database, where software makes it comparable and establishes correlations. Conventional database systems, however, are not adequate to make the information available for decision-making in real time: factory data is often distributed across different databases, and possibly even on different storage media, so fast access is often impossible. Developments in memory technology now make it possible to hold entire datasets in a computer’s main memory rather than on disk. This so-called “in-memory computing” – allied to the general increase in computing speeds – enables real-time analysis of large volumes of data, and is thus key to the realisation of powerful Big Data applications. All large-scale Web applications involving lots of users and data also run “in-memory” for reasons of speed; prominent examples include Google, Facebook and Amazon.
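As a rough, self-contained illustration of the principle (not of any particular product), the Python sketch below aggregates a million synthetic sensor readings that are already resident in main memory; because no disk or network access is involved, the per-machine averages come back in a fraction of a second.

```python
import random
import time
from collections import defaultdict

# Build a synthetic in-memory dataset of (machine_id, reading) pairs,
# standing in for sensor data that has already been loaded into RAM.
random.seed(42)
MACHINES = [f"machine_{i:02d}" for i in range(50)]
readings = [(random.choice(MACHINES), random.gauss(50.0, 5.0)) for _ in range(1_000_000)]

# "Query": mean reading per machine, computed entirely in main memory.
start = time.perf_counter()
totals = defaultdict(lambda: [0.0, 0])   # machine -> [sum, count]
for machine, value in readings:
    acc = totals[machine]
    acc[0] += value
    acc[1] += 1
means = {machine: s / n for machine, (s, n) in totals.items()}
elapsed = time.perf_counter() - start

print(f"Aggregated {len(readings):,} readings over {len(means)} machines "
      f"in {elapsed:.2f} s - no disk access involved.")
```

Production in-memory databases typically add columnar storage, compression and parallel query execution on top of this basic idea.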
Cloud offers resources on demand
The best way to provide such memory and computing resources flexibly and on demand is from the Cloud. This also means the data can be accessed by many different users, wherever they might be around the world. That is important where a company has integrated multiple branch operations into its Industry 4.0 setup, or where data from suppliers or customers also needs to be accessed.
Stefan Schöpfel, Global Vice President for Big Data and Analytics Services with software company SAP, comments: “Big Data solutions enable businesses to boost sales of existing products, bring new products to market more quickly, develop new business models to better serve their customers, and cut operating costs.” SAP has created SAP HANA, an in-memory platform providing a range of solutions by which companies can evaluate large volumes of data in real time and integrate the outputs directly into their business processes.
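Purely as an illustration rather than SAP documentation: a real-time aggregation against such an in-memory platform might be issued from Python roughly as in the sketch below. It assumes the hdbcli driver for SAP HANA; the connection details, the SENSOR_READINGS table and its columns are hypothetical.

```python
from datetime import datetime, timedelta

from hdbcli import dbapi  # SAP HANA Python client ("pip install hdbcli")

# Hypothetical connection details - replace with a real host, port and credentials.
conn = dbapi.connect(address="hana.example.com", port=30015,
                     user="ANALYST", password="********")
cursor = conn.cursor()

# Aggregate the last shift's readings per machine directly in the in-memory store,
# so the result can feed straight back into process control or a dashboard.
# SENSOR_READINGS and its columns are hypothetical.
since = datetime.now() - timedelta(hours=8)
cursor.execute(
    """
    SELECT machine_id, AVG(temperature), MAX(vibration)
    FROM SENSOR_READINGS
    WHERE ts >= ?
    GROUP BY machine_id
    """,
    (since,),
)
for machine_id, avg_temperature, peak_vibration in cursor.fetchall():
    print(machine_id, avg_temperature, peak_vibration)

cursor.close()
conn.close()
```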