The new architecture of data processing

New processor technologies ensure that sufficient computing power is available for efficiently processing the massive streams of data we will generate in the future.

The year is 1965: the world’s first commercial minicomputer is unveiled, the first human walks in space, and Gordon E. Moore makes a prophecy that would define development in the semiconductor industry over the following 55 years. According to Moore’s Law, the number of transistors on a chip – and with it its computing power – would double roughly every two years, an exponential rate of growth.

Who was Gordon E. Moore?

Gordon E. Moore co-founded Intel in 1968 and initially served as its Executive Vice President. He became President in 1975 and was elected Chairman and CEO in 1979. He remained CEO until 1987 and was named Chairman Emeritus in 1997.

This prediction kept coming true – until now. The increase in the performance of new chips has since slowed down considerably – some forecasts now expect performance to double only every 20 years. At the same time, however, the volume of data to be processed worldwide continues to grow and grow: according to market analysts at the International Data Corporation (IDC), more than 59 zettabytes of data will be created, collected, copied and consumed in 2020. The IDC expects this “global data sphere” – the total volume of digital data in the world – to grow to a whopping 175 zettabytes by 2025.
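
As a quick back-of-the-envelope check of what these IDC figures imply, the short Python sketch below computes the compound annual growth rate between the 2020 and 2025 estimates; the five-year window and the rounding are assumptions made here purely for illustration.

```python
# Rough arithmetic based on the IDC figures quoted above (59 ZB in 2020,
# 175 ZB forecast for 2025); the five-year window is an assumption.
start_zb, end_zb, years = 59, 175, 5

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied annual growth of the global data sphere: {cagr:.1%}")
# Prints roughly 24% per year, i.e. the data volume almost triples in five years.
```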

More performance through specialisation

However, the ability to actually put this volume of data to good use depends on processors’ performance continuing to increase. Without new technologies, higher performance can only be achieved by deploying more and more processors, as in data centres. One alternative comes in the shape of specialised hardware solutions that, unlike conventional central processing units (CPUs), are not mere generalists: they are developed or programmed for specific applications. The acceleration of computing processes they make possible has earned them the moniker “hardware accelerators”. The approach itself is not new – graphics processing units (GPUs) and sound cards are nothing other than dedicated processors for special tasks. “Specialised processors are changing the face of computing and allowing the innovative spirit of Moore’s Law to live on,” says David Nagle, who sits on Pliops’ Advisory Board. “Adopting them does require a shift in thinking, as well as a change in infrastructure, but the benefits gained deliver value well beyond what traditional processors can deliver today and enable scaling for the next decade and beyond.”

A wide range of solutions is currently in use for high-performance data processing. For one thing, today’s GPUs are seeing more widespread use for processing large volumes of data quickly: with anything between 1,000 and 10,000 execution units, they can execute many computational steps simultaneously, whereas conventional server CPUs contain a few dozen cores at most. However, GPUs are very expensive and guzzle power.
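
As a rough illustration of this data-parallel model (not taken from the article), the sketch below runs the same elementwise computation on the GPU via the CuPy library if a CUDA-capable card is available, and otherwise falls back to NumPy on the CPU; the array size and the arithmetic are arbitrary choices.

```python
# Illustrative only: the same elementwise workload, spread across thousands of
# GPU execution units when CuPy and a CUDA-capable GPU are available, otherwise
# run on a handful of CPU cores via NumPy.
import numpy as np

try:
    import cupy as cp  # GPU array library with a NumPy-like interface
    xp = cp
except ImportError:
    xp = np

# Ten million elements; the elementwise maths below is applied to all of them
# "at once" from the programmer's point of view.
data = xp.arange(10_000_000, dtype=xp.float32)
result = xp.sqrt(data) * 2.0 + 1.0

print("Backend:", xp.__name__, "| first values:", result[:3])
```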

Application-specific integrated circuits (ASICs) might offer an alternative: custom circuits developed specifically for a single task. This makes them maximally efficient, although they are expensive to develop and inflexible, because they cannot be re-configured if requirements change.

Flexible and economical

Field-programmable gate arrays (FPGAs) are therefore being used more and more often: they combine the flexibility of software running on a general-purpose processor with the speed and energy efficiency of an ASIC. “To name an example, FPGAs are installed in the first product batch of new devices because they can still be modified afterwards, unlike special chips, whose costly development is only profitable for very large unit quantities,” says Dennis Gnad from the Institute of Computer Engineering (ITEC) at the Karlsruhe Institute of Technology. You might liken this to building a sculpture out of re-usable Lego bricks rather than modelling it in clay that sets, the computer scientist explains.

FPGAs are integrated circuits whose logic can be configured or modified after manufacture. Unlike general-purpose processors, they process data in parallel across their many programmable logic blocks. FPGAs make it possible to develop systems precisely matched to the intended task, so they operate with maximum efficiency – in contrast to standard processors, which have to appeal to the broadest possible user base and therefore represent a compromise between performance and functionality. For example, a special FPGA-based storage processor for cloud databases made by Pliops enables data access that is up to 100 times faster, with just a fraction of the computational load and electricity consumption.
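
Pliops’ actual interface is not described here, so the following Python sketch only illustrates the general offload idea: the host submits key-value operations through a narrow API and lets an accelerator do the work. The class and method names are invented for illustration and merely stand in for a real device driver.

```python
# Conceptual sketch only: a narrow key-value API that a host application could
# call, with a plain Python dict standing in for the hardware engine. The names
# are invented; this is not the Pliops interface.
from typing import Optional


class AcceleratorKVStore:
    """Stand-in for a hardware key-value engine exposed to the host."""

    def __init__(self) -> None:
        self._table: dict[bytes, bytes] = {}  # on real hardware: device-managed storage

    def put(self, key: bytes, value: bytes) -> None:
        # On a real accelerator this would submit a command to the device
        # instead of consuming host CPU cycles.
        self._table[key] = value

    def get(self, key: bytes) -> Optional[bytes]:
        return self._table.get(key)


store = AcceleratorKVStore()
store.put(b"user:42", b'{"name": "Ada"}')
print(store.get(b"user:42"))
```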

Dedicated hardware blocks designed to accelerate highly specific tasks – such as memory and communication functions or AI workloads – allow the required computing processes to be executed not only much more quickly but also far more efficiently.

As such, the 55th anniversary of Moore’s Law merely marks the end of an initial phase of semiconductor development. Although the increase in processing speed is levelling off, new semiconductor architectures continue to boost computing power in smart devices and data centres alike.