Processors and power electronics help reduce energy consumption in many applications and sectors. At the same time, however, semiconductor products themselves require energy to operate. With increasing digitalisation, the energy efficiency of microchips is therefore becoming ever more important.
Digitalised technologies are an important building block in conserving resources and improving energy efficiency across many applications. However, processors, power electronics and data memories also require energy themselves in order to function. Although there are no conclusive figures on how high this energy demand actually is, pessimistic estimates suggest that in ten to twenty years information technology could account for up to 50 per cent of global electricity consumption.
That is why the semiconductor industry is working on continual improvements to the energy efficiency of its products: chip manufacturers are developing ever more energy-efficient CPUs (central processing units), and multi-core technology or the use of GPUs (graphics processing units) is making it possible to process higher loads with less power. Most CPUs also have power management features that optimise electricity consumption by dynamically switching between different power states depending on the workload. With low-power chips, for example, electricity consumption in standby mode can be lower than in active operation, i.e. when the chip is processing data, by a factor of up to one million.
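The effect of such power states can be sketched with the textbook approximation that the dynamic power of a CMOS chip scales with capacitance times voltage squared times frequency. The numbers below are illustrative assumptions, not data from any specific CPU:

```python
# Illustrative sketch: dynamic switching power of a CMOS chip scales
# roughly as P = C_eff * V^2 * f, which is why lowering both voltage
# and clock frequency in a low-power state saves so much energy.

def dynamic_power(c_eff: float, voltage: float, freq_hz: float) -> float:
    """Approximate dynamic switching power in watts (P = C * V^2 * f)."""
    return c_eff * voltage ** 2 * freq_hz

# Hypothetical operating points (assumed values for illustration only).
active = dynamic_power(c_eff=1e-9, voltage=1.1, freq_hz=3e9)    # full load
idle = dynamic_power(c_eff=1e-9, voltage=0.6, freq_hz=0.4e9)    # deep idle

print(f"active: {active:.2f} W, idle: {idle:.3f} W")
print(f"savings factor: {active / idle:.0f}x")
```

Even this simple model shows a savings factor of roughly 25x from modest voltage and frequency scaling alone; deep sleep states that also gate clocks and power rails push the gap far further, towards the factor of one million cited above.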
New chip architectures
Changing the basic architecture of a microchip can also considerably reduce its energy consumption. IBM, for example, has introduced a chip prototype whose design allows transistors to be stacked vertically. This not only makes it possible to fit more transistors on a chip, but also allows a greater electrical current to flow from top to bottom, reducing energy loss – the new chip is said to consume up to 85 per cent less energy in this configuration.
Another approach is so-called neuromorphic computing, aimed chiefly at processing the enormous data volumes generated by big-data applications and artificial intelligence. The goal is to emulate the most energy-efficient and flexible memory on Earth – the brain – and to enable a similarly high degree of plasticity. To this end, the Fraunhofer Institute for Photonic Microsystems is working on new, non-volatile memory technologies based on ferroelectric hafnium dioxide (HfO2) for analogue and digital neuromorphic circuits. Ferroelectric materials change their electric polarisation when a field is applied, and this polarisation state persists after the voltage is switched off. Similar to the human brain, the hardware architecture of these chips is structured so that information is stored non-volatilely within the system itself: no costly data transfer between processor and memory is required, because the "thinking" is performed directly in the chip. Ferroelectric memories are the only non-volatile memory concept operated purely electrostatically, which makes them especially energy-saving, as only the capacitive currents needed to write data have to be expended.
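The in-memory principle described above can be illustrated with a toy sketch. Here a NumPy array stands in for the non-volatile crossbar of ferroelectric cells; this is a conceptual analogy under assumed values, not a simulation of the Fraunhofer hardware:

```python
import numpy as np

# Toy sketch of in-memory computing: the "weights" live permanently in a
# non-volatile crossbar (modelled here as a NumPy array standing in for
# ferroelectric cells), and a multiply-accumulate happens where the data
# is stored instead of shuttling weights between a separate memory and
# a processor over a bus.

rng = np.random.default_rng(0)
weights = rng.uniform(-1.0, 1.0, size=(4, 8))  # stand-in for stored cell states
inputs = rng.uniform(0.0, 1.0, size=8)         # stand-in for input voltages

# In real crossbar hardware, Kirchhoff's current law sums the per-cell
# currents along each column; mathematically that is a dot product.
outputs = weights @ inputs
print(outputs.shape)  # one accumulated value per output line
```

The point of the analogy: the expensive part of a conventional system is not the dot product itself but moving the weights in and out of memory for every operation, and that movement disappears when computation happens in the memory array.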
Integration of cooling into the chip
Another important factor in reducing the energy demand of semiconductor products is cooling, because every computing or switching process generates heat. So far, this heat has generally been dissipated from microchips using heat sinks, and heat-sensitive components are often additionally cooled with fans, which consumes extra energy. As components become ever smaller and more compact, dissipating the heat becomes increasingly difficult.
One idea is to integrate the cooling directly into the chip, using microfluidic cooling systems. For example, researchers at the POWERlab at the Institute of Electrical Engineering, École Polytechnique Fédérale de Lausanne (EPFL), have developed a process in which tiny channels for a liquid coolant are cut directly into the silicon wafers. Although this is still just a research approach, it could revolutionise the energy efficiency of chips: cooling currently accounts for more than 30 per cent of the total energy consumption of data centres, for example. Researchers expect that, with this approach, this number could be reduced to less than 0.01 per cent.
An important application area for semiconductors is power electronics, which is used to "convert" electrical energy – for example, to feed renewable energy into power grids, charge smartphones, or control drives in the manufacturing and process industries. With each of these conversion steps, however, some of the electrical energy is lost as heat. Innovative "wide bandgap" semiconductor materials such as gallium nitride and silicon carbide enable considerably higher switching frequencies and generate less waste heat than silicon-based components.
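Why switching frequency matters can be seen in a rough loss model: a power switch dissipates a roughly constant conduction loss plus a switching loss that grows linearly with frequency, and wide-bandgap devices lose far less energy per switching event. The device parameters below are illustrative assumptions, not datasheet values:

```python
# Rough loss model for a power switch (assumed example numbers, not
# datasheet values): total loss = conduction loss + switching loss,
# where the switching term is energy-per-event times frequency.

def switch_loss(p_conduction_w: float, e_switch_j: float, freq_hz: float) -> float:
    """Total dissipated power in watts at a given switching frequency."""
    return p_conduction_w + e_switch_j * freq_hz

f = 100e3  # 100 kHz switching frequency

si = switch_loss(p_conduction_w=5.0, e_switch_j=200e-6, freq_hz=f)   # silicon
sic = switch_loss(p_conduction_w=4.0, e_switch_j=40e-6, freq_hz=f)   # SiC

print(f"Si: {si:.1f} W, SiC: {sic:.1f} W, reduction: {1 - sic / si:.0%}")
```

Because only the switching term scales with frequency, a wide-bandgap device can be driven much faster – allowing smaller passive components – while still dissipating less heat than its silicon counterpart.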
According to the Fraunhofer Institute for Silicon Technology, the new materials reduce energy losses by more than 45 per cent. Across the entire power module market, this translates into potential energy savings of up to 100 terawatt hours in the EMEA (Europe, Middle East, Africa) economic area and up to 25 terawatt hours in the USA by 2025. By way of comparison, the EMEA figure alone corresponds to around a fifth of the electrical energy that the whole of Germany requires in one year.