It all starts with a grain of sand

Silicon remains the principal material used in microchip production, even though other materials also play a role. It is the raw material at the start of a complex production process that results in integrated circuits for all manner of requirements and applications.

Without sand, no computer would function, and it would not be possible to create solar cells and LEDs. Sand is the raw material in every microchip – or more precisely, the silicon dioxide contained in quartz sand is. Luckily, our planet has an abundance of this particular raw material. Most of the sand on Earth consists of quartz, a crystalline form of silicon dioxide, which is also the most abundant compound in the Earth’s crust.

A pure distillate

For use in microchips, it is pure silicon that is needed as a semiconductor material. To obtain it, the silicon dioxide (SiO2) in the quartz sand is heated together with carbon in a furnace at temperatures of up to 1,800 degrees Celsius; the carbon binds the oxygen and separates it from the silicon. The raw silicon obtained in this manner is then purified by converting it to trichlorosilane with hydrogen chloride and then distilling it. The resulting silicon has an impurity level of less than 1 ppb (parts per billion) – that means that for every billion silicon atoms there is no more than one impurity atom.
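A quick back-of-the-envelope calculation puts that purity figure in perspective. The sketch below uses only Avogadro’s number and the molar mass of silicon to estimate how many impurity atoms a 1 ppb level still permits in a single gram of silicon:

```python
# Illustrative check of the "< 1 ppb" purity figure.
# Assumption: 1 ppb means at most one impurity atom per 10^9 silicon atoms.

AVOGADRO = 6.022e23    # atoms per mole
MOLAR_MASS_SI = 28.09  # grams of silicon per mole

def impurity_atoms_per_gram(ppb: float = 1.0) -> float:
    """Upper bound on impurity atoms in one gram of silicon at a given ppb level."""
    si_atoms_per_gram = AVOGADRO / MOLAR_MASS_SI  # ~2.1e22 silicon atoms in 1 g
    return si_atoms_per_gram * ppb * 1e-9

print(f"{impurity_atoms_per_gram():.2e}")  # on the order of 1e13
```

Even at this extreme purity, a gram of silicon can still contain on the order of ten trillion foreign atoms – a reminder of just how many atoms a gram of material holds.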

From single-crystal to wafer

To obtain the electrical properties required for microchips, a monocrystalline structure is needed. For this purpose, the highly purified – but still polycrystalline – silicon is melted once more. To achieve the desired electronic properties, which are used to control the electric voltages and currents in the final chip, highly purified foreign substances are added to the silicon melt. This process is called “doping”. A pencil-thin seed crystal is then dipped into the melt, and a very heavy, cylindrical silicon single crystal grows on it. This silicon ingot is between 50 and 450 millimetres in diameter and between 0.5 and 1 metre in length. The ingot is cut into wafer-thin slices – this is where the term “wafer” comes from – that are between 100 and 500 micrometres thick.
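The slicing step invites a simple estimate. The sketch below assumes a sawing (“kerf”) loss per cut – a value not given in the text, so the 150 micrometre figure is purely illustrative – and computes how many wafers a one-metre ingot could yield:

```python
# Rough estimate of how many wafers one ingot yields.
# Assumption (illustrative, not from the text): each saw cut wastes
# about 150 micrometres of material ("kerf loss") on top of the wafer itself.

def wafers_per_ingot(ingot_length_m: float,
                     wafer_thickness_um: float,
                     kerf_loss_um: float = 150.0) -> int:
    """Number of slices obtainable from an ingot, ignoring end cuts."""
    slice_pitch_um = wafer_thickness_um + kerf_loss_um
    return int(ingot_length_m * 1e6 // slice_pitch_um)

# A 1 m ingot cut into 500 micrometre wafers:
print(wafers_per_ingot(1.0, 500.0))  # roughly 1,500 wafers
```

Thinner wafers mean more slices per ingot, but also a larger share of the expensive single crystal lost to the saw.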

The actual chip production stage

The wafers are the actual starting product in the microchip manufacturing process. First, their surfaces are cleaned and polished in several passes in a cleanroom, removing any scratches. They then undergo a final cleaning and a meticulous inspection of their surface.

Structures are burned in using light

The next step is to apply a thin layer of conductor, insulator or semiconductor material – depending on the desired function – onto the silicon wafer. What follows is the critical step in the production of computer chips: lithography. In this stage, the structures that will later determine the functions of the chip are defined in a photomask – the equivalent of a negative in photography. A lens system then projects a vastly scaled-down version of these structures onto the wafer, which has been pretreated with a photoresist (or simply “resist”) coating. The exposure triggers chemical changes at the points where the light hits the resist, and these changes replicate the pattern of the photomask. UV light is used for the exposure, because only its short wavelengths can transfer the nanometre-sized structures onto the wafer in sufficient resolution. Most important in current manufacturing is EUV (extreme ultraviolet) lithography, which works at a wavelength of 13.5 nanometres and makes it possible to print structures only a fraction of the wavelength wide. DUV (deep ultraviolet) lithography, at wavelengths of 193 or 248 nanometres, is used for less complex structures.
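The link between wavelength and printable feature size is commonly estimated with the Rayleigh criterion, CD = k1 · λ / NA, where k1 is a process-dependent factor and NA the numerical aperture of the lens system. The k1 and NA values in this sketch are illustrative assumptions, not figures from the text:

```python
# Rayleigh criterion: smallest printable feature ("critical dimension").
# The k1 and NA values below are illustrative assumptions.

def critical_dimension(wavelength_nm: float, na: float, k1: float = 0.3) -> float:
    """Smallest printable feature in nanometres per the Rayleigh criterion."""
    return k1 * wavelength_nm / na

# EUV at 13.5 nm (assumed NA 0.33) vs immersion DUV at 193 nm (assumed NA 1.35):
print(f"EUV: {critical_dimension(13.5, 0.33):.1f} nm")
print(f"DUV: {critical_dimension(193.0, 1.35):.1f} nm")
```

Under these assumptions, EUV reaches features around 12 nanometres in a single exposure, where DUV stops at roughly 43 – which is why the shorter wavelength matters for the densest chips.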

Creating an IC layer by layer

The pattern mapped onto the photoresist coating is then developed and burned into the wafer. Depending on the desired function, the wafer can then be bombarded with positive or negative ions in order to create certain semiconductor properties in defined areas. This forms the basis for all the electronic units (transistors, capacitors, etc.) on the chip. Next, some of the photoresist is removed, leaving a pattern of exposed areas in the resist. Material is etched away from these exposed areas, leaving a 3D version of the pattern. The whole cycle, from depositing the resist to removing it, is repeated until the wafer is covered in patterns. Producing one complete chip can require up to 100 such repetitions, laying pattern upon pattern, with the individual layers being wired to one another via “bonding”. The final result is an integrated circuit. Throughout the entire production process, the wafer undergoes continual inspection and error checks. In the final production step, the wafer is diced into individual chips. After a final test, these are encapsulated within protective packaging and casing.
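Once the wafer is diced, a standard approximation gives a feel for how many chips one wafer yields: the wafer’s area divided by the die area, minus an edge term for the partial dies lost at the rim. The wafer and die dimensions below are illustrative assumptions, not values from the text:

```python
import math

# Common approximation for gross dies per wafer (before yield losses).
# The second term accounts for partial dies lost at the wafer's round edge.
# Wafer and die sizes below are illustrative assumptions.

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Estimate of whole dies obtainable from one round wafer."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

# A 300 mm wafer diced into 10 mm x 10 mm dies:
print(dies_per_wafer(300.0, 100.0))  # roughly 640 dies
```

The estimate also shows why larger wafers are attractive: doubling the diameter roughly quadruples the number of chips while the per-wafer processing steps stay the same.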

The end-to-end production process comprises hundreds of steps, and a lot of time passes before a grain of sand becomes a microchip. Going from the design of a chip to mass production can take up to four months.