A growing number of scientists and business leaders are warning of the dangers of intelligent machines. Some predict that machines will attain human-like intelligence within just a few years.
Rapid developments in electronics and software are producing more and more smart systems to aid our decision-making and provide active assistance in our everyday lives. This trend is often referred to under the umbrella term “Artificial Intelligence” (AI): technology that enables a machine or computer to handle tasks that normally require human intelligence. To date, AI has reframed capabilities once considered exclusively human as information-processing activities. It seeks not only to replicate human capabilities, but also to give humans additional capabilities that biological evolution has not provided. Scientists differentiate between three types of AI. The first category comprises expert systems, which specialise in extracting knowledge from data. Such systems can detect faults in a car’s on-board electronics, for example, without the programmer having explicitly included that fault in the diagnostic program. A second type of AI is so-called swarm intelligence, in which a population of autonomous software programs works together to solve a problem (see the sketch below). The third type is self-learning systems, which continually improve themselves without any human intervention.
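To make the second category more concrete, the following is a minimal, illustrative sketch of swarm intelligence in Python: a basic particle swarm optimisation loop in which a population of simple software agents cooperates to find the minimum of a function. The objective function, parameter values and names here are purely illustrative assumptions, not drawn from any system mentioned in this article.

```python
import random

def sphere(x):
    """Toy objective: the swarm tries to minimise the sum of squares."""
    return sum(v * v for v in x)

def particle_swarm(objective, dim=2, swarm_size=20, iterations=100,
                   inertia=0.7, cognitive=1.5, social=1.5):
    # Each particle (agent) keeps a position, a velocity and its personal best.
    positions = [[random.uniform(-5, 5) for _ in range(dim)]
                 for _ in range(swarm_size)]
    velocities = [[0.0] * dim for _ in range(swarm_size)]
    personal_best = [p[:] for p in positions]
    personal_best_val = [objective(p) for p in positions]

    # The swarm's shared knowledge: the best position any agent has found so far.
    g = min(range(swarm_size), key=lambda i: personal_best_val[i])
    global_best, global_best_val = personal_best[g][:], personal_best_val[g]

    for _ in range(iterations):
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # New velocity blends momentum, the agent's own memory,
                # and social information shared by the whole swarm.
                velocities[i][d] = (inertia * velocities[i][d]
                                    + cognitive * r1 * (personal_best[i][d] - positions[i][d])
                                    + social * r2 * (global_best[d] - positions[i][d]))
                positions[i][d] += velocities[i][d]
            value = objective(positions[i])
            if value < personal_best_val[i]:
                personal_best[i], personal_best_val[i] = positions[i][:], value
                if value < global_best_val:
                    global_best, global_best_val = positions[i][:], value
    return global_best, global_best_val

best, best_val = particle_swarm(sphere)
print("best position:", best, "value:", best_val)
```

Each agent balances its own momentum, the memory of its personal best and the swarm’s shared best; no single agent solves the problem alone, which is the defining property of swarm intelligence.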
AI could overtake humans in just a few years
It is this last category, in particular, that is causing concern among scientists and business leaders. In a recent BBC interview, for example, the world-renowned theoretical physicist Stephen Hawking said: “The development of full artificial intelligence could spell the end of the human race.” Elon Musk, major high-tech investor and CEO of Tesla, strikes a similar note of warning: “The risk of something seriously dangerous happening is in the five-year time frame. 10 years at the most.” And Bill Gates too – whose Microsoft Corporation has played a significant role in developing intelligent systems – commented during a recent online Q&A session: “I am in the camp that is concerned about super intelligence. First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” Many experts believe that only a few more years will pass before the so-called “technological singularity” is reached, the point at which the artificial intelligence of machines surpasses the capabilities of humans.
Frightening and useful at the same time
The digitisation of the world, with ever smarter machines, can indeed dramatically enhance our health and well-being, our safety and security, and our efficiency; on the other hand, it is likely to entail unwanted and unintended consequences. A totally connected world might be heaven or hell, frightening or useful, depending on one’s perspective. “It’s both,” asserts Basel-based global futurist Gerd Leonhard. That is why he is calling for a set of ethical principles for a fully digitised world: “Without a stronger focus on digital ethics, technological progress will become a threat to humanity.”
AI must deliver benefits
He is certainly not alone in making such a call: in an open letter signed by numerous scientists, developers and business leaders from around the world, the Future of Life Institute calls for research into artificial intelligence to be pursued with caution. The signatories also point out, however, that the potential benefits are enormous. Their letter states: “… we cannot predict what we might achieve when this [human] intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.” In view of such concerns, they recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: “Our AI systems must do what we want them to do.”