By Dee Smith
There are many problems with AI, some of which I will explore in future posts. But the most basic problem is that, as we have all experienced, computers break.
Keeping computers running requires multiple people who are capable of fixing them, available around the clock.
Remembering this, is it a good idea to give over more aspects of our lives to “intelligent” systems that are so undependable? The things we rely on to obtain the food we eat and the water we drink, and to make, manage, and spend our money? The systems we use to conduct business, to take care of our health, our critical infrastructure, and our national security?
We already do, of course, but only because teams are in place to fix those systems when they malfunction.
The unreliability of computers is not a passing problem. Computer systems, considered as a whole, are scarcely more reliable now than they were 30 years ago. Hardware is somewhat more reliable, but software is increasingly complex, increasingly unpredictable (complex systems are inherently more unpredictable), and increasingly unreliable.
Relying on AI systems makes us vulnerable in several critical ways. First is their exposure to attack. To cite just one example: attackers can discover flaws unknown to a system’s makers and exploit them before any fix exists, the basis of the “zero-day exploits” favored by criminals and terrorists.
Second are the continuing “hallucinations” AI experiences, where it gives entirely wrong, and sometimes nonsensical, information, often for reasons computer scientists do not understand. What if it does this while managing an element of critical infrastructure and the problem is “inside” the system, where it cannot easily be detected or fixed?
Third, all computer systems are subject to severe malfunctions due to rare, but potentially catastrophic, single-event upsets (SEUs), a class of single-event effects (SEEs) caused by cosmic rays bombarding the earth: a stray particle flips a single bit in memory, silently corrupting whatever that bit encodes (a small sketch following the fourth point below shows how much one flipped bit can matter).
Fourth is AI’s requirement for a vast and ever-increasing level of electrical power for operation.
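To make the third point concrete, here is a toy sketch, entirely my own illustration rather than anything drawn from a real control system; the sensor reading, the variable names, and the choice of which bit to flip are hypothetical:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Return `value` with one bit of its 64-bit IEEE-754 encoding inverted."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", value))
    return struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))[0]

# A hypothetical sensor reading that some piece of infrastructure trusts.
reading = 101.3

# One flipped high exponent bit turns 101.3 into roughly 5.6e-307,
# with no error message of any kind.
corrupted = flip_bit(reading, 62)
print(reading, "->", corrupted)
```

Error-correcting memory catches many such flips, but not all of them, and it is not present in every device.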
The reason computer systems are so ubiquitous is, of course, money. This works in two ways: the money being made and the money being saved by replacing human laborers. From a social standpoint, the latter may well be a Pyrrhic victory: displacing millions of people from their jobs creates a huge social cost, one ultimately paid in real money.
Are computer systems, in general, more efficient than humans? There is no evidence that they are. Computers can crunch numbers far faster than humans, although that comparison ignores the enormous computational power a human brain (let alone the brain of a bird, or even an ant) brings to doing everyday things. There is no real understanding of how these biological intelligences work. Computer systems seem more efficient only because of the extremely limited scope within which they operate.
Consider two alternatives, at opposite ends of the spectrum. One is that computer systems, as they become more and more complex, also become more and more fragile. When a system related to food production, or finance, or national security breaks catastrophically somewhere, the failure cascades through everything connected to it.
The other alternative: what if systems could be made substantially more reliable? Perhaps some unforeseen breakthrough will dramatically improve their dependability. Then suppose, as some people insist (incorrectly, to my thinking), that AI can and will progress to Artificial General Intelligence (AGI). Imagine that this results in a superhuman intelligence, perhaps one that emerges at a critical-mass point, almost in an instant (what AGI aficionados call the “singularity”). Were this to happen, we have no way of knowing whether such an entity would be benign, neutral, or malicious toward humans.
But if such an AGI is trained on the sum total of human knowledge and expression, then that AGI is going to be loaded with all the bad along with the good. Do we really want to live in a world governed by transcendently intelligent and powerful machines trained on the behavior of what are essentially clever, volatile, often enraged chimpanzees? (We share 98.4 percent of our DNA with chimps.) Watching any war movie, or really most any movie, would suggest we might not.
And if the AGI was not trained on human knowledge and culture, what would it be trained on?
Biological systems have had about 4 billion years of evolution on this planet to become dependable in operation. They are generally able, as living systems, to survive constant bombardment by radiation from space, extreme temperatures, rapid changes in climate, and changes in atmospheric chemistry, and, most important, to survive without someone standing by to repair or reboot them. This capacity for self-maintenance is known as homeostasis. Life has evolved naturally over an immense period of time through adaptation: trial and error.
On the other hand, our computer systems (based on silicon, not carbon) do have a very fallible creator: us. And they have been around about 70 years, less than two-millionths of one percent of the time biological systems have had.
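For anyone who wants to check that comparison, the rough arithmetic, using the same round numbers as above, looks like this:

```python
years_of_biological_evolution = 4_000_000_000  # roughly 4 billion years
years_of_electronic_computers = 70             # rough figure, as above

fraction = years_of_electronic_computers / years_of_biological_evolution
print(f"{fraction:.2e}")  # about 1.8e-08, i.e. under two-millionths of one percent
```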
The belief in the inevitable ascendance of AGI is an article of faith for many involved in the computer industry, and for others outside the industry who uncritically accept this “techno-religious” belief system. In its more virulent forms, it is teleological: a burning faith in an inevitable direction of history, in which AGIs are the successors to humanity, and in which the sacred duty of computer scientists is to bring about the birth of this supremely intelligent “life” form.
If I had told you 30 years ago that you would have in your pocket a self-powered device the size of a pack of cards that could tell you how to drive, turn by turn, from your current address to a building in a city 1000 miles away, you would probably have thought that it must be intelligent to be able to do this.
Do you think of your smartphone that way today? My estimation is that this is how we will think of AI in 30 years: a useful, not entirely dependable tool. Nothing more.