The Malicious Use of AI: Part 2
Last month we left off with three questions about superintelligent machines: Can computers achieve superhuman intelligence? When will this happen? And what will the consequences be?
It’s looking like it might be possible to create machines with human-level artificial intelligence (HLAI). In a sense, it has already been done once: evolution produced intelligent beings over our long residence here. But what would it take to recapitulate that process in a more reasonable time span? In his book Superintelligence: Paths, Dangers, Strategies, Nick Bostrom notes, “The availability of a brain as a template provides strong support for the claim that machine intelligence is ultimately feasible.” Using the brain as a template for artificial neural networks this way is called the neuromorphic path.
Another path was suggested by computing pioneer Alan Turing. He proposed beginning with a simple “child machine” and developing it through education and experience. Extended with “recursive self-improvement,” including permission for the machine to change its own architecture, this idea has been called “seed AI.”
The major efforts today involve large neural networks that ingest massive stores of data and learn from them. Some have progressed from supervised learning to unsupervised “deep learning,” proceeding by trial and error at inhuman speeds. And just as we depended on “collective intelligence” (language, writing, and printing) during our own intellectual evolution, neural networks are often combined to cooperate, or sometimes set against each other in generative adversarial networks (GANs), to competitively seek answers or solutions.
In some ways, digital intelligence has advantages we were never afforded. Bostrom compares the speed of computational elements: biological neurons operate at a peak rate of about 200 Hz, while a modern microprocessor runs at 2 GHz. Axons carry signals at about 120 meters per second, while electronic processing cores can communicate optically at the speed of light (300 million meters per second). The brain has fewer than 100 billion neurons, while computer hardware is indefinitely scalable. And our working memory can hold no more than four or five chunks of information at a time, far fewer than a computer can manage.
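The gap these figures describe is easy to underestimate. A back-of-the-envelope calculation, using only the numbers quoted above, makes the ratios explicit:

```python
# Arithmetic on the figures cited in the text; the ratios follow directly.

neuron_hz = 200      # peak firing rate of a biological neuron (Hz)
cpu_hz = 2e9         # clock rate of a modern microprocessor (Hz)

axon_mps = 120       # axon signal speed (meters per second)
light_mps = 3e8      # optical interconnect speed (meters per second)

print(f"Clock-speed advantage:  {cpu_hz / neuron_hz:,.0f}x")   # 10,000,000x
print(f"Signal-speed advantage: {light_mps / axon_mps:,.0f}x") # 2,500,000x
```

A processor is thus some ten million times faster per element, and its signals travel millions of times faster, before scalability is even considered.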
So, when will HLAI be reached? In representative surveys, AI researchers have estimated a 50% chance of human-level machine intelligence by 2050. At that point we’ll face the critical question: What happens next?
Is it reasonable to assume that humanity or its very smart machines will stop or even pause once HLAI has been reached? I.J. Good, a British mathematician who worked with Alan Turing, raised the disturbing possibility of an intelligence explosion that would begin once HLAI is reached: “An ultraintelligent machine could design even better machines. There would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
Bostrom, among others, believes the growth curve will then resemble a hockey stick, and that the explosion of superintelligence will introduce a new level of risk. The dangers will exceed every previous threat from technology, including nuclear weapons. As AI exponentially gains the ability to improve itself, humanity’s future grows cloudier; intelligence operating beyond our understanding or control creates an existential risk.
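Why a hockey stick rather than a straight line? A toy model makes the point: if each improvement a system makes is proportional to its current capability, the gains compound. The growth rate and step count below are purely illustrative assumptions, not predictions:

```python
# Toy model of an "intelligence explosion": once a system improves itself,
# each gain compounds on the last. All constants here are illustrative.

def trajectory(rate, steps, start=1.0):
    """Capability over time when each step's gain is proportional to
    current capability (compound, hockey-stick growth)."""
    level, path = start, [start]
    for _ in range(steps):
        level += rate * level   # the system improves itself
        path.append(level)
    return path

# Steady, externally driven progress adds a fixed increment per step;
# recursive self-improvement multiplies instead.
linear = [1.0 + 0.1 * t for t in range(21)]
compounding = trajectory(0.1, 20)

print(f"after 20 steps: linear={linear[-1]:.1f}, "
      f"compounding={compounding[-1]:.1f}")
```

Under the same 10%-per-step rate, the compounding curve more than doubles the linear one after twenty steps, and the gap itself widens faster and faster, which is the shape of Good’s worry.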
The slower the takeoff, the more time there will be for preparation and adjustment. But Bostrom worries that it might take longer to solve the “control problem” (ensuring that superintelligent machines will do what we want them to do) than to build HLAI itself. AI researcher David McAllester has said, “I am uncomfortable saying that we are 99% certain that we are safe for 50 years. That feels like hubris to me.”
The “control problem” hasn’t been forgotten. Elon Musk, for instance, has donated $10 million in grants to study AI safety, and he isn’t alone in his concern. And despite misgivings, Nick Bostrom recently told scientists at a Royal Society gathering, “It would be tragic if machine intelligence were never developed to its full capacity. I think this is ultimately the key, or the portal, we have to pass through to realize the full dimension of humanity’s long-term potential.” Pieces are falling into place.