Humans have been suspicious of AI from the beginning, fed both by apocalyptic science fiction and by AI’s intrusion into human domains like reasoning and creativity. But as with many strange and new things, we are apparently getting used to, and growing more tolerant of, smart machines that can do some very human things.
The Pew Research Center has been testing the winds generated by AI and other human enhancements for years, and on March 17, 2022, the nonpartisan American “fact tank,” as it calls itself, published the results of a wide-ranging survey of 10,260 U.S. adults taken from November 1 to 7, 2021. The questions dealt with views about AI and human enhancement technologies. An opening question clearly defined the focus of the study: “Artificial intelligence computer programs are designed to learn tasks that humans typically do, for instance recognizing speech or pictures. Overall, would you say the increased use of artificial intelligence computer programs in daily life makes you feel…
More excited than concerned,
More concerned than excited,
Equally concerned and excited.”
The responses reflected a more benign acceptance than in the past. Eighteen percent were more excited than concerned, 37% more concerned than excited, and a larger ambivalent group (45%) was equally concerned and excited.
A previous International Science Survey, conducted over 2019–2020, had already shown a drift toward positive acceptance. Across the 20 nations reporting, 53% saw AI as “mostly a good thing for society,” while 33% saw its development as “a mostly bad thing for society.” The seven countries surveyed in the Asia-Pacific region had the highest shares of people looking forward to the supplemental intelligence and extended physical capabilities these digital machines provide. The United States sat close to a 50/50 divide, with a slightly optimistic split of 47% (good thing) to 44% (bad thing). It appears the threatening voice of the computer HAL aboard the spaceship in 2001: A Space Odyssey has been somewhat drowned out by the current chatter from Siri, Alexa, and Google.
THE SIX SUSPECTS
It isn’t just AI that makes people nervous. The researchers explain, “Fundamentally, caution runs through public views of artificial intelligence (AI) and human enhancement applications, often centered around concerns about autonomy, unintended consequences and the amount of change these developments might mean for humans and society.” Their latest survey “concentrates on public views about six developments that are widely discussed among futurists, ethicists and policy advocates.”
Three of the areas are direct AI applications: driverless passenger cars, facial recognition technology used by law enforcement, and the algorithms social media companies use to find false information on their sites. The other three fall into the category of human enhancements: computer chip implants in the brain designed to advance people’s cognitive skills, gene editing to greatly reduce an individual’s risk of developing serious diseases or health conditions, and robotic exoskeletons with built-in AI systems to increase lifting strength in manual labor jobs.
The Pew researchers offer a caveat regarding the focus of the six vignettes they used in the questionnaire. “Our questions about public attitudes about facial recognition technology are not intended to cover all uses but, instead, to measure opinions about its use by police. Similarly, we concentrated our exploration of brain chip implants on their potential to allow people to far more efficiently process information rather than the use of brain implants to address therapeutic needs, such as helping people with spinal cord injuries restore movement.”
A generalized scorecard that the data produced looks like this:
A common discomfort that emerged throughout the data was an ambivalence toward AI that isn’t even close to being resolved. In an article that introduces the study, the researchers offered two observations by respondents that define and contrast the conflicted positions many of us find ourselves in.
One enthusiastic man in his 30s explained, “AI can help slingshot us into the future. It gives us the ability to focus on more complex issues and use the computing power of AI to solve world issues faster. AI should be used to help improve society as a whole if used correctly. This only works if we use it for the greater good and not for greed or power. AI is a tool, but it all depends on how this tool will be used.”
A woman in her 60s expressed her ethical concerns about the increased use of AI: “It’s just not normal. It’s removing the human race from doing the things that we should be doing. It’s scary because I’ve read from scientists that in the near future, robots can end up making decisions that we have no control over. I don’t like it at all.”
In summarizing the results of the survey, Pew identified four important themes generated by the data.
“A new era is emerging that Americans believe should have higher standards for assessing the safety of emerging technologies.” Asked how to ensure the safety and effectiveness of these technologies, respondents overwhelmingly called for higher safety standards for four of them: autonomous vehicles, brain chip implants, gene editing, and robotic exoskeletons.
“Sharp partisan divisions anchor people’s views about possible government regulation of these new and developing technologies.” When asked whether government might go too far or not far enough in regulating the six technologies, respondents agreed that there must be some regulation, but the degree of control they favored was generally colored by their political affiliation.
“Less than half of the public believes these technologies would improve things over the current situation.” Responses were mixed, with slightly fewer than half of respondents expressing confidence that the technologies would improve life over the way it is now.
“Even for far-reaching applications, such as the widespread use of driverless cars and brain chip implants, there are mitigating steps people say would make them more acceptable.” Here, there was general agreement that mitigating steps could make the technologies more acceptable. That includes 53% who would find brain implants more acceptable if people could turn the effects on or off. Seven in 10 would like driverless cars to be labeled, 67% would want dedicated lanes for the autonomous vehicles, and 57% would find them more acceptable if a licensed driver were required to be in the vehicle.
A WORK IN PROGRESS
The adoption of AI has outrun the fears it raises. Despite the many misgivings and warnings, AI applications are popping up everywhere and now even appear as dedicated systems embedded in the phones we carry around with us.
In his posthumously published Brief Answers to the Big Questions, legendary physicist Stephen Hawking warned, “It will either be the best thing that’s ever happened to us, or it will be the worst thing. If we’re not careful, it very well may be the last thing.” Headwinds like that are difficult to push through, but the technology is slowly winning us over. Ironically, Hawking delivered that warning through a computer-generated voice. BBC News commented on his reaction to a new communications computer in 2014: “He has been an enthusiastic early adopter of all kinds of communications technologies and is looking forward to being able to write much faster with his new system.” The communication systems he used all depended on recently evolved AI-enabled speech and text recognition. And that seems to sum up the ambivalence still rooted in our general notions about AI.