SF Technotes

Google’s Vanishing AI Ethics Board

By Michael Castelluccio
April 10, 2019

On March 26, 2019, Google announced the formation of a group called the Advanced Technology External Advisory Council (ATEAC). It “will consider some of Google’s most complex challenges that arise under our AI principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work.” The board lasted one week and two days; Google officially dissolved it in an announcement on Thursday, April 4.

 

THE “DON’T BE EVIL” COMPANY

 

It isn’t that the company decided there was no need for a set of controls and regular reviews from a group of qualified watchdogs. After all, it wasn’t just Elon Musk, Bill Gates, and Stephen Hawking who warned of the inherent dangers of unmonitored AI development. The same concerns were raised inside Google. Since around 2000, shortly after its founding, the company’s motto has been “Don’t be evil.” A slight rephrasing occurred in 2015, when Google was folded into the newly created Alphabet Inc. The new motto, “Do the right thing,” was judged to be more positive and proactive than simply “avoid evil.” The company knew that objective, informed oversight was needed to guide the emerging intelligence of machines.

 

The ATEAC board had eight members with expertise in computational mathematics, natural language processing, and industrial engineering; it also included a leading behavioral economist and a philosopher specializing in digital ethics. The board was to meet four times in 2019, and the shared goal would be the responsible development of AI at Google and its subsidiaries, like the DeepMind research group in London, acquired in 2014.

 

NINE DAYS LATER

 

One of the first to report on the collapse of the council was Vox, which noted that ATEAC hadn’t been set up for success. A fatal flaw in the plan was the makeup of the panel, and those who hit the self-destruct button came from within. Within days of the council’s announced debut, more than 2,500 Google employees had signed a letter demanding the removal of Kay Coles James from ATEAC.

 

James is the president of the Heritage Foundation, a conservative think tank, and, according to the letter, she “is vocally anti-trans, anti-LGBTQ, and anti-immigrant.” The letter goes on, “In selecting James, Google is making clear that its version of ‘ethics’ values proximity to power over wellbeing of trans people, other LGBTQ people, and immigrants. Such a position directly contravenes Google’s stated values. Many have emphasized this publicly, and a professor appointed to ATEAC has already resigned in the wake of the controversy.”

 

The professor was Alessandro Acquisti of Carnegie Mellon University, and he explained in a tweet, “While I’m devoted to research grappling with key issues of fairness, rights & inclusion in AI, I don’t believe this is the right forum for me to engage in this important work.”

 

A second set of conflicts surfaced over an ATEAC member who was a drone company executive. That appointment rekindled the controversy from 2018, when widespread internal protest erupted at Google over a Pentagon drone AI imaging program called Project Maven. Pressure from employees, including dozens of resignations, forced Google to announce in June of that year that it would not renew the contract.

 

Both the James appointment and the drone association ran up against Google’s own code of objectives for AI applications. (The James appointment conflicted with objective two, while the drone work implicated numbers two and three on the list of applications Google says it will not pursue.) Here are the listings in Google’s code:


Google’s Objectives for AI Applications

 

We will assess AI applications in view of the following objectives. We believe that AI should:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

 

AI Applications We Will Not Pursue

 

In addition to the above objectives, we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

 

(See www.blog.google/technology/ai/ai-principles for more details.)


In a public statement, Google announced that ATEAC couldn’t function as the company had wanted and that it was “going back to the drawing board.” Another panel seems unlikely, as Google said it would “find different ways of getting outside opinions.”

 

The problems with ATEAC went beyond the flawed list of candidates. Kelsey Piper of Vox pointed out structural deficiencies that probably would have brought it down anyway or left it hanging on as just another Silicon Valley AI PR project. She wrote, “A role on Google’s AI board was an unpaid, toothless position that cannot possibly, in four meetings over the course of a year, arrive at a clear understanding of everything Google is doing, let alone offer nuanced guidance on it.”

 

THE DRAWING BOARD

 

Two larger questions remain after the smoke and dust settle at Google. The first concerns a timeline for ethical AI governance: As the risks continue to escalate, will leading companies and universities find practical ways to monitor their own research before government agencies step in?

 

The second question was posed by legal scholar Philip Alston at an AI Now symposium in October 2018. Alston looked at the problem of moral machines from a legal perspective that included human rights. “[Human rights are] in the Constitution,” he explained to the attendees. “They’re in the Bill of Rights; they’ve been interpreted by courts. If an AI system takes away people’s basic rights, then it should not be acceptable. Until we start bringing [human rights] into the AI discussion, there’s no hard anchor.”

 

That raises the question: Will the next public discussion begin on the human side of the equation, or would that just introduce an unmanageable level of complexity, forcing companies to resort to seminars and TED Talks on algorithms and machine learning?

 



Michael Castelluccio has been the Technology Editor for Strategic Finance for 24 years. His SF TECHNOTES blog is in its 21st year. You can contact Mike at mcastelluccio@imanet.org.



4 comments
    J Rice April 20, 2019 AT 2:13 pm

    Wow, another article from an ignorant writer. Being conservative does not mean that you are anti-immigrant. We are against illegal immigrants and welcome anyone who follows our laws to enter legally. As a writer, you are in violation of Google’s number 2 objective of having an unfair bias. Report the truth and not your agenda.

    Jerry L Goudy April 12, 2019 AT 2:02 pm

    “DON’T BE EVIL”

    Nick Strait April 12, 2019 AT 1:47 pm

    Banning a conservative from their Board only reinforces the suspicion that Google’s AI will not be objectively intelligent, but certainly politically correct. The same PC culture that excuses criminal behavior based on race (J. Smollett) and vilifies people of good will who have traditional viewpoints. Nice job Google. You are so smart and subtle some might just call you stupid. For me, you aren’t stupid; just arrogantly evil.

    Damian Westermann April 12, 2019 AT 12:31 pm

    Of course they couldn’t tolerate a conservative viewpoint, especially from an African-American woman.