Today, the regulatory situation around AI is increasingly tenuous. Despite calls for regulation from many sectors, there are no federal laws specifically governing AI or the many AI applications appearing in virtually every field of human endeavor.
Over the last year, the federal government has been working on a draft AI Bill of Rights, and on Tuesday, October 4, 2022, the result was released by the White House Office of Science and Technology Policy (OSTP) under the title Blueprint for an AI Bill of Rights.
The declared purpose of the 73-page report, according to OSTP, is to “help guide the design, development, and deployment of artificial intelligence (AI) and other automated systems so that they protect the rights of the American public. More than a set of principles, this is the blueprint to empower people, companies, and policymakers…to hold big technology accountable, protect the civil rights of Americans, and ensure technology is working for the American people.” Unlike its similarly named predecessor, ratified in 1791, the rights outlined in this AI-focused version carry no force of law and prescribe no consequences for those unconvinced of the value of these rights to the common good.
In their coverage of this release, several commentators have recalled Isaac Asimov’s Three Laws of Robotics, but Asimov’s laws guaranteed human rights by hard-wiring and programming in controls like: “A robot may not injure a human being or, through inaction, allow a human being to come to harm” (that’s the First Law). That certainly sounds authoritative, and it has the backbone of fixed law, but we need to remember that Asimov’s robots were fictional and, therefore, much easier to rein in. Today’s AI agents and robotic systems increasingly design their own circuits and programs, using deep-learning techniques within black boxes beyond the horizon of our poor sightlines.
The OSTP wasn’t working alone on this project. It heard from hundreds of people over the course of the year: large companies like Microsoft, small AI start-ups like Arthur (arize.com), human rights groups, and members of the public all added their voices to the proposal. Alondra Nelson, OSTP deputy director for science and society, explained, “We too understand that principles aren’t sufficient. This is really just a down payment. It’s just the beginning and the start.”
The blueprint outlines five overlapping, common-sense protections to which everyone in America should be entitled:
- “Safe and effective systems: You should be protected from unsafe and ineffective systems.
- Algorithmic discrimination protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
- Data privacy: You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
- Notice and explanation: You should know when an automated system is being used and understand how and why it contributes to outcomes that impact you.
- Human alternatives, consideration, and fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.”
The report’s detailed explanations present numerous examples of harms that have already emerged from AI, deep learning, and Big Data, along with suggestions for limiting them.
The next obvious step is to engage Congress, the body responsible for drawing up legislation, to address the problems of deepfakes, biased algorithms, inappropriate information mining, the indiscriminate collection and use of facial recognition data, new technologies with inadvertently dangerous consequences, and more. While waiting for the lawmakers, the White House is continuing its efforts with promises of future actions to control harmful AI. The Department of Health and Human Services has announced it will release a plan for reducing discrimination in algorithms that affect access to care by the end of the year, and the Department of Education plans to offer recommendations on the use of AI for teaching and learning by early 2023.