Designers seem to be tripping over one another in the rush to add AI to any product with digital parts. Not so with government agencies.
According to a Stanford University report, almost half of the federal agencies it studied have experimented with AI and related machine learning tools, with hardly a hint of the public exuberance of the manufacturing class. When governments add something like facial recognition to their systems, people are more likely to respond with “Why?” than “Wow!”
In the first week of October 2020, a joint announcement from two major European capitals promised a surprising first in open government. Their press release at the Next Generation Internet Summit was titled “Helsinki and Amsterdam first cities in the world to launch open AI register.” They were opening a window on the AI systems used by their cities.
The mayor of Helsinki, Finland, Jan Vapaavuori, said, “With the help of artificial intelligence, we can give people in the city better services available anywhere and at any time. In the front rank with the city of Amsterdam (Netherlands), we are proud to tell everyone openly what we use AI for.” Touria Meliani, deputy mayor of Amsterdam, added, “Algorithms play an increasingly important role in our lives. Together with the city of Helsinki, we are on a mission to create as much understanding about algorithms as possible and be transparent about the way we—as cities—use them. Today, we take another important step with the launch of these algorithm registers.”
The registers in both cities give residents an overview of the AI applications the cities use; both also include information about the data each app uses, its algorithms and overall logic, and the governance in place for it. At this point, the registers cover only a handful of apps. In Amsterdam, one covers services like the automated parking control system, and another is used for tracking illegal vacation rentals. In Helsinki, the public library system and four municipal chatbots are in its register.
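The structure the article describes for a register entry—data used, algorithm logic, governance—can be sketched as a simple record. The field names and values below are illustrative assumptions for one of Amsterdam's listed applications, not Saidot's actual schema:

```python
# Hypothetical sketch of a single algorithm-register entry. Field names are
# illustrative, not the actual Saidot/Amsterdam schema.
entry = {
    "name": "Automated parking control",
    "city": "Amsterdam",
    "datasets": ["scanned license plates", "parking-rights records"],
    "logic": "License plates are matched against parking rights",
    "governance": "A human inspector reviews flagged cases",  # assumed oversight step
}

def summarize(e: dict) -> str:
    """Render a short public-facing summary line for a register entry."""
    return f"{e['name']} ({e['city']}): {e['logic']}"

print(summarize(entry))
```

A register built on records like this could publish the same summary line for every application, which is essentially the transparency the two cities are promising.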
As part of the process that will add more applications in the fall, Amsterdam is asking for suggestions from the public to improve the system. Aik van Eemeren, the head of public technology for Amsterdam, has promised the register will ultimately include all the city’s algorithms. The register websites are the Helsinki Artificial Intelligence Register and the Amsterdam Algorithm Register.
The company that developed the registers for both cities is Saidot, a Finnish firm whose technology platform helps companies and governments develop and deploy AI that is trustworthy, transparent, and explainable.
The ethical principles on which Saidot bases its platform are straightforward. “Algorithms used in public services must adhere to the same rules and principles as all other public services provided by the municipality,” the company writes. “That means they must treat people equally, not limit their freedom, be transparent and open to democratic control and be at the service of the people of Amsterdam [and Helsinki].”
The effort by the two cities and Saidot is a beginning, but algorithmic transparency alone won’t solve the problem of open AI. Justin Reich, the executive director at the MIT Teaching Systems Lab, offered Pew researchers a wider view. He wrote, “The advancing impact of algorithms in our society will require new forms and models of oversight. Some of these will need to involve expanded ethics training in computer science training programs to help new programmers better understand the consequences of their decisions in a diverse and pluralistic society.” And for already completed code, Reich explained, “We also need new forms of code review and oversight that respect company trade secrets but don’t allow corporations to invoke secrecy as a rationale for avoiding all forms of public oversight.”
Programmers and the algorithms and AI code they produce are on the human side of this equation, but a different set of agents is also present: artificially intelligent machines that can learn in human-supervised settings and on their own in unsupervised ones. In February 2020, the Administrative Conference of the United States asked a team from Stanford and New York University to examine the legal dimensions of AI use in government programs.
One of their more discouraging conclusions was, “AI poses deep accountability challenges. Many of the more advanced AI tools are not, by their structure, fully explainable. A crucial question will be how to subject such tools to meaningful accountability and thus ensure their fidelity to legal norms of transparency, reason-giving, and non-discrimination.” For this, the academic team advised, “To achieve meaningful accountability, concrete and technically-informed thinking within and across contexts—not facile calls for prohibition, nor blind faith in innovation—is urgently needed.”
And that’s the “black-box” catch-22 at the heart of the call for complete transparency in AI apps and systems. Computer scientist and mathematician Stephen Wolfram expressed it succinctly for the U.S. Senate committee that asked him to testify in 2019. He told the lawmakers, “If we want to seriously use the power of computation—and AI—then inevitably there won’t be a ‘human-explainable’ story about what’s happening inside.”
The Amsterdam/Helsinki initiative is an encouraging start at the level of municipal accountability, but unsupervised intelligence that cannot be explained sits on a different existential level.