Generally Accepted Algorithmic Principles
Internal and external control over advanced algorithms propelling AI is becoming increasingly important to prevent business and market disruptions.
Following the 1929 stock market crash that decimated the wealth of many Americans, there was significant pressure to enact legislation that would prevent a recurrence. In part, the U.S. government believed that unethical accounting practices by publicly traded companies helped contribute to a loss of public confidence in the markets. During the 1930s, the federal government partnered with professional accounting groups to establish standards and practices for consistent and accurate financial reporting, which evolved into Generally Accepted Accounting Principles (GAAP), beginning with the Securities Act of 1933.
At its core, GAAP was designed to ensure consistent financial statement presentation—the income statement, balance sheet, and statement of cash flows. These consistent statements purportedly made it easier for investors or potential investors to understand and compare statements, helping restore higher levels of confidence. Since the 1930s, GAAP has evolved to keep pace with change but remains focused on ensuring the following:
- Financial statement disclosures are accurate.
- Confusion among financial statement users is minimized.
- Financial statements are transparent and uniform.
- GAAP is used as a guideline for companies.
One can draw comparisons between the financial reporting discretion of the pre-1930s era and the discretion that exists today over how the algorithms fueling AI and machine learning (ML) are designed, controlled, and monitored.
AI USES AND ABUSES
An algorithm is a list of step-by-step instructions written to solve a problem or perform a task. These algorithms increasingly play a central role in our society, spanning the private, public, and not-for-profit sectors. A key issue is that algorithms aren’t by themselves neutral. They’re subject to bias (whether with malicious or benign intent), depending on how and by whom they were designed.
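To make the definition concrete (the article itself includes no code), here is a minimal sketch of an algorithm as a list of step-by-step instructions for one task, in this case computing an average:

```python
def average(values):
    """A simple algorithm: explicit step-by-step instructions.

    Steps: (1) sum the inputs, (2) count them, (3) divide.
    """
    total = 0
    count = 0
    for v in values:        # steps 1 and 2: accumulate sum and count
        total += v
        count += 1
    if count == 0:          # guard against empty input
        return 0.0
    return total / count    # step 3: divide sum by count

print(average([2, 4, 6]))   # prints 4.0
```

Even a routine this small embodies design choices (here, returning 0.0 for empty input); in larger systems, such choices are where bias and intent can enter.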
If you have concerns that unregulated or unchecked algorithms could lead to multifaceted abuses, you aren’t alone. In the modern economy, AI sits at the heart of decision making across a wide swath of sectors and applications: self-driving cars, autonomous weapon systems, consumer marketing platforms, and healthcare programs, to name a few. Suffice it to say, the consequences of rogue design in the advanced algorithms propelling AI are potentially massive—and could involve life-and-death situations.
Algorithms are constructed by an individual or a team leveraging data to complete a set of activities. This means that an algorithm could be written (including the leveraging of data) to achieve a predetermined outcome. Thus, extreme due diligence should be required in the design and launch of any AI technology. Removing bias from AI may be one objective, but there are a variety of possibilities—some good and some bad—to keep in mind. Consider the following:
- Cybercriminals leverage AI to attack organizations.
- Organizations leverage AI to mine personal data to develop financial opportunities.
- A real estate platform wrote down $304 million due to an algorithmic home-buying disaster.
- Deepfakes can impersonate an individual for malicious purposes.
- An AI-enabled recruitment tool preferred men to women by mistake.
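The recruitment example above illustrates how bias can enter through data rather than malicious intent. The following hypothetical sketch (the scenario, feature names, and scoring scheme are illustrative assumptions, not taken from any real system) shows how naively deriving weights from skewed historical hiring data lets an irrelevant proxy feature drive scores:

```python
# Hypothetical historical data: (relevant_skill, irrelevant_proxy, hired).
# The proxy feature happens to correlate with past hires, not with skill.
historical_hires = [
    (0.9, 1, True), (0.8, 1, True), (0.7, 1, True),
    (0.9, 0, False), (0.8, 0, False),
]

def naive_weight(feature_index):
    """Set a feature's weight to its average value among past hires."""
    hires = [row for row in historical_hires if row[2]]
    return sum(row[feature_index] for row in hires) / len(hires)

skill_w = naive_weight(0)   # weight on the job-relevant feature
proxy_w = naive_weight(1)   # weight on the irrelevant proxy feature

def score(skill, proxy):
    """Score a candidate with the naively derived weights."""
    return skill_w * skill + proxy_w * proxy

# Two equally skilled candidates receive different scores purely
# because of the proxy feature: the bias the article warns about.
print(score(0.8, 1) > score(0.8, 0))  # prints True
```

No one coded "prefer this group"; the preference was inherited from the data, which is why due diligence must cover the data as well as the code.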
This list could go on and on; the rest is left to the imagination. But there are ways to mitigate some of this risk.
Similar to the misleading financial reporting behaviors that led to the creation of GAAP, simply expecting business algorithms to be designed ethically isn’t sufficient. While not perfect, the evolution of GAAP has had a positive impact on the financial markets, improving reporting consistency, transparency, and usefulness. Given the continuing expansion of, and dependency on, AI and ML to help drive the global economy, it’s critical that any potential abuses embedded into algorithmic code design (including the use of data) be minimized, if not outright eliminated. Leaving each company to its own control devices may not be prudent and may ultimately lead to an uneven playing field among commercial competitors.
What if we consider putting the “algorithm” into GAAP—Generally Accepted Algorithmic Principles? What exactly would this look like? Who would provide oversight? Table 1 details some basic principles that would have a positive impact across the entire AI/ML algorithm landscape. Additionally, having public companies include some basic AI/ML model disclosures within their financial statements would add credible governance over the use of advanced algorithms in business operations.
These principles aren’t intended to be all inclusive, but rather an illustration of what business, control, and risk leaders may want to consider as they increasingly rely on advanced algorithms to drive business effectiveness and efficiency. Absent control and oversight, the potential misuse of AI may lead to significant business and market disruption that’s hard to foresee today.
Despite the challenges, advanced algorithms can deliver many societal benefits, including reducing costs, reducing wait times, and improving customer experience. But without effective regulations, governance, and oversight, algorithms embedded into AI and ML may be abused to derive unethical or illegal outcomes. These outcomes could worsen over time if left unchecked, especially as the algorithms become more complex. This needs to be avoided at all costs.
The cumulative and compounding impact of AI absent regulations, disclosures, or control could lead to significant business and/or financial market disruptions akin to those of the 1930s. This is why AI governance is critical. In simple terms, AI governance is a framework that enables an organization to direct, manage, and monitor its AI activities. Despite its best intentions, internal control over AI may not be enough. Having AI disclosures as part of financial statement filings may be a step in the right direction. These disclosures would mandate insight into a company’s practices related to AI development, dependence, and oversight.