What Will AI Regulation Look Like? Unpacking Proposed Restrictions

The future of artificial intelligence (AI) is promising, but the shape of AI regulation remains uncertain. We’ve seen tech leaders call for tighter controls, but federal agencies haven’t taken any concrete steps toward reining in AI tools. The Brookings Institution recently issued a set of proposals for government regulation of AI. In this article, we look at the impact this proposed regulation could have on AI and on organizations adopting an AI strategy.

AI Regulation May Require a New Law for Algorithms

The Brookings Institution’s proposal suggests the creation of a new law (or “regulatory instrument”) to be called the Critical Algorithmic Systems Classification, or CASC.

CASC’s ultimate goal is to regulate the use of algorithms so that they don’t interfere with Americans’ civil rights or consumer rights. To that end, CASC would authorize extensive audits of “algorithmic decision-making systems,” or ADS.

These audits would look closely at how algorithms are being used and would try to determine whether their use is in line with consumer and civil rights. As is usual in proposed AI regulation, the law would also seek to increase “transparency” around the use and impact of AI. We’ve written before about how transparency is one of the three major challenges organizations need to resolve for their AI strategy.
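The proposal doesn’t prescribe any particular record format, but to make “transparency” concrete, here is a minimal sketch in Python, with hypothetical field names of our own invention, of the kind of decision record an organization might keep so that an algorithmic decision can be audited after the fact:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable record per automated decision (hypothetical schema)."""
    system_name: str    # which ADS produced the decision
    model_version: str  # exact version, so an auditor can reproduce the result
    inputs: dict        # the features the algorithm actually saw
    decision: str       # the outcome, e.g. "approved" or "rejected"
    rationale: str      # human-readable explanation of the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: recording a single automated credit-screening decision.
record = DecisionRecord(
    system_name="credit-screening",
    model_version="2.3.1",
    inputs={"income": 54000, "debt_ratio": 0.31},
    decision="approved",
    rationale="debt_ratio below the 0.35 threshold",
)
print(record)
```

The point isn’t the schema itself but the habit: capturing enough context at decision time that an auditor could later reconstruct what the system did and why.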

This proposal may be broader than organizations might hope for, but it gives professionals active in the AI space a framework for understanding how regulators may approach AI technology.

Concerns with the CASC Approach

The CASC approach hopes to provide regulation that is “future-proof” since it gives agencies free rein to keep adapting to the evolution of AI technology, but it doesn’t provide the clear direction some organizations may want (or need) to pursue their AI strategy.

The Brookings Institution notes that the CASC plan doesn’t constrain agencies to existing regulations. Instead, agencies have the power to analyze the situation and craft new rules within their purview. For example, the Food and Drug Administration (FDA) may have different concerns and regulations than the Environmental Protection Agency (EPA).

This lack of definition across agencies and industries could be a concern for businesses, since it doesn’t provide clear guidance but rather sets the stage for an evolving regulatory environment. It also implies that every ADS is fair game for regulation, rather than just high-risk, high-profile ADS applications.

On the other hand, the Brookings Institution notes this process would likely be hamstrung by its own bureaucracy. There’s a multi-year process involved in creating new regulations. On top of that, whenever the administration changes, there’s the possibility of further delay in passing regulations.


It’s possible this approach will be tweaked to make the process faster or to create a bright-line test for deciding when an ADS falls under a certain agency’s scope.

It’s also possible decision makers could build in a few universal requirements, such as requiring algorithm developers to disclose how their systems are used and to bake in transparency.

“Algorithms” in Place of “AI”

Notice the proposal talks about “algorithms” instead of “AI.” There’s no mention of generative AI (or “GenAI”). Instead, the Brookings Institution refers to what it calls “algorithmic decision-making systems,” or ADS. That’s a big category.

The use of “ADS” instead of “AI” means that if your business is using algorithms to speed up accounting processes, you may fall under the CASC mandate. Likewise, if you’re using an algorithm to make decisions about hiring and firing employees, or about compensation packages, you could be impacted by this proposed regulation.

CASC has a broad focus and can seemingly be applied to any organization using algorithms — Facebook, ChatGPT, or your Human Resources department.
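For a sense of how low that bar could sit, consider the sketch below: a few lines of rule-based Python, with no machine learning at all, that would still plausibly count as an ADS under a use-based definition, because it makes a consequential decision about people. The thresholds and fields are invented for illustration.

```python
# A deliberately simple resume screen: fixed rules, no machine learning.
# Under a use-based definition of an "algorithmic decision-making system,"
# even this could fall within scope, because it decides who advances.

def screen_applicant(years_experience: int, has_degree: bool) -> bool:
    """Return True if the applicant advances to a human reviewer."""
    return years_experience >= 3 or has_degree

applicants = [
    {"name": "A. Rivera", "years_experience": 5, "has_degree": False},
    {"name": "B. Chen", "years_experience": 1, "has_degree": False},
]

for a in applicants:
    advanced = screen_applicant(a["years_experience"], a["has_degree"])
    print(f'{a["name"]}: {"advance" if advanced else "reject"}')
```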

Ultimately, that means businesses would have to start changing their strategies if this regulatory framework were adopted. It could also impact the kinds of products AI developers put on the market.

What Would AI Regulation Look Like?

The Brookings Institution is a think tank, so its proposal isn’t a bill being voted on, at least not right now. Its goal is to “future-proof” AI regulation by leaving the rules broad enough to remain relevant as technology changes or improves.

Under this proposed vision, regulators would have the power to assess algorithms routinely. Audits could happen regularly, as a way of checking whether consumers were being mis-served by AI.

If an audit did turn up evidence of bias, or of infringement on civil rights, there would likely be consequences. But we won’t know what those consequences would look like until an actual bill is proposed in Congress.


Areas for AI Regulation

The Brookings Institution points out that algorithmic decision-making systems, or ADS, are already operating in many key parts of society. Schools, employers, lenders, housing providers, and medical services are all using algorithms in one way or another.

From the Brookings Institution’s point of view, leaving these areas unregulated creates risk. With so many institutions using ADS, there’s real potential for discrimination and bias — something political leaders have expressed as their main concern for AI regulation. There’s also no existing federal agency overseeing the use of AI or the use of ADS.

Most federal agencies are not equipped to regulate the algorithms in their areas properly. Agencies simply don’t have the technical expertise to audit the use of algorithms — much less the authority to set limits.

That’s where CASC would come in. Instead of creating one, centralized agency to regulate AI, CASC would empower all federal agencies to regulate the use of algorithms in their scope.

What’s New About This AI Regulation?

By giving power to federal agencies, CASC would likely keep the regulatory focus on the application of the technology, rather than on the technology itself.

CASC would let federal agencies audit the way ADS is being used in their own spheres of expertise. The Brookings Institution doesn’t spell it out, but the implication is, for example, that the Department of Education would audit the use of algorithms in schools, while the Federal Housing Administration would audit the impact of algorithms on lending practices.

CASC would give federal agencies the power to collect data on how ADS is being developed and deployed. The agencies would also be empowered to audit ADS and review its impact.

In order to begin collecting data and auditing, a federal agency would need to demonstrate that a certain ADS was powerful enough to be worth regulating. The agency would need to show the ADS had a real impact and risk of harm, and to prove the ADS fell within the scope of its mandate.

The Takeaway

The CASC approach to AI regulation is novel in a few ways. It focuses on how AI is used rather than on the technology itself, and its focus on ADS rather than on AI specifically would potentially expose more organizations to regulation. The approach would provide a legal framework for how governmental entities decide AI regulation, but that framework may not offer enough certainty for organizations waiting on settled law before investing in the technology.
