The Who, Where, and How of Regulating AI

Jurisdictions joust over toxic chatbots; policymakers ponder existential risks


Illustration: Dan Page

During the past year, perhaps the only thing that has advanced as quickly as artificial intelligence is worry about artificial intelligence.

In the near term, many fear that chatbots such as OpenAI’s ChatGPT will flood the world with toxic language and disinformation, that automated decision-making systems will discriminate against certain groups, and that the lack of transparency in many AI systems will keep problems hidden. There’s also the looming concern of job displacement as AI systems prove themselves capable of matching or surpassing human performance. And in the long term, some prominent AI researchers fear that the creation of AI systems that are more intelligent than humans could pose an existential risk to our species.

The technology’s rapid advancement has brought new urgency to efforts around the world to regulate AI systems. The European Union got started first, and this week, on 14 June, took a step forward when one of its institutions, the European Parliament, voted to advance the draft legislation known as the AI Act. But China’s rule-makers have been moving the quickest to turn proposals into real rules, and countries including Brazil, Canada, and the United States are following suit.

The E.U. and the U.K. offer a study in contrasts. The former is regulation-forward; the latter is laissez-faire.

Remarkably, some of the calls for regulations are coming from the very companies that are developing the technology and have the most to gain from unbridled commercial deployment. OpenAI’s CEO, Sam Altman, recently told the U.S. Congress in written testimony that “OpenAI believes that regulation of AI is essential.” He further urged lawmakers to consider licensing requirements and safety tests for large AI models. Meanwhile, Sundar Pichai, CEO of Google and its parent company, Alphabet, said recently that there will need to be “global frameworks” governing the use of AI.

But not everyone thinks new rules are needed. The nonprofit Center for Data Innovation has endorsed the hands-off approach taken by the United Kingdom and India; those countries intend to use existing regulations to address the potential problems of AI. Hodan Omaar, a senior policy analyst at the nonprofit, tells IEEE Spectrum that the European Union will soon feel the chilling effects of new regulations. “By making it difficult for European digital entrepreneurs to set up new AI businesses and grow them, the E.U. is also making it harder to create jobs, technological progress, and wealth,” she says.

What does the E.U.’s AI Act do?

The course of events in Europe could certainly help governments around the world learn by example. In April 2021 the E.U.’s European Commission proposed the AI Act, which uses a tiered structure based on risks. AI applications that pose an “unacceptable risk” would be banned; high-risk applications in such fields as finance, the justice system, and medicine would be subject to strict oversight. Limited-risk applications such as the use of chatbots would require disclosures.

On Wednesday, 14 June, as noted above, the European Parliament passed its draft of this law—an important step, but only a step, in the process. Parliament and another E.U. institution, the Council of the European Union, have been proposing amendments to the Act since its 2021 inception. Three-way negotiations over the amendments will begin in July, with hopes of reaching an agreement on a final text by the end of 2023. If the legislation follows a typical timeline, the law will take effect two years later.

Connor Dunlop, the European public policy lead at the nonprofit Ada Lovelace Institute, says that one of the most contentious amendments is the European Parliament’s proposed ban on biometric surveillance, which would include the facial-recognition systems currently used by law enforcement.


Another hot topic is a parliamentary amendment that attempts to cover recent advances in “foundation models,” which are massive and flexible AI systems that can be adapted for a wide range of applications. “The AI Act is designed as product legislation,” Dunlop explains. “The risk is defined by the intended purpose.” But that framework focuses on the companies or organizations that deploy the technology, and leaves the developers of foundation models off the hook. “What the European Parliament is trying to do is add an extra layer” to the regulation, Dunlop says. “For things like data transparency, only the platform developer can make the system compliant—it’s very hard for the downstream deployer to do so.”

How is the rest of the world regulating AI?

Europe isn’t the only active policy arena. In the West, there’s a mistaken belief that China isn’t concerned with AI governance, says Jeffrey Ding, an assistant professor of political science at George Washington University and creator of the ChinAI newsletter. In fact, he says, Chinese regulations have already been put in force, starting with rules for recommendation algorithms that went into effect in March 2022, requiring transparency from the service providers and a way for citizens to opt out.

Next, in January 2023, the Chinese government issued early rules governing generative AI, and further draft rules were proposed in April 2023. Ding says the rules relating to recommender algorithms and generative AI stem from a common concern: “The Chinese government is very concerned about public-facing algorithms that have the potential to shape societal views,” he says.

China’s initial set of rules for generative AI required websites to label AI-generated content, banned the production of fake news, and required companies to register their algorithms and disclose information about training data and performance. Some of the language is broad enough to give the government considerable leeway in enforcement, such as the requirement that AI providers “dispel rumors” created by AI-generated content.


The draft rules go even further, requiring that AI companies verify the veracity of all the data used to train their models. “That’s an impossible endeavor,” Ding says. “If that rule actually gets implemented and enforced, it would impose really heavy costs on these AI providers.” He notes that recent suggested revisions to these draft rules would narrow the scope of regulations to public-facing products, leaving business-to-business applications alone. “Companies may respond by paywalling their models, so only businesses get access to them, so they don’t draw scrutiny from government regulators,” he says.

Is the United States regulating AI?

In the United States, home to many of the companies and labs that are putting forth cutting-edge AI models, lawmakers have gotten off to a slow start. Last year a national law was proposed, but it went nowhere. Then, in October 2022, the White House issued a nonbinding Blueprint for an AI Bill of Rights, which framed AI governance as a civil rights issue, stating that citizens should be protected from algorithmic discrimination, privacy intrusion, and other harms.

Suresh Venkatasubramanian, a computer science professor at Brown University, coauthored the Blueprint while serving in the White House Office of Science and Technology Policy. He says the Blueprint suggests a civil rights approach in hopes of creating flexible rules that could keep up with fast-changing technologies. “By focusing on civil rights, we can articulate protections that are agnostic to the technology being used, whether it’s an Excel spreadsheet or a neural network,” he says.

Venkatasubramanian says that while separate federal agencies are establishing regulations for AI within their domains, there is “broad consensus that we should do something” on the legislative level. And indeed, U.S. Senate Majority Leader Chuck Schumer announced in April that he was circulating the draft of a “high level framework” for AI regulations.

Any AI company doing business around the world will find it a challenge to comply with all the local rules unless countries reach global agreements. The intergovernmental forum known as the Group of Seven (G7) has already begun discussing AI governance, though no one expects the organization to move quickly. In the meantime, European officials have suggested that companies worldwide could sign on to a voluntary “AI code of conduct,” and say they’ll put forth a draft within a few weeks. There is no time to waste, said European Commission executive vice president Margrethe Vestager at a recent meeting: “We’re talking about technology that develops by the month.”

This article appears in the September 2023 print issue as “Europe and China Solidify AI Regulation.”
