U.S. and EU Announce Plans to Develop AI Standards

The two will have to navigate thorny differences of opinion on issues like data governance, however

Three silhouetted heads, one patterned with the EU flag, one with the U.S. flag, and one in black and white, connected by arrows and lines.
Getty Images/IEEE Spectrum

In late January, civil servants in the United States and European Union promised that the two would join forces and support development of AI models in five socially critical areas, including health care and the climate.

However, their agreement has yet to translate into concrete action. “In my opinion, it’s a statement of intent,” says Nicolas Moës, a Brussels-based AI policy researcher at the Future Society think tank. “We do not have, yet, a lot of understanding of how that is going to be executed.”

But if the agreement’s models do materialize, the developers’ actions could set a precedent for handling data in a world where the two sides of the Atlantic paint very different regulatory pictures.


The agreement is a product of the EU-US Trade and Technology Council (TTC), a body of civil servants created at a diplomatic summit in July 2021. AI isn’t the TTC’s only concern; it handles a wide variety of trade-related issues, including security, international standards, data governance, and supply chains.

But right at its inception, the TTC stated its intention to establish AI standards. Since then, its AI interest has only grown: At its most recent meeting, in December 2022, the council agreed on several AI items, including a promise to develop international AI governance standards and a joint study on AI’s ripples in the workforce. The TTC also promised to “explore collaboration” on more scientific AI work.

The details of that collaboration came a month later. Officials announced that U.S. and EU researchers would develop “joint models” in five designated fields: extreme weather and climate forecasting; emergency-response management; health and medicine improvements; electric grid optimization; and agriculture optimization.

This list is a departure from earlier joint projects, which tended to focus narrowly on data privacy.

Priya Donti, the executive director of Climate Change AI, a nonprofit that supports climate-related machine-learning research, believes the agreement is a good omen for her organization’s work. “The larger the extent to which we can share knowledge and best practices and data and all of that kind of thing, the more quickly we as a society will make progress,” Donti says.

Furthermore, the agreement eschews AI’s traditional haunts, such as text generation and image recognition, in favor of putting AI to work in socially relevant domains. “This collaboration emphasizes the priority that the directions of AI should be shaped by some of the more pressing societal problems we’re facing…and also chooses application areas that have a huge bearing on climate action,” Donti says.

It’s not clear who would develop the joint models or who would use them. One possibility is that they become public goods that U.S. or EU users could take and adapt for their own needs.

What is very clear, however, is that joint models don’t mean joint data. “The U.S. data stays in the U.S. and European data stays there, but we can build a model that talks to the European and the U.S. data,” a U.S. official told Reuters.

Officials didn’t clarify what exactly that means, but it could look like this: Imagine building a model that forecasts power-grid loads with a training set of real-world electrical consumption data (which could include personal information, if the data are household level). EU researchers might train a model with European data, then send the model across the sea for U.S. researchers to further train or fine-tune it with U.S. data.
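As a purely illustrative sketch (the model, the synthetic datasets, and the file name below are invented for this example, not anything described by the TTC), that sequential hand-off might look like the following in PyTorch: an EU team trains a small grid-load model on data that stays in Europe, saves only the weights, and a U.S. team loads those weights and continues training on its own data.

```python
# Illustrative sketch only: a hypothetical load-forecasting model trained
# sequentially on EU-held data, then fine-tuned on U.S.-held data.
# The datasets, shapes, and model are assumptions, not part of the agreement.
import torch
from torch import nn


def make_synthetic_data(n_samples: int, seed: int) -> tuple[torch.Tensor, torch.Tensor]:
    """Stand-in for household-level consumption records that never leave
    their home jurisdiction."""
    gen = torch.Generator().manual_seed(seed)
    features = torch.randn(n_samples, 8, generator=gen)  # e.g. weather, time of day
    load = features @ torch.randn(8, 1, generator=gen)   # synthetic grid load
    return features, load


def train(model: nn.Module, features: torch.Tensor, target: torch.Tensor,
          epochs: int = 50, lr: float = 1e-2) -> None:
    """Simple full-batch training loop."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(features), target)
        loss.backward()
        optimizer.step()


model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))

# Step 1: EU researchers train on data held in the EU.
eu_x, eu_y = make_synthetic_data(1_000, seed=0)
train(model, eu_x, eu_y)
torch.save(model.state_dict(), "grid_model_eu.pt")  # only the weights cross the Atlantic

# Step 2: U.S. researchers load the weights and fine-tune on U.S.-held data.
model.load_state_dict(torch.load("grid_model_eu.pt"))
us_x, us_y = make_synthetic_data(1_000, seed=1)
train(model, us_x, us_y, epochs=20, lr=1e-3)
```

The point of the sketch is that only model parameters move between jurisdictions; the raw records stay where they were collected.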

Alternatively, researchers might set up some form of data-exchange system that allows models to access data from abroad. A climate-forecasting model, for instance, might be able to query data from European weather satellites even if researchers trained it in the United States.
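Again only as a sketch, with an invented endpoint and response schema (nothing here is a real service), that data-exchange pattern could look like a thin client that a U.S.-hosted model calls at training or inference time, receiving only the agreed fields rather than a raw data export.

```python
# Illustrative sketch only: a U.S.-hosted forecasting pipeline querying a
# hypothetical EU data-exchange API instead of ingesting raw data exports.
# The URL, parameters, and response schema are all invented for this example.
import requests

EU_EXCHANGE_URL = "https://example.eu-data-exchange.invalid/v1/satellite/observations"


def fetch_eu_observations(region: str, start: str, end: str) -> list[dict]:
    """Query the (hypothetical) exchange; only the agreed fields come back,
    so the underlying EU records never leave the exchange."""
    response = requests.get(
        EU_EXCHANGE_URL,
        params={"region": region, "start": start, "end": end},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["observations"]


# A U.S.-trained climate model could then consume these observations on demand:
# observations = fetch_eu_observations("scandinavia", "2023-01-01", "2023-01-31")
```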

Whatever the case, researchers building the models may be in for a rocky learning experience. “I expect this to be one of the politicized issues coming up,” says Moës.

But if researchers succeed, then whatever steps they take could leave a lasting impact. Standards for health care data—how to share them across borders and, importantly, between disjointed regulatory landscapes—already exist.

If AI’s stewards are able to make the most of the TTC’s agreement, they could establish similar data-sharing and interoperability standards in some of the agreement’s other areas. Thanks to the United States’ and the EU’s economic might, other countries might pay attention.

“If the EU and the U.S. come together and they say, ‘this is exactly how we’re going to be sharing climate data’…that really will push, probably, other countries to say, ‘well, if we’re going to be collecting climate data, why don’t we do it in the same format as well?’ ” says Daniel Castro, vice president at the ITIF think tank in Washington, D.C.

What is clear is that the European Union and the United States, even if their approaches to regulating AI are wildly divergent, have a common reason to work together on AI. “They see China as a common threat,” says Castro.
