The European Union and the United States have not always agreed on the regulation of digital technologies, but closer cooperation is needed to prevent the proliferation of harmful artificial intelligence and to help shape global AI norms that support democratic values, equity, and human rights. The recent launch of the EU-US Trade and Technology Council, together with the new EU AI regulatory proposal, provide a critical window of opportunity for deeper engagement.
Many assume that the European Union is the world’s technology watchdog, while in contrast the United States is an unruly digital Wild West. Media, policymakers, and the general public have been quick to fit the long-awaited EU regulatory proposal on artificial intelligence (the Artificial Intelligence Act, or AIA) into this bifurcated framing. Journalists have suggested that the AIA may “widen the regulatory gulf” between the EU and the US when it comes to reining in the riskiest AI applications. Researchers have called it “a direct challenge to Silicon Valley’s common view that law should leave emerging technology alone.”
However, this framing of a “gulf” between the EU and US on AI regulations is both overstated and counterproductive. The under-regulated AI industry is hurting Americans and Europeans alike, and AI’s risks, like algorithmic amplification of polarization and extremism, cut across borders. Not only do the allies’ perspectives align on various issues, but they are actively courting further cooperation on common challenges.
In mid-June, US President Joe Biden and European Commission President Ursula von der Leyen launched an EU-US Trade and Technology Council (TTC) at the US-EU Summit in Brussels. The TTC comprises ten working groups addressing issues including standards cooperation for emerging technologies, data governance and technology platforms, and the threat that the misuse of technology poses to human rights. It remains to be seen, however, how much either ally will invest in the Council or how effective the TTC will be at advancing cooperation on critical AI issues.
The release of the AIA, and the more recent launch of the TTC, present critical and time-sensitive opportunities for engagement. Failing to take advantage of this opportunity for transatlantic cooperation on AI would be a mistake with wide-ranging consequences for both AI and the state of democracy.
The EU’s proposed AI regulation differs from previous US federal government efforts by establishing oversight mechanisms to mitigate the risks of AI systems. The AIA views some applications of AI, such as AI-based social scoring, as presenting unacceptable risks that must be banned outright because they pose a clear threat to people’s safety and rights. It considers other applications, like using AI to evaluate eligibility for public services or a job, high risk because of their impact on people’s livelihoods and the potential for bias. High-risk AI systems are subject to significant obligations before they can be placed on the market.
In contrast, a 2020 memo from the White House Office of Management and Budget on Guidance for Regulation of AI highlights a distrust of regulation that defined the Trump Administration’s approach to AI policy. The memo states, “Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.” The memo also suggests that AI’s risks should be considered alongside potential benefits.
However, there has been a shift in the US AI policy environment under the Biden Administration, with louder calls for accountability and regulation. Although Biden has yet to make AI a priority, there is greater recognition of the risks the technology can pose and signals that the administration will take AI policy seriously. Vice President Harris has previously endorsed a bill to establish federal AI policy and has criticized the ways that AI can perpetuate bias. An Executive Order signed on Biden’s first day in office established an Equitable Data Working Group and the appointment of Dr. Alondra Nelson to lead the Office of Science and Technology Policy promises a commitment to pursue equitable AI.
The US already has some protections in place against high-risk AI systems. Real-time biometric surveillance by law enforcement, prohibited in the AIA with some exceptions, has already been banned by numerous cities in the US. A statement of intent issued by the Federal Trade Commission the same week as the AIA release explains that AI products are not outside the scope of its consumer protection laws. Companies will need to adhere to FTC guidelines to ensure AI systems are transparent, explainable, fair, and empirically sound.
In fact, some have asserted that the FTC’s notice has more teeth than the AIA in the near term. For example, the FTC has committed to holding companies accountable for preventing the proliferation of racially biased or unreliable algorithms. Meanwhile, it may take years for individual EU member states to adopt the AIA, lessening the immediate impact on Big Tech compared to what some had expected. Under the AIA, most AI technology will not be subject to any regulation, and while producers of high-risk AI systems face regulatory requirements, it appears that assessments will not be made available to the public. In short, the EU approach may be less of a “burden” than some feared, while the US policy landscape may be less permissive than it first appears.
More important than the US’s and EU’s willingness to establish regulatory frameworks is the significant overlap in what their frameworks intend to accomplish. The US and EU aim not merely to develop AI, but to develop trustworthy AI. Both have adopted the OECD AI Principles, which provide common benchmarks on issues including sustainable development, human rights, democratic values and diversity, and accountability. The US’s and EU’s support of the Principles has helped to establish a shared language for global AI norms and governance.
Cooperation as a Strategic Goal
Greater transatlantic cooperation on AI is a stated goal of both the US and the EU. A European Commission program for a transatlantic agenda from December 2020 first proposed the EU-US Trade and Technology Council. It framed the Council as an opportunity for the allies to work together on critical technologies and to encourage the establishment of digital governance that promotes shared values of human dignity, individual rights, and democratic principles. The agenda described this as “a once-in-a-generation opportunity.”
The US has also highlighted the importance of international cooperation on AI, most recently by accepting the EU’s invitation to launch the TTC. The US has launched the National AI Initiative which intends to support further opportunities for cooperation with strategic allies on research and development, assessment, and resources for trustworthy AI systems. “International Cooperation” is also one of the six strategic pillars outlined on the newly re-launched AI.gov website detailing US AI priorities.
Transatlantic cooperation is widely supported by US industry stakeholders, in part to promote regulatory compatibility. For example, the TTC was endorsed in a blog post by Karan Bhatia, Google’s Vice President of Government Affairs & Public Policy, and in a statement of support from the Information Technology Industry Council. The final report from the National Security Commission on Artificial Intelligence (NSCAI), a multistakeholder group including numerous AI industry leaders, also has a chapter on creating a favorable international technology order. The NSCAI advises the US to establish an International Science and Technology Strategy and argues that “like-minded countries must work together to advance an international rules-based order, protect free and open societies, and unleash economic innovation.”
Given the allies’ many common goals, the AIA should not be seen as a challenge to the US. Instead, the proposal is an important first step and an opportunity to prevent AI uses that violate human safety and fundamental rights. The US and EU can now work together to further clarify and prevent high-risk AI uses, and to establish shared AI standards. While the recently launched TTC provides a valuable platform for this work and will support regulatory policy cooperation and convergence, a handful of working groups only partially focused on AI may struggle to meet these objectives. Additional pathways that deserve consideration include increasing capacity for information sharing and pooling resources for larger-scale research on critical topics.
As governments scrambled to control the spread of COVID-19, many turned to AI technologies for help in understanding the virus, tracking outbreaks, and providing care. In some cases, this justified the implementation of pervasive surveillance systems, which are now being used for troubling ends. As just one example, a facial recognition camera network in Moscow, originally implemented to help enforce quarantine restrictions, was later used to detain dozens of protestors voicing opposition to President Vladimir Putin. AI-enabled surveillance systems have proliferated across the globe, and the scale and scope of “digital authoritarianism” has increased for years, amplified by the use of AI to automate censorship and surveillance systems.
While the United States has worked to develop standards and principles for the use of AI around the world and sought to protect human rights and fundamental freedoms, these actions have failed to stop the misuse of AI. Concrete cooperation with the European Union, which has been lacking, could create a stronger alliance to counter the rising wave of digital authoritarianism. The launch of the TTC shows that President Joe Biden understands this dynamic. He recently said the “transatlantic alliance is back,” and explicitly highlighted the need to shape the rules that will govern the advance of AI, among other consequential technologies.
Importantly, greater transatlantic cooperation on AI is not just in the self-interest of the US and the EU; it can benefit democracies and human rights around the world. The alliance will be even stronger if it looks outward and facilitates international, inclusive dialogues, including with countries from the Global South. Fostering an equitable and responsible digital future requires incorporating critical, yet underrepresented, voices into AI governance discussions and decision-making.
Forgoing greater cooperation on AI between the US and EU would be a shortsighted mistake. There is a narrow window of opportunity to prevent the proliferation of harmful AI and to help shape global AI norms. The time for transatlantic cooperation on AI is now.
Jessica Newman is a Research Fellow at the UC Berkeley Center for Long-Term Cybersecurity, where she leads the AI Security Initiative, a hub for interdisciplinary research on the global security implications of artificial intelligence. She is also an AI Policy Specialist with the Future of Life Institute and a Research Advisor with The Future Society. Jessica was a 2016-17 International and Global Affairs Student Fellow at Harvard’s Belfer Center, and has held research positions with Harvard’s Program on Science, Technology & Society, the Institute for the Future, and the Center for Genetics and Society. Jessica received her master’s degree in public policy from the Harvard Kennedy School and her bachelor’s in anthropology from the University of California, Berkeley with highest distinction honors. She is a member of the OECD Network of Experts on AI (ONE AI), the CNAS Task Force on Artificial Intelligence and National Security, and the Partnership on AI Expert Group on Fair, Transparent, and Accountable AI.