Europe’s pioneering artificial intelligence regulation faces unexpected opposition, and Silicon Valley’s high-flying AI startup OpenAI plunges into chaos.
The AI revolution was never going to be smooth. Optimists like Sam Altman painted a picture of the new technology solving everything from climate change to cancer. Pessimists painted a dystopia beyond anything George Orwell imagined in 1984.
But few expected so much drama so soon, before the technology’s promise and dangers became clear. The upshot could be a transatlantic alignment favoring, at least for now, voluntary codes of conduct over strict, legally binding regulations.
Start with Europe. The European Union was the first out of the gate to regulate AI, proposing a legally binding AI Act. It looked like a technical dossier — until OpenAI’s ChatGPT appeared, shocking legislators into toughening the provisions. Negotiators from European governments and the European Parliament seemed headed toward an agreement as early as December — until the talks blew up.
France, Germany, and Italy fired the warning shot, signaling their opposition to regulating advanced “foundation models” such as the one behind ChatGPT, the large machine learning models that can be adapted to a wide range of tasks. In a two-page non-paper, the three EU heavyweights rejected Parliament’s attempts to impose strict binding rules. The technology is too new and untested, they argued.
“When it comes to foundation models, we oppose instoring un-tested norms and suggest building in the meantime on mandatory self-regulation through codes of conduct,” the Franco-German-Italian non-paper reads. “They could follow principles defined at the G7 level.”
Behind the “bureaucratese,” the statement represents a declaration of war against the Parliament. The reference to the G7 principles is fascinating because it opens up the possibility of a global accord, working with, instead of against, the US. Until now, Europe has insisted on going it alone, paying lip service to international efforts while aiming instead to become the democratic world’s top AI regulator.
Even so, transatlantic AI harmony remains far from assured. When France, Germany, and Italy presented their opposition at a negotiating session, parliamentary representatives reportedly walked out. Spain, which holds the rotating EU presidency and has been shepherding the AI Act toward a conclusion, is struggling to find consensus. “We cannot turn away from foundation models,” warned Carme Artigas, the Spanish Secretary of State for Digitalization and Artificial Intelligence.
Thousands of miles from Brussels, Silicon Valley’s own AI drama raised many of the same questions, again without offering clear answers. After OpenAI’s board fired CEO Sam Altman, offering only a vague explanation, rumors swirled about its motivation.
The consensus among reporters was that the board feared Altman was moving too fast to cash in on the new technology, despite its potential dangers. He had struck deals worth upwards of $10 billion with Microsoft. It was a classic money-versus-ethics conflict.
Perhaps. As we go to press, Microsoft has hired Altman to build a new AI subsidiary. More than 700 of OpenAI’s 770 employees have signed a letter saying they may leave the company if Altman is not reinstated. Microsoft has assured them of jobs, the letter asserted.
Regulators on both sides of the Atlantic will not be able to stop the AI train from accelerating. The money and motivation to build ChatGPT and other AI products like it are available. Tellingly, one of the countries behind Europe’s AI revolt responded to news of Altman’s firing by sending an invitation. “Altman, his team, and their talents are welcome in France,” said French Digital Minister Jean-Noël Barrot.
European and American AI priorities have much in common. Both want to promote the new technology in an “ethical” manner. While concerned about its potential dangers, their priority is to profit from, not blow up, the new AI revolution. In the US, the executive branch is quietly enacting regulations. Europe will end up passing some version of its AI Act.
The OpenAI debacle could derail French, German, and Italian efforts to keep foundation models like ChatGPT out of the AI Act’s scope. Negotiators might say, “Look, even the OpenAI board shares our fears.” But the episode also underlines the daunting challenge of regulating such a novel, untested technology, and it could open the door for Washington and Brussels to work together, not against each other.
Bill Echikson is a non-resident Senior Fellow at CEPA and editor of Bandwidth.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.