What are the issues that need to be considered when regulating artificial intelligence?
There are a number of issues to consider. First, there is the fear of an existential risk, fuelled by stories of an AI that could become autonomous and destroy humanity. However, I don’t see this as a real possibility – at least not with the models being developed right now. Then there’s the fear of an oligopoly, with heavy dependence on a handful of companies. There is also the risk of human rights violations. And here, a new factor comes into play: for a long time, the most terrible actions were reserved for the governments and armies of the most powerful states. Now it’s commonplace, mass-market technology. The same AI that can detect cancer could also be used to prevent part of the population from entering airports. We have rarely seen such multi-purpose technologies. AI also makes malicious acts possible, such as deepfakes, attacks on our systems, or the manipulation of opinion. Not to mention the imbalances it will create in terms of intellectual property, protection of the public domain, protection of privacy, and upheavals in the workplace. All these challenges, coupled with the desire to genuinely benefit from AI’s boundless promise, have prompted intense global reflection on a framework for shared governance.
What do you think is the greatest risk?
The greatest risk, in my opinion, is that of a monopoly. It’s a question of democracy. One of the great dangers is that the global economy, society and the media will become ultra-dependent on a small oligopoly that we are unable to regulate. In line with this analysis, I’m trying to push for the protection of the digital commons and of open source. We need to make sure that there are models that are free to use so that everyone can benefit from them. There are already high-quality open-source models. So, the question is: are we going to put enough public money into training them for the benefit of everyone? In my opinion, that’s where the real battle lies: ensuring that there are resources for the public good and that it remains possible to innovate without asking for permission.
Is there particular international attention being paid to certain risks associated with AI?
Among the aspects that could lead to international coordination are the indirect effects of AI. These technologies could disrupt the way in which we have constructed the protection of privacy. Today, the general principle is to protect personal data in order to protect individuals. With AI, using predictive models, we can learn a great deal, if not everything, about a person. A predictive model can take a person’s age, where they live and where they work, and produce a very good estimate of their risk of cancer or of their likelihood of liking a particular film. Another issue being debated at an international level is that of intellectual property. It is now possible to ask an AI to produce paintings in the style of Keith Haring and to sell them, which poses a problem for the artist’s estate and rights holders.
Is there a real awareness of the need for international regulation?
Ten years ago, there was a tacit agreement not to regulate social networks. Now these companies are worth $1,000bn and it’s very difficult to change their trajectory. Most developed countries are telling themselves that they won’t make the same mistake again and that they mustn’t miss the boat this time. But that presupposes knowing what to do. There is a new awareness that regulation must be anchored in an international framework. There are so few borders in the digital world that a company can set up in one country and still operate in another. Clearly, there is intense international activity around the major issues mentioned above: existential risk, the question of economic sovereignty and malicious actors.
Is the international level the only relevant one for regulating artificial intelligence?
No. The “regional” level (i.e. a coherent group of countries) is also very important. Europe has learned the hard way that regulating digital uses at the national level alone is not enough to bring the giants to heel. When we establish a European framework, they negotiate. But that creates other tensions, and we don’t want to encourage an international order based on the extra-territorial application of national decisions. So, the idea that the international level is the right scale for thinking about the digital world has taken hold, and it’s not really questioned any more.
Legally, we have the right to prohibit the development of certain AI systems. But we are afraid that other powers would continue to develop them, and that we would become weak and obsolete. It’s a question of being both pro-innovation and pro-security for citizens, and that’s why everyone would like decisions to be collective. These technologies are changing very fast and creating a great deal of power, so we don’t want unilateral disarmament.
What progress has been made on the development and implementation of this framework?
The ethical framework is being discussed. Discussions are taking place in dozens of forums: within business, in civil society, among researchers, at the UN, the G7, the OECD and as part of the French initiative for a Global Partnership on Artificial Intelligence. It is also a diplomatic effort carried out through the embassies, with debates at the Internet Governance Forum and at RightsCon, the annual summit on human rights. Little by little, ideas are taking root or becoming established. We are still in the process of identifying the concepts on which an agreement will be based. An initial consensus is emerging around certain principles: no use should be made that is contrary to human rights; the technology must be in the interest of its users; it must be demonstrated that precautions have been taken to avoid bias in the training of the models; and there must be enough transparency for experts to audit the models. Then it will be time to look for treaties.
There are also debates about a democratic framework. How can we ensure that these companies are not manipulating us? Do we have the right to know what data an AI has been trained on? Security in the face of existential risk was much discussed in the UK at last year’s world summit. Now the conversation is turning to the future of work and intellectual property, for example. In 2025, France will be hosting a major international summit on AI, which will help move these issues forward.
Is thinking about AI moving as fast as the technology itself?
Many people think not. Personally, I think that a good legal text is fundamental. The Declaration of Human Rights is still valid, even though technologies have changed. The 1978 Data Protection Act has become the GDPR, but the principle of user consent for data to be circulated has not aged a day. If we can find robust principles, we can produce texts that will stand the test of time. I think we could regulate AI with the GDPR, the rules on the responsibility of the media and content publishers, and two or three other texts that already exist. It’s not a given that we need an entirely new framework.