How can artificial intelligence be regulated?

Are we moving towards global regulation of AI?

Henri Verdier, Ambassador for the Digital Sector and founding member of the Cap Digital competitiveness cluster
On May 14th, 2024 | 5 min reading time
Key takeaways
  • AI technology shows enormous promise, but its use carries a number of pitfalls, including deep fakes, human rights violations and the manipulation of public opinion.
  • This constantly evolving, multi-purpose technology is prompting intense global reflection on a framework for shared governance.
  • New AI technologies increasingly threaten users’ privacy and intellectual property, strengthening the case for shared governance.
  • Europe fears that regulating AI on its own will leave it weakened and overtaken by other powers.
  • In 2025, France will host a major international summit on AI, which will help move these issues forward.
  • Although the technology is evolving rapidly, AI can be regulated for the long term on the basis of fundamental, robust principles.

What are the issues that need to be considered when regulating artificial intelligence?

There are a number of issues to consider. First, there is the fear of an existential risk, fuelled by stories of an AI that could become autonomous and destroy humanity. However, I don’t see this as a real possibility – at least not with the models being developed right now. Then there’s the fear of an oligopoly, with heavy dependence on a handful of companies. There is also the risk of human rights violations. And here, a new factor comes into play: for a long time, the most terrible actions were reserved for the states and armies of the most powerful countries. Now it’s commonplace, mass-market technology. The same AI that can detect cancer could also be used to prevent part of the population from entering airports. We have rarely seen such multi-purpose technologies. AI also makes malicious acts possible, such as deep fakes, attacks on our systems, or the manipulation of opinion. Not to mention the imbalances it will create in terms of intellectual property, protection of the public domain, protection of privacy, and upheavals in the workplace. All these challenges, coupled with the desire to really benefit from AI’s boundless promise, have prompted intense global reflection on a framework for shared governance.

What do you think is the greatest risk?

The greatest risk, in my opinion, is that of a monopoly. It’s a question of democracy. One of the great dangers is that the global economy, society and the media will become ultra-dependent on a small oligopoly that we are unable to regulate. In line with this analysis, I’m trying to push for the protection of the digital commons and open source. We need to make sure that there are models that are free to use, so that everyone can benefit from them. There are already high-quality open-source models. So the question is: are we going to put enough public money into training them for the benefit of everyone? In my opinion, that’s where the real battle lies: ensuring that there are resources for the public good, and that it remains possible to innovate without asking for permission.

Is there particular international attention being paid to certain risks associated with AI?

Among the aspects that could lead to international coordination are the indirect effects of AI. These technologies could disrupt the way in which we have constructed the protection of privacy. Today, the general principle is to protect personal data in order to protect individuals. With AI, using predictive models, we can learn a great deal, if not everything, about a person. A predictive model can take a person’s age, where they live and where they work, and produce a very good estimate of their risk of cancer or their likelihood of liking a particular film. Another issue being debated at the international level is intellectual property. It is now possible to ask an AI to produce paintings in the style of Keith Haring and to sell them, which poses a problem for the artist’s estate.
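To make that privacy point concrete, here is a minimal, hypothetical sketch of the kind of inference described above: a model trained on a few everyday attributes (age, area, occupation) that outputs a probability about something sensitive. Everything here is an illustrative assumption – the features, the synthetic data and the learned relationship are made up, and scikit-learn’s LogisticRegression stands in for whatever predictive model a real system might use.

```python
# Toy illustration only: synthetic data, fabricated relationship.
# No real-world correlation between these attributes is implied.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training set: [age, area_code, occupation_code] -> sensitive label.
X = rng.integers(low=[18, 0, 0], high=[90, 100, 10], size=(1_000, 3))
# Fabricated target that loosely depends on age, purely for demonstration.
y = (X[:, 0] + rng.normal(0, 10, 1_000) > 60).astype(int)

model = LogisticRegression(max_iter=1_000).fit(X, y)

# From three everyday attributes, the model returns a probability estimate
# about the person -- the inference, not the raw data, is what's sensitive.
person = np.array([[52, 75, 3]])  # age 52, area code 75, occupation code 3
print(model.predict_proba(person)[0, 1])
```

The point of the sketch is that none of the three inputs is sensitive on its own, yet the model’s output can be: protecting the raw data is no longer the same thing as protecting the individual.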

Is there a real awareness of the need for international regulation?

Ten years ago, there was a tacit agreement not to regulate social networks. Now these companies are worth $1,000bn and it’s very difficult to change their trajectory. Most developed countries are telling themselves that they won’t make the same mistake again and that they mustn’t miss the boat this time. But that presupposes knowing what to do. There is a new awareness that regulation must be anchored in an international framework. There are so few borders in the digital world that you can set up in one country and still have a presence in another. Clearly, there is intense international activity around the major issues mentioned above: existential risk, economic sovereignty and malicious actors.

Is the international level the only relevant one for regulating artificial intelligence?

No. The “regional” level (i.e. a coherent group of countries) is also very important. Europe has learned the hard way that regulating digital uses at the national level alone is not enough to bring the giants to heel. When we establish a European framework, they negotiate. But that creates other tensions, and we don’t want to encourage an international order based on the extraterritorial application of decisions. So the idea that the international level is the right scale for thinking about the digital world has taken hold, and it’s no longer really questioned.

Under the law, we have the right to prohibit the development of certain AI. But we are afraid that other powers will continue to develop it, and that we will become weak and obsolete. It’s a question of being both pro-innovation and pro-security for citizens, and that’s why everyone would like decisions to be collective. These technologies are changing very fast and creating a lot of power, so nobody wants unilateral disarmament.

What progress has been made on the development and implementation of this framework? 

The ethical framework is being discussed. Discussions are taking place in dozens of forums: within business and civil society, among researchers, at the UN, the G7 and the OECD, and as part of the French initiative for a Global Partnership on Artificial Intelligence. It’s also a diplomatic effort coming out of the embassies, with debates at the Internet Governance Forum and at RightsCon, the annual summit on human rights. Little by little, ideas are taking root or becoming established. We are still in the process of identifying the concepts on which an agreement will be based. An initial consensus is emerging around certain principles: no use should be made that is contrary to human rights; the technology must be in the interest of its users; it must be proven that precautions have been taken to ensure that there is no bias in the training of the models; and there must be enough transparency for experts to audit the models. Then it will be time to look for treaties.

There are also debates about a democratic framework. How can we ensure that these companies are not manipulating us? Do we have the right to know what data the AI has been trained with? The notions of security in the face of existential risk were much discussed in the UK at last year’s world summit. Now the conversation is turning to the future of work and intellectual property, for example. In 2025, France will be hosting a major international summit on AI, which will help move these issues forward.

Is reflection regarding AI moving as fast as the technology?

Many people think not. Personally, I think that a good text is fundamental. The Declaration of Human Rights is still valid, but technologies have changed. The 1978 Data Protection Act has become the GDPR, but the principle of user consent for data to be circulated has not aged a day. If we can find robust principles, we can produce texts that will stand the test of time. I think we could regulate AI with the GDPR, the responsibility of the media and content publishers, and two or three other texts that already exist. It’s not a given that we need an entirely new framework.

Interview by Sirine Azououai
