How can artificial intelligence be regulated?

Is it possible to regulate AI?

with Félicien Vallet, Head of the AI department at the Commission Nationale de l'Informatique et des Libertés (CNIL) (French Data Protection Authority)
On September 20th, 2023 | 4 min reading time
Key takeaways
  • The increasing use of AI in many areas raises the question of how it should be managed.
  • At present, there is no specific regulation of AI in Europe, despite the fact that it is a constantly evolving field.
  • The European Parliament has voted in favour of the AI Act, a regulatory text on artificial intelligence.
  • The use and rapid development of AI is a cause for concern and raises major issues of security, transparency and automation.
  • To address these issues, the French Data Protection Authority (CNIL) has set up a multidisciplinary department dedicated to AI.

The generative Artificial Intelligence (AI) industry is booming. According to Bloomberg, it is expected to reach $1.3 trillion by 2032. But this exponential growth is causing concern worldwide and raises questions about the security and regulation of this market. Faced with growing mistrust, Microsoft, Google, OpenAI and the start-up Anthropic – four American AI giants – are joining forces to regulate themselves. Europe is considering regulations, and the British Prime Minister, Rishi Sunak, has announced that the first global summit dedicated to artificial intelligence will be held in the UK by the end of the year.

Faced with the increasingly prominent role of AI systems in our daily lives, the CNIL has taken the unprecedented step of setting up a department specifically dedicated to this field. Under the aegis of Félicien Vallet, the regulatory authority is seeking to apply its principles to the major issues of security, transparency and automation.

Why did the CNIL feel the need to set up a new department devoted exclusively to artificial intelligence?

The CNIL, a regulatory authority since 1978, is responsible for data protection. Since 2018, our point of reference in this area has been the GDPR. Lately, we have been asked to deal with issues relating to the processing of personal data that is increasingly based on AI, regardless of the sector of activity. At the CNIL, we tend to be organised on a sectoral basis, with departments dedicated to health or government affairs, for example. The CNIL has observed that AI is being used more and more in the fight against tax fraud (e.g. automated detection of swimming pools from satellite images), in security (e.g. augmented video-surveillance systems that analyse human behaviour), in healthcare (e.g. diagnostic assistance), and in education (e.g. learning analytics aimed at personalising learning paths). As a regulator of personal data processing, the CNIL pays particular attention to the uses of AI that are likely to have an impact on citizens. The creation of a multidisciplinary department dedicated to AI reflects the cross-disciplinary nature of the issues involved in this field.

What is your definition of artificial intelligence? Is it restricted to the generative artificial intelligence that we hear so much about at the moment?

We don’t have a definition in the strict sense. The definition we propose on our website refers to a logical and automated process, generally based on an algorithm, with the aim of carrying out well-defined tasks. According to the European Parliament, it is a tool used by machines to “reproduce behaviours associated with humans, such as reasoning, planning and creativity”. Generative artificial intelligence is one part of existing artificial intelligence systems, although it too raises questions about the use of personal data.

What is the CNIL’s approach to regulating AI?

The CNIL has a risk-based approach. This logic is at the heart of the AI Act, which classifies AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. So-called unacceptable AI systems cannot be deployed on European soil at all: they are prohibited outright. High-risk systems, often found in sectors such as healthcare or government affairs, are particularly sensitive, as they can have a significant impact on individuals and often process personal data. Special precautions must be taken before they are implemented. Limited-risk systems, such as generative AI, require greater transparency for users. Minimal-risk systems are not subject to any specific obligations.
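The four-tier classification described above can be summarised as a simple lookup table. This is only an illustrative sketch: the tier names follow the interview, while the example systems and the wording of the obligations are assumptions for illustration, not the legal text of the AI Act.

```python
# Toy sketch of the AI Act's four risk tiers as described in the interview.
# Example systems and obligation wording are illustrative assumptions.
AI_ACT_TIERS = {
    "unacceptable": {
        "obligation": "prohibited on European soil",
        "examples": ["social scoring"],  # assumed illustration
    },
    "high": {
        "obligation": "special precautions before deployment",
        "examples": ["diagnostic assistance", "government services"],
    },
    "limited": {
        "obligation": "transparency towards users",
        "examples": ["generative AI chatbots"],
    },
    "minimal": {
        "obligation": "no specific obligations",
        "examples": ["spam filters"],  # assumed illustration
    },
}

def obligation_for(tier: str) -> str:
    """Return the obligation attached to a given risk tier."""
    return AI_ACT_TIERS[tier]["obligation"]
```

The point of the table is that obligations attach to the tier, not to the individual system: once a system is classified, its regulatory treatment follows mechanically.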

What are the major issues surrounding these AI systems?

The main issues are transparency, automation, and security. Transparency is crucial to ensure that people are informed about the processing of their data by AI systems, and to enable them to exercise their rights. These systems can use huge amounts of data, sometimes without the knowledge of the individuals concerned.

Automation also raises questions, even when a human operator is involved in the process to make final decisions. Cognitive biases, such as the tendency to place excessive trust in machines, can influence decision-making. It is essential to be vigilant regarding the operator’s control methods and the way in which the operator is actually integrated into the decision-making loop.

The security of AI systems is another major concern. Like any IT system, they can be the target of cyber-attacks, in particular access hijacking or data theft. In addition, they can be maliciously exploited, for example to run phishing campaigns or spread disinformation on a large scale.

Is there already a method for implementing these regulations in the future?

Our action plan is structured around four points. The first is to understand AI technology, a field that is constantly evolving, as each day brings new innovations and scientific breakthroughs.

The second is to steer the use of AI. The GDPR is our reference, but this text is technologically neutral: it does not specifically prescribe how personal data should be handled in the context of AI. We therefore need to adapt the general principles of the GDPR to the different technologies and uses of AI to provide effective guidelines for professionals.

The third point is to develop interaction and cooperation with our European counterparts, the Défenseur des droits (Defender of Rights), the Autorité de la concurrence (French competition authority) and research institutes to address issues relating to discrimination, competition and innovation, with the aim of bringing together as many players as possible around these issues.

Finally, we need to put in place controls, both before and after the implementation of AI systems. We therefore need to develop methodologies for carrying out these checks, whether through checklists, self-assessment guides or other innovative tools.

Are there any other projects of this type ?

At present, there are no regulations specific to AI, whether in France, Europe or elsewhere. The draft European regulation will be a first in this area. However, some general regulations, such as the GDPR in Europe, apply indirectly to AI. Certain sector-specific regulations, such as those relating to product safety, may also apply to products incorporating AI, such as medical devices.

Will the differences in regulations between Europe and the United States be even more marked when it comes to AI ?

Historically, Europe has been more proactive in regulating digital technologies, as demonstrated by the adoption of the GDPR. However, even in the US, the idea of regulating AI has been gaining ground. For example, the CEO of OpenAI told the US Congress that AI regulation would be beneficial. It should be noted, however, that what US technology executives see as adequate regulation may not be exactly what Europe envisages. It is with the aim of anticipating the AI Act and securing the support of the field’s major international industry players that European Commissioners Margrethe Vestager (competition) and Thierry Breton (internal market) have proposed an AI Code of Conduct and an AI Pact respectively.

Interview by Jean Zeid
