ai robot sentenced in court in orange jumpsuit, chained and seated in a prison setting. The background includes guards, highlighting themes of artificial intelligence, control, and dystopian scenarios
π Digital π Society
How can artificial intelligence be regulated?

Artificial general intelligence : how will it be regulated ?

with Jean Langlois-Berthelot, Doctor of Applied Mathematics and Head of Division in the French Army and Christophe Gaie, Head of the Engineering and Digital Innovation Division at the Prime Minister's Office
On October 2nd, 2024 |
5 min reading time
Jean LANGLOIS-BERTHELOT
Jean Langlois-Berthelot
Doctor of Applied Mathematics and Head of Division in the French Army
Christophe Gaie
Christophe Gaie
Head of the Engineering and Digital Innovation Division at the Prime Minister's Office
Key takeaways
  • Current artificial intelligence (AI) excels at specific tasks but remains different from artificial general intelligence (AGI), which aims for intelligence comparable to that of humans.
  • Current AI models, while sophisticated, are not autonomous and have significant limitations that differentiate them from AGI.
  • Fears about AGI are growing; some experts are concerned that it could supplant humanity, while others consider this prospect to still be a long way off.
  • Rational regulation of AGI requires an informed analysis of the issues at stake and a balance between preventing risks and promoting benefits.
  • Proposals for effective regulation of AGI include national licences, rigorous safety tests and enhanced international cooperation.

Arti­fi­cial Intel­li­gence (AI) is cur­rent­ly boo­ming and trans­for­ming many aspects of our dai­ly lives. It opti­mises the ope­ra­tion of search engines and makes it pos­sible to ana­lyse que­ries more effec­ti­ve­ly in order to pro­pose the most rele­vant results1. It improves sur­veillance sys­tems, which now use it to detect sus­pi­cious beha­viour2. It offers inva­luable assis­tance in the heal­th­care sec­tor for ana­ly­sing medi­cal images, deve­lo­ping new drugs and per­so­na­li­sing treat­ments3. Howe­ver, there is a fun­da­men­tal dis­tinc­tion bet­ween the AI we know today, often refer­red to as “clas­si­cal AI”, and a more ambi­tious concept : Arti­fi­cial Gene­ral Intel­li­gence (AGI).

Clas­si­cal AI is desi­gned to excel at spe­ci­fic tasks and can out­per­form the best experts or spe­cia­li­sed algo­rithms. AGI, on the other hand, aspires to an intel­li­gence com­pa­rable to that of a human being. It aims to unders­tand the world in all its com­plexi­ty, to learn auto­no­mous­ly and to adapt to new situa­tions. In other words, AGI would be capable of sol­ving a wide varie­ty of pro­blems, rea­so­ning, crea­ting and being self-aware4.

Growing alarmism about AI

War­nings about the rise of gene­ral-pur­pose AI are mul­ti­plying, poin­ting to a bleak future for our civi­li­sa­tion. Seve­ral lea­ding figures in the world of tech­no­lo­gy have war­ned of the harm­ful effects of this tech­no­lo­gy. Ste­phen Haw­king has expres­sed fears that AI could sup­plant humans, lea­ding to a new era in which machines could domi­nate5. Eminent Ame­ri­can pro­fes­sors, such as Stuart Rus­sell, Pro­fes­sor at the Uni­ver­si­ty of Cali­for­nia, Ber­ke­ley, have also high­ligh­ted the shift towards a uni­verse where AI will play a role that is unk­nown at this stage, with new risks to be taken into account and anti­ci­pa­ted6. Fur­ther­more, Jerome Glenn of the Mil­len­nium Pro­ject has sta­ted7 that “gover­ning AGI could be the most com­plex mana­ge­ment pro­blem huma­ni­ty has ever faced” and that “the sligh­test mis­take could wipe us off the face of the Earth.” These asser­tions sug­gest an extre­me­ly pes­si­mis­tic, even catas­tro­phic, out­look on the deve­lop­ment of the AGI.

Is AGI really imminent ?

A fun­da­men­tal cri­ti­cism of the immi­nence of AGI is based on the “pro­blem of the com­plexi­ty of values”, a key concept addres­sed by Nick Bos­trom in Super­in­tel­li­gence : Paths, Dan­gers, Stra­te­gies8. The evo­lu­tio­na­ry pro­cess of human life and civi­li­sa­tion spans bil­lions of years, with the deve­lop­ment of nume­rous com­plex sys­tems of fee­lings, but also of controls and values, thanks to the many and varied inter­ac­tions with an envi­ron­ment that is phy­si­cal, bio­lo­gi­cal, and social. From this pers­pec­tive, it is hypo­the­si­sed that an auto­no­mous and high­ly sophis­ti­ca­ted AGI can­not be achie­ved in just a few decades.

The Aus­tra­lian Rod­ney Brooks, one of the icons and pio­neers of robo­tics and theo­ries of “embo­died cog­ni­tion”, main­tains that what will deter­mine whe­ther an intel­li­gence is tru­ly auto­no­mous and sophis­ti­ca­ted is its inte­gra­tion within a body and conti­nuous inter­ac­tion with a com­plex envi­ron­ment over a suf­fi­cient­ly long per­iod9. These ele­ments rein­force the the­sis that AGI, as des­cri­bed in the alar­mist sce­na­rios, is still a long way from beco­ming a reality.

In what way is current AI not yet general AI ?

Recent years have seen the rise of large lan­guage models (LLMs) such as ChatGPT, Gemi­ni, Copi­lot and so on. These have demons­tra­ted an impres­sive abi­li­ty to assi­mi­late many impli­cit human values, based on mas­sive ana­ly­sis of writ­ten docu­ments. Because of its archi­tec­ture and the way it works, ChatGPT has a num­ber of limi­ta­tions10. It does not sup­port logi­cal rea­so­ning, its res­ponses are some­times unre­liable, its know­ledge base is not adap­ted in real-time, and it is sus­cep­tible to “prompt injec­tion” attacks. Although these models have sophis­ti­ca­ted value sys­tems, they do not appear to be auto­no­mous. In fact, they do not seem to aim for auto­no­my or self-pre­ser­va­tion within an envi­ron­ment that is both com­plex and variable. In this res­pect, it is impor­tant to remem­ber that a very impor­tant part of com­mu­ni­ca­tion is lin­ked to into­na­tion and body lan­guage11, ele­ments that are not at all consi­de­red in inter­ac­tions with gene­ra­tive AIs.

A simple remin­der of this (pro­found) dis­tinc­tion seems cru­cial to bet­ter unders­tand the extent to which concerns over mali­cious super­in­tel­li­gence are unfoun­ded and exces­sive. Today, LLMs can only be consi­de­red as par­rots pro­vi­ding pro­ba­bi­lis­tic ans­wers (“sto­chas­tic par­rots” accor­ding to Emi­ly Ben­der12). Of course, they represent a break with the past, and it appears neces­sa­ry to regu­late their use now.

What are the arguments for an omnibenevolent superintelligence ?

It seems to us that future intel­li­gence can­not be “arti­fi­cial” in the strict sense of the word, i.e. desi­gned from scratch. But it would be high­ly col­la­bo­ra­tive, emer­ging from the know­ledge (and even wis­dom) accu­mu­la­ted by human­kind. It is rea­lis­tic to consi­der that cur­rent AIs, as such, are lar­ge­ly tools and embo­di­ments of col­lec­tive thought pat­terns, ten­ding towards bene­vo­lence rather than control or domi­na­tion. This col­lec­tive intel­li­gence is nothing less than a deep memo­ry that is nou­ri­shed by civi­li­sed values such as hel­ping those in need, res­pect for the envi­ron­ment and res­pect for others. We the­re­fore need to pro­tect this intan­gible heri­tage and ensure that it is aimed at pro­vi­ding sup­port and help to human beings rather than trans­mit­ting mis­in­for­ma­tion or inci­ting them to com­mit repre­hen­sible acts. At the risk of being Mani­chean, LLMs can be used for good13, but they can also be used for evil14.

What evidence is there to refute the scenarios of domination and control by AGI ?

From a logi­cal point of view, alar­mist sce­na­rios in which mali­cious actors would be led, in the short term, to pro­gramme mani­fest­ly harm­ful objec­tives into the heart of AI appear a prio­ri to be exag­ge­ra­ted. The argu­ment of the com­plexi­ty of values sug­gests that these nega­tive values would be poor­ly inte­gra­ted into the mass of posi­tive values lear­ned. Fur­ther­more, it seems like­ly that well-inten­tio­ned pro­gram­mers (white hats) will create AIs that can coun­ter the des­truc­tive stra­te­gies of mali­cious AIs (black hats). This could lead, quite natu­ral­ly, to a clas­sic “arms race”. Ano­ther coun­ter-argu­ment to a mali­cious takeo­ver of AIs is their eco­no­mic poten­tial. At present, AI for the gene­ral public is being dri­ven by major players in the eco­no­mic sec­tor (Ope­nAI, Google, Micro­soft, etc.), at least some of whom have a pro­fit ratio­nale. This requires user confi­dence in the use of the AI made avai­lable, but also the pre­ser­va­tion of the data and algo­rithms that make up AI as an intan­gible asset at the heart of eco­no­mic acti­vi­ty. The resources requi­red for pro­tec­tion and cyber-defence will the­re­fore be considerable.

Proposals for better governance of AI

Ini­tia­tives have alrea­dy been taken to regu­late spe­cia­li­sed AI. Howe­ver, the regu­la­tion of arti­fi­cial gene­ral intel­li­gence will require spe­ci­fic mea­sures. One such ini­tia­tive is the AI Act cur­rent­ly being draf­ted by the Euro­pean Union15. The authors make the fol­lo­wing addi­tio­nal proposals :

  • The intro­duc­tion of a sys­tem of natio­nal licences to ensure that any new AGI com­plies with the neces­sa­ry safe­ty standards,
  • Sys­tems for veri­fying the safe­ty of AI in control­led envi­ron­ments before they are autho­ri­sed and deployed,
  • The deve­lop­ment of more advan­ced inter­na­tio­nal coope­ra­tion, which could lead to UN Gene­ral Assem­bly reso­lu­tions and the esta­blish­ment of conven­tions on AI.

Ratio­nal regu­la­tion of AI requires an infor­med ana­ly­sis of the issues at stake and a balance bet­ween pre­ven­ting risks and pro­mo­ting bene­fits. Inter­na­tio­nal ins­ti­tu­tions and tech­ni­cal experts will play an impor­tant role in coor­di­na­ting the efforts requi­red for the safe and ethi­cal deve­lop­ment of AI. Good gover­nance and effec­tive regu­la­tion of the AGI will require a dis­pas­sio­nate approach.

1Vijaya, P., Raju, G. & Ray, S.K. Arti­fi­cial neu­ral net­work-based mer­ging score for Meta search engine. J. Cent. South Univ. 23, 2604–2615 (2016). https://doi.org/10.1007/s11771-016‑3322‑7
2Li, Jh. Cyber secu­ri­ty meets arti­fi­cial intel­li­gence : a sur­vey. Fron­tiers Inf Tech­nol Elec­tro­nic Eng 19, 1462–1474 (2018). https://​doi​.org/​1​0​.​1​6​3​1​/​F​I​T​E​E​.​1​8​00573
3Jiang F, Jiang Y, Zhi H, et al Arti­fi­cial intel­li­gence in heal­th­care : past, present and future Stroke and Vas­cu­lar Neu­ro­lo­gy 2017;2 : https://doi.org/10.1136/svn-2017–000101
4Ng, Gee Wah, and Wang Chi Leung. “Strong Arti­fi­cial Intel­li­gence and Conscious­ness.” Jour­nal of Arti­fi­cial Intel­li­gence and Conscious­ness 07, no. 01 (March 1, 2020): 63–72. https://​doi​.org/​1​0​.​1​1​4​2​/​s​2​7​0​5​0​7​8​5​2​0​3​00042
5Khar­pal, Arjun. “Ste­phen Haw­king says A.I. could be ‘worst event in the his­to­ry of our civi­li­za­tion.’” CNBC, Novem­ber 6, 2017. https://​www​.cnbc​.com/​2​0​1​7​/​1​1​/​0​6​/​s​t​e​p​h​e​n​-​h​a​w​k​i​n​g​-​a​i​-​c​o​u​l​d​-​b​e​-​w​o​r​s​t​-​e​v​e​n​t​-​i​n​-​c​i​v​i​l​i​z​a​t​i​o​n​.html
6Chia Jes­si­ca, Cian­cio­lo Betha­ny, “Opi­nion : We’ve rea­ched a tur­ning point with AI, expert says” Sep­tem­ber 5, 2023, https://​edi​tion​.cnn​.com/​2​0​2​3​/​0​5​/​3​1​/​o​p​i​n​i​o​n​s​/​a​r​t​i​f​i​c​i​a​l​-​i​n​t​e​l​l​i​g​e​n​c​e​-​s​t​u​a​r​t​-​r​u​s​s​e​l​l​/​i​n​d​e​x​.html
7Jerome C. Glenn, Februa­ry 2023, “Arti­fi­cial Gene­ral Intel­li­gence Issues and Oppor­tu­ni­ties”, The Mil­le­nium Pro­ject, Fore­sight for the 2nd Stra­te­gic Plan of Hori­zon Europe (2025–27) https://​www​.mil​len​nium​-pro​ject​.org/​w​p​-​c​o​n​t​e​n​t​/​u​p​l​o​a​d​s​/​2​0​2​3​/​0​5​/​E​C​-​A​G​I​-​p​a​p​e​r.pdf
8Nick Bos­trom. 2014. Super­in­tel­li­gence : Paths, Dan­gers, Stra­te­gies, 1st edi­tion. Oxford Uni­ver­si­ty Press, Inc., USA.
9Brooks, R. A. (1991). Intel­li­gence Without Repre­sen­ta­tion. Arti­fi­cial Intel­li­gence, 47(1–3), 139–159.
10
11Quinn, Jayme, and Jayme Quinn. “How Much of Com­mu­ni­ca­tion Is Non­ver­bal ? | UT Per­mian Basin Online.” The Uni­ver­si­ty of Texas Per­mian Basin | UTPB, May 15, 2023. https://​online​.utpb​.edu/​a​b​o​u​t​-​u​s​/​a​r​t​i​c​l​e​s​/​c​o​m​m​u​n​i​c​a​t​i​o​n​/​h​o​w​-​m​u​c​h​-​o​f​-​c​o​m​m​u​n​i​c​a​t​i​o​n​-​i​s​-​n​o​n​v​e​rbal/
12Emi­ly M. Ben­der, Tim­nit Gebru, Ange­li­na McMil­lan-Major, and Shmar­ga­ret Shmit­chell. 2021. On the Dan­gers of Sto­chas­tic Par­rots : Can Lan­guage Models Be Too Big ? In Pro­cee­dings of the 2021 ACM Confe­rence on Fair­ness, Accoun­ta­bi­li­ty, and Trans­pa­ren­cy (FAccT ‘21). Asso­cia­tion for Com­pu­ting Machi­ne­ry, New York, NY, USA, 610–623. https://​doi​.org/​1​0​.​1​1​4​5​/​3​4​4​2​1​8​8​.​3​4​45922
13Javaid, Mohd, Abid Haleem, and Ravi Pra­tap Singh. “ChatGPT for heal­th­care ser­vices : An emer­ging stage for an inno­va­tive pers­pec­tive.” Ben­ch­Coun­cil Tran­sac­tions on Bench­marks, Stan­dards and Eva­lua­tions 3, no. 1 (2023): 100105. https://​doi​.org/​1​0​.​1​0​1​6​/​j​.​t​b​e​n​c​h​.​2​0​2​3​.​1​00105
14Loh­mann, S. (2024). ChatGPT, Arti­fi­cial Intel­li­gence, and the Ter­ro­rist Tool­box. An Ame­ri­can Pers­pec­tive, 23. https://​media​.defense​.gov/​2​0​2​4​/​A​p​r​/​1​8​/​2​0​0​3​4​4​4​2​2​8​/​-​1​/​-​1​/​0​/​2​0​2​4​0​5​0​6​_​S​i​m​-​H​a​r​t​u​n​i​a​n​-​M​i​l​a​s​_​E​m​e​r​g​i​n​g​T​e​c​h​_​F​i​n​a​l​.​P​D​F​#​p​a​ge=41
15“Laying Down Har­mo­ni­sed Rules On Arti­fi­cial Intel­li­gence (Arti­fi­cial Intel­li­gence Act) And Amen­ding Cer­tain Union Legis­la­tive Acts,” https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206

Support accurate information rooted in the scientific method.

Donate