ai robot sentenced in court in orange jumpsuit, chained and seated in a prison setting. The background includes guards, highlighting themes of artificial intelligence, control, and dystopian scenarios
π Digital π Society
How can artificial intelligence be regulated?

Artificial general intelligence: how will it be regulated?

Jean Langlois-Berthelot, Ph.D. in Applied Mathematics and Head of Division in the French Army and Christophe Gaie, Head of the Engineering and Digital Innovation Division at the Prime Minister's Office
On October 2nd, 2024 |
5 min reading time
Jean LANGLOIS-BERTHELOT
Jean Langlois-Berthelot
Ph.D. in Applied Mathematics and Head of Division in the French Army
Christophe Gaie
Christophe Gaie
Head of the Engineering and Digital Innovation Division at the Prime Minister's Office
Key takeaways
  • Current artificial intelligence (AI) excels at specific tasks but remains different from artificial general intelligence (AGI), which aims for intelligence comparable to that of humans.
  • Current AI models, while sophisticated, are not autonomous and have significant limitations that differentiate them from AGI.
  • Fears about AGI are growing; some experts are concerned that it could supplant humanity, while others consider this prospect to still be a long way off.
  • Rational regulation of AGI requires an informed analysis of the issues at stake and a balance between preventing risks and promoting benefits.
  • Proposals for effective regulation of AGI include national licences, rigorous safety tests and enhanced international cooperation.

Arti­fi­cial Intel­li­gence (AI) is cur­rent­ly boom­ing and trans­form­ing many aspects of our dai­ly lives. It opti­mis­es the oper­a­tion of search engines and makes it pos­si­ble to analyse queries more effec­tive­ly in order to pro­pose the most rel­e­vant results1. It improves sur­veil­lance sys­tems, which now use it to detect sus­pi­cious behav­iour2. It offers invalu­able assis­tance in the health­care sec­tor for analysing med­ical images, devel­op­ing new drugs and per­son­al­is­ing treat­ments3. How­ev­er, there is a fun­da­men­tal dis­tinc­tion between the AI we know today, often referred to as “clas­si­cal AI”, and a more ambi­tious con­cept: Arti­fi­cial Gen­er­al Intel­li­gence (AGI).

Clas­si­cal AI is designed to excel at spe­cif­ic tasks and can out­per­form the best experts or spe­cialised algo­rithms. AGI, on the oth­er hand, aspires to an intel­li­gence com­pa­ra­ble to that of a human being. It aims to under­stand the world in all its com­plex­i­ty, to learn autonomous­ly and to adapt to new sit­u­a­tions. In oth­er words, AGI would be capa­ble of solv­ing a wide vari­ety of prob­lems, rea­son­ing, cre­at­ing and being self-aware4.

Growing alarmism about AI

Warn­ings about the rise of gen­er­al-pur­pose AI are mul­ti­ply­ing, point­ing to a bleak future for our civil­i­sa­tion. Sev­er­al lead­ing fig­ures in the world of tech­nol­o­gy have warned of the harm­ful effects of this tech­nol­o­gy. Stephen Hawk­ing has expressed fears that AI could sup­plant humans, lead­ing to a new era in which machines could dom­i­nate5. Emi­nent Amer­i­can pro­fes­sors, such as Stu­art Rus­sell, Pro­fes­sor at the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley, have also high­light­ed the shift towards a uni­verse where AI will play a role that is unknown at this stage, with new risks to be tak­en into account and antic­i­pat­ed6. Fur­ther­more, Jerome Glenn of the Mil­len­ni­um Project has stat­ed7 that “gov­ern­ing AGI could be the most com­plex man­age­ment prob­lem human­i­ty has ever faced” and that “the slight­est mis­take could wipe us off the face of the Earth.” These asser­tions sug­gest an extreme­ly pes­simistic, even cat­a­stroph­ic, out­look on the devel­op­ment of the AGI.

Is AGI really imminent?

A fun­da­men­tal crit­i­cism of the immi­nence of AGI is based on the “prob­lem of the com­plex­i­ty of val­ues”, a key con­cept addressed by Nick Bostrom in Super­in­tel­li­gence: Paths, Dan­gers, Strate­gies8. The evo­lu­tion­ary process of human life and civil­i­sa­tion spans bil­lions of years, with the devel­op­ment of numer­ous com­plex sys­tems of feel­ings, but also of con­trols and val­ues, thanks to the many and var­ied inter­ac­tions with an envi­ron­ment that is phys­i­cal, bio­log­i­cal, and social. From this per­spec­tive, it is hypoth­e­sised that an autonomous and high­ly sophis­ti­cat­ed AGI can­not be achieved in just a few decades.

The Aus­tralian Rod­ney Brooks, one of the icons and pio­neers of robot­ics and the­o­ries of “embod­ied cog­ni­tion”, main­tains that what will deter­mine whether an intel­li­gence is tru­ly autonomous and sophis­ti­cat­ed is its inte­gra­tion with­in a body and con­tin­u­ous inter­ac­tion with a com­plex envi­ron­ment over a suf­fi­cient­ly long peri­od9. These ele­ments rein­force the the­sis that AGI, as described in the alarmist sce­nar­ios, is still a long way from becom­ing a reality.

In what way is current AI not yet general AI?

Recent years have seen the rise of large lan­guage mod­els (LLMs) such as Chat­G­PT, Gem­i­ni, Copi­lot and so on. These have demon­strat­ed an impres­sive abil­i­ty to assim­i­late many implic­it human val­ues, based on mas­sive analy­sis of writ­ten doc­u­ments. Because of its archi­tec­ture and the way it works, Chat­G­PT has a num­ber of lim­i­ta­tions10. It does not sup­port log­i­cal rea­son­ing, its respons­es are some­times unre­li­able, its knowl­edge base is not adapt­ed in real-time, and it is sus­cep­ti­ble to “prompt injec­tion” attacks. Although these mod­els have sophis­ti­cat­ed val­ue sys­tems, they do not appear to be autonomous. In fact, they do not seem to aim for auton­o­my or self-preser­va­tion with­in an envi­ron­ment that is both com­plex and vari­able. In this respect, it is impor­tant to remem­ber that a very impor­tant part of com­mu­ni­ca­tion is linked to into­na­tion and body lan­guage11, ele­ments that are not at all con­sid­ered in inter­ac­tions with gen­er­a­tive AIs.

A sim­ple reminder of this (pro­found) dis­tinc­tion seems cru­cial to bet­ter under­stand the extent to which con­cerns over mali­cious super­in­tel­li­gence are unfound­ed and exces­sive. Today, LLMs can only be con­sid­ered as par­rots pro­vid­ing prob­a­bilis­tic answers (“sto­chas­tic par­rots” accord­ing to Emi­ly Ben­der12). Of course, they rep­re­sent a break with the past, and it appears nec­es­sary to reg­u­late their use now.

What are the arguments for an omnibenevolent superintelligence?

It seems to us that future intel­li­gence can­not be “arti­fi­cial” in the strict sense of the word, i.e. designed from scratch. But it would be high­ly col­lab­o­ra­tive, emerg­ing from the knowl­edge (and even wis­dom) accu­mu­lat­ed by humankind. It is real­is­tic to con­sid­er that cur­rent AIs, as such, are large­ly tools and embod­i­ments of col­lec­tive thought pat­terns, tend­ing towards benev­o­lence rather than con­trol or dom­i­na­tion. This col­lec­tive intel­li­gence is noth­ing less than a deep mem­o­ry that is nour­ished by civilised val­ues such as help­ing those in need, respect for the envi­ron­ment and respect for oth­ers. We there­fore need to pro­tect this intan­gi­ble her­itage and ensure that it is aimed at pro­vid­ing sup­port and help to human beings rather than trans­mit­ting mis­in­for­ma­tion or incit­ing them to com­mit rep­re­hen­si­ble acts. At the risk of being Manichean, LLMs can be used for good13, but they can also be used for evil14.

What evidence is there to refute the scenarios of domination and control by AGI?

From a log­i­cal point of view, alarmist sce­nar­ios in which mali­cious actors would be led, in the short term, to pro­gramme man­i­fest­ly harm­ful objec­tives into the heart of AI appear a pri­ori to be exag­ger­at­ed. The argu­ment of the com­plex­i­ty of val­ues sug­gests that these neg­a­tive val­ues would be poor­ly inte­grat­ed into the mass of pos­i­tive val­ues learned. Fur­ther­more, it seems like­ly that well-inten­tioned pro­gram­mers (white hats) will cre­ate AIs that can counter the destruc­tive strate­gies of mali­cious AIs (black hats). This could lead, quite nat­u­ral­ly, to a clas­sic “arms race”. Anoth­er counter-argu­ment to a mali­cious takeover of AIs is their eco­nom­ic poten­tial. At present, AI for the gen­er­al pub­lic is being dri­ven by major play­ers in the eco­nom­ic sec­tor (Ope­nAI, Google, Microsoft, etc.), at least some of whom have a prof­it ratio­nale. This requires user con­fi­dence in the use of the AI made avail­able, but also the preser­va­tion of the data and algo­rithms that make up AI as an intan­gi­ble asset at the heart of eco­nom­ic activ­i­ty. The resources required for pro­tec­tion and cyber-defence will there­fore be considerable.

Proposals for better governance of AI

Ini­tia­tives have already been tak­en to reg­u­late spe­cialised AI. How­ev­er, the reg­u­la­tion of arti­fi­cial gen­er­al intel­li­gence will require spe­cif­ic mea­sures. One such ini­tia­tive is the AI Act cur­rent­ly being draft­ed by the Euro­pean Union15. The authors make the fol­low­ing addi­tion­al proposals:

  • The intro­duc­tion of a sys­tem of nation­al licences to ensure that any new AGI com­plies with the nec­es­sary safe­ty standards,
  • Sys­tems for ver­i­fy­ing the safe­ty of AI in con­trolled envi­ron­ments before they are autho­rised and deployed,
  • The devel­op­ment of more advanced inter­na­tion­al coop­er­a­tion, which could lead to UN Gen­er­al Assem­bly res­o­lu­tions and the estab­lish­ment of con­ven­tions on AI.

Ratio­nal reg­u­la­tion of AI requires an informed analy­sis of the issues at stake and a bal­ance between pre­vent­ing risks and pro­mot­ing ben­e­fits. Inter­na­tion­al insti­tu­tions and tech­ni­cal experts will play an impor­tant role in coor­di­nat­ing the efforts required for the safe and eth­i­cal devel­op­ment of AI. Good gov­er­nance and effec­tive reg­u­la­tion of the AGI will require a dis­pas­sion­ate approach.

1Vijaya, P., Raju, G. & Ray, S.K. Arti­fi­cial neur­al net­work-based merg­ing score for Meta search engine. J. Cent. South Univ. 23, 2604–2615 (2016). https://doi.org/10.1007/s11771-016‑3322‑7
2Li, Jh. Cyber secu­ri­ty meets arti­fi­cial intel­li­gence: a sur­vey. Fron­tiers Inf Tech­nol Elec­tron­ic Eng 19, 1462–1474 (2018). https://​doi​.org/​1​0​.​1​6​3​1​/​F​I​T​E​E​.​1​8​00573
3Jiang F, Jiang Y, Zhi H, et al Arti­fi­cial intel­li­gence in health­care: past, present and future Stroke and Vas­cu­lar Neu­rol­o­gy 2017;2: https://doi.org/10.1136/svn-2017–000101
4Ng, Gee Wah, and Wang Chi Leung. “Strong Arti­fi­cial Intel­li­gence and Con­scious­ness.” Jour­nal of Arti­fi­cial Intel­li­gence and Con­scious­ness 07, no. 01 (March 1, 2020): 63–72. https://​doi​.org/​1​0​.​1​1​4​2​/​s​2​7​0​5​0​7​8​5​2​0​3​00042
5Kharpal, Arjun. “Stephen Hawk­ing says A.I. could be ‘worst event in the his­to­ry of our civ­i­liza­tion.’” CNBC, Novem­ber 6, 2017. https://​www​.cnbc​.com/​2​0​1​7​/​1​1​/​0​6​/​s​t​e​p​h​e​n​-​h​a​w​k​i​n​g​-​a​i​-​c​o​u​l​d​-​b​e​-​w​o​r​s​t​-​e​v​e​n​t​-​i​n​-​c​i​v​i​l​i​z​a​t​i​o​n​.html
6Chia Jes­si­ca, Cian­ci­o­lo Bethany, “Opin­ion: We’ve reached a turn­ing point with AI, expert says” Sep­tem­ber 5, 2023, https://​edi​tion​.cnn​.com/​2​0​2​3​/​0​5​/​3​1​/​o​p​i​n​i​o​n​s​/​a​r​t​i​f​i​c​i​a​l​-​i​n​t​e​l​l​i​g​e​n​c​e​-​s​t​u​a​r​t​-​r​u​s​s​e​l​l​/​i​n​d​e​x​.html
7Jerome C. Glenn, Feb­ru­ary 2023, “Arti­fi­cial Gen­er­al Intel­li­gence Issues and Oppor­tu­ni­ties”, The Mil­le­ni­um Project, Fore­sight for the 2nd Strate­gic Plan of Hori­zon Europe (2025–27) https://​www​.mil​len​ni​um​-project​.org/​w​p​-​c​o​n​t​e​n​t​/​u​p​l​o​a​d​s​/​2​0​2​3​/​0​5​/​E​C​-​A​G​I​-​p​a​p​e​r.pdf
8Nick Bostrom. 2014. Super­in­tel­li­gence: Paths, Dan­gers, Strate­gies, 1st edi­tion. Oxford Uni­ver­si­ty Press, Inc., USA.
9Brooks, R. A. (1991). Intel­li­gence With­out Rep­re­sen­ta­tion. Arti­fi­cial Intel­li­gence, 47(1–3), 139–159.
10
11Quinn, Jayme, and Jayme Quinn. “How Much of Com­mu­ni­ca­tion Is Non­ver­bal? | UT Per­mi­an Basin Online.” The Uni­ver­si­ty of Texas Per­mi­an Basin | UTPB, May 15, 2023. https://​online​.utpb​.edu/​a​b​o​u​t​-​u​s​/​a​r​t​i​c​l​e​s​/​c​o​m​m​u​n​i​c​a​t​i​o​n​/​h​o​w​-​m​u​c​h​-​o​f​-​c​o​m​m​u​n​i​c​a​t​i​o​n​-​i​s​-​n​o​n​v​e​rbal/
12Emi­ly M. Ben­der, Timnit Gebru, Angeli­na McMil­lan-Major, and Shmar­garet Shmitchell. 2021. On the Dan­gers of Sto­chas­tic Par­rots: Can Lan­guage Mod­els Be Too Big? In Pro­ceed­ings of the 2021 ACM Con­fer­ence on Fair­ness, Account­abil­i­ty, and Trans­paren­cy (FAc­cT ‘21). Asso­ci­a­tion for Com­put­ing Machin­ery, New York, NY, USA, 610–623. https://​doi​.org/​1​0​.​1​1​4​5​/​3​4​4​2​1​8​8​.​3​4​45922
13Javaid, Mohd, Abid Haleem, and Ravi Prat­ap Singh. “Chat­G­PT for health­care ser­vices: An emerg­ing stage for an inno­v­a­tive per­spec­tive.” Bench­Coun­cil Trans­ac­tions on Bench­marks, Stan­dards and Eval­u­a­tions 3, no. 1 (2023): 100105. https://​doi​.org/​1​0​.​1​0​1​6​/​j​.​t​b​e​n​c​h​.​2​0​2​3​.​1​00105
14Lohmann, S. (2024). Chat­G­PT, Arti­fi­cial Intel­li­gence, and the Ter­ror­ist Tool­box. An Amer­i­can Per­spec­tive, 23. https://​media​.defense​.gov/​2​0​2​4​/​A​p​r​/​1​8​/​2​0​0​3​4​4​4​2​2​8​/​-​1​/​-​1​/​0​/​2​0​2​4​0​5​0​6​_​S​i​m​-​H​a​r​t​u​n​i​a​n​-​M​i​l​a​s​_​E​m​e​r​g​i​n​g​T​e​c​h​_​F​i​n​a​l​.​P​D​F​#​p​a​ge=41
15“Lay­ing Down Har­monised Rules On Arti­fi­cial Intel­li­gence (Arti­fi­cial Intel­li­gence Act) And Amend­ing Cer­tain Union Leg­isla­tive Acts,” https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206

Our world explained with science. Every week, in your inbox.

Get the newsletter