Generative AI: threat or opportunity?

4 myths surrounding generative AI

with Thierry Rayna, Researcher at the CNRS i³-CRG* laboratory and Professor at École Polytechnique (IP Paris), and Erwan Le Pennec, Professor at École Polytechnique (IP Paris)
April 3rd, 2024 | 5 min reading time
Key takeaways
  • Many myths and misconceptions surround AI, especially since the rise of generative AI such as DALL-E.
  • In reality, these types of AI do not represent a technological revolution from an innovation point of view: the underlying techniques predate ChatGPT.
  • What we are witnessing is a change in usage, driven by the start-ups that have “opened up” access to AI for the general public.
  • Even so, the training protocols for these types of AI are kept secret by the companies; programming interfaces merely give users the illusion of mastering the algorithm.
  • Despite concerns, this wide and open use of AI will make human expertise more necessary than ever.

AI replacing a lawyer, AI writing essays good enough to fool teachers, AI putting artists out of work because anyone can generate a magazine cover or compose the music for a film… These examples have been making the headlines in recent months, along with announcements of the imminent obsolescence of intellectual professions and executives. Yet AI itself is not exactly new: it has been around for a very long time. From the mid-1950s onwards, waves of concern and fanciful speculation followed one another, each time with the same prophecy: that humans would be definitively replaced by machines. And yet, each time, these predictions failed to materialise. So, as the use of these new AIs multiplies, is it legitimate to believe that things might be different this time?

#1: We’re witnessing a technological revolution

Many comments and news reports suggest that a major technological breakthrough has just taken place. But this is simply not the case. The algorithms used by ChatGPT or DALL-E resemble those already in use for a number of years. If the innovation doesn’t lie in the algorithms, then perhaps there has been a major breakthrough that makes it possible to process large quantities of data in a more “intelligent” way? Not at all! The advances we’ve seen are the result of a relatively continuous and predictable progression. Even the much-discussed generative AI, i.e. the use of algorithms trained not to predict the single right answer but to generate a variety of possible answers (hence the impression of “creativity”), is not new either, even if improved results are making it increasingly usable.
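
To make that distinction concrete, here is a minimal, illustrative sketch (toy scores, not a real model) of the difference between predicting the single most likely answer and sampling from a range of plausible answers, which is where the impression of “creativity” comes from.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded so the example is reproducible

def softmax(logits, temperature=1.0):
    """Turn raw model scores into a probability distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Toy "next word" scores a language model might produce.
vocab = ["cat", "dog", "car", "cloud"]
logits = [2.0, 1.5, 0.3, -1.0]

# Predictive use: always pick the single most likely answer.
best = vocab[int(np.argmax(logits))]

# Generative use: sample from the distribution, so answers vary.
p = softmax(logits, temperature=0.9)
samples = [str(rng.choice(vocab, p=p)) for _ in range(5)]

print("always:", best)      # "cat", every time
print("sampled:", samples)  # a mix, e.g. ['cat', 'dog', 'cat', ...]
```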

The paradox is that Google can’t be as “good” as OpenAI, because Google can’t be as “bad” as OpenAI

What has happened in recent months is not a technological revolution, but a revolution in usage. Until now, the AI giants (typically the GAFAMs) kept these technologies to themselves, thereby restricting use by the general public. The newcomers (OpenAI, Stability AI or Midjourney) have, on the contrary, decided to let people do (almost) whatever they want with their algorithms. Now anyone can appropriate these “AIs” and use them for purposes as diverse as they are unpredictable. It is from this openness that the “real” creativity of this new wave of AI stems.

#2: GAFAM (and other “Big Tech” companies) are technologically outdated

As explained above, big companies such as Google, Apple and Facebook have also mastered these technologies, but they restrict access to them. GAFAM keep tight control of their AI for two main reasons. The first is image: if ChatGPT or DALL-E generates racist, discriminatory or insulting content, the misstep will be excused by their position as start-ups still in the process of learning. This “right to error” would not apply to Google, which would see its reputation seriously tarnished (not to mention the potential legal issues). The paradox is that Google (or any other GAFAM) can’t be as “good” as OpenAI, because Google can’t be as “bad” as OpenAI.

ChatGPT: you can’t see the wood for the trees

Alongside the buzz generated by ChatGPT, DALL-E and OpenAI, a far more radical and less visible development is underway: the availability and widespread distribution of pre-trained AI modules to the general public. Unlike GPT, these do not depend on a centralised platform. They are autonomous, and can be downloaded and trained for a variety of purposes (legal or otherwise). They can even be integrated into software, apps or other services, and redistributed to other users, who can build on this additional learning to train the modules themselves for yet other purposes. Each time a pre-trained module is duplicated, trained and redistributed, a new variant is created. Eventually, thousands or even millions of variants of an initial module will spread across a staggering number of software programs and applications. And these AI modules are all “black boxes”. They are not made up of explicit lines of computer code, but of (often very large) matrices that are intrinsically uninterpretable, even by experts in the field. As a result, it is almost impossible, in practice, to accurately predict the behaviour of these AI systems without testing them extensively.
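
A minimal sketch of that workflow, using PyTorch and torchvision as stand-ins (the model choice and file name are illustrative): download a pre-trained module, peek at its weight matrices, adapt it to a new task, and save a redistributable variant.

```python
import torch
from torchvision import models

# Download a pre-trained module (weights fetched from a public hub).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# The "black box": no explicit rules, just large numeric matrices.
for name, tensor in list(model.named_parameters())[:3]:
    print(name, tuple(tensor.shape))   # e.g. conv1.weight (64, 3, 7, 7)

# Adapt it to a new purpose: swap the head for a 2-class task...
model.fc = torch.nn.Linear(model.fc.in_features, 2)
# ...and fine-tune on your own data here (training loop omitted).

# Redistribute the variant: anyone can load it and train it further.
torch.save(model.state_dict(), "module_variant_v1.pt")
```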

The second reason GAFAM keep their AI closed is strategic. Training and developing AI algorithms is incredibly expensive (we’re talking millions of dollars). This staggering cost is an advantage for the already well-established GAFAMs; opening up access to their AI would mean giving up this competitive advantage. The situation is paradoxical, given that these same companies grew by liberating the use of technologies (search engines, web platforms, e-commerce and application SDKs) while the established players of the time kept them under tight control. Now that this market is being explored by new players, the GAFAMs are racing to offer the market their own “ChatGPT” (hence the new version of Microsoft Bing with Copilot, and Google Gemini).

#3: OpenAI is open AI

Another myth that is important to dispel is the openness of start-up AI. The use of their technology is, indeed, fairly widely open. For example, the “GPT API” allows anyone (for a fee) to send queries to the algorithms from their own code. But despite this accessibility, the AI remains closed: there is no question here of open or collective learning. Updates and new training are carried out exclusively by OpenAI, and most of these updates and protocols are kept secret by the start-ups.
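
The gap between “usable” and “open” is visible in the code itself. A minimal sketch, assuming the OpenAI Python client (the model name is illustrative): the API grants usage, nothing more.

```python
# pip install openai  -- assumes an OPENAI_API_KEY in the environment
from openai import OpenAI

client = OpenAI()  # the key grants *usage*, not insight

# You can send queries and pay per token, but you cannot inspect the
# weights or the training data, nor influence how the model is updated.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Is this model open?"}],
)
print(response.choices[0].message.content)
```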

If the training of GPT (and its ilk) were open and collective, we would undoubtedly see battles (using “bots”, for example) to influence the learning of the algorithm. Wikipedia, the collaborative encyclopaedia, has similarly faced years of attempts to influence what is presented as the “collective truth”. There is also the question of the right to use data.

Keeping AI systems closed therefore seems to make sense. But in reality, it raises the fundamental question of the veracity of content. The quality of the information is uncertain, and training that is biased or partial could lead to dangerous “behaviour”. As the general public is unable to assess these parameters, the success of AI depends on the trust placed in the companies, as is already the case with search engines and other “big tech” algorithms.

This “open” AI completely redefines questions of ethics, responsibility and regulation. Pre-trained modules are easy to share and, unlike centralised AI platforms such as OpenAI’s GPT, are almost impossible to regulate. Typically, in the event of an error, would we be able to determine exactly which part of the learning process was the cause? The initial training, or one of the hundreds of subsequent training sessions? Or the fact that the machine was trained by different people?
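
Nothing of the sort exists by default, but a hypothetical sketch shows what answering such questions would even require: a provenance chain attached to every redistributed variant. All field names below are invented for illustration.

```python
import hashlib
import json

def fingerprint(blob: bytes) -> str:
    """Checksum identifying one exact set of model weights."""
    return hashlib.sha256(blob).hexdigest()[:16]

# Stand-ins for serialised weights (in practice, saved model files).
weights_parent = b"...initial pre-trained weights..."
weights_child = b"...weights after one more round of fine-tuning..."

# A provenance record a redistributor *could* attach to each variant;
# without a chain like this, "which training caused the error?" has
# no answer once thousands of variants start to circulate.
record = {
    "variant": fingerprint(weights_child),
    "parent": fingerprint(weights_parent),
    "trained_by": "team-A",              # field names invented here
    "training_data": "internal-docs-2023",
}
print(json.dumps(record, indent=2))
```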

#4: Many people will lose their jobs

Another myth surrounding this “new AI” concerns its impact on employment. For all its apparent novelty, generative AI has the same limits as the AI that preceded it. As good as it may seem, it only replaces a competent beginner (with the difference that this beginner cannot learn!), not the expert or the specialist. ChatGPT or DALL-E can produce very good “drafts”, but these still need to be checked, selected and refined by a human.

What is impressive about ChatGPT is the assurance with which it responds; the intrinsic quality of the results is more debatable. The explosion of information, content and activity that will result from the wide and open use of AI will make human expertise more necessary than ever. Indeed, this has been the rule with every “digital revolution”: the more we digitise, the more necessary human expertise becomes. However, uncertainty remains as to how disruptive this second wave of AI will be for businesses.
