AI replacing a lawyer, AI writing essays so good they can fool teachers, AI putting artists out of work because anyone can generate a magazine cover or compose the music for a film… These examples have been making the headlines in recent months, not least announcements of the imminent obsolescence of intellectual professions and executives. Yet AI is not exactly new: it has been around for a very long time. From the mid-1950s onwards, waves of concern and fanciful speculation followed one another, each time with the same prophecy: that humans would be definitively replaced by machines. And each time, these predictions failed to materialise. So, as the use of these new AIs multiplies, is it legitimate to believe that this time things might be different?
#1: We’re witnessing a technological revolution
Many comments and news reports suggest that a major technological breakthrough has just taken place. But this is simply not the case. The algorithms used by ChatGPT or DALL‑E resemble those already in use for a number of years. If the innovation doesn’t lie in the algorithms, then perhaps there’s a major technological breakthrough that will make it possible to process large quantities of data in a more “intelligent” way? Not at all! The advances we’ve seen are the result of a relatively continuous and predictable progression. Even the much-discussed generative AI, i.e. the use of algorithms trained not to predict the absolute right answer, but to generate a variety of possible answers (hence the impression of “creativity”), is not new either – even if improved results are making it increasingly usable.
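To make that distinction concrete, here is a toy sketch (my illustration, not drawn from the article, with invented probabilities): a purely predictive use of a model always returns the single most probable answer, whereas a generative use samples from a distribution of possible answers, which is where the impression of “creativity” comes from.

```python
import random

# Hypothetical next-word probabilities, of the kind a language model might produce
next_word_probs = {"cat": 0.5, "dog": 0.3, "dragon": 0.2}

def predict_best(probs):
    # "Predictive" use: always return the single most probable answer
    return max(probs, key=probs.get)

def generate(probs):
    # "Generative" use: sample an answer, so repeated calls vary
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print(predict_best(next_word_probs))                  # always "cat"
print([generate(next_word_probs) for _ in range(5)])  # differs from run to run
```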
What has happened in recent months is not a technological revolution, but a revolution in the way the technology is used. Up until now, the AI giants (typically the GAFAMs) kept these technologies to themselves, thereby restricting use by the general public. The newcomers (OpenAI, Stable.AI or Midjourney) have, on the contrary, decided to let people do (almost) whatever they want with their algorithms. From now on, anyone can appropriate these “AIs” and use them for purposes as diverse as they are unpredictable. It is from this openness that the “real” creativity of this new wave of AI stems.
#2: GAFAM (and other “Big Tech” companies) are technologically outdated
As explained above, big companies such as Google, Apple and Facebook have also mastered these technologies, but they restrict access to them. GAFAM keep tight control of their AI for two main reasons. The first is image: if ChatGPT or DALL‑E generates racist, discriminatory or insulting content, the misstep is excused by its creator’s status as a start-up that is still learning. This “right to error” would not apply to Google, which would see its reputation seriously tarnished (not to mention the potential legal issues). The paradox is that Google (or any other GAFAM) can’t be as “good” as OpenAI, because Google can’t be as “bad” as OpenAI.
ChatGPT: You can’t see the wood for the trees
Alongside the buzz generated by ChatGPT, DALL‑E and OpenAI, a far more radical and less visible development is underway: the availability and widespread distribution of pre-trained AI modules to the general public. Unlike GPT, these are not dependent on a centralised platform. They are autonomous: they can be downloaded and trained further for a variety of purposes (legal or otherwise). They can even be integrated into software, apps or other services, and redistributed to other users, who can in turn build on this additional training to adapt the modules for yet other purposes. Each time a pre-trained module is duplicated, trained and redistributed, a new variant is created. Eventually, thousands or even millions of variants of an initial module will spread across a staggering number of software programs and applications. And these AI modules are all “black boxes”: they are not made up of explicit lines of computer code, but of (often very large) matrices of numerical weights that are intrinsically uninterpretable, even by experts in the field. As a result, it is almost impossible, in practice, to accurately predict the behaviour of these AI systems without testing them extensively.
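As a rough illustration of how easily such modules circulate, here is a minimal sketch (not from the article) of downloading a public pre-trained model, fine-tuning it on data of one’s own choosing, and saving a new variant for redistribution. It assumes the Hugging Face transformers and datasets libraries; the model and dataset names are merely illustrative.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

# Download a publicly available pre-trained module (illustrative model name)
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Fine-tune it on data of the user's choosing (here, a small slice of a public dataset)
dataset = load_dataset("imdb", split="train[:1%]")
encoded = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-variant", num_train_epochs=1),
    train_dataset=encoded,
)
trainer.train()

# Save the resulting variant, which can now be redistributed and trained again by others
model.save_pretrained("my-variant")
tokenizer.save_pretrained("my-variant")
```

Each such fine-tuned variant behaves slightly differently from the original, in ways that cannot be read off from the weights themselves.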
The second reason is strategic. Training and developing AI algorithms is incredibly expensive (we’re talking millions of dollars). This staggering cost is an advantage for the already well-established GAFAMs. Opening up access to their AI means giving up this competitive advantage. This situation is paradoxical, given that these same companies have developed by liberating the use of technologies (search engines, web platforms, e‑commerce and application SDKs), while other established players of the time kept them under tight control. Now that this market is being explored by new players, the GAFAMs are racing to offer the market their “ChatGPT” (hence the new version of Microsoft Bing with Copilot, and Google Gemini).
#3: OpenAI is open AI
Another myth that is important to dispel is the openness of the start-ups’ AI. The use of their technology is, indeed, fairly widely open. For example, OpenAI’s “GPT API” allows anyone (for a fee) to embed queries to the underlying algorithms in their own applications. But despite this accessibility, the AI itself remains closed: there is no question here of open or collective learning. Updates and new training are carried out exclusively by OpenAI, and most of these updates and training protocols are kept secret by the start-ups.
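For illustration, querying such an API looks roughly like this (a minimal sketch assuming the official openai Python package and a valid API key; the model name and prompt are placeholders). The point is that users can send requests, but have no access to, or influence over, the training itself.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Users can query the model...
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "Summarise the main myths about generative AI."}],
)
print(response.choices[0].message.content)

# ...but the weights, the training data and the update schedule remain
# entirely in the provider's hands: nothing here changes how the model learns.
```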
If the training of GPT (and its ilk) were open and collective, we would undoubtedly see battles (using “bots”, for example) to influence the learning of the algorithm. Similarly, on Wikipedia, the collaborative encyclopaedia, there have been attempts for years to influence what is presented as the “collective truth”. There is also the unresolved question of the right to use the data on which these models are trained.
Keeping AI systems closed seems to make sense. But in reality, it raises the fundamental question of the veracity of content: the quality of the information produced is uncertain. Training that is biased, partial or simply poor could lead to dangerous “behaviour”. As the general public is unable to assess these parameters, the success of AI depends on the trust it places in these companies – as is already the case with search engines and other “big tech” algorithms.
This decentralised, “open” AI (the freely circulating pre-trained modules described above) completely redefines questions of ethics, responsibility and regulation. These pre-trained modules are easy to share and, unlike centralised AI platforms such as OpenAI’s GPT, are almost impossible to regulate. For instance, in the event of an error, would we be able to determine exactly which part of the learning process was the cause? Was it the initial training or one of the hundreds of subsequent training sessions? Was it the fact that the module was trained by different people?
#4: Many people will lose their jobs
Another myth surrounding this “new AI” concerns its impact on employment. Generative AI, like older AI, is discriminative. However good it may seem, this AI only replaces a competent beginner (a beginner that, moreover, cannot learn!), not the expert or the specialist. ChatGPT or DALL‑E can produce very good “drafts”, but these still need to be checked, selected and refined by a human.
With ChatGPT, what’s impressive is the assurance with which it responds. In reality, the intrinsic quality of the results is debatable. The explosion of information, content and activity that will result from the wide and open use of AI will make human expertise more necessary than ever. Indeed, this has been the rule with the “digital revolutions”: the more we digitise, the more human expertise becomes necessary. However, uncertainty remains as to how disruptive this second wave of AI will be for businesses.