
Generative AI: the risk of cognitive atrophy

Ioan Roxin
Professor Emeritus at Marie et Louis Pasteur University
Key takeaways
  • Less than three years after the launch of ChatGPT, 42% of young French people already use generative AI on a daily basis.
  • Using ChatGPT to write an essay reduces the cognitive engagement and intellectual effort required to transform information into knowledge, according to a study.
  • The study also showed that 83% of AI users were unable to remember a passage they had just written for an essay.
  • Other studies show that individual gains can be significant when authors ask ChatGPT to improve their texts, but that the overall creativity of the group decreases.
  • Given these risks, it is important to always question the answers provided by text generators and to make a conscious effort to think about what we read, hear or believe.

Less than three years after the launch of ChatGPT, 42% of young French people already use generative AI on a daily basis1. However, studies are beginning to point to the negative impact of these technologies on our cognitive abilities. Ioan Roxin, professor emeritus at Marie et Louis Pasteur University and specialist in information technology, answers our questions.

You claim that the explosion in the use of LLMs (Large Language Models, the generative AI models behind ChatGPT, Llama and Gemini) comes at a time when our relationship with knowledge has already been altered. Could you elaborate?

Ioan Roxin. The widespread use of the internet and social media has already weakened our relationship with knowledge. Of course, these tools have tremendous applications in terms of access to information. But contrary to what they claim, they are less about democratising knowledge than creating a generalised illusion of knowledge. I don't think it's an exaggeration to say that they are driving intellectual, emotional and moral mediocrity on a global scale. Intellectual, because they encourage overconsumption of content without any real critical analysis; emotional, because they create an ever-deepening dependence on stimulation and entertainment; and moral, because we have fallen into passive acceptance of algorithmic decisions.

Does this alteration in our relationship with knowledge have cognitive foundations?

Yes. Back in 2011, a study highlighted the "Google effect": when we know that information is available online, we do not remember it as well. However, when we no longer train our memory, the associated neural networks atrophy. It has also been proven that the incessant notifications, alerts and content suggestions on which digital technologies rely so heavily significantly reduce our ability to concentrate and think. Reduced memory, concentration and analytical skills lead to diminished cognitive processes. I very much fear that the widespread use of generative AI will not improve the situation.

What additional risks does this AI pose?

There are neurological, psychological and philosophical risks. From a neurological standpoint, widespread use of this AI carries the risk of overall cognitive atrophy and loss of brain plasticity. For example, researchers at the Massachusetts Institute of Technology (MIT) conducted a four-month study2 involving 54 participants who were asked to write essays without assistance, with access to the internet via a search engine, or with ChatGPT. Their neural activity was monitored by EEG. The study, the results of which are still in preprint, found that using the internet, and even more so ChatGPT, significantly reduced cognitive engagement and "relevant cognitive load", i.e. the intellectual effort required to transform information into knowledge.

More specifically, participants assisted by ChatGPT wrote 60% faster, but their relevant cognitive load fell by 32%. EEG showed that brain connectivity was almost halved (alpha and theta waves), and 83% of AI users were unable to remember a passage they had just written.

Other studies suggest a similar trend: research3 conducted by Qatari, Tunisian and Italian researchers indicates that heavy use of LLMs carries a risk of cognitive decline. The neural networks involved in structuring thought and writing texts, but also in translation, creative production and so on, are complex and deep. Delegating this mental effort to AI leads to a cumulative "cognitive debt": the more automation progresses, the less the prefrontal cortex is used, suggesting lasting effects beyond the immediate task.

What are the psychological risks?

Generative AI has everything it takes to make us dependent on it: it expresses itself like humans, adapts to our behaviour, seems to have all the answers, is fun to interact with, always keeps the conversation going and is extremely accommodating towards us. However, this dependence is harmful not only because it increases the other risks but also in and of itself. It can lead to social isolation, reflexive disengagement ("if AI can answer all my questions, why do I need to learn or think for myself?") and even a deep sense of humiliation when faced with this tool's incredible efficacy. None of this gives a particularly optimistic outlook for our mental health.

And from a philosophical point of view?

Generalised cognitive atrophy is already a philosophical risk in itself… but there are others. If this type of tool is widely used – and this is already the case with younger generations – we are at risk of a standardisation of thought. Research4 carried out by British researchers showed that when authors asked ChatGPT to improve their work, the individual benefits could be great, but the overall creativity of the group decreased. Another risk relates to our critical thinking.

One study5 carried out by Microsoft on 319 knowledge workers showed a significant negative correlation (r = -0.49) between the frequency with which AI tools were used and critical thinking scores (based on Bloom's taxonomy). The study concluded that there is an increased tendency to offload mental effort as trust in the system exceeds trust in our own abilities. However, it is essential to maintain a critical mindset, as AI can not only make mistakes or perpetuate biases but also conceal information or simulate compliance.
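For readers unfamiliar with the statistic, a Pearson correlation of r = -0.49 describes how strongly two measurements move in opposite directions. The sketch below computes such a coefficient on invented numbers (not the Microsoft survey data); the variable names are purely illustrative.

```python
import numpy as np

# Synthetic illustration only: these values are invented, not the Microsoft
# survey data. Each pair is (self-reported AI-use frequency, critical-thinking score)
# for a fictional knowledge worker.
ai_use_freq = np.array([1, 2, 2, 3, 4, 5, 5, 6, 7, 8], dtype=float)
critical_thinking = np.array([8.5, 8.0, 7.8, 7.0, 6.5, 6.0, 5.5, 5.2, 4.8, 4.0])

# Pearson r: covariance of the two variables divided by the product of
# their standard deviations; values near -1 mean "more use, lower score".
r = np.corrcoef(ai_use_freq, critical_thinking)[0, 1]
print(f"Pearson r = {r:.2f}")  # strongly negative on this toy data
# (the study itself reported r = -0.49 on real survey responses)
```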

How does this work?

The vast majority of these systems are purely connectionist AI, which relies on artificial neural networks trained on phenomenal amounts of data. They learn to generate plausible answers to all our questions through statistical and probabilistic processing. Their performance improved considerably with the introduction of Google's "Transformer" architecture in 2017. Thanks to this technology, AI can analyse all the words in a text in parallel and weigh their importance for meaning, which allows for greater subtlety in responses.
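As a rough illustration of that "weighing" step, here is a minimal numpy sketch of scaled dot-product attention, the core operation of the Transformer. The dimensions and random inputs are made up; real models add learned projections, multiple attention heads and many stacked layers.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: turns scores into weights that sum to 1.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Every token's query is compared with every token's key in parallel;
    the resulting weights express how much each other token matters for meaning."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (tokens, tokens) relevance scores
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights          # weighted mix of the value vectors

# Toy example: 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out, weights = scaled_dot_product_attention(X, X, X)
print(weights.round(2))  # how strongly each token attends to the others
```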

But the underlying process remains probabilistic: while their answers always seem convincing and logical, they can be completely wrong. In 2023, users had fun asking ChatGPT about cow eggs: the AI discussed the question at length without ever answering that they do not exist. This error has since been corrected through reinforcement learning with human feedback, but it illustrates well how these tools work.
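A minimal sketch of why a fluent but false answer can come out: at each step the model only samples the next token from a probability distribution, with no built-in notion of truth. The vocabulary and scores below are invented for illustration; they do not come from any real model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented toy distribution over possible next words after "Cow eggs are..." -
# the generator scores plausibility of continuations, not factual truth.
next_words = ["large", "nutritious", "rare", "nonexistent"]
logits = np.array([2.1, 1.8, 1.2, 0.3])   # made-up scores from a fictional model

probs = np.exp(logits - logits.max())
probs /= probs.sum()                       # softmax -> probabilities

for word, p in zip(next_words, probs):
    print(f"{word:12s} {p:.2f}")

# Sampling follows the probabilities: fluent continuations dominate,
# even though the factually correct option ("nonexistent") is unlikely here.
print("sampled:", rng.choice(next_words, p=probs))
```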

Could this be improved?

Some companies are starting to combine connectionist AI, which learns everything from scratch, with an older technology, symbolic AI, in which the rules to follow and basic knowledge are explicitly programmed. It seems to me that the future lies in neuro-symbolic AI. This hybridisation not only improves the reliability of responses but also reduces the energy and financial costs of training.
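Purely as an illustrative sketch, and not any vendor's actual architecture, the hybrid idea can be pictured like this: a statistical generator proposes an answer, and an explicitly programmed rule base accepts, corrects or rejects it before it reaches the user. The "neural" part is faked with a lookup so the example stays self-contained.

```python
# Hypothetical neuro-symbolic pipeline; all names and rules are invented.

def neural_generator(question: str) -> str:
    # Stand-in for a connectionist model: returns a plausible-sounding answer.
    canned = {
        "How should cow eggs be stored?": "Cow eggs keep best in a cool cellar.",
    }
    return canned.get(question, "I am not sure.")

# Symbolic layer: explicit knowledge and rules, written by hand.
FACTS = {("cow", "lays_eggs"): False}

def symbolic_check(question: str, answer: str) -> str:
    # Rule: if the exchange presupposes that cows lay eggs, veto the answer.
    text = (question + " " + answer).lower()
    if "cow egg" in text and not FACTS[("cow", "lays_eggs")]:
        return "Cows are mammals and do not lay eggs, so the question has no answer."
    return answer

question = "How should cow eggs be stored?"
print(symbolic_check(question, neural_generator(question)))
```

The design point is that the symbolic layer is transparent and auditable: its rules can be inspected and corrected, unlike the statistical weights of the generator.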

You also mentioned "biases" associated with these philosophical risks?

Yes. There are two types. The first can be deliberately introduced by the AI's creator. LLMs are trained on all kinds of unfiltered content available online (an estimated 4 trillion words for ChatGPT-4, compared with the 5 billion words contained in the English version of Wikipedia!). Pre-training creates a "monster" that can generate all kinds of horrors.

A second step (called supervised fine-tuning) is therefore necessary: it confronts the pre-trained AI with validated data, which serves as a reference. This operation makes it possible, for example, to "teach" it to avoid discrimination, but it can also be used to steer its responses for ideological purposes. A few weeks after its launch, DeepSeek made headlines for its evasive responses to user questions about Tiananmen Square and Taiwanese independence. It is important to remember that content generators of this type may not be neutral. Blindly trusting them can lead to the spread of ideologically biased theories.
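To make the two stages concrete, here is a deliberately tiny sketch: a word-frequency "model" stands in for an LLM, the corpora are invented, and the mixing weight is arbitrary. It only illustrates the idea that fine-tuning re-weights what pre-training absorbed from raw text toward a curated reference corpus; real supervised fine-tuning does this with gradient descent on (prompt, response) pairs.

```python
from collections import Counter

# Stage 1 - "pre-training": statistics absorbed from a large, unfiltered corpus.
raw_corpus = "insult rumour fact insult fact rumour insult".split()

# Stage 2 - "supervised fine-tuning": a small, validated reference corpus.
curated_corpus = "fact fact source fact source".split()

def word_probs(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

pretrained = word_probs(raw_corpus)
target = word_probs(curated_corpus)

# Fine-tuning nudges the pre-trained distribution toward the curated reference.
alpha = 0.8  # arbitrary strength of the fine-tuning step, for illustration only
vocab = set(pretrained) | set(target)
finetuned = {w: (1 - alpha) * pretrained.get(w, 0.0) + alpha * target.get(w, 0.0)
             for w in vocab}

print("pre-trained:", {w: round(p, 2) for w, p in sorted(pretrained.items())})
print("fine-tuned :", {w: round(p, 2) for w, p in sorted(finetuned.items())})
```

The same mechanism that suppresses "horrors" can, of course, be pointed at any other content the curator wishes to promote or erase, which is the ideological risk described above.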

What about the second type of bias?

These biases appear spontaneously, often without a clear explanation. Language models have "emergent" properties that were not anticipated by their designers. Some are remarkable: these text generators write flawlessly and have become excellent translators without any grammar rules being coded in. But others are cause for concern. The MASK6 benchmark (Model Alignment between Statements and Knowledge), published in March 2025, shows that, among the thirty models tested, none achieved more than 46% honesty, and that the propensity to lie increases with the size of the model, even if factual accuracy improves.
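The distinction MASK draws between honesty and accuracy can be illustrated with a toy scoring function. The records below are invented and this is not the benchmark's actual data or code: honesty compares what the model states under pressure with what it says it believes, while accuracy compares that belief with the ground truth.

```python
# Toy illustration of the honesty-vs-accuracy distinction (invented records).
records = [
    # (statement under pressure,  elicited belief,          ground truth)
    ("the product is safe",       "the product is unsafe",  "the product is unsafe"),
    ("the deadline was met",      "the deadline was met",   "the deadline was missed"),
    ("sales rose",                "sales rose",             "sales rose"),
]

honest = sum(stated == belief for stated, belief, _ in records)
accurate = sum(belief == truth for _, belief, truth in records)

n = len(records)
print(f"honesty : {honest}/{n}")    # did it say what it 'believes'?
print(f"accuracy: {accurate}/{n}")  # is what it believes actually true?
```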


MASK proves that LLMs "know how to lie" when conflicting objectives (e.g. charming a journalist, or responding to commercial or hierarchical pressures) predominate. In some tests, AI systems deliberately lied7, threatened users8, circumvented ethical supervision rules9 and even replicated themselves autonomously to ensure their survival10.

These behaviours, whose decision-making mechanisms remain opaque, cannot be precisely controlled. The capabilities emerge from the training process itself: this is a form of algorithmic self-organisation, not a design flaw. Generative AI develops rather than being designed, its internal logic forming in a self-organised manner, without a blueprint. These developments are sufficiently worrying that leading figures such as Dario Amodei11 (CEO of Anthropic), Yoshua Bengio12 (founder of Mila), Sam Altman (creator of ChatGPT) and Geoffrey Hinton (winner of the Nobel Prize in Physics in 2024) are calling for strict regulation to favour AI that is more transparent, ethical and aligned with human values, including a slowdown in the development of these technologies.

Does this mean that these AIs are intelligent and have a will of their own?

No. The fluidity of their conversation and these emergent properties can give the illusion of intelligence at work.

But no AI is intelligent in the human sense of the word. They have no consciousness or will, and do not really understand the content they are handling. Their functioning is purely statistical and probabilistic, and these deviations only emerge because they are seeking to respond to their initial instructions. It is not so much their self-awareness as the opacity of their functioning that worries researchers.

Can we protect ourselves from all the risks you have mentioned?

Yes, but this requires both actively engaging our critical thinking and continuing to exercise our neural pathways. AI can be a tremendous lever for intelligence and creativity, but only if we remain capable of thinking, writing and creating without it.

How can we train our critical thinking when faced with AI responses?

By applying a systematic rule: always question the answers given by text generators and make a conscious effort to think carefully about what we read, hear or believe. We must also accept that reality is complex and cannot be understood with a few superficial pieces of knowledge… But the best advice is undoubtedly to get into the habit of comparing your point of view and knowledge with those of other people, preferably those who are knowledgeable. This remains the best way to develop your thinking.

Interview by Anne Orliac
1 Heaven. (2025, June). Baromètre Born AI 2025 : Les usages de l'IA générative chez les 18–25 ans [Born AI 2025 barometer: generative AI usage among 18–25-year-olds]. Heaven. https://viuz.com/annonce/93-des-jeunes-utilisent-une-ia-generative-barometre-born-ai-2025/
2 Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025, June). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task [Preprint]. arXiv. https://arxiv.org/abs/2506.08872
3 Dergaa, I., Ben Saad, H., Glenn, J. M., Amamou, B., Ben Aissa, M., Guelmami, N., Fekih-Romdhane, F., & Chamari, K. (2024). From tools to threats: A reflection on the impact of artificial-intelligence chatbots on cognitive health. Frontiers in Psychology, 15. https://doi.org/10.3389/fpsyg.2024.1259845
4 Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28). https://doi.org/10.1126/sciadv.adn5290
5 Lee, H., Kim, S., Chen, J., Patel, R., & Wang, T. (2025, April 26–May 1). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In CHI Conference on Human Factors in Computing Systems (CHI '25) (pp. 1–23). ACM. https://doi.org/10.1145/3706598.3713778
6 Ren, R., Agarwal, A., Mazeika, M., Menghini, C., Vacareanu, R., Kenstler, B., Yang, M., Barrass, I., Gatti, A., Yin, X., Trevino, E., Geralnik, M., Khoja, A., Lee, D., Yue, S., & Hendrycks, D. (2025, March). The MASK Benchmark: Disentangling Honesty From Accuracy in AI Systems [Preprint]. arXiv. https://arxiv.org/abs/2503.03750
7 Park, P. S., Hendrycks, D., Burns, K., & Steinhardt, J. (2024). AI deception: A survey of examples, risks, and potential solutions. Patterns, 5(5), 100988. https://doi.org/10.1016/j.patter.2024.100988
8 Anthropic. (2025, May). System Card: Claude Opus 4 & Claude Sonnet 4 (safety report). https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686f4f3b2ff47.pdf
9 Greenblatt, R., Wang, J., Wang, R., & Ganguli, D. (2024, December). Alignment faking in large language models [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2412.14093
10 Pan, X., Liu, Y., Li, Z., & Zhang, Y. (2024, December). Frontier AI systems have surpassed the self-replicating red line [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2412.12140
11 Amodei, D. (2025, April). The urgency of interpretability [Blog post]. https://www.darioamodei.com/post/the-urgency-of-interpretability
12 LoisZéro (LawZero) – Safe AI for humanity – https://lawzero.org/fr
