
AI in medical decision-making: a question of ethics?

Damien Lacroux
philosopher of science and researcher at the UNESCO Chair in the Ethics of the Living and the Artificial
Key takeaways
  • Artificial intelligence is gradually becoming an integral part of predictive and personalised medicine, and in helping medical professionals make therapeutic decisions.
  • The aim of the MIRACLE project is to identify the risk of recurrence in lung cancer patients, using an algorithm to aid medical decision-making.
  • To achieve this, the algorithm is fed with a large amount of patient data – the more data there is, the narrower the algorithm’s margin of error.
  • But the more powerful the AI, the more opaque it becomes for practitioners, who have no way of understanding what data has led to the probability of recurrence proposed by the AI.
  • AI therefore raises ethical issues concerning transparency in medicine, where the main fear of patients remains that the machine will impose a diagnosis without human intervention.

Artificial intelligence to detect breast cancer1 or prostate cancer2, algorithms that calculate our physiological age to predict ageing3, conversational agents to monitor our mental health4… Artificial intelligence tools are gradually becoming part of medical practice. The focus is on predictive and personalised medicine, as well as therapeutic decision-making by the medical profession. But how do patients perceive this relationship between doctors and AI? How do practitioners really interact with the technology?

These are the questions posed by Damien Lacroux, philosopher of science and researcher at the UNESCO Chair in the Ethics of the Living and the Artificial. “During my interviews, I noticed that patients imagine a particular relationship between doctors and AI,” explains the researcher, who specialises in the integration of algorithms in oncology. “We tend to believe that human specialists in oncology deliberate on our case before making a decision, and that the technology intervenes at a later stage to validate the deliberation,” he says. But is this really the case?

AI to predict the risk of lung cancer recurrence

To find out, Damien Lacroux spoke to scientists from the MIRACLE5 project. This ambitiously named European study was launched in 2021 and brings together laboratories in Italy, Spain, Germany and France. The aim is to identify the risk of recurrence in lung cancer patients, using an algorithm to aid medical decision-making. To achieve this, the researchers are training the AI (a machine-learning model) in a supervised manner: the algorithm is “fed” with data from a cohort of patients for whom the presence or absence of recurrence is known. The ingested data is of three types: clinico-pathological data (such as the patient’s sex, the history of their disease or the treatments they may have undergone); medical imaging data; and finally, omics data, i.e. a mass of information relating to molecular biology (DNA or RNA from tumours).
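The supervised set-up described above can be sketched in a few lines of code. This is purely illustrative: the MIRACLE pipeline is not public, and the feature groups, values and the tiny logistic-regression model below are hypothetical stand-ins for the three data types the project actually combines.

```python
# Purely illustrative sketch: the MIRACLE pipeline is not public.
# Feature groups, values and this tiny logistic-regression model are
# hypothetical stand-ins for the three data types described above.
import math
import random

random.seed(0)

def make_patient(relapsed):
    """Toy patient record: three hypothetical feature groups."""
    base = 1.0 if relapsed else 0.0
    features = (
        [random.gauss(base, 1.0) for _ in range(3)]    # clinico-pathological
        + [random.gauss(base, 1.0) for _ in range(3)]  # medical imaging
        + [random.gauss(base, 1.0) for _ in range(4)]  # omics (vastly simplified)
    )
    return features, relapsed

# Labelled cohort: recurrence status is known for every patient.
cohort = [make_patient(relapsed=(i % 2 == 0)) for i in range(200)]

def predict(weights, bias, features):
    """Probability of recurrence under a logistic model."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    z = max(-60.0, min(60.0, z))  # guard against overflow in exp()
    return 1.0 / (1.0 + math.exp(-z))

# Supervised training: per-patient gradient-descent updates.
weights, bias, lr = [0.0] * 10, 0.0, 0.05
for _ in range(200):
    for features, relapsed in cohort:
        err = predict(weights, bias, features) - (1.0 if relapsed else 0.0)
        bias -= lr * err
        weights = [w - lr * err * x for w, x in zip(weights, features)]

accuracy = sum(
    (predict(weights, bias, f) > 0.5) == r for f, r in cohort
) / len(cohort)
print(f"training accuracy: {accuracy:.2f}")
```

Note that, as in the real project, a second independent cohort would be needed to validate the trained model; here accuracy is measured on the training data itself, which flatters the result.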

Using a cohort of 220 patients, the scientists feed the algorithm with all the data collected, as well as information on whether or not a recurrence occurred – and, if so, how long afterwards. “Then we let the algorithm do its work! This involves an unimaginable amount of data, which is impossible for humans to process on their own,” explains Damien Lacroux. “Today, the project is behind schedule, and we’ve only just finished collecting data from the first cohort. We still have to start training the algorithm with this data and then recruit a second cohort to validate its training.” So we’ll have to wait a little longer before we see the MIRACLE project in action.

AI: a black box for medical decision-making

But the way it works immediately raises an ethical issue, which was pointed out by the researchers interviewed by Damien Lacroux. “At the start of training, bioinformaticians manage to slice up the datasets and associate the AI results with this or that input factor. But gradually, the data increases and it becomes a black box.” This growing volume of data makes the models used to refine predictions more complex. And therein lies the paradox: as the amount of data increases, the algorithm’s margin of error decreases. The AI is therefore more efficient, but the way it works is less clear to practitioners. How can they explain the decisions made by the AI to patients, or guarantee the absence of bias, if they themselves are not familiar with its inner workings?

In the field of oncology, decision trees are often used to help doctors justify their clinical reasoning. However, the integration of algorithmic scores into these processes can conflict with the need for transparency on the part of doctors, who sometimes struggle to understand what input data has led the AI to estimate the probability of recurrence. “Even if we managed to decipher every internal calculation in the algorithm, the result would be so mathematically complex that doctors would not be able to interpret it or use it in their clinical practice,” explains a German bioinformatician working on the MIRACLE project, interviewed6 by Damien Lacroux in his forthcoming study.
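The contrast the bioinformatician describes can be made concrete with a toy example: a decision tree whose every branch a clinician can trace, against a numeric score that arrives without a clinical narrative. The thresholds, feature names and weights below are invented for illustration; they come from neither the MIRACLE project nor any clinical guideline.

```python
# Hypothetical illustration of the transparency gap: every value and
# threshold below is invented, not taken from any clinical source.

def tree_recurrence_risk(tumour_size_cm, node_positive):
    """A clinician can trace every branch of this toy decision tree."""
    if tumour_size_cm > 3.0:
        return "high" if node_positive else "intermediate"
    return "intermediate" if node_positive else "low"

def opaque_score(features):
    """Stand-in for a trained model: a number, with no clinical narrative."""
    weights = [0.81, -1.27, 0.43, 2.05, -0.66]  # arbitrary illustrative values
    return sum(w * x for w, x in zip(weights, features))

print(tree_recurrence_risk(4.2, node_positive=True))       # → high (and why)
print(round(opaque_score([1.2, 0.4, 3.1, 0.9, 2.2]), 2))   # → 2.19 (but why?)
```

The tree’s answer carries its own justification; the score, even when accurate, leaves the practitioner to supply one.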

This also affects the notion of informed patient consent. “The doctor is obliged to provide sufficient information to enable the patient to accept or refuse treatment. However, if the practitioner is not fully informed, this poses an ethical problem,” adds the philosopher. And yet, as Damien Lacroux points out in his study: “Molecular biology has identified the need to take into account thousands of pieces of patient omic data as an essential means of making progress in oncology.” AI would therefore enable better management of the potential evolution of the disease, by refining the proposed treatments… at the expense of trust between doctors and patients.

The importance of having humans in the driver’s seat

Whether AI is integrated into the medical deliberation process (what Damien Lacroux calls “analytical deliberation” in his article) or kept entirely outside the decision-making process, intervening only as a final consultation (“synthetic deliberation”), there must be total transparency as far as patients are concerned. The main fear raised by the researcher during group interviews with patients7 remains that “the machine” will make the diagnosis without human intervention. “But this is not at all the case today,” reassures Damien Lacroux.

These algorithmic scores, which propose a probability of cancer recurrence based on patient data, also raise other questions specific to predictive medicine: when are we really cured? Can we really be free of the disease when uncertainty persists and we live in constant anticipation of a possible recurrence? These questions, like so many others, have yet to be answered.

Sophie Podevin
1Institut Curie article, December 2022 (https://curie.fr/actualite/publication/diagnostic-du-cancer-du-sein-lintelligence-artificielle-dibex-bientot-realite)
2Institut Curie article, November 2024 (https://curie.fr/actualite/recherche/intelligence-artificielle-linstitut-curie-implemente-les-outils-dibex-medical)
3Inserm press release, June 2023 (https://presse.inserm.fr/une-intelligence-artificielle-pour-predire-le-vieillissement/67138/)
4December 2024 article on Info.gouv citing the Kanopee application as an example (https://www.info.gouv.fr/actualite/lintelligence-artificielle-au-service-de-la-sante-mentale)
5The project code is ERP-2021–23680708 – ERP-2021-ERAPERMED2021-MIRACLE.
6Interview, February 2024: bioinformaticians from the MIRACLE project, Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), Leipzig, Germany.
7These interviews were conducted with patient associations outside the MIRACLE project.
