AI in medical decision-making: a question of ethics?
- Artificial intelligence is gradually becoming an integral part of predictive and personalised medicine, and of the way medical professionals make therapeutic decisions.
- The aim of the MIRACLE project is to identify the risk of recurrence in lung cancer patients, using an algorithm to aid medical decision-making.
- To achieve this, the algorithm is fed with a large amount of patient data – the more data there is, the narrower the algorithm’s margin of error.
- But the more powerful the AI, the more opaque it becomes for practitioners, who have no way of knowing which data led the AI to the probability of recurrence it proposes.
- AI therefore raises ethical issues concerning transparency in medicine, where the main fear of patients remains that the machine will impose a diagnosis without human intervention.
Artificial intelligence to detect breast cancer1 or prostate cancer2, algorithms that calculate our physiological age to predict ageing3 or conversational agents to monitor our mental health4… Artificial intelligence tools are gradually becoming part of medical practice. The focus is on predictive and personalised medicine, as well as therapeutic decision-making by the medical profession. But how do patients perceive this relationship between doctors and AI? How do practitioners really interact with the technology?
These are the questions posed by Damien Lacroux, philosopher of science and researcher at the UNESCO Chair in the Ethics of the Living and the Artificial. “During my interviews, I noticed that patients imagine a particular relationship between doctors and AI,” explains the researcher, who specialises in the integration of algorithms in cancerology. “We tend to believe that human specialists in oncology deliberate on our case before making a decision, and that the technology intervenes at a later stage to validate the deliberation,” he explains. But is this really the case?
AI to prevent the risk of lung cancer recurrence
To find out, Damien Lacroux spoke to scientists from the MIRACLE5 project. This ambitiously named European study was launched in 2021 and brings together laboratories in Italy, Spain, Germany and France. The aim is to identify the risk of recurrence in lung cancer patients, using an algorithm to help medical decision-making. To achieve this, the researchers are training the AI through supervised machine learning: the algorithm is “fed” with data from a cohort of patients for whom the presence or absence of recurrence is known. The data ingested is of three types: clinico-pathological data (such as the patient’s sex, the history of their disease or the treatments they may have undergone); medical imaging data; and finally, omics data, i.e. a mass of information relating to molecular biology (DNA or RNA from tumours).
Using a cohort of 220 patients, the scientists feed the algorithm with all the data collected, as well as information on whether or not a recurrence occurred – and, if so, how soon. “Then we let the algorithm do its work! This involves an unimaginable amount of data, which is impossible for humans to process on their own,” explains Damien Lacroux. “Today, the project is behind schedule, and we’ve only just finished collecting data from the first cohort. We still have to start training the algorithm with this data and then recruit a second cohort to validate its training.” So we’ll have to wait a little longer before seeing the MIRACLE project in action.
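The MIRACLE team’s actual model and code are not public, but a minimal sketch of this kind of supervised training pipeline – using scikit-learn, a toy cohort and invented feature names, all of them assumptions – might look like this:

```python
# Minimal sketch of a supervised recurrence-prediction pipeline.
# All feature names, the model choice and the data are illustrative
# assumptions: the MIRACLE project's actual code is not public.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 220  # size of the first MIRACLE cohort mentioned above

# Three hypothetical feature groups, mirroring the data types described:
features = pd.DataFrame({
    # clinico-pathological data
    "sex": rng.integers(0, 2, n_patients),
    "prior_chemotherapy": rng.integers(0, 2, n_patients),
    # medical imaging data (e.g. a radiomics summary)
    "tumour_volume_cm3": rng.gamma(2.0, 5.0, n_patients),
    # omics data (in reality thousands of molecular variables)
    "gene_expression_1": rng.normal(size=n_patients),
    "gene_expression_2": rng.normal(size=n_patients),
})
# Supervised label: recurrence observed (1) or not (0) for each patient
recurrence = rng.integers(0, 2, n_patients)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, features, recurrence, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```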
AI: a black box for medical decision-making
But the way it works immediately raises an ethical issue, one pointed out by the researchers Damien Lacroux interviewed. “At the start of training, bioinformaticians manage to slice up the datasets and associate the AI results with this or that input factor. But gradually, the data increases and it becomes a black box.” This growing volume of data makes the models used to refine predictions more complex. And therein lies the paradox: as the amount of data increases, the algorithm’s margin of error narrows. The AI becomes more accurate, but how it works becomes less clear to practitioners. How can they explain the decisions made by AI to patients, or guarantee the absence of bias, if they themselves are not familiar with its inner workings?
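One way bioinformaticians can associate a model’s results with input factors is post-hoc attribution. A minimal sketch using permutation importance – an assumed technique, since the article does not say which methods the MIRACLE team uses – shows why this works with a handful of features, and why it stops telling a readable story at omics scale:

```python
# Sketch: attributing a trained model's predictions to input factors via
# permutation importance (one common post-hoc explanation technique).
# With ~20 features the ranking is readable; with thousands of omics
# variables it no longer gives a doctor a usable clinical story.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=220, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Rank features by how much shuffling them degrades the model's score.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature_{idx}: importance {result.importances_mean[idx]:.3f}")
```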
In the field of oncology, decision trees are often used to help doctors justify their clinical reasoning. However, the integration of algorithmic scores into these processes can conflict with the need for transparency on the part of doctors, who sometimes struggle to understand what input data has led the AI to estimate the probability of recurrence. “Even if we managed to decipher every internal calculation in the algorithm, the result would be so mathematically complex that doctors would not be able to interpret it or use it in their clinical practice,” explains a German bioinformatician working on the MIRACLE project, interviewed6 by Damien Lacroux in his forthcoming study.
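To illustrate the contrast (with synthetic data and hypothetical feature names, not MIRACLE variables): a shallow decision tree prints as plain if/else rules that a clinician can read aloud, which is exactly what a high-dimensional algorithmic score does not offer.

```python
# Sketch: why shallow decision trees are prized for clinical justification.
# The features and data are synthetic stand-ins, not MIRACLE variables.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=220, n_features=4, random_state=1)
tree = DecisionTreeClassifier(max_depth=2, random_state=1).fit(X, y)

# export_text renders the fitted tree as human-readable decision rules.
print(export_text(tree, feature_names=[
    "tumour_stage", "node_involvement", "marker_level", "age"]))
```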
This also affects the notion of informed patient consent. “The doctor is obliged to provide sufficient information to enable the patient to accept or refuse treatment. However, if the practitioner is not fully informed, this poses an ethical problem,” adds the philosopher. And yet, as Damien Lacroux points out in his study: “Molecular biology has identified the need to take into account thousands of pieces of patient omic data as an essential means of making progress in oncology.” AI would therefore enable better management of the potential evolution of the disease, by refining the proposed treatments… at the expense of trust between doctors and patients.
The importance of having humans in the driver’s seat
Whether AI is integrated into the medical deliberation process (what Damien Lacroux calls “analytical deliberation” in his article) or whether it sits entirely outside the decision-making process and only intervenes as a final consultation (“synthetic deliberation”), there must be total transparency as far as patients are concerned. The main fear raised by the researcher during group interviews with patients7 remains that “the machine” will make the diagnosis without human intervention. “But this is not at all the case today,” reassures Damien Lacroux.
These algorithmic scores, which propose a probability of cancer recurrence based on patient data, also raise other questions specific to predictive medicine: when are we really cured? Can we really be free of the disease when uncertainty persists, and we live in constant anticipation of a possible recurrence? These questions, like so many others, have yet to be answered.