Artificial intelligence (AI) holds great promise for health care, according to developers, policymakers and medical professionals. It is expected to improve health care by alleviating the workload of care workers, improving the quality of decision-making and enhancing efficiency. At the same time, the introduction of AI poses many challenges for medical practice. This article sheds light on clinical decision support systems, a particular type of AI for clinical practice, and how they can complement the work of medical professionals.
In a recent series of blog posts, the Rathenau Institute asked several relevant players in the field of Dutch health care and innovation (i.e. government, developers, entrepreneurs, lawyers and scientists) to share their views on responsible innovation of AI for health care. Another series of blog posts sheds light on the ideas and discussions concerning the international recommendation on the ethics of AI.
This article by Sophie van Baalen, Mieke Boon and Petra Verhoef highlights insights from the paper “From clinical decision support to clinical reasoning support systems”, which was recently published in the Journal of Evaluation in Clinical Practice and can be accessed here.
This article deals with epistemological (knowledge-theoretical) issues arising from the development and implementation of a specific type of AI for clinical practice, namely clinical decision support systems (CDSS), and their implications for the epistemic tasks and responsibilities of medical professionals. Epistemology deals with questions related to the construction and use of knowledge. For example, how well do theories (or other knowledge products, such as models) represent the target system in the real world (for example, a disease), and how can scientists demonstrate that?
Knowledge for diagnosis and treatment
In clinical practice, medical professionals are continuously constructing (working on) knowledge of individual patients, in order to make decisions about the diagnosis and treatment of those patients¹. CDSS are systems that are being developed to take over one or more epistemic tasks (i.e. tasks that contribute to the construction of knowledge about a patient for clinical decision-making) that are usually performed by medical professionals.
There are many different types of CDSS, which provide different kinds of support to different decision-making processes in a variety of clinical situations: providing alerts or reminders while monitoring patients, highlighting clinical guidelines during care, identifying drug–drug interactions, or advising on possible diagnoses or treatment plans³. Regarding diagnosis and treatment, CDSS can serve many functions, such as predicting the outcome of a specific treatment, interpreting images (for instance contouring, segmentation or pathology detection), prescribing medication and its dosage, and supporting screening and prevention².
In performing epistemic tasks, a CDSS uses artificial intelligence to ‘reason’ according to its algorithms about the diagnosis or possible treatments of a specific patient, by comparing that patient’s data with the data in its system. The rules that the CDSS follows to ‘reason’ are either programmed by the developers (i.e. ‘knowledge-based’ or ‘rule-based’ expert systems) or inferred from a large amount of data about a group of patients using statistical AI methods, such as machine learning or deep learning algorithms (i.e. ‘data-driven’ systems).
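The contrast between the two kinds of systems can be made concrete with a deliberately simplified sketch. The scenario, data and thresholds below are entirely hypothetical, not drawn from any real CDSS or guideline; the point is only to show the same epistemic task (flagging a risky medication dose) performed once by a hand-programmed rule and once by a rule inferred from past patient data.

```python
# Hypothetical illustration: one epistemic task, two kinds of CDSS.

# 1. Rule-based ('knowledge-based' expert system): the rule is programmed
#    explicitly by developers, here standing in for a clinical guideline.
def rule_based_alert(dose_mg: float, creatinine_umol_l: float) -> bool:
    """Flag the dose when renal function is impaired (hand-coded rule)."""
    return creatinine_umol_l > 110 and dose_mg > 50

# 2. Data-driven system: the rule is inferred from data about past patients.
#    A trivial 'learning' step estimates a dose threshold from cases that
#    were previously labelled risky (a stand-in for machine learning).
def learn_threshold(past_cases: list[tuple[float, bool]]) -> float:
    risky_doses = [dose for dose, risky in past_cases if risky]
    return min(risky_doses)  # the simplest possible inferred rule

past_cases = [(40.0, False), (55.0, True), (70.0, True)]  # invented data
threshold = learn_threshold(past_cases)

def data_driven_alert(dose_mg: float) -> bool:
    return dose_mg >= threshold

print(rule_based_alert(60, 130))  # True: the programmed rule fires
print(data_driven_alert(60))      # True: dose exceeds the inferred threshold
```

In a real data-driven CDSS the inferred rule is, of course, a statistical model over many variables rather than a single threshold, and is far harder to inspect, which is part of what the epistemological discussion below is about.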
When (not) to use CDSS?
There are several potential but uncertain risks associated with the introduction of CDSS in clinical practice, which were reviewed in a recent report for the European project RECIPES. This project studies how best to apply the precautionary principle to innovation based on technologies whose risks are uncertain. In the case of CDSS, epistemic tasks that are usually performed by medical professionals, who bear the responsibility to perform them to the best of their knowledge and ability, are delegated to machines. The potential risks identified in the report relate to the fact that CDSS lack the ability to incorporate the clinical and personal context of the individual patient into their conclusions. Moreover, they cannot be held responsible for the outcome in the way that human doctors can.
However, since CDSS outperform clinicians in some specific, well-defined epistemic tasks, these systems can support clinical reasoning by clinicians. To identify such epistemic tasks, it is important to consider the specific cognitive capacities of medical professionals and of AI systems. CDSS can, for example, help identify patterns in large amounts of data that are inaccessible to humans because of the sheer quantity of data or the complexity of the pattern. Furthermore, they can help detect similarities in data patterns across patients. Clinicians, by contrast, deal with individual patients and their specific circumstances. They have to find the most suitable treatment, taking into account the diagnosis, the personal situation of the patient and the local situation of the hospital. In addition, they may consult colleagues and deliberate with them.
If a CDSS is to take over certain epistemic tasks, it must be fitted into the clinical reasoning process, and the clinician must still be in a position to take responsibility for the final reasoning process and outcome. Therefore, rather than thinking of CDSS as decision aids, we argue that it is better to consider them as clinical reasoning support systems (CRSS).
Proper implementation of CRSS can support high-quality decision-making by allowing clinicians to combine their human intelligence with the artificial intelligence of the CRSS into hybrid intelligence⁴, in which both have clearly delineated and complementary tasks, based on their respective capacities. However, clinicians have to stay ‘in the lead’ in collecting, contextualizing and integrating all kinds of clinical data and medical information, and in using them to construct knowledge about an individual patient. Good use of AI in medical practice depends on the availability and proper processing of relevant medical data. Furthermore, it depends on the ability of medical professionals to utilize a system in practice by incorporating it into their clinical reasoning process.
¹ Van Baalen S, Boon M. An epistemological shift: from evidence-based medicine to epistemological responsibility. J Eval Clin Pract. 2015;21(3):433-439. DOI: 10.1111/jep.12282.
² Mahadevaiah G, Prasad RV, Bermejo I, Jaffray D, Dekker A, Wee L. Artificial intelligence-based clinical decision support in modern medical physics: Selection, acceptance, commissioning, and quality assurance. Med Phys. 2020;47(5):e228-e235. DOI: 10.1002/mp.13562.
³ Sloane EB, Silva RJ. Artificial intelligence in medical devices and clinical decision support systems. In: Iadanza E, ed. Clinical Engineering Handbook. Academic Press; 2020:556-568. DOI: 10.1016/B978-0-12-813467-2.00084-5.
⁴ Dellermann D, Ebel P, Söllner M, Leimeister JM. Hybrid intelligence. Bus Inf Syst Eng. 2019;61:637-643. DOI: 10.1007/s12599-019-00595-2.