
How Artificial Intelligence can support the practice of doctors

Article
16 February 2021
Artificial intelligence Health care

Photo: A doctor and patient discuss what's on their tablet screen. (Sirtravelalot/Shutterstock)

Artificial intelligence (AI) holds great promise for health care, according to developers, policymakers and medical professionals. It is expected to improve health care by alleviating the workload of care workers, improving the quality of decision-making and enhancing efficiency. At the same time, the introduction of AI poses many challenges for medical practice. This article sheds light on clinical decision support systems, a particular type of AI for clinical practice, and how they can complement the work of medical professionals.

In a recent series of blog posts, the Rathenau Institute asked several relevant players in the field of Dutch health care and innovation (i.e. government, developers, entrepreneurs, lawyers and scientists) to share their views on responsible innovation of AI for health care. Another series of blog posts sheds light on the ideas and discussions concerning the international recommendation on the ethics of AI.

This article by Sophie van Baalen, Mieke Boon and Petra Verhoef highlights insights from the paper “From clinical decision support to clinical reasoning support systems”, recently published in the Journal of Evaluation in Clinical Practice and available online.

This article deals with epistemological (knowledge-theoretical) issues arising from the development and implementation of a specific type of AI for clinical practice, namely clinical decision support systems (CDSS), and their implications for the epistemic tasks and responsibilities of medical professionals. Epistemology deals with questions related to the construction and use of knowledge. For example, how well do theories (or other knowledge products, such as models) represent the target system in the real world (for example, a disease), and how can scientists demonstrate that?

Knowledge for diagnosis and treatment
In clinical practice, medical professionals continuously construct knowledge about individual patients in order to make decisions about the diagnosis and treatment of those patients¹. CDSS are systems that are being developed to take over one or more epistemic tasks (i.e. tasks that contribute to the construction of knowledge about a patient for clinical decision-making) that are usually performed by medical professionals.

When (not) to use CDSS?
There are several potential, uncertain risks associated with the introduction of CDSS in clinical practice, which were reviewed in a recent report for the European project RECIPES. This project studies how best to apply the precautionary principle to innovation based on technologies whose risks are uncertain. In the case of CDSS, epistemic tasks that are usually performed by medical professionals, who bear the responsibility to perform them to the best of their knowledge and ability, are delegated to machines. The potential risks identified in the report relate to the fact that CDSS lack the ability to incorporate the clinical and personal context of the individual patient into their conclusions. Moreover, they cannot be held responsible for the outcome in the way that human doctors can.

However, since CDSS outperform clinicians in some specific, well-defined epistemic tasks, these systems can support clinical reasoning by clinicians. To identify those epistemic tasks, it is important to consider the specific cognitive capacities of medical professionals and of AI systems. CDSS can, for example, help identify patterns in large amounts of data that are inaccessible to humans because of the sheer quantity of data or the complexity of the pattern. Furthermore, they can help detect similarities in data patterns across patients. Clinicians, however, deal with individual patients and their specific circumstances. They have to find the most suitable treatment, taking into account the diagnosis, the personal situation of the patient and the local situation of the hospital. In addition, they may consult and deliberate with colleagues.

If a CDSS is to take over certain epistemic tasks, it must be fitted into the clinical reasoning process, and the clinician must remain in a position to take responsibility for the final reasoning process and its outcome. Therefore, rather than thinking of CDSS as decision aids, we argue that it is better to consider them clinical reasoning support systems (CRSS).

Hybrid intelligence
Proper implementation of CRSS can support high-quality decision-making by allowing clinicians to combine their human intelligence with the artificial intelligence of the CRSS into hybrid intelligence, in which both have clearly delineated, complementary tasks based on their respective capacities. However, clinicians have to stay ‘in the lead’ in collecting, contextualizing and integrating all kinds of clinical data and medical information, and in using them to construct knowledge about an individual patient. Good use of AI in medical practice depends on the availability and proper processing of relevant medical data. Furthermore, it depends on the ability of medical professionals to use a system in practice by incorporating it into their clinical reasoning process.