When (not) to use CDSS?
The introduction of CDSSs in clinical practice carries several potential but uncertain risks, which were reviewed in a recent report for the European project RECIPES. This project studies how best to apply the precautionary principle to innovations based on technologies whose risks are uncertain. In the case of CDSSs, epistemic tasks that are usually performed by medical professionals, who bear the responsibility to carry them out to the best of their knowledge and ability, are delegated to machines. The risks identified in the report stem from the fact that CDSSs cannot incorporate the clinical and personal context of the individual patient into their conclusions. Moreover, they cannot be held responsible for the outcome in the way that human doctors can.
However, since CDSSs outperform clinicians in some specific, well-defined epistemic tasks, these systems can support clinical reasoning. To identify which epistemic tasks these are, it is important to consider the specific cognitive capacities of medical professionals and of AI systems. CDSSs can, for example, help identify patterns in large amounts of data that are inaccessible to humans because of the sheer quantity of data or the complexity of the pattern. Furthermore, they can help detect similarities in data patterns across patients. Clinicians, by contrast, deal with individual patients and their specific circumstances. They have to find the most suitable treatment, taking into account the diagnosis, the personal situation of the patient, and the local situation of the hospital. In addition, they may consult colleagues and deliberate with them.
If a CDSS is to take over certain epistemic tasks, it must be fitted into the clinical reasoning process, and the clinician must still be in a position to take responsibility for the final reasoning process and its outcome. Therefore, rather than thinking of CDSSs as decision aids, we argue that it is better to consider them clinical reasoning support systems (CRSSs).
Proper implementation of a CRSS can support high-quality decision-making by allowing clinicians to combine their human intelligence with the artificial intelligence of the CRSS into hybrid intelligence, in which both have clearly delineated and complementary tasks based on their respective capacities. However, clinicians have to stay ‘in the lead’ in collecting, contextualizing, and integrating all kinds of clinical data and medical information, using them to construct knowledge about the individual patient. Good use of AI in medical practice depends on the availability and proper processing of relevant medical data. Furthermore, it depends on the ability of medical professionals to utilize a system in practice by incorporating it into their clinical reasoning process.