Imagine: you have just had a serious accident. Your condition is critical. The medical staff rushes you to the operating theatre. The assistant tells you that the doctor is on the way. The doctor will have to decide on the necessary treatment, and that decision will be crucial in determining the long-term consequences of the accident. After a few minutes of waiting, out of the corner of your eye, you see a white coat fluttering around the corner. Thank goodness. The doctor has arrived, opens a laptop and asks it, aloud, what to do. A robotic voice answers with what needs to be done.
Of course, the scenario above is not (yet) a reality. Make no mistake, though: machines programmed with medical knowledge already play an increasingly important role in healthcare decision-making. Clinical decision support systems that use artificial intelligence are already 'thinking' along with medical staff. For example, they alert healthcare workers to unusual readings, provide guidance during treatment, or suggest actions to be taken.
These clinical decision support systems (CDSS) are meant to facilitate the work of medical professionals, or even to improve their decisions. The use of these systems makes decision-making faster and more accurate and reduces the number of human errors, or so the promise goes. Greater efficiency and effectiveness of medical decisions should also lead to lower healthcare costs. A technical precondition for these advantages is that the data used by these systems are complete and correct. In many cases, there are doubts as to whether decisions for individual cases can be made on the basis of large data sets, for example for individuals with complex or multiple disorders. In addition, the algorithms used must meet safety standards: it should not be possible, for instance, for just anyone to modify the algorithm.
Besides these technical considerations, there are also concerns and uncertainties about the practical use of these decision support systems. How far can these systems go in their support? Do they, in some cases, take over too much of the decision-making from care workers? And what are the consequences for the patient?
As we have shown in previous articles on precaution and innovation, the precautionary principle is often invoked in the event of serious but uncertain risks. In this paper, we describe why a precautionary approach is also important for clinical decision support systems, and what such an approach might look like in practice.