In the blog series 'Healthy Bytes', we investigate how artificial intelligence (AI) can be used responsibly for our health. In this opening blog, we conclude that responsible use of AI means that our health remains central and that patients can make their own choices. AI can support us in those choices. But the more we rely on technology, the more urgent the question 'who decides?' becomes. Safeguards are needed to protect values such as autonomy, privacy and inclusiveness in automated decision systems. In this blog series, we look for good examples from which we can learn how professionals, patients and society shape the responsible use of AI.
In short
- How can AI be used responsibly for our health? That's what this blog series is about.
- As we rely more and more on AI when making choices about our health, the question 'who decides?' becomes more urgent.
- We are looking for good examples, so that we can learn how to give shape to the responsible use of AI in care and welfare.
Health is a precious good. In surveys, the Dutch invariably indicate that they consider health very important. In health policy, the patient or client is central. Together with family, and assisted by doctors and other professionals, we make our own choices. At least, that is the ideal. Especially when it comes to our health, practice shows that wanting is not the same as being able to, and that being able to does not mean actually doing.
New technology can help us by monitoring our health, coaching us, supporting our decisions, and finding correlations in large amounts of data that we could not interpret ourselves. The pedometer is a simple example: our number of steps is measured, we receive advice to be more active, and we can see how we are doing compared to others. The same applies to interpreting medical images. By collecting large volumes of measurement data that are interpreted by mathematical models (algorithms), doctors can receive advice, for example on whether or not to operate. Both examples rely on artificial intelligence (AI) to process all this data.
But who decides? Is the patient or our health still central? Or are commercial or other motives leading?
Previously, the Rathenau Instituut investigated which preconditions both simple and complex decision systems should meet in order to be helpful and reliable. Whether their decisions are in our best interest is far from always apparent, justifiable, or even evidence-based. During our earlier research, we also saw good examples of applications of artificially intelligent, smart algorithms and automated decision systems for health and welfare, and we looked at how these are embedded in good care and advice.
We concluded that automated decision systems can make choices in which health and the patient are central, provided we pay attention to three factors: the quality of the data, the way in which the data are processed, and the advice the automated decision system gives. Are the data relevant for this individual patient, or were they collected in a context that does not apply to this patient? Is the patient's interest central, or does the algorithm take other (e.g. commercial) interests into account? Is the advice personally tailored to the patient, or does it rather resemble a one-size-fits-all system?
As we increasingly rely on this technology for our health, it becomes more important to have guarantees of control, autonomy and accountability, but also of privacy and inclusiveness. Only then can we prevent exclusion or discrimination based on automated advice and choices. The question 'who decides?' becomes even more topical with the use of artificial intelligence. Who wants to, who can, and who will ultimately decide about our health and welfare?
Looking for good examples of the application of artificial intelligence in healthcare, we asked key players in the field of health and welfare in the Netherlands about their experiences. In the coming weeks, we will share these insights via our website. How do we make healthy choices now and in the future? How do we ensure that we can make our own choices wherever possible? And do we understand how others take responsibility for our health where they need to? Even with the use of technology, our health should remain central.
Read more about this topic
Read the next blogs in this series:
- Policy for AI in health care: a balancing of values (9 November 2020)
- How AI helps a person with dementia eat their sandwich on time (16 November 2020)
- Responsible AI in healthcare: the value of examples (23 November 2020)
- AI in care: implications for education (30 November 2020)
- Entrepreneurship with AI in healthcare: need for cooperation and strategy (7 December 2020)
- How an investment fund can contribute to responsible AI in healthcare (14 December 2020)
- Innovating with AI in healthcare: 'So people can participate in society' (21 December 2020)
- Towards healthy data use for medical research (4 January 2021)
- Towards proper management of data technology in healthcare (11 January 2021)
- Innovating with AI in healthcare: 'Get the data in order first' (18 January 2021)
- Artificial intelligence in healthcare: deciding together is crucial (3 February 2021)