In the blog series ‘Healthy Bytes’ we investigate how artificial intelligence (AI) is used responsibly for our health. In this third part, Dirk Lukkien and Henk Herman Nap, senior researchers at Vilans, talk about the importance of exchanging knowledge and experience. Vilans is a knowledge organisation for long-term care. AI can improve long-term care, but this requires healthcare professionals to work together and share their experiences.
How is AI used responsibly for our health? That's what this blog series is about.
Knowledge organisation Vilans strives for cooperation and sharing of knowledge, within the long-term care sector as well as between different care sectors.
AI can support clients and professionals in their decisions, for example in recognising or predicting care needs.
Looking for good examples of the application of artificial intelligence in healthcare, we asked key players in the field of health and welfare in the Netherlands about their experiences. In the coming weeks we will share these insights via our website. How do we make healthy choices now and in the future? How do we make sure we can make our own choices when possible? And do we understand how others take responsibility for our health when they need to? Even when technology is used, our health should remain central.
Innovations with AI in long-term care
AI is already being used in long-term care. Apps help register wound care on the basis of automatic image analysis. The Electronic Client Dossier (ECD) offers tools that provide automatic word suggestions during reporting and, based on text analysis, help identify the client's request for care. In this way, AI can simultaneously reduce the workload of healthcare providers and help them make better decisions, thereby increasing the quality of care.
Lifestyle monitoring using data from motion sensors can give care providers and next of kin early warnings about gradual changes in the lifestyle of a client living at home, such as the inversion of their day-night rhythm. A development that goes one step further is linking lifestyle monitoring to social robotics. Using sensor data showing that there hasn't been any kitchen activity, a social robot can, for example, point out to a person with dementia that it is time to eat their beloved peanut butter sandwich. In this way, the care professional gains a better understanding of the client's daily life from a distance and saves time that can be spent on human and personal contact at another moment. This is especially valuable in long-term care, where there is a shortage of care professionals and where many clients find it difficult to express what they feel or want.
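The kitchen-activity example above is essentially a simple rule over sensor data. As a minimal sketch of the idea (the function name, threshold and data shape are our own illustrative assumptions, not any particular vendor's product), such a check could look like this:

```python
from datetime import datetime, timedelta

# Illustrative threshold: prompt if no kitchen activity for 5 hours.
INACTIVITY_THRESHOLD = timedelta(hours=5)

def needs_meal_reminder(kitchen_events, now):
    """Return True if the latest kitchen motion event is too long ago.

    kitchen_events: list of datetime timestamps from a (hypothetical)
    kitchen motion sensor; now: the current time.
    """
    if not kitchen_events:
        return True  # no activity recorded at all
    last_activity = max(kitchen_events)
    return now - last_activity > INACTIVITY_THRESHOLD

# Example: last kitchen activity at 07:30, it is now 13:00 (5.5 h later),
# so a social robot could be triggered to suggest a meal.
events = [datetime(2024, 5, 1, 7, 30)]
print(needs_meal_reminder(events, datetime(2024, 5, 1, 13, 0)))
```

Real lifestyle-monitoring systems are of course richer than this, learning a client's personal rhythm from historical data rather than using a fixed threshold, but the principle of turning sensor silence into a gentle prompt is the same.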
We see many initiatives exploring the possibilities of data and AI. For example, several organisations are investing in applications that help detect (increasing) stress and other emotions earlier and more accurately. This makes it possible to respond more quickly to aggression, self-injurious behaviour or running away. In addition, care organisations in various regions within the Netherlands apply lifestyle monitoring to coach clients on healthy behaviour. However, the data they collect is still only exchanged to a limited extent.
Vilans is currently making an inventory of how healthcare organisations can learn from each other and how organisations, professionals and clients can better exchange and use data. Vilans is also exploring how developers and users deal with ethical issues. Vilans strives for cooperation and sharing of knowledge on the responsible use of data and AI within the entire care sector.
Realistic expectations about the role of AI
AI's potential is, however, accompanied by dilemmas. What about the confidentiality of data, for example? What does a cloud provider of a speech-to-text application do with the data, and who can listen in? How do we ensure that AI helps healthcare professionals increase the quality of care they provide, without pushing them to the limit?
We advocate that developers, clients, healthcare professionals and scientists work together in the design of supportive and 'trustworthy AI'. This means, for example, that users of AI applications retain control over the choices within the care process.
It is important that the long-term care sector takes the limitations and pitfalls of AI seriously. The sector must recognise that many processes and situations do not automatically lend themselves to AI applications. Algorithms can make decisions and carry out tasks based on huge amounts of data from the past. Assigning meaning to this data and taking final responsibility remains human work. AI can be of added value if it supports human experts in what makes them so irreplaceable: understanding a client's personal situation. Care will always require human input and actions.
Learning from others
AI in long-term care is currently still in its infancy. Through more cooperation and exchange of data, organisations can take significant steps towards the deployment of learning AI systems that make predictions about the future based on events from the past. This requires a lot of data. Looking at specialist medical and mental health care, the long-term care sector can learn more about what can and cannot be done with AI. For example, AI already helps in choosing the right treatment for a urinary tract infection or in determining which treatment is best in case of a psychosis. Long-term care already collects a lot of valuable data, but does not yet make as much use of it as other care sectors.
Collaboration and knowledge sharing between organisations and across care sectors are of great added value in achieving responsible AI applications. Organisations often try to reinvent the wheel themselves, and the various care sectors are still too often treated as separate worlds. It is precisely when organisations and the various care sectors deploy AI together that the technology can support prevention, early detection and treatment.