
Entrepreneurship with AI in healthcare: the need for cooperation and strategy

Article
07 December 2020
Artificial intelligence · Health care · Blog series

Illustration: Max Kisman


In the blog series 'Healthy Bytes' we investigate how artificial intelligence (AI) is used responsibly for our health. In this sixth part, Jörgen Sandig talks about his experience as an AI entrepreneur in healthcare. Jörgen Sandig was co-founder and CEO of Scyfer, a technology company that has now been acquired by Qualcomm Technologies. What has he learned from his experience as an AI entrepreneur in healthcare? And what does he think it takes to use AI successfully and responsibly for our health?

In short

  • How is AI used responsibly for our health? That's what this blog series is about.
  • Jörgen Sandig shares his experience as an entrepreneur in healthcare technology.
  • To make AI work for better care, cooperation between the many players in health care and a data strategy are needed.

Looking for good examples of the application of artificial intelligence in healthcare, we asked key players in the field of health and welfare in the Netherlands about their experiences. In the coming weeks we will share these insights via our website. How do we make healthy choices, now and in the future? How do we make sure we can make our own choices where possible? And do we understand how others take responsibility for our health where they should? Even when technology is used, our health must remain central.

Obstacles for an AI start-up

As the founder of a technology start-up building AI products for healthcare, I came up against some obstacles. One of them was the subsidy process and the requirement to submit an application with a launching partner. A launching partner is an organisation that commits to taking part in your experiment: it wants to test the new product, without having to commit to actually purchasing it. Finding a launching partner in healthcare proved difficult, because the business cases for AI solutions in healthcare are not easy to realise. Healthcare providers are busy and do not start an experiment until there is a prospect of a positive impact.

Time pressure was also a problem. Finding a launching partner and completing the procedures in healthcare generally takes more than a year. That leaves very little time to test new, unproven AI technology, despite its promising added value.

Another obstacle proved to be the acceptance of error margins in an AI system. Take, for example, an application that makes 3D simulations of hip joints for use in preparing hip surgery. A doctor can use the 3D copy to practise the operation in advance. The system makes an accurate copy of the hip joint, and the software analyses where the abnormalities are. But the image is never 100% accurate. A person also makes mistakes. Yet it seems as if system errors are more difficult to accept than human errors.

Yet it seems as if system errors are more difficult to accept than human errors.

Cooperation and strategy

In order to be successful with AI in healthcare, it is first and foremost necessary to know whether the AI application will be used by a sufficient number of healthcare providers. What problem does the application solve? Who will use it? Although these questions are asked often enough, the answers are difficult to formulate. This is because the consequences for the current way of working are often unclear, making it impossible to properly assess the impact of the AI application.

A willingness to take risks is therefore necessary. But because care is about people, taking risks is unwise, which makes it even more difficult to start experiments. That is why we first tested and implemented our image recognition technology on the visual inspection of steel plates instead of medical images.

An entrepreneur must also be able to make a profit from his or her product. The challenges here lie in the health insurance system and the organisational structure of healthcare. The business case must be attuned to a system of declarations and reimbursements. The complex organisational structure of a hospital, and of healthcare in general, hinders the implementation of a new AI application. An entrepreneur depends on many players in the field. These players should join forces and jointly draw up a programme for the development of AI in healthcare. This requires a fundamental system change in healthcare.

Hospitals will have to adapt their data strategy to the application of AI in healthcare. They can store a lot of data, but there is little point in collecting data if there is no plan for the use of advanced AI applications. If hospitals want to use AI for better care, they will have to store the information crucial for AI applications so that automatic decision systems can work with it. This information is generally lacking. In pathology, for example, specialists investigate whether the cells in a tumour sample are malignant. The person analysing the cells will have to record which cells are malignant and which are not; only then does this data become usable for a possible AI application. At present there is often no shared vision for the generation and use of this data, which leaves opportunities for improving care with AI untapped.
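As a hypothetical illustration of why this labelling matters: a supervised AI system can only learn from records that carry an explicit verdict, so unlabelled samples are simply unusable for training. The record fields below are invented for the example.

```python
# Hypothetical pathology records; only some carry the verdict an AI model needs.
records = [
    {"sample_id": "S1", "cell_features": [0.9, 0.2], "malignant": True},
    {"sample_id": "S2", "cell_features": [0.1, 0.8], "malignant": None},   # never labelled
    {"sample_id": "S3", "cell_features": [0.7, 0.3], "malignant": False},
]

def usable_for_training(records):
    """Keep only the records where the analyst recorded a verdict."""
    return [r for r in records if r["malignant"] is not None]

training_set = usable_for_training(records)
print(len(training_set), "of", len(records), "records can feed a supervised model")
```

However much data a hospital stores, only the labelled fraction can feed a model like this; that is the point of a data strategy.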

There is a lack of a shared vision for the generation and use of data.

We also need a collective awareness that people make individual judgements and mistakes and have their limitations. The analysis of blood samples, for example, is becoming increasingly sophisticated and complex. It is becoming more and more difficult for a human being to view these analyses in connection with other (historical) analyses and to discover subtle patterns in them. AI offers an opportunity to improve that process. But then the potential end user has to be prepared to let themselves be supported by AI, aware that it has its limitations.

Impact of AI

AI can only have a real impact if AI start-ups succeed in scaling up, so that successful AI applications do not remain fragmented across healthcare but benefit the entire healthcare sector.

There is still too little experience with, and thus too little awareness of, the potential impact of AI. It is good that Europe is thinking about policies for responsible AI. However, Europe is not leading but following in the development of AI technology by parties in China and the US, so we have limited control over the technology. Europe is training the referees, as it were, but is lagging behind in the development of the game.

It is noteworthy that AI is often seen as the culprit, while it is people themselves who make the technology and decide which decisions are left to AI. AI is, at least for the moment, a mirror of human behaviour. If we discriminate, then an automatic decision system will discriminate as well. AI makes our own choices and preferences visible. Where these are undesirable, they should be identified and 'corrected', instead of pointing to AI as the culprit. I see this as an opportunity: AI systems can make our own collective failings visible. This, as far as I am concerned, is where the power of this technology lies.
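A minimal sketch of that mirror effect, with invented data: a trivial "model" that simply learns the most common past decision per group will faithfully reproduce any bias present in the historical decisions it is trained on.

```python
from collections import Counter

# Invented historical decisions that encode a human bias against group "B".
history = [("A", "approve"), ("A", "approve"), ("A", "reject"),
           ("B", "reject"), ("B", "reject"), ("B", "approve")]

def majority_rule(history):
    """'Train' by taking the most common past decision for each group."""
    by_group = {}
    for group, decision in history:
        by_group.setdefault(group, Counter())[decision] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = majority_rule(history)
print(model)  # {'A': 'approve', 'B': 'reject'} - the bias in the data reappears
```

The system did not invent the discrimination; it made a pattern in the human decisions explicit, which is exactly what makes it correctable.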

Europe is training the referees, as it were, but is lagging behind in the development of the game.