How the medical field is becoming automated
A useful toolkit but no solution to global health problems

© Goethe-Institut

The Covid-19 pandemic shows how AI technologies can be useful in managing global health crises, but also where they fall short.

By Henrik Chulu

When thinking about how new technologies change healthcare, one might immediately think of advances in prosthetics, advanced surgical procedures, or robots used in hospitals for disinfection, clinical training, telepresence or even patient companionship. A cute example of the latter is Paro, a therapeutic robot baby seal that helps alleviate dementia symptoms.

However, while robots are certainly coming to change the face of healthcare, they are just the tip of the iceberg of how the medical field is becoming automated. Underneath the robotic actuators and process automation, a data deluge and an algorithmic avalanche are unfolding. Artificial intelligence promises to change the way we think about and manage global public health.

"Although artificial intelligence for health is relatively new, there's been an enormous amount of progress over the last couple of years," says Naomi Lee, who is Senior Executive Editor at The Lancet and Vice Chair of the ITU/WHO Focus Group on Artificial Intelligence for Health.

Diabetes is a growing global health problem with annual worldwide deaths from the disease (both Type 1 and Type 2 combined) more than doubling over the last three decades from 600,000 in 1990 to 1.37 million in 2017. A common complication of diabetes is retinopathy, where high blood sugar levels can cause irreversible damage to the eyes and eventually blindness. Consequently, screening diabetes patients for retinopathy is an increasing public health workload. In the UK, for example, the National Health Service offers a yearly retinopathy screening to everyone with diabetes from the age of 12.

"We know there's a huge burden of diabetes especially globally but that detection requires a specialist to look at these images of the back of the eye and work out whether or not there is retinopathy there. And so we now know that the AI models can do that from these images, but what we've learned over the last four years is that not only can they do that, but they can also look at what clinicians would describe as normal scans and identify those patients that are going to progress," says Naomi Lee.

Additionally, these AI-assisted screenings can reveal other, non-diabetes-related health information from images of the retina. This fuels hope that AI could produce more general health diagnoses from scans originally taken for particular conditions.

"All of these models are in early stage of diagnosis. You can understand how amazing this is for clinicians that you can take something that we thought was a sign of eye health and find these general indicators of people's health just from this scan," says Naomi Lee.

In other areas, such as breast cancer screening, AI methods also show promise in reducing the workload for human diagnosticians, and in some cases even in outperforming them.

How to handle a pandemic using artificial intelligence

Several different AI systems played a role in initially detecting the Covid-19 outbreak, the first being HealthMap, run by a team of researchers, epidemiologists and software developers at Boston Children's Hospital. Other disease outbreak detection systems use similar methods to give early warnings of possible new threats to public health by analysing a large variety of disparate data sources.

"The input is often social media posts, it's news articles, it might be flight patterns. And the AI algorithm is looking for anomalous behaviour because it understands what normal behaviour is and once it detects that anomalous behaviour, it then sends an alert saying there might be a disease outbreak," says Karen Hao, senior AI reporter at MIT Technology Review.
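The anomaly-flagging idea Karen Hao describes can be illustrated with a deliberately minimal sketch: compare each day's count of, say, disease-related news mentions against a trailing baseline and flag large deviations. This is a toy illustration only; systems like HealthMap combine many heterogeneous sources with far more sophisticated models.

```python
from statistics import mean, stdev

def flag_anomalies(daily_counts, window=7, threshold=3.0):
    """Return indices of days whose count deviates strongly (in standard
    deviations) from the mean of the preceding `window` days."""
    anomalies = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu = mean(baseline)
        sigma = stdev(baseline) or 1.0  # guard against a flat baseline
        if (daily_counts[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A quiet baseline of roughly 5 mentions a day, then a sudden spike.
counts = [5, 6, 4, 5, 7, 5, 6, 5, 4, 6, 42]
print(flag_anomalies(counts))  # → [10]: only the spike day is flagged
```

The point of the sketch is the shape of the pipeline, not the statistics: a model of "normal", a deviation measure, and an alert threshold that a human team then has to interpret.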

The AI systems were faster than their human counterparts in detecting the potential outbreak, but Karen Hao emphasises that they were not much faster. Half an hour after the initial alert from HealthMap, a team of human analysts independently released a warning based on other sources. And while HealthMap was fast, it rated the severity of the outbreak lower than the human team (and later the whole planet) did.

"You really do need the human experts that are trying to understand whether or not this is actually something of concern," says Karen Hao.

Algorithms have been put to work across all the different stages of the global public health response to the global pandemic. They are to varying extents being used to detect the outbreak, forecast infection rates, diagnose patients, and assist drug discovery and vaccine development.

"One of the benefits of AI is it doesn't get tired, so it can look at a lot of patients and always consistently use the same framework for decision-making. But one of the cons is also that if it makes an error, this is a life-or-death situation," says Karen Hao.

She emphasises that these algorithms should be seen as an "assistive diagnosis" to help human decision-makers and not replacements for human expertise.

"They are not the ones making the final call on whether a patient has the disease but they can supplement the doctors and the radiologists to double check their work and vice versa," she says.

After the pandemic had begun to spread around the world, Björn Schuller, who is Professor of Artificial Intelligence at Imperial College London and Chair of Embedded Intelligence for Healthcare and Wellbeing at the University of Augsburg, and his team started gathering voice recordings in Germany and in Wuhan, China, from people who had tested both positive and negative for Covid-19. They used this data to train a diagnostic AI that can detect the disease from the sound of the patient's voice.

"From only listening to the voice, we have an 81 percent correct rate at this moment. This is a shockingly high number," says Björn Schuller, noting that the results should be taken with a grain of salt, but that they generalise across the two countries.
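The general recipe behind such a system is to turn each recording into a vector of acoustic features and train a classifier on labelled examples. The sketch below is purely illustrative and is not Schuller's pipeline: it uses two crude hand-picked features (signal energy and zero-crossing rate, a rough proxy for pitch) and a nearest-centroid classifier, where real systems use large acoustic feature sets and deep models.

```python
import math

def features(samples):
    """Two very crude acoustic features of a mono waveform."""
    energy = sum(s * s for s in samples) / len(samples)
    zero_crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if a * b < 0
    ) / len(samples)
    return (energy, zero_crossings)

def centroid(vectors):
    return tuple(sum(dim) / len(vectors) for dim in zip(*vectors))

def train(labelled_recordings):
    """labelled_recordings: list of (samples, label) pairs."""
    by_label = {}
    for samples, label in labelled_recordings:
        by_label.setdefault(label, []).append(features(samples))
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def predict(model, samples):
    f = features(samples)
    return min(model, key=lambda label: math.dist(f, model[label]))

def tone(freq_hz, n=1000, rate=1000):
    """Synthetic stand-in for a voice recording: a pure sine tone."""
    return [math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]

# Pretend low-pitched recordings are "negative" and high-pitched ones
# "positive" (hypothetical labels for illustration only).
model = train([(tone(53), "negative"), (tone(211), "positive")])
print(predict(model, tone(197)))  # → positive
```

Swapping the toy features for real acoustic descriptors and the centroid rule for a trained model changes the accuracy, not the shape of the approach.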

What does health data say about your health?

Inspired by the advances made with artificial intelligence in other data-rich areas, the healthcare field is widely seen as ripe for a technological transformation. This potential technological leap, says Naomi Lee, could serve a worldwide need for physicians, especially in low-income countries, and is driven by a growing reservoir of data available for potential exploitation.

"There's been an explosion of digital health data and that's because we have electronic health records and we also have an increase in computational power," she says.

The vast amount of data available in electronic health records, albeit particularly in high-income countries, has allowed researchers to look for correlations across these relatively new, massive data sets.
Maxine Mackintosh is a Research Associate and Fellow working between The Alan Turing Institute, The Health Foundation and the University of Oxford. In her PhD research, she used health data in the form of electronic health records in an attempt to identify early indicators of dementia. And some of her results were puzzling.

"A thing that constantly came up time and time again as an early feature of dementia was the fact that if you had your cervical smear taken, it meant that you were less likely to get dementia," she says.

There was no obvious causal relationship between not having a cervical smear and being diagnosed with dementia 20 years later. But the correlation did reveal an underlying reality that has changed the way Maxine Mackintosh looks at the field of health data as a whole.

"We know that people from lower socioeconomic backgrounds, people who are poorer, people who are in poverty, are much less likely to turn up to a screening and much less likely to go to things like their cervical screen," she says.

Through her analysis of these vast electronic health records, what Maxine Mackintosh found was not that having a cervical smear reduces your risk of dementia, but, more generally, that healthcare outcomes are closely tied to a patient's socioeconomic status.

"What we were really picking up was signs of poverty and social and economic deprivation. Health care makes up only about ten percent of our health, so that's the pills you take, the hospital visits you have. There's about 20 percent which is your genetics and the rest is this thing called social determinants of health. It's where you live, who you love, where you work, what you spend your time doing, the food you consume etc. It's all those things that actually makes up that other 70% of whether you are kind of healthy or not," says Maxine Mackintosh.
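The confounding effect Mackintosh describes is easy to reproduce in a toy simulation. In the sketch below (all probabilities are hypothetical, chosen only for illustration), a hidden variable, socioeconomic deprivation, lowers both the chance of attending a cervical screening and overall health, so screening attendance correlates with fewer later dementia diagnoses even though neither causes the other.

```python
import random

random.seed(0)

def simulate_person():
    deprived = random.random() < 0.3
    # Deprivation makes screening attendance less likely...
    screened = random.random() < (0.3 if deprived else 0.8)
    # ...and, in this toy model, dementia more likely, entirely
    # independently of whether a screening took place.
    dementia = random.random() < (0.25 if deprived else 0.08)
    return screened, dementia

people = [simulate_person() for _ in range(100_000)]

def dementia_rate(group):
    return sum(d for _, d in group) / len(group)

screened = [p for p in people if p[0]]
unscreened = [p for p in people if not p[0]]
print(f"screened:   {dementia_rate(screened):.3f}")
print(f"unscreened: {dementia_rate(unscreened):.3f}")
```

The unscreened group shows a markedly higher dementia rate, exactly the kind of "early indicator" a naive analysis would pick up; the signal is deprivation, not the smear.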

The question then became not so much how to detect dementia early, before it becomes symptomatic, but rather how to find data that directly represents the social determinants of health, instead of only proxies in electronic health records, such as a cervical smear standing in for higher social status.

"When we want to think about the things that contribute to whether you're sick or healthy, we actually don't really have that data. We don't collect it in the scale which you can really analyse," she says.

However, this data is being collected, just not for healthcare purposes. Private companies such as banks, insurers, telecommunications companies, social media platforms and video streaming services collect enormous amounts of data about the daily lives of their customers and users, data that could reveal a great deal about the social determinants of their health and thus improve their healthcare.

"At the moment there's mostly this enormous concentration of data within the healthcare service i.e. it's data that's collected about you when you're already sick. So for me those are an enormous missed opportunity for truly predictive healthcare by looking at data that's outside of the healthcare system," says Maxine Mackintosh.

Correlating all this highly personal data about people's private lives with their health data carries obvious privacy risks, and this is an area where data scientists begin to leave their own domain of expertise and enter the realm of ethics. It underscores the need for interdisciplinary work, where the development of AI is not left solely to data scientists and software engineers.

Björn Schuller has spent 12 years using voice data to train machine learning algorithms, and in that time he has found that a great deal of information can be uncovered by an algorithm simply listening to someone's voice.

"We have been finding a lot of other things in the voice like, are you sincere or not? What is your native language? Do you have Parkinson's? And many other things. If you share your audio and if you have something that can analyse your voice for Covid-19, you want to be sure that it's not depicting other things," he says.

He points to safeguards that can be built into a diagnostic AI that detects disease through voice analysis. With federated machine learning, the data processing takes place locally on the user's device rather than at a central location.

"Federated machine learning is the answer where you don't give away your personal data like your images or your audio," he says. "These would be averaged, so the other users in the world would not get your images, they would not even get your parameters, but just averages of parameter changes, and then the model would improve for everybody."
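The averaging scheme Schuller describes can be sketched in a few lines. This is a deliberately simplified toy (a one-parameter linear model fitted by gradient descent), not any particular production system: each client computes an update on its own private data, and only the averaged parameter changes reach the shared model.

```python
# Toy task: learn w in y = w * x, with the true relation being y = 2x.

def local_update(w, data, lr=0.01):
    """One gradient step on a client's private (x, y) pairs.
    Only the parameter *change* leaves the device, never the data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return -lr * grad

# Each client's data stays on its own device.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (1.5, 3.0)],
    [(0.5, 1.0), (2.5, 5.0)],
]

w = 0.0  # shared global model, distributed to every client
for _ in range(200):
    deltas = [local_update(w, data) for data in clients]
    w += sum(deltas) / len(deltas)  # the server averages the updates

print(round(w, 3))  # converges to 2.0
```

The privacy argument rests on the fact that the server only ever sees `deltas` averaged across clients; no raw recordings, images, or per-user parameters are transmitted.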

Who benefits from AI in healthcare?

Putting in place privacy safeguards in AI tools mitigates certain risks, but it's important to take the bigger ethical picture into account when considering how to use AI for public health purposes.

"AI is not the solution but it holds a lot of promise and in particular if you all work together with the experts from other fields," says Björn Schuller.

It is especially important that AI developers work with institutions and people with different forms of domain expertise given the huge commercial interests in AI in general and in health data in particular.

"The AI expertise lies increasingly in commercial entities and so right now there's a kind of pinch point where people are worried about how we can both unlock the interest in and analyse this data but also how we can do that without selling off large datasets to companies that might make models that are then sold back to the population," says Naomi Lee.

When it comes to the Covid-19 pandemic Karen Hao cautions against seeing AI as a panacea. AI technologies have limitations as to what they can accomplish as well as ethical considerations to keep in mind.

"A lot of the challenges that we face in resolving the pandemic are fundamentally not actually AI problems, they're social problems, they're policy problems," says Karen Hao.
Naomi Lee agrees.

"Artificial intelligence is another tool. It's got strengths and weaknesses. It's potentially very powerful, but realising the benefits is about much more than a technical advancement. We are probably technically already in a position to use artificial intelligence for many of these health conditions.

However, when we think about data privacy protection, bias, evaluation, and the geopolitical tensions of using health data there are so many issues there that are non-technical and that need to be resolved before we can safely leverage the benefits of this tool," she says.

The introduction of new medical interventions is always preceded by clinical trials to establish whether they are effective as well as safe, before they are widely implemented in real-world healthcare settings. When it comes to AI, while there are many studies showing its potential as a tool for assistive diagnostics, hardly any of these tools have been through clinical trials.

"There are very few, almost no studies of artificial intelligence being used in a real world clinical setting in a randomized way which is what we in the health community would normally consider the gold standard for adopting a new intervention like this," says Naomi Lee.

She suggests that, for this reason, the first widespread introduction of AI in healthcare will be in management, such as bed occupancy and attendance, or in other low-risk situations where the technologies do not interact directly with patient care.