Artificial Intelligence and Covid: “AI Is Not a Magic Bullet”

Cough into your mobile phone and it will tell you if you should get tested: Will AI revolutionise the healthcare sector? | Photo (detail): Engin Akyurt © Unsplash

What place does artificial intelligence have in the health sector, where human interaction is so important? In our interview, scientist Rahul Panicker points out some of the problems of AI – and how it could still help us in times of pandemic.

Mr. Panicker, we see digitalization and artificial intelligence (AI) technology gradually being implemented in all areas of life. That also includes health care, a field traditionally focused on direct human interaction. What could be improved by using artificial intelligence in health care?

There are tremendous opportunities in this area! Especially in global health, there is a shortage of skilled caregivers, trained doctors, and specialists; there is even a shortage of skilled health-care workers in general. So using algorithms to extend the reach of health-care workers and systems is a tremendous opportunity. For example, how can we bring a frontline health worker closer to a primary-care doctor? With algorithmic support. Imagine the kind of impact that is possible across various diseases!

Another possibility for putting AI to good use in health care is detecting signals that are hard for humans to pick up but can be caught by algorithms. For example, fairly recent work on PET (positron emission tomography) scans showed that algorithms can predict Alzheimer’s disease about five years earlier than traditional means of diagnosis. These are, at some level, superhuman capabilities of AI.

The third set of possibilities – the first one being extending the reach of health care, the second one being superhuman capabilities – is minimizing mistakes. That means allowing AI to be a support system for caregivers and reducing medical errors. I could go on, but these are the broad buckets.

I understand that humans make mistakes and that by using AI you can minimize the possibility of mistakes. But AI is programmed by humans, so I wonder whether AI isn’t also prone to mistakes.

Yes, of course. Any technology has a probability of failure. And especially in health care, the consequences can be severe: they can range from a minor annoyance all the way to life-threatening outcomes, for example when a life-support system fails or a serious disease is incorrectly diagnosed. This is well known in health-care technology. The question we ask ourselves, above all, is: can we make things better? Can we lower the probability of errors? There is also the important question of bias. There are bias-mitigation tools, so it’s important that they are applied. But even when they are applied, people can still be affected by bias. And there is a basic problem with bias itself, which is our definition of “fairness”. Human beings don’t have a consistent definition of “fairness”, and some of these definitions are not compatible with each other. That’s a root problem. But putting that aside for a minute: yes, we do have the intention to make our systems better.

I would also go one step further and say it’s not just about making algorithms fair – it’s also about making access fair. What do our technologies enable? Are our technologies bringing about equality? Or are they furthering inequality? That’s not just a question about the algorithm; it’s also a question of where you deploy it, who has the ability to use it, and who benefits from it. Many of these questions transcend the limits of AI.

During the Covid pandemic, you started an AI project yourself: CoughAgainstCovid. What is it about?

The idea is to analyze cough sounds and compare them, on multiple levels, with the coughs of Covid patients. Our hope is that we can use CoughAgainstCovid as a triaging and screening tool: we rank the people who take the screening test by the risk score produced by the algorithm and recommend that those showing higher risk get tested. So if a testing centre has the capacity to test, let’s say, 200 people per day, it can decide which 200 people should be tested first. That sort of thing can be done.
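To make that triage step concrete, here is a minimal Python sketch. It assumes the screening model already produces a per-person risk score; the names (`Screened`, `triage`) and numbers are hypothetical illustrations, not the actual CoughAgainstCovid code.

```python
# Hypothetical sketch of score-based triage: rank everyone who took the
# screening test by the model's risk score and fill the day's testing
# capacity with the highest-risk people first.
from dataclasses import dataclass


@dataclass
class Screened:
    person_id: str
    risk_score: float  # model output, e.g. in [0, 1]; higher = higher risk


def triage(screened, daily_capacity):
    """Return the candidates to recommend for testing today."""
    ranked = sorted(screened, key=lambda s: s.risk_score, reverse=True)
    return ranked[:daily_capacity]


# Example: a centre that can test 200 people per day.
cohort = [Screened("p1", 0.91), Screened("p2", 0.12), Screened("p3", 0.55)]
for person in triage(cohort, daily_capacity=200):
    print(person.person_id, person.risk_score)
```

With more candidates than test slots, the cut-off simply falls at the 200th-highest score; the toy cohort above fits entirely within capacity.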

What makes CoughAgainstCovid special? What does it add to the other projects that are being implemented?

When we started the project in March, it was an experiment. I would say: a fishing expedition. We didn’t know whether it would work; all we had was prior research on the subject, such as work analyzing cough sounds to detect other respiratory diseases. Our primary motivation was that, although we didn’t know if it would be successful, it could have a large impact if it did. There is a shortage of Covid-testing capacity out there, limited in volume as well as in distribution – by supply, by personnel, and by geography. The solution we are looking at requires nothing more than access to a phone. It doesn’t even need to be a smartphone. As we develop the algorithm, there are various avenues for implementation: a person can call a toll-free number and simply cough into the phone, or record their cough and send it over WhatsApp. You wouldn’t need to download an app. That was exactly what we were envisioning: you shouldn’t need an app, not even a smartphone, so it could potentially allow wide coverage. A few months in, we have observed some very promising results that could have a tremendous impact on testing capacity.

What are the future prospects of this project?

We want to make it openly accessible to the health-care system. It wouldn’t be right of us to make it directly accessible to the public, since it can create too many false positives, so it’s important that it goes through the public health system. We also want it to be used in other countries that have similar resource constraints, so people in other parts of the world, with other health-care systems, could benefit as well.

What else is possible with AI in terms of controlling Covid, and what can be done to prevent such pandemics in the future?

There are many possibilities. Another area in which AI can be used productively is drug discovery. It can also help cities assess how vulnerable they are to Covid and suggest plans for managing their resources. Moreover, it can be used to predict new areas of an outbreak based on historical data, which authorities can then use to take proactive action. But there are limits, of course: when I say “predict”, I mean that at a point in time when a virus is already spreading, AI can be used to predict which areas might be at risk. Predicting a new virus or the start of a pandemic itself is a genuinely challenging problem. Artificial intelligence today doesn’t necessarily have those capabilities, because AI learns from historical data. There is interesting work happening in that field which should give AI these abilities, but that requires not just extracting knowledge from past data; it also needs the ability to reason about new possibilities. Before, let’s say, October 2019, there was no data that said there was going to be an epidemic soon. The best we can do at the moment is pick up on early triggers – meaning that, as soon as something happens, there should be a mechanism to raise early alarms so we can say something might be going on.
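As a toy illustration of the “predict which areas might be at risk” idea – not the interviewee’s actual system – one very simple approach is to flag districts whose recent case growth stands out. The data and threshold below are invented for the example.

```python
# Invented example: flag districts whose week-over-week case growth
# exceeds a threshold, as candidates for proactive action.
def growth_rate(daily_cases):
    """Ratio of the last 7 days of cases to the 7 days before that."""
    last_week = sum(daily_cases[-7:])
    prev_week = sum(daily_cases[-14:-7])
    return last_week / prev_week if prev_week else float("inf")


districts = {
    "District A": [3, 4, 2, 5, 4, 6, 5, 9, 12, 15, 14, 18, 20, 22],
    "District B": [10, 9, 11, 10, 12, 9, 10, 10, 11, 9, 10, 12, 9, 10],
}

at_risk = [name for name, series in districts.items() if growth_rate(series) > 1.5]
print(at_risk)  # ['District A']
```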

What would that look like?

For example, there are surveillance networks that send early alerts. The key part in acting on those alerts is integrating data systems so they can show: here is a new disease whose progression looks different from the flus we have seen. Then we can conclude that something extraordinary is happening. It’s really about looking for a deviation from the norm, the norm being a regular flu. It’s a little analogous to fraud detection: you want to detect suspicious behaviour.
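A minimal sketch of that “deviation from the norm” logic, in the spirit of a fraud-detection rule: compare this week’s syndromic counts against a historical flu baseline and alert when the deviation is large. All numbers here are invented.

```python
# Invented example: raise an early alert when observed counts deviate
# far from the historical flu baseline, much like a fraud-detection rule.
from statistics import mean, stdev

baseline = [120, 135, 128, 140, 132, 125, 138]  # weekly flu-like cases, past seasons
observed = 210                                  # this week's count

mu, sigma = mean(baseline), stdev(baseline)
z = (observed - mu) / sigma  # how many standard deviations from "normal"

if z > 3:  # a conventional anomaly threshold
    print(f"Early alert: {z:.1f} sigma above the flu norm - investigate")
```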

Do you think the pandemic is helping AI become more established in the public’s attention?

I would say: no and yes. The most important thing the pandemic has shown is that the basic things that are needed are not things like AI: supplies of medical equipment, masks, and contact tracing. If you want to be able to prevent these kinds of outbreaks, that requires a “boots on the ground” mentality as well as action from the authorities. Those are not software-technology problems. In some ways, it shows that Silicon Valley can only do so much here; software is not the solution in these situations. So that’s the no.

The yes part is that AI scientists are doing what they can to help. Look at some of the examples I gave: drug discovery, X-ray analysis, outbreak prediction, or screening technologies like ours. In those settings, when technologies prove helpful, approval is expedited. So that is certainly true as well. In many ways, the pandemic helps to put AI in context. It’s not a magic bullet.