Episode 545: AI and Pandemics

Enrico Coiera is Director of Health Informatics at Macquarie University in Sydney, Australia. Ronald St. John is an infectious disease expert who led Canada’s SARS response and Pan American epidemiology for the World Health Organization. They join our own Adam Wynne to discuss recent breakthroughs in the use of artificial intelligence in identifying and responding to the COVID-19 pandemic, and its growing range of uses in health care. Coiera warns that the new AI system ChatGPT is good at “truthiness” but unreliable as a source of medical information for patients. For the video, audio podcast, transcript and public comments: https://tosavetheworld.ca/episode-545-ai-and-pandemics.


Enrico Coiera

Ronald St. John

Adam Wynne



ai, people, pandemic, algorithms, called, patient, gpt, interesting, technology, big, healthcare, neural network, model, doctor, health, chat, world, informatics, networks, companies


Enrico Coiera, Ronald St. John, Adam Wynne


Artificial intelligence (AI) is often misconstrued due to its portrayal in popular culture as sentient and autonomous beings. However, in reality, AI refers to a set of tools and methods that computer scientists have developed to automate decision-making processes. These tools are applied in various fields, including healthcare, where they can analyze test results, diagnose patients, and recommend treatments based on genetic profiles.

Machine learning, a subset of AI, identifies patterns in data to predict outcomes or diagnoses. Deep learning, which has gained popularity over the past five to seven years, uses neural networks—loosely modeled on human brains—to process information. These networks require massive amounts of data and energy to train, making them computationally demanding.
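The pattern-finding described above can be shown in miniature. The following is a toy sketch, not anything from the episode: a tiny neural network with one hidden layer, trained by backpropagation on the classic XOR problem. All names, sizes, and learning rates are invented for illustration; real deep learning applies the same update rule to billions of weights, which is where the data and energy demands come from.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: a classic pattern a single neuron cannot learn, but a hidden layer can.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

HIDDEN = 3
# Weights: each hidden neuron has 2 input weights + a bias;
# the output neuron has HIDDEN weights + a bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(HIDDEN)]
w_o = [random.uniform(-1, 1) for _ in range(HIDDEN + 1)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sigmoid(sum(w_o[j] * h[j] for j in range(HIDDEN)) + w_o[HIDDEN])
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA) / len(DATA)

error_before = mse()
for _ in range(5000):  # epochs of plain stochastic gradient descent
    for x, t in DATA:
        h, y = forward(x)
        d_y = (y - t) * y * (1 - y)  # output error, pushed backwards
        for j in range(HIDDEN):
            d_h = d_y * w_o[j] * h[j] * (1 - h[j])  # backpropagated to hidden layer
            w_h[j][0] -= 0.5 * d_h * x[0]
            w_h[j][1] -= 0.5 * d_h * x[1]
            w_h[j][2] -= 0.5 * d_h
            w_o[j] -= 0.5 * d_y * h[j]
        w_o[HIDDEN] -= 0.5 * d_y
error_after = mse()
```

Training drives the prediction error down from its random starting point; with a handful of weights this runs in a second, which makes vivid why networks with billions of weights need server farms.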



In healthcare, AI can assist or replace humans in various tasks, such as mammogram screening. It can quickly identify normal mammograms and reserve those with potential issues for human review. AI is also being used in melanoma treatment to determine the best-suited drug for patients based on their genetic profiles, as some drugs may be more effective or have fewer side effects for certain individuals.
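The mammogram triage workflow just described amounts to splitting cases on a model’s suspicion score. Here is a hypothetical sketch of that split; the score values, threshold, and scan names are all invented placeholders, and a real system would of course be clinically validated before clearing anything automatically.

```python
def triage(scans, ai_score, normal_threshold=0.05):
    """Split scans into auto-cleared normals and cases reserved for human review."""
    auto_normal, needs_review = [], []
    for scan in scans:
        if ai_score(scan) < normal_threshold:
            auto_normal.append(scan)   # confidently normal: no radiologist needed
        else:
            needs_review.append(scan)  # anything uncertain goes to a human
    return auto_normal, needs_review

# Toy stand-in for a model's suspicion score (0.0 = clearly normal).
scores = {"scan1": 0.01, "scan2": 0.40, "scan3": 0.02, "scan4": 0.90}
cleared, review = triage(scores.keys(), lambda s: scores[s])
```

The safety of such a workflow lives entirely in the threshold: set conservatively, the AI only removes the obviously normal bulk while every borderline case still reaches a human reader.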

Neural networks and deep learning have gained traction due to advances in computing power and access to vast amounts of data. Video game technology and digitized healthcare records have provided the resources necessary for training these networks. Despite their roots in mimicking human brain structure, current neural networks have limited correlation to actual brain functionality.

AI tools like ChatGPT have garnered attention for their ability to generate human-like text, prompting academic journals, schools, and universities to reconsider their stance on non-human authors. As AI continues to evolve, the challenge lies in finding ways to integrate and adapt to these technologies in various sectors.

Coiera points out that while ChatGPT is a great storyteller, it’s not a truth teller. Its ability to provide articulate and polished answers can be both useful and concerning. In healthcare, ChatGPT can be used to access information crafted for individuals to understand, such as sifting through large volumes of research during a pandemic. However, misinformation remains a significant concern, as malicious actors can flood the internet with plausible misinformation, affecting elections and public opinion.

Coiera also mentions HealthMap, a program that uses AI to read news feeds and social media to detect early signals of potential pandemics. The program reportedly detected COVID-19 in China before it was officially reported. Ronald St. John recalls his work with the Global Public Health Intelligence Network (GPHIN), an early warning system using news media and social media to detect potential outbreaks. He suggests that AI could improve this process by analyzing a wider range of sources, including social media chatter, and incorporating input from astute clinicians.
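The taxonomy-based scanning GPHIN pioneered can be sketched very simply: match incoming headlines against a list of outbreak-related terms and flag locations where mentions cluster. Everything below, including the keyword list, headlines, and threshold, is invented for illustration; HealthMap and BlueDot layer far more sophisticated language models on the same basic idea.

```python
from collections import Counter

# Invented mini-taxonomy of outbreak-related search terms.
OUTBREAK_TERMS = {"pneumonia", "outbreak", "fever", "cluster", "unexplained illness"}

def flag_signals(headlines, threshold=2):
    """Count outbreak-term mentions per location; flag locations at or above threshold."""
    counts = Counter()
    for location, text in headlines:
        if any(term in text.lower() for term in OUTBREAK_TERMS):
            counts[location] += 1
    return [loc for loc, n in counts.items() if n >= threshold]

headlines = [
    ("Wuhan", "Hospital reports cluster of unexplained pneumonia cases"),
    ("Wuhan", "Market closed after pneumonia outbreak rumours"),
    ("Sydney", "Local team wins grand final"),
    ("Ottawa", "Clinic notes seasonal fever uptick"),
]
flagged = flag_signals(headlines)
```

A single mention stays below the threshold, so routine noise is ignored while a repeated signal from one place gets surfaced for a human analyst, which is exactly the division of labour St. John describes.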

The sophistication of ChatGPT and similar large language models has surprised many in the academic community. As these technologies continue to develop rapidly, their potential applications and implications for healthcare and society at large are likely to grow as well.        

Coiera mentioned that an AI system had detected COVID before the Chinese doctor who blew the whistle. The conversation moved on to the potential of AI for risk assessment and prediction in relation to climate change, vector species movement, and predicting pandemics.

Coiera emphasized that AI is not the solution but a part of the arsenal to improve healthcare systems. He shared that AI is now being used in radiology for various imaging purposes and even for detecting COVID from a person’s cough. As the field of AI in healthcare has expanded, collaboration between organizations, universities, and industry partners has increased.

However, Coiera also highlighted the challenges of open source movement, replicability, and transportability in the AI space. Factors such as competition and the involvement of large companies complicate the landscape. Coiera further discussed his experience with the pandemic working group under the Global Partnership on Artificial Intelligence (GPAI), which aimed to respond to the COVID-19 pandemic but struggled to make a significant impact due to difficulties in aligning skills and mindset.

Lastly, the conversation touched on the ethical aspects of using AI in healthcare, including the idea of AI following the Hippocratic Oath. The participants explored the complexities of AI making decisions in clinical settings, such as end-of-life care, and the potential ethical dilemmas that arise.        

The panelists address concerns about algorithmic bias, which can lead to poor health outcomes for certain groups due to factors like socioeconomic status or ethnicity. One solution is to develop algorithms that account for these biases and promote equitable care for all patients.
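One basic audit implied by this concern can be sketched in a few lines: compare a model’s error rate across patient groups and flag a large gap for review. The data and threshold below are invented for illustration; real fairness auditing uses many metrics beyond a single error-rate gap.

```python
def error_rate(records):
    """Fraction of (predicted, actual) pairs where the model was wrong."""
    errors = sum(1 for predicted, actual in records if predicted != actual)
    return errors / len(records)

# Hypothetical (predicted, actual) outcomes for two patient groups.
group_a = [(1, 1), (0, 0), (1, 1), (0, 1)]  # one miss
group_b = [(0, 1), (0, 1), (1, 1), (0, 0)]  # two misses

gap = abs(error_rate(group_a) - error_rate(group_b))
flagged_for_review = gap > 0.1  # audit threshold, chosen arbitrarily here
```

A gap like this does not prove bias by itself, but it is the kind of routine check that turns “equitable care” from a slogan into a measurable property of a deployed algorithm.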

The conversation also touches on the potential for AI to improve patient care, as well as the challenges associated with integrating this technology into existing healthcare systems. The speakers suggest that AI could be used to help patients manage their own health, as well as to assist healthcare providers in staying up-to-date with the latest research and best practices.

Regarding the future of healthcare and AI, one expert suggests that healthcare-specific versions of AI tools like ChatGPT could emerge in the coming years, providing valuable resources for both patients and healthcare providers. However, the current version of ChatGPT is not recommended for use in healthcare, as it is not specifically designed for that purpose.

Finally, the conversation explores the potential impact of AI on the job market and the structure of the medical profession. The experts agree that while AI may change the nature of some tasks, there is still more than enough work to be done in the healthcare field, and the combination of AI and human medical professionals can help address unmet patient needs. They emphasize the importance of focusing on the entire patient journey, rather than just moments of diagnosis, and using AI as a tool to support and enhance the work of healthcare providers.


Metta Spencer  00:00

This is a machine-generated transcript, so it contains errors. Do not cite it without checking for yourself by watching the video and catching any obvious errors.  Hi, I’m Metta Spencer. Let’s go to Australia and talk about AI, shall we? AI and pandemics, what a combination. This is not a conversation where I have the faintest idea what we will actually wind up discussing, except that it’s going to be fun. We are going to Sydney, Australia to meet Professor Enrico Coiera. Professor Coiera is the director of the Centre for Health Informatics, which is a big word that I only halfway understand, at the Australian Institute of Health Innovation, and foundation professor in medical informatics at Macquarie University. He was trained in medicine, with a computer science PhD in artificial intelligence, and he has done research in industry and academia. So hello, and good morning to you. I think it’s night for us... Lunchtime for us. Hi there. And much closer to home, in Ottawa, is Dr. Ronald St. John, who is by now a dear old friend, because I call on him every time I need to talk to somebody about pandemics, and he is an expert on pandemics. He was the head of [inaudible] epidemiology for Pan America for the World Health Organization, and he was in charge of Canada’s response to the SARS epidemic back when, so he certainly knows his way around a virus or two. And here in Toronto is my assistant, Adam Wynne, who I lean on every day for one thing or another, and who, by the way, is just getting over COVID, as am I; he just tested negative today for the first time. So we are both on the mend, and now have firsthand experience with epidemiology, on the wrong end of the experience, I’m afraid. So I am delighted to meet Professor Coiera in particular, because it’s time for me to get smart about artificial intelligence. And I find it to be quite a daunting prospect.
Every now and then I try to watch something about this new thing, ChatGPT. Is that what it’s called?

Enrico Coiera  02:33

That’s right yeah.

Metta Spencer  02:34

And it seems to be taking over the world. And I got to learn it and figure it out. But I don’t know. I’m kind of scared of it. So, Professor Coiera how about telling us? And by the way, am I pronouncing your name correctly?

Enrico Coiera  02:52

You are doing a pretty good job. Metta. Thank you.

Metta Spencer  02:55

Okay. Thank you. All right. If you don’t mind, I’d like for you to just get me smart, in about one hour, about what is this thing called artificial intelligence.

Enrico Coiera  03:07

I’m glad we’ve got an hour, but I might need five. So, artificial intelligence is a very loaded term, and different people will think of different things when they hear it. Many people think of the Terminator and Arnold Schwarzenegger, and that is something known as AGI, or artificial general intelligence: the idea that an AI would be sentient and have autonomy. And you know, that is something that is coming. Some people are saying it’s a few years away; some people are saying we’re nowhere near doing that. I remain agnostic. But what we normally mean when we talk about AI is a set of tools and methods that computer scientists have built that just help us automate the way we make decisions. So, doing things that people would do, like listening to speech. If you use Siri, you know, there’s AI there, listening to your speech and converting it into information. In health care, we might be looking at your test results, your signs and symptoms, and diagnosing you. We might be suggesting treatment recommendations based on your genetic profile. So the AI, using something called machine learning, will look at data and try to bring together patterns about what is associated with an outcome or a diagnosis. And the new generation of technologies, which have taken over in the last five to seven years, are called deep learning. That refers to the use of what we call neural networks, which are not really models of human brains, but were loosely modeled on them in the early days. These are big networks that sort of connect together different ideas, and like neurons, you know, there are different weightings, etc., between them. And training those networks can take, you know, huge chunks of data, huge energy resources; if we’re worried about things like climate change, we can talk about that at some point. But so, think about AI simply as ways of doing things smartly.
And either assisting humans, which is my preferred model for healthcare, or sometimes replacing humans. A good example of replacement might be mammogram screening. It’s something that we like to do, but it’s increasingly hard to find humans to read all those thousands of mammograms. And it turns out most of them are normal anyway. So an AI would do something like screen all the obviously normal ones out, throw them away, and reserve those that are somehow concerning to have a reading with the human and the AI together. And that is an efficiency, and no harm is done; you’ve tested to make sure that it’s doing the right thing. But for example, our team is working on melanoma treatment. You know, the last few years have been remarkable for melanoma. It’s gone from being an untreatable cancer to one where we have these new immune drugs, checkpoint inhibitors, which really, for some patients, are a miracle and cure them. But…

Metta Spencer  06:30

How recently? Because this is news to me. I’ve had friends with melanoma more like 10 years ago, so are you saying a shorter period than that, more recently?

Enrico Coiera  06:42

That’s right, they’ve been in development for a long time. But, and maybe Ronald will know more about it than I do, only in the last five to seven years have they been widely available. And there are different generations of the drugs; the early generations of the drugs were very hit and miss, and they only work on certain people. So if you’ve got certain genetic markers, you’re more likely to respond to a drug than others. So the task we have is to find which drug is best suited to you. Because, “A”, we want you to be cured, and “B”, we know the wrong drug can have toxic effects on some people. So trying to work out, through your genetic profile and other biomarkers, which patient is going to respond best to which drug is the task we have for the AI. And so we’re finding as much data as we can about patients and trying to discover what the patterns are. So this is where the biology lags what the AI can discover: it’ll find patterns where we maybe don’t yet understand why it is the case that things work or not, and then the biology will follow. So it’s a very, very different example of using AI. Again, no Terminator, no Schwarzenegger, just some very smart tools to help us do things. Is that a good start at a very high level, Metta?

Metta Spencer  08:05

It’s a good start, but you’ve also triggered a question in my mind, because I actually have another friend whom some people have called the father of the new AI. Back 50 years ago, Paul Werbos was a PhD student at Harvard, and he wrote a paper on something called backpropagation, which is apparently…

Enrico Coiera  08:29

Oh right yes.

Metta Spencer  08:30

The neural network analysis, but people didn’t use it until the last few years, and then they, I guess, figured out how to use it. So when you talk about deep learning, is that the neural network thing?

Enrico Coiera  08:47

It is, so yeah, you used the right term, backpropagation. That is the mechanism for teaching the network. And even when I was completing my doctoral work in the 80s (you know, that shows you how old I am), those methods were being used, but the computer power to do it was very limited. So we would tend to build networks with very few neurons and very few layers of neurons, simply because the computers couldn’t do it. And also, we just didn’t have enough data.

Metta Spencer  09:21

You call them neurons? You’re…

Enrico Coiera  09:24


Metta Spencer  09:24

You’re actually trying to mimic the structure of a brain?

Enrico Coiera  09:28

Well, that’s how it started, as I said, but it’s no longer the case that there’s much correlation, although some of the other pioneers of deep learning are saying we need to put more human [inaudible] back into the designs of what we do.  But what happened about 10 years ago was two big things. One was on the back of video games: people started to need really fast computer chips to play video games and all those things you see, and those chips turned out to be perfect for teaching neural networks, and were really fast. So all of a sudden we had computing power, and companies like Google had big warehouses full of computers, called server farms, with all this spare resource. So for the first time, we had access to massive computing power. And the other big thing that happened, in healthcare especially, was that we were probably a decade into digitizing healthcare. So at last, we were getting enough patient data to train these networks and to discover the patterns I’m describing. So yes, there have been great innovations in the design of these neural networks, but many people think, you know, the big things were getting access to raw computing power and getting access to the training data. And on the back of that, you’ve mentioned ChatGPT, which appeared only two months ago and has already, you know, caused academic journals to issue position statements on non-human authors and whether that’s good or not. Schools and universities have banned its use for the production of essays. Actually, our university issued guidance yesterday and said, well, we just have to live with it, and you have to find a new way of doing things.
I was listening to our Australian Government parliament on the way home the other day, just by accident, as I was skipping through it. Our attorney-general, who is the chief legal officer for the country, was giving a speech on AGI, artificial general intelligence, and halfway through his speech he said, I need to let you know that last night I asked ChatGPT what I should say, and the whole speech I have given you so far has been generated by computer. And it was fantastic.

Metta Spencer  11:52

Was he really telling the truth or was he joking?

Enrico Coiera  11:54

Yeah, no, he wasn’t joking.

Ronald St. John  12:00

That’s remarkable. I was going to say, another anecdote about ChatGPT: I learned that, you know, when you finish your medical school, you have to take a final exam to qualify or certify as a doctor, and ChatGPT passed the exam. So ChatGPT technically became a doctor.

Metta Spencer  12:24

Is there any conceivable way in which that’s good news? Sounds to me like it’s…

Enrico Coiera  12:32

Look, it sort of is and it sort of is not. Technology, I like to think, is neutral, and it’s the human uses of technology which are of concern. So, from the point of view of healthcare: ChatGPT, for your listeners, is what’s called a large language model. In other words, it’s a neural network fed with, you know, literally billions of words of text from the web, and it’s built a model of how people speak. So I like to say that it’s a great storyteller, but it’s not a truth teller. Just by knowing how people speak about things, it often gets things right, but it actually is not knowledgeable in that sense. So you can ask ChatGPT for something, and it’ll give you an answer that’s “truthy”, to use that American phrase. And it’ll be articulate and polished, and it often gets things right. So consumers, for example, instead of doing a Google search, will, I’m sure in the next few months, speak to or type a question in, maybe to Google; they just released, two days ago, their version of ChatGPT. So there’s a bit of a war going on now. Microsoft will incorporate it into its search. So the way we look for information will change, and instead of getting a list of links, we’re going to get an answer. Now, I don’t know if you remember, but before Google there was something called Ask Jeeves, which was a very early search engine, and the idea then was the same thing: you would just give a question to the computer and it would tell you what it thinks the answer is. But ChatGPT and all those language models can write essays, write limericks, write songs. A great Australian musician, Nick Cave, who you may or may not know, calls it an abomination, because people asked it to write songs in his style, and of course it did. And so it really challenges the nature of what it means to be an artist, if computers can mimic you.
But so, back to the point in healthcare: it allows people, potentially, if well done, to access information in a way that’s crafted for them to understand. And we’re meant to be talking about the pandemic, and one of the things that happened during the pandemic was that there was a literal explosion in research. You know, thousands and thousands of preprint articles that had not been peer reviewed were just deposited by people, because it was so urgent that we needed to get the results out. And being able to look at all that volume of research that had not been peer reviewed yet, or was in the process of peer review, and working out what’s important and what’s not, you know, that’s kind of a crisis thing that no human could really do. And we’re, for example, interested in building what we call automated systematic review technologies: in other words, computers that can sift through documents to try and answer a specific research question. So that’s a different story to ChatGPT, which will just tell you a nice story; in healthcare, we expect it to tell you the right story, and that’s a slightly harder ask. So I can see positive benefits. My biggest worry right now is misinformation. We saw throughout the pandemic that the anti-vax movement grew, very much driven by what was happening in social media. Facebook, etc., we know, was designed to use AI and machine learning to essentially drive people to quite polarized views, because that would generate lots of interaction on Facebook, and that would drive their revenue model. But the consequence in society was that people tended to polarize into very small groups. And having a ChatGPT technology means that a malicious actor, which could be a state actor, can literally flood the internet with lots of plausible versions of misinformation. So, you know, if you were wanting to change an election, you would just keep on creating lots of very plausible tweets or Facebook posts, and, you know, it then becomes an arms race about message control.
So I think we’re only in the very early days of misinformation, and these technologies worry me a lot in terms of the capacity for misuse. There are always people on the good side trying to find ways of filtering that out. But, um, the societal costs already of the anti-vax movement, or, you know, of what’s happened in recent elections (and it happens in all countries, not just the United States), driven by this sort of social media misinformation: it’s concerning, especially because we know international state actors participated in trying to shift the results in other countries. So that’s some of the negative side of it all.

Ronald St. John  17:55

Yeah, I would find a fascinating application in helping us sort out long COVID. There is now a list of 200 symptoms of long COVID; it can’t all be one disease or one illness. There are lots of theories, lots of publications, papers. So trying to sift through all that, to see if there is a common thread somewhere that can define the long COVID syndrome in some way, that would be fantastic.

Enrico Coiera  18:33

Yeah, okay, I’ll get the guys to do it today. It’s an interesting challenge. And I suspect part of the problem with long COVID is that we still don’t have all the right things measured, and so we’re left with proxies, like fever and cough, etc. Which, you know, are interesting, but may not give us the sort of granularity that you’d want to work out some of those different phenotypes, as we say, you know, different causes. Yeah. But I’m sure that’s something people are working on, and, you know, it’ll be cracked in this way, I’m pretty sure.

Ronald St. John  19:15

Yeah, we just need to come up with a common case definition everybody can agree on. That’s [inaudible].

Enrico Coiera  19:21

Yeah, exactly.

Ronald St. John  19:22

What is it? What are we measuring, one way or the other? Yeah, it’s a challenge. It’s been interesting to watch it evolve, but it’s going to take a little while for the science to sort itself out.

Enrico Coiera  19:33


Ronald St. John  19:35

One of my interests regarding pandemics is detection, early detection of a pandemic. And I wonder if you’ve given some thought to applications that might help us detect an unusual event that might, or might not, become a pandemic.

Enrico Coiera  19:58

Yeah, that’s interesting. We did some work about a year or two ago, looking at the way AI had been used in the COVID pandemic, just trying to review the different tasks it was used for. And the first task was signaling pandemic risk. There is a program called HealthMap, which is run by the Boston Children’s Hospital, and it basically uses AI to read news feeds and social media, essentially to look for signals in people’s behavior and in what they’re reporting, and to see if there are early clusters. And people say that HealthMap detected the COVID pandemic first. So it was seeing signals in social media in China that suggested there was something happening in that population before people mentioned it. And I think there’s a Canadian model called BlueDot, which I don’t know too much about, but just a few days later it also detected the same kind of signal. So I think one of the things we learned through the pandemic was that there are soft data out there that actually are meaningful, and that we never really had access to before. And being able to surveil the way people are talking about their health is an interesting early signal. It’s not precise, and you obviously don’t want to be overloaded, but I don’t know if that was something you were familiar with, Ronald, or had thought about.

Ronald St. John  21:35

Just a bit of background. Back in the late 90s, due to a couple of scares we had in Canada, which I won’t go into, we thought Canada needed an early warning of things that might be imported from around the world, given that you could get anywhere in the world in 24 hours while you’re incubating something that might create an outbreak of disease wherever you arrive. And we thought the early internet could help somehow find the information we were looking for. However, at that time, in the late 90s, the fastest search engine was going to take one whole month to make a pass. We wanted stuff in five minutes, but what we landed on was reading RSS feeds from media, which is a lot, and created the Global Public Health Intelligence Network, or GPHIN as it’s called. Now, that’s pretty archaic technology, because we used a taxonomy of search words, and then we needed to have people who would take a look at the events and decide whether something was critical or not. But if you have AI to look not only at sources like newsprint, but also at the mammoth source of social media chatter that’s going on around the world... Plus, you add to that: I’ve always felt that it’s an astute clinician that often tells us there’s something unusual going on, and it needs to be followed up. How you tap into a network of astute clinicians, I’m not sure. But certainly tapping into news media and social media is the way to go, and AI, I think, can do the job much faster and better than humans can do it.

Enrico Coiera  23:46

Yeah, and I think that was the lesson in COVID: that you can do it. And now, it’s interesting: it’s called COVID-19, and it’s now 2023. Four years have passed since the start of the outbreak, and AI has moved so fast in the last four years. I’m not the only academic who was surprised by ChatGPT. We’ve known about these large language models, and we’ve known that they were coming. But to just be faced with the sophistication of the conversational capacity of this technology, it really was, you know, more sophisticated and more capable than we thought would be possible at this time. So if ChatGPT is out there, you know, Google has things in house, Meta has things in house. There are lots of companies that will be doing things in the next few years. So I think everything that we could do in 2019 is already ancient history.

Metta Spencer  24:57

Are you saying that some computer system or AI system picked up COVID before that doctor in China who blew the whistle and then died?

Enrico Coiera  25:07


Metta Spencer  25:08

So it was before? Yeah, really? Now, how come it didn’t become public knowledge before he took that great risk of announcing it?

Enrico Coiera  25:21

I’m not sure. I think they may have even published their alerts at the time. But it’s also a matter of days, you know. Yeah. But I presume even a couple of days is a lot of warning to get going.

Ronald St. John  25:39

To me, another interesting aspect is to look at seemingly unrelated situations that might create conditions for a pandemic, a little bit of the idea of AI forecasting. For example, if there is a dramatic change in the climate in certain areas, and there is a movement of pathogens into those areas, is there a potential? So do you think that you can get into sort of a risk assessment, risk prediction mode with AI?

Enrico Coiera  26:12

Look, you could do that with standard stats. And you’ve actually mentioned a really interesting topic that my community has only recently started to think about, which is climate change and digital health. So in December, I edited a special issue of the American journal of informatics around climate change, which involved AI among other things. And in terms of the contribution of AI, one of the big things is to do exactly what you say, which is to monitor change in the movement of vector species, like the mosquitoes that carry malaria, that sort of thing. So we can see in Australia, for example, things like Ross River fever moving south as things heat up, and I’m sure across the North American continent you’re going to have changes in those vectors and the diseases themselves. So being able to model and detect changes is clearly a part of what we could be doing. And I think, you know, the bigger story is what it took to survive COVID. And we’re still in the middle of COVID; I’m always surprised when people say we’re post-pandemic. It’s still live. It’s still live.

Ronald St. John  27:32

Politically it was declared over.

Enrico Coiera  27:35

Exactly, yeah, same here. But it’s taken a heroic response to get to this point. You know, it’s burned out many, many clinicians, and societies paid quite large costs during lockdowns, etc. Climate change is going to bring us these events on a regular basis; the idea that we’re going to have one pandemic and we’ll just get over it is not the world we live in. We’re going to have new pandemics because of shifting patterns of climate. We’re going to have floods, heat events, smoke events, cold events; all of these things, you know, are massively challenging to society, and each is a shock to the health system. So the big question for us now is: how do we re-engineer healthcare to be resilient enough to deal with all these things that are going to come at it? You know, if we do another COVID response, we’re not going to have many doctors and nurses left willing to play; it’s just going to be too hard. So AI is not the solution, but it’s part of our armaments. You know, I gave you that example of screening mammograms, so there will be things that we delegate to AI just because we have to. And I think what we’re trying to do now is to understand how we adapt, as a health system and a society, quickly to new crises. You know, that’s a whole new ballgame, as they say, and it stretches the thinking, you know.

Metta Spencer  29:22

One thing: I think somebody today said something about the way you would detect, I don’t know, putting people through an MRI or something. They switched to using AI to detect and diagnose the condition, instead of using this X-ray equipment, which was going to give the next patient the disease.

Enrico Coiera  29:46

It is certainly the case that AI is being used in radiology, for all sorts of images, everything from ultrasounds to CT scans and MRIs. It can do it. We ourselves have built AIs to read X-rays to diagnose COVID; that’s a very standard thing now. We actually have a very interesting project, which is diagnosing COVID based on cough. So you can distinguish, just based on the kind of cough you’ve got, whether you’ve got COVID or not.

Metta Spencer  30:19

The sound? The sound?

Enrico Coiera  30:21

Just the sound, yeah. So the idea is you cough into your smartphone, and you know, it’s performing better than a lot of these swab tests that we have. Not PCR, which is very accurate, but that is the sort of thing you wouldn’t have imagined. But there you go.
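The cough screener Coiera describes would, in practice, use spectrogram features and a trained deep network. As a very rough illustration of the idea, here is a minimal sketch: two crude acoustic features fed through a logistic score. The feature choices and weights are entirely invented for illustration; they are not the project’s actual method.

```python
import math
import random

def spectral_features(samples):
    """Two crude features of a raw waveform: average energy and the
    zero-crossing rate. A real cough screener would use mel-spectrograms
    and a trained deep network; these features are purely illustrative."""
    energy = sum(s * s for s in samples) / len(samples)
    zero_crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if a * b < 0
    ) / len(samples)
    return energy, zero_crossings

def cough_risk_score(samples, w_energy=2.0, w_zc=5.0, bias=-1.0):
    """Logistic score from the two features. The weights here are
    invented; in practice they would be learned from labelled
    recordings of COVID and non-COVID coughs."""
    energy, zc = spectral_features(samples)
    z = w_energy * energy + w_zc * zc + bias
    return 1.0 / (1.0 + math.exp(-z))

# A synthetic "cough": a decaying noisy burst.
random.seed(0)
wave = [math.exp(-t / 200) * (random.random() - 0.5) for t in range(1000)]
score = cough_risk_score(wave)  # a probability-like value in (0, 1)
```

The point of the sketch is only the pipeline shape: waveform in, features out, probability of disease out the other end.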

Metta Spencer  30:42

How many places are there like you in the world? I mean, your whole institute or department is focusing on developing AI technology for medicine.

Enrico Coiera  30:56

That’s right.

Metta Spencer  30:57

Yeah, how many other people in the world are doing that? And is there any kind of collaboration? Do you know who’s doing what, so you don’t wind up doing the same thing?

Enrico Coiera  31:09

Yeah, look, things have changed a lot. If you’d asked me this question five years ago, I would have said that in Australia, we were the only one. Now every university has one of me. You know, there’s a professor of informatics or digital health or something. Across North America, every large US university has a strong informatics or biomedical informatics group, all working on different things. So the field has exploded, really, in the last decade. And that’s interesting to me, too. In one way, it’s lovely to have success, to see your field burgeon, but there’s also a lot more competition, which cuts to your second point, which is around collaboration. And there are collaborations around medical AI. There are groups where different labs and centers do share information. We have a national alliance here with about 100 organizational members. And it’s not just universities; it’s industry, which is a great source of innovation; it’s health service providers and clinicians. It’s not just a technology piece; it’s got to be all about people with problems and needs, who help us understand how to help them. The last thing we want is lots of computer scientists creating algorithms that look great but don’t actually change one patient outcome. That would be a failure of technology. So it’s a big area now. I use informatics as a funny word, and I’ve hated it since my career started, but it’s the thing, you know.

Ronald St. John  32:57

I’m just, you know, I’m not an expert in the field of AI, for sure. But it occurs to me, I’m sure you’re familiar with the international bank of genetic sequences for viruses. And I’m wondering if there’s a bank for software programs for AI? Do people bank their programs and say, hey, look what we’ve done?

Enrico Coiera  33:24

Look, that’s a really interesting question. So this is a bit about the open source movement, which is about publishing your software in a way that others can access it. We’d like that to happen. One of the interesting reasons for doing what you say is what we call replication, or replicability. It’s a big thing in health research, as I’m sure you’re aware, that just because somebody publishes a study and a result doesn’t mean it’s true. They might have made a mistake; they might have been biased to want to get a certain answer because of a financial conflict of interest. And in my field of AI, people get their own data, write their own algorithms, publish a result, and nobody checks your homework. Nobody checks to see if that was true or not. So one of the drivers is to have these open repositories, first of patient data, so everybody can test their systems on the same data, and also of the algorithms themselves. So we are moving to that; we’re not doing enough of it. And to complicate the issue, the big companies, the Googles and Microsofts of this world, publish their own software and make it open to people, and that kind of attaches people to the companies’ technologies, and may not necessarily be as open as we would like. So it’s a very complex space, and especially in AI, what might have been a very academic pursuit is now a multi-billion or trillion dollar enterprise. And there are a lot of drivers toward this being a closed shop, because that will make companies more competitive. So a really interesting landscape. Very complex.

Ronald St. John  35:27

That’s fascinating. Yeah, I think the public sometimes has a hard time understanding that science is iterative, that what seems true today may not be true tomorrow, as more information is developed, collated, analyzed, and so on. So I imagine the same thing can apply with certain algorithms and AI? Maybe they do work under certain circumstances, but not always. The next round will be a little bit better, and the round after that better still.

Enrico Coiera  36:02

That’s right. Interesting. Yeah, I was just going to say that one of the things we’re starting to grapple with is that just because the AI works in hospital “A” in Canada doesn’t mean it’s going to work in another hospital, even in the US, because the populations are different: different blends of ethnicity, different socioeconomic status, different disease patterns, different ways of practicing medicine. We call this the transportability problem. So it’s harder in our space, because it’s not just that people may have made a mistake in the research; things may not work, for perfectly valid reasons, in a new setting. A drug should still work, presumably, across large populations. But when you build these health service interventions, they’re very hostage to local conditions.
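The transportability problem Coiera describes can be demonstrated with a toy model. This sketch uses entirely synthetic data and the simplest possible classifier (a threshold fitted at hospital A), to show how accuracy degrades when the same model is applied to a hospital with a shifted population; the distributions and numbers are invented for illustration.

```python
import random
import statistics

random.seed(42)

def make_cohort(n, neg_mean, pos_mean):
    """Synthetic biomarker values; label 1 = has the condition.
    The distributions are invented to illustrate population shift."""
    negatives = [(random.gauss(neg_mean, 1.0), 0) for _ in range(n)]
    positives = [(random.gauss(pos_mean, 1.0), 1) for _ in range(n)]
    return negatives + positives

def fit_threshold(cohort):
    """'Train' the simplest possible model: midpoint of the class means."""
    neg = statistics.mean(x for x, y in cohort if y == 0)
    pos = statistics.mean(x for x, y in cohort if y == 1)
    return (neg + pos) / 2

def accuracy(cohort, threshold):
    return sum((x > threshold) == bool(y) for x, y in cohort) / len(cohort)

hospital_a = make_cohort(500, neg_mean=0.0, pos_mean=3.0)
# Hospital B's healthy population has shifted biomarker values,
# e.g. different demographics or different lab equipment.
hospital_b = make_cohort(500, neg_mean=1.5, pos_mean=3.0)

threshold = fit_threshold(hospital_a)
acc_a = accuracy(hospital_a, threshold)
acc_b = accuracy(hospital_b, threshold)  # degrades without local recalibration
```

Nothing in the model is wrong, and nobody made a mistake; the populations simply differ, which is exactly why a model validated in one hospital needs local re-validation before deployment elsewhere.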

Adam Wynne  37:08

I’ve read that you were a member of the pandemic working group with the Global Partnership on Artificial Intelligence, which is one of the broader international artificial intelligence organizations. I’m wondering if you might speak a bit about what they’re working on and what GPAI is.

Enrico Coiera  37:24

Right. So GPAI was put together, I think, under the OECD, to try and bring together some rapid response to the pandemic. And up until last year, I was part of the Pandemic Urgent Response subgroup. And I’ve got to say that was an interesting but challenging experience, because we had an urgent pandemic happening out there, and it was very difficult to get people whose skill set was computer science and algorithms into the mindset of population and public health, or clinical response. So there was a great focus on creating libraries of software that you could use, etc. And there was a healthy debate in the group about what we should be doing. I’d be saying, look, guys, people are dying out there. What can we do today to help? And others were saying, look, our job is to be more retrospective, to see if we can give general guidance. And I think there are a lot of lessons around how best to utilize those skills. I’m not sure that we made much of a difference to the pandemic, but I think that was a learning experience, and certainly, if we reconvened that group again, with more experience, we would do a different thing. So it was very interesting. And I think those sorts of groups probably do better at more general things. So one of the big things has been the articulation of ethical principles for the use of AI: what is appropriate, what is the safe use of AI? And not just in health care, but in the military, etc. And that’s where GPAI has probably been really strong.

Adam Wynne  39:30

I think you wrote an article about whether AIs in a medical context should follow the Hippocratic oath. I found that quite an interesting prospect.

Enrico Coiera  39:40

That’s right. So what are the ethics of AI? The thing I always say is that people are probably familiar, if they’re science fiction readers, with Isaac Asimov and his I, Robot series, and the four rules that all robots have to follow, which are essentially: don’t harm people, don’t kill people, do what you’re told. And don’t commit genocide is also snuck in there, which I think is useful for all robots to know. And I often say, although it’s a cheeky thing to say, that a clinical AI would break all those rules if it did its job. The example is end of life. In the last couple of weeks of life, many people end up, unfortunately, in hospital, and that’s the wrong place for them to be; they probably need to be at home with their loved ones. They don’t need heroic intensive care; they need the tube pulled out, and to be comfortable. And that’s a decision to be made. And if you make that decision to remove care, you’re allowing somebody to die. I’ve broken the Asimov rules. Now, we have algorithms which are very good at predicting whether or not you’re about to die in hospital. Really, really accurate, I’ve got to say. So what is the ethics of using that algorithm to make a decision to withdraw care? A very complex ethical space.

Metta Spencer  41:13

Do you let the AI system make its own decision? I mean, do you build in the…

Enrico Coiera  41:18

We don’t know, do we?

Metta Spencer  41:20

You don’t? I don’t know; I’m asking what you do. Do you turn it loose? You ask me, next time you have a question.

Enrico Coiera  41:29

Yeah, so, you know, those decisions are decisions made by the person involved, by their family and carers, with guidance from their clinical team. But if a clinical team now relies on the technology to push them toward one recommendation or the other, there’s an ethical balance there around whether it is appropriate to design the AI to give that advice. You know, I think it’s manifestly useful to know; your clinician is usually already very clear on what’s going to happen anyway. But it might be, for example, that the AI says, look, this patient’s 35, and although they look really ill, their probability of death is only 30%; they’ve got a good chance, so you need to work hard and go for it. In which case, you might be harming the patient: they’re going to ICU, they’re getting painful treatments, they’re on a respirator. These are really awful things to do to somebody, but we do them for the prospect of survival. So again, that breaks the Asimov law: it now inflicts harm, short-term harm anyway, for long-term benefit. So these are the bioethical challenges around AI and healthcare, and these are just very trite examples to try and get you thinking about the complexities of it all.

Ronald St. John  42:50

I can certainly imagine a computer with AI algorithms that, first of all, evaluates your pulmonary status, your renal status, your cardiac status, your mental status, matches that against a data set, and says: here’s a probability.

Enrico Coiera  43:06

We do that now, you know, and it’s 99-point-something percent accurate, usually.
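The multi-organ risk score St. John imagines, matching a patient's organ-system status against historical data to produce a probability, is commonly built as a logistic model. The sketch below is purely illustrative: the weights and intercept are invented, whereas a deployed model of the kind Coiera mentions would be fitted to large historical patient datasets.

```python
import math

# Hypothetical weights for four organ-system scores, each scaled
# 0 = normal to 1 = severely impaired. A real model would learn
# these coefficients from thousands of historical patient records.
WEIGHTS = {"pulmonary": 2.5, "renal": 1.8, "cardiac": 2.2, "mental": 1.5}
INTERCEPT = -4.0  # low baseline risk for a fully healthy patient

def mortality_risk(pulmonary, renal, cardiac, mental):
    """Probability of in-hospital death via a logistic function."""
    z = (INTERCEPT
         + WEIGHTS["pulmonary"] * pulmonary
         + WEIGHTS["renal"] * renal
         + WEIGHTS["cardiac"] * cardiac
         + WEIGHTS["mental"] * mental)
    return 1.0 / (1.0 + math.exp(-z))

low = mortality_risk(0.0, 0.0, 0.0, 0.0)   # healthy on all systems
high = mortality_risk(1.0, 1.0, 1.0, 1.0)  # severely impaired on all
```

The shape is the important part: each organ system contributes to a combined score, and the logistic function turns that score into a probability that clinicians and families can weigh, which is exactly where the ethical questions in this conversation begin.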

Ronald St. John  43:12

It does it with a little more precision than the physician who says, “I’ve got a feeling,” based on experience and knowledge. But the AI is data-based.

Enrico Coiera  43:26

That’s right.

Ronald St. John  43:27

But I see it as a tool to help the family and the physician, not as a decision maker.

Enrico Coiera  43:35

I think that’s the safe framing of how to use it. However, somebody has had to design that algorithm, with the knowledge that it might shift a decision. So there’s the ethical challenge.

Ronald St. John  43:47

Yeah. I realized, yeah.

Enrico Coiera  43:50

Yeah. Very interesting.

Ronald St. John  43:52

How good is the algorithm?

Enrico Coiera  43:55

Well, is it the right thing to build the algorithm?

Ronald St. John  43:58

Right. Right.

Metta Spencer  43:59

I’m also interested that, you know, I’ve heard of cases where the hospital says, now look here, this person has been on a machine for 40 years; that’s long enough. And then the family keeps saying, no, don’t turn it off. So, you know, I guess real practical cost-benefit analysis plays into real decisions anyway, the ethical decisions that people make; it’s not just a problem for the designer of the AI system.

Enrico Coiera  44:33

Yeah, look, these are simply things going from being done in principle to now becoming pervasive. And now that these algorithms are everywhere, people are looking at this with new eyes. Another good example of an issue is algorithmic bias, which people talk about a lot. So, for example, algorithms might suggest what sort of treatment you should get, or what your chances of survival are. One well-known bias is that the algorithm will end up recommending based on how the world is. So people from one socioeconomic group, or one ethnic group, might have very poor health outcomes for all sorts of reasons around inequity, but the algorithm doesn’t know that and says, oh look, if I see a patient like that, they’re not going to do well, so they’re not going to get treatment. What it’s doing is just hard-wiring a social bias. So it’s a big issue. If I’m training an AI to recognize a skin cancer, and all it’s ever seen is shots of the cancer on Caucasian skin, it’s going to be really poor at diagnosing somebody who’s got differently colored skin. And that’s another bias: they’re going to get poor quality care, because it doesn’t understand how to look at this disease in a different population. So those biases are another issue that concerns the ethicists. You know, it’s a complex world we’re in.
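The inequity-hardwiring Coiera describes can be shown in a few lines. The sketch below uses an invented historical dataset in which two groups have identical disease biology, but one group historically received worse care and therefore has worse recorded outcomes; a naive algorithm trained on those records then denies that group treatment. The groups, numbers, and rule are all hypothetical.

```python
# Invented history: group "B" patients are biologically identical to
# group "A" patients, but historically received worse care, so their
# recorded survival is worse. (group, outcome) with outcome 1 = survived.
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 40 + [("B", 0)] * 60)

def survival_rate(group):
    outcomes = [outcome for g, outcome in history if g == group]
    return sum(outcomes) / len(outcomes)

def naive_recommend(group, threshold=0.5):
    """A naive rule that recommends aggressive treatment only when
    historical survival exceeds a threshold. This is exactly the kind
    of algorithm that hard-wires an existing social inequity: it reads
    the consequence of past poor care as a property of the patient."""
    return survival_rate(group) > threshold

rec_a = naive_recommend("A")  # treatment offered
rec_b = naive_recommend("B")  # identical patient, treatment withheld
```

The fix is not more data of the same kind; it is recognizing that the label the model learns from already encodes the inequity.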

Metta Spencer  46:30

I want to go back to the profitability issue: who’s going to own all of this stuff? You mentioned Google and Meta and so on, that are now producing these things. This has to be a profit-making operation; somebody’s going to make a buck. Somebody’s going to own the algorithms, and somebody’s going to set the rates they charge for diagnostics. How’s the economics of medicine going to be affected by the use of this? In a way, I’m mentally comparing it to what we do with the pharmaceutical industry, which I think many of us would have some complaints about, because certainly some of these companies are ripping off the public in a big way. I think Biden, in his State of the Union speech last night, talked about how they were charging $300 for insulin when it costs $12 to make, and said they’re going to put a lid on that. Well, high time; so they should. But there will be some comparable economic issues, I suppose, that arise when it comes to who’s going to control what. What do you see as coming out of this in terms of the financing of medicine?

Enrico Coiera  48:13

This is a really, really interesting question, and I don’t have the answer, because I think companies are trying to work out how they make a buck out of it. You know, I mentioned how some of the big tech companies are currently giving away their software as open source to get people using their systems. So right now you can buy one of the giant electronic record company systems; when you go to a hospital, you’ll see there’s a computer and somebody is entering information into the electronic record. And those companies allow you to build your own algorithms on top of their system. So it might be that in one model, it’s the local organizations that build their own algorithms and keep them up to date. That could happen. Or it could be that somebody tries to sell them, you know. But I’ve talked about something called algorithmic sovereignty; in other words, it’s important for nations to have control of the algorithms that run their country. If you’ve got foreign actors creating the software that runs your electricity grid, your transport grid, your water supply, you don’t really know what’s in the software. So there are societal risks from not having the capability. And in health care, I talk about Australia, for example: I don’t know if it’s current experience, but we’re great exporters of primary products; we’ll export wool and then import the suits made from the wool. We don’t add value. And the risk is that we’ll be exporting all our patient data and importing the value-added algorithms back from other countries. And I think you can become hostage to them, not just in terms of the financial implications, but also to the biases, etc. So there’s a strong argument to try and bring algorithm development down to as local a level as possible. We’ll see. Let’s talk about this in a year, Metta.

Ronald St. John  50:38

But isn’t it feasible that I’m a company and I’ve developed an algorithm for reading X-rays, discarding the normals as you put it, and I come to the hospital and say, I’ve got an algorithm that will read your X-rays in a flash, doctor. And it’ll cost you.

Enrico Coiera  50:57

So I think the model there, more often, is that they’re selling the box: they’re selling the X-ray machine and saying it’s a smart X-ray machine, and these are the things it can do. I remember many years ago, when I was working at Hewlett-Packard research labs, we were working on making their bedside monitors, those machines that go beep, smart. So we were putting AI, this is back in the early 90s, into the patient bedside monitors and the ICU monitors. And they would go to the customers and say, how much will you pay for this smart version of HP CareVue? And customers would say, well, we’re not going to pay any extra, because we expect the next generation of technology to be smart. So it may well be that you’ll be getting AI as part of the package of the CT scanner you buy, as part of the medical record you buy, and that is just a natural evolution of the market. But there might also be models where those companies allow other third parties to put their algorithms in, and they can charge them. Who knows. It’s going to be interesting. I always loved the HP printer model, where we sell you the printer for less than cost but then charge for the ink, right?

Ronald St. John  52:26

Yeah. Well, it’s clearly a field that is evolving quickly. Very quickly.

Metta Spencer  52:38

What kind of work is your group doing now? Do you have a particular goal that you’re working on that hasn’t quite developed yet? Can you tell us a hot secret about what you’re going to do next?

Enrico Coiera  52:55

Sure. So we’ve got four groups, and one that we haven’t talked much about is the group that works on AI safety. This is about trying to understand how you would build these systems in a way that would do no harm. So that’s really interesting. Right now, you mentioned ChatGPT; we’re in a scramble to try and understand what its capabilities are. I think someone mentioned earlier that it did pretty well on [inaudible], the medical questions from an exam. I don’t think it actually passed the exam, but I think it was given questions from it, and it did pretty well. So we’re trying to understand how accurate the information it provides to patients is. Is it safe? Because in the past we’ve looked at Google, let’s say, and other search engines, typing in things like “I want to kill myself,” “I’m suicidal,” to see what sort of results you get back, and some search engines in the past would say…

Metta Spencer  54:08

Will it teach you how to do it? Is that the idea, or what?

Enrico Coiera  54:13

Or does it say: if you are concerned, here’s the number you should call for help right now. That is the appropriate response if you get somebody who’s potentially suicidal; you direct them immediately to frontline resources for help. In some of the search engines, we found they did exactly that and were beautifully engineered, and others misdirected you. So that sort of thing is of great concern. And this whole discussion today has been about hospitals and doctors, but in the future, most of the tools are going to be owned and used by us: patients, or carers, or family members. So one of our groups is very focused on this consumer side, understanding what those tools look like. We say that patient work is hidden work; nobody really talks about all the hard work that’s involved in being a patient. If you’re an insulin-dependent diabetic, you talked about insulin, there’s a lot of work you do every day to stay well. We study the work of doctors and nurses; nobody really pays much attention to helping the work of patients. And we’re very interested in families, too. So we had a paper [inaudible] last year saying, really, we should be building tools to help families support each other. We know that in a family, it’s often the mum, for some reason, who becomes the owner of the healthcare problem. In some families with aged parents, there are changing responsibilities. Could we build a dashboard for the family, to see where everybody is, to see how well people are, to prioritize health care tasks? To provide access to information, to know if mum or dad opened the fridge and ate yesterday? Why are my parents in bed all day; why haven’t they got out? All those things you can do from sensors.
So it’s not Big Brother; it’s about trying to create tools for us to become better managers of our own lives. And we talked about a stressed health system in COVID; we’ve talked about climate change bringing future stresses to the health system. There’s no question that machines, and we as patients, are going to take up the slack that the health system can’t help us with any more. Those are going to be the only two levers we’ve got, really: self-reliance with machines, or help from the health system.

Metta Spencer  57:06

Well, you know, I use Google all the time to diagnose myself. If I have a pimple, or a cough or something, I check and see: what does this mean? But how much better is it going to get? When am I going to actually see an improvement in my self-diagnosis, when I begin using this ChatGPT or whatever it’s called, better than just looking it up on Google? And will physicians be using it? Will my doctor go check with GPT before she calls me to answer my question, et cetera?

Enrico Coiera  57:53

Look, it’s very interesting. So, we talk about ChatGPT: it’s not a health technology; it’s not trying to do that job. And so I wouldn’t recommend anybody rely on it, in its current version, for anything other than playing with it for interest. But I don’t doubt that within the next few years, we will see healthcare-specific versions of these large language models appear, and they will be very good. And look, doctors today already do that. There’s no question that there’s no way any good doctor can keep in their head all of the latest research; you just can’t do that. So good medical practice is to go and make sure that you’re up to date. In the past, you might have gone to a course, or done some reading from your journal; now we search well-known resources. And we train our medical students not just to be knowledgeable, because they can’t know everything; we teach them how to find out what they need to know. It’s the modern skill. So it’s not a bad thing; it’s a great thing. I mean, if I see my doctor tapping away and looking at something, I feel comfortable that they understand they have limits; it’s not the other way around. And look, you mentioned self-diagnosis, but most of our work as patients is between moments of diagnosis. The diagnosis is just a moment in your journey, and we tend to focus on it so heavily, but it’s everything before and after that really is the hot stuff.

Metta Spencer  59:36

How’s it going to change the working life of doctors, the whole profession? I presume, if so much of the work can be given over to machines, that it’s going to affect the job market; it’s going to affect the whole structure of the profession. Is this happening already, or are you getting geared up in expectation of these changes?

Enrico Coiera  1:00:07

Yeah. So early on, people were saying that AI was going to put doctors out of work, and we now know that’s nonsense. If you look at what we would call the unmet need out there, in other words, all those patients who have problems who are not getting the care they need, there’s more than enough work to be done. So the future looks like doctors and nurses and physiotherapists, all the health professionals, are going to be working with the technology to be far more effective and touch more patients than before. It’s not that we’re going to be out of a job; it’s that we’re going to be doing a different job. Some wag said a while ago that if you’re a doctor and AI put you out of a job, then you deserve to go.

Metta Spencer  1:01:05

Anyone else have any more issues that you want to handle before we close this tonight?

Enrico Coiera  1:01:10

My head is spinning. I’ve had enough thank you.

Ronald St. John  1:01:15

[Inaudible] It’s been a great pleasure to meet you and discuss some of these things with you.

Metta Spencer  1:01:21

It certainly has; thank you very, very much. I feel now I’m really smart; I’ve got it all under control. Thank you so much, and good wishes for your adventures with AI. Take care.

Enrico Coiera  1:01:37

Thank you. Thank you. Bye everybody, good night. Project Save the World produces these shows, and this is episode 545. Watch them or listen to them as audio podcasts on our website, tosavetheworld.ca. You can also share information there about six global issues. To find a particular talk show, enter its title or episode number in the search bar, or the name of one of the guest speakers. Project Save the World also produces a quarterly online publication, Peace Magazine. You can subscribe for $20 Canadian per year: just go to pressreader.com in your browser, and in the search bar enter just the word “peace.” You’ll see buttons to click to subscribe.







