“I can see a world where we end up with robot doctors”: Meet the researcher investigating the future of AI in medicine

U of T professor Rahul Krishnan on using AI to shorten wait times, the potential for deep fakes in health care and the dangers of teaching robots human biases

Researcher Rahul Krishnan says AI tools can help save time in the over-burdened health care industry. The University of Toronto professor talks about how AI in medicine could decrease wait times and relieve burnout.

These days, it seems like AI can do just about anything—including producing extremely convincing fake Drake tunes. On a more practical note, one Toronto researcher believes the tech could also help relieve pressure on the over-burdened health care industry. Rahul Krishnan is an assistant professor in computational medicine at the University of Toronto, and he recently received an $85,000 Amazon Research Award to study the consequences of implementing AI in health care. We spoke with him about the risks and rewards of robot-assisted doctors and why your future trips to the hospital could be a lot faster.


You study machine learning and AI. How would you explain that to a five-year-old?
It’s about trying to make machines that can mimic the way a human brain works by finding patterns in data. Let’s say, for example, that your dataset is a catalogue of different kinds of chairs. Your goal is to find out: Is it possible to sit on each chair? As a human, I have a good way of answering this question—I could just sit on them. But that would take up a lot of my time. An AI model, by contrast, needs to be trained on things that seem obvious to us, like recognizing that you can only sit on a chair if it’s upright. So it takes some work up front, but once that’s done, it lets you answer the question faster.
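
To make the chair example concrete, here is a rough sketch (our own illustration rather than anything Krishnan described) of what training such a pattern-finding model could look like in Python with scikit-learn; the chair features and labels are invented:

```python
# Our own toy illustration of the chair example (not Krishnan's code):
# train a model to predict whether a chair can be sat on from a few
# invented, hand-made features.
from sklearn.tree import DecisionTreeClassifier

# Each chair is described as [is_upright, has_all_legs, seat_is_clear].
chairs = [
    [1, 1, 1],  # upright, intact, nothing on the seat
    [1, 1, 0],  # upright but something is piled on the seat
    [0, 1, 1],  # tipped over
    [1, 0, 1],  # missing a leg
]
can_sit = [1, 0, 0, 0]  # the labels a human would assign by trying each chair

# The up-front work: fit the model to the labelled examples.
model = DecisionTreeClassifier(random_state=0).fit(chairs, can_sit)

# Once trained, it answers the question instantly for new chairs.
print(model.predict([[1, 1, 1], [0, 0, 1]]))  # expected: [1 0]
```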

Related: Meet the physician working to address burnout in Toronto hospitals

All right, that’s chairs solved. What about health care?
A lot of clinical decisions are made by evaluating collections of data. Think about how a doctor decides how to treat a patient: they look at their medical history and images that show what’s happening in their body. Then they integrate all that information with their training and experience and come to a decision. If you had a tumour, for example, a pathologist might take a sample of it, magnify it into a huge image and then count the number of plasma cells. An AI tool could do that for them, which accelerates the workflow—meaning that you, as a patient, get a diagnosis faster. Of course, people still expect to be treated by a human, and hospitals are still liable for the outcome. So, at the end of the day, a real doctor would still give your final diagnosis.
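
As a very rough illustration of the counting step only (our sketch, not any hospital's actual tooling), the snippet below builds a toy image in which bright spots stand in for cells, thresholds it and counts the connected blobs:

```python
# An invented stand-in for counting cells in a magnified pathology image:
# threshold the image and count the connected bright blobs.
# Real pathology models are far more sophisticated than this.
import numpy as np
from scipy import ndimage

# Fake "tissue" image: small bright squares stand in for plasma cells.
image = np.zeros((100, 100))
rng = np.random.default_rng(seed=0)
for y, x in rng.integers(5, 95, size=(12, 2)):
    image[y - 2:y + 3, x - 2:x + 3] = 1.0

# Threshold, label the connected regions and count them.
# (Overlapping spots merge into one blob, so this is only an estimate.)
labelled, cell_count = ndimage.label(image > 0.5)
print(f"Estimated cell count: {cell_count}")
```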

Shorter wait times sound pretty good. Is there any way this could backfire?
If you train a model on data that contains real-world biases, there’s a danger of those biases getting baked into the system. For example, ProPublica has investigated machine-learning models used in the US legal system that tend to deny bail to Black defendants because the models predict a higher risk of recidivism for them. That causes real harm. If the same thing were to happen in the health care system, it would obviously be really bad.

Are any Toronto hospitals or clinics using AI already?
There are quite a few. St. Mike’s, for example, has started using a program called Chart Watch, which lets clinicians know if there’s a patient who’s at risk of rapidly deteriorating and therefore needs a bit more care. But, overall, Canada is lagging behind. The UK has created a huge study called the UK Biobank, which collects medical data for half a million people. That data is going to drive the next generation of medical models. Our own studies don’t come close.

Should I expect to start getting diagnoses from ChatGPT in the next few years?
Not much is likely to change in the near future. AI is making its way into health care through long-term studies, and those take time to germinate.

What does all this mean for health care workers and their jobs?
If these tools can take on some of the admin work that doctors and nurses are doing, that would really help with burnout. But the larger problem is that we just don’t have enough health care professionals for our population. No amount of AI can fix that.

What’s the wildest possible scenario for the near-future of AI in medicine?
That’s hard. With AI, every time I think, “Oh, we could never do that,” someone comes along and does it. I’ve stopped trying to predict where the field will go. But, with the technology we have now, I think we could have an AI assistant that writes first drafts of pathology reports for doctors.

How long until, say, robot doctors?
I’ll give a prediction of ten years, and we’ll see if it happens! The pandemic saw a huge acceleration in the uptake of medical technologies like telemedicine. Medicine usually changes very slowly, so that acceleration tells us rapid progress is possible. If AI models are rigorously tested and validated in the future, I can see a world where we end up with robot doctors.

You just got a huge grant from Amazon to continue studying all this. What are you going to use it for?
Whenever you talk to a doctor, they write up a note about that interaction, which contains a lot of information about you. I’ve found that some AI models are good at summarizing that sort of thing. Imagine you’re a doctor and you move to a new hospital. You’d have to spend hours reading pages and pages of notes about your patients to get up to speed. But an AI might be able to do that for you and just pop out a one-pager of the important details. These models are already out there, so my goal is to ask: How well are they working? How could they be improved? And how can we make sure they stay up to date? One part of the award lets us use Amazon’s powerful cloud computing system to run experiments, and the grant will pay for a graduate student to help with the research.
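
As a rough sketch of the summarization idea (our illustration, using placeholder notes and a generic off-the-shelf model rather than whatever system the grant will actually study), a note-summarization pipeline could look like this:

```python
# An invented sketch of the note-summarization idea: run an off-the-shelf
# summarization model over a pile of clinical notes and get a short
# overview back. The notes and the model choice are placeholders.
from transformers import pipeline

notes = [
    "2021-03-02: Patient reports intermittent chest pain on exertion.",
    "2021-06-15: Follow-up visit. Started on a beta blocker; pain improved.",
    "2022-01-20: Annual check-up. Blood pressure well controlled.",
]

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
summary = summarizer(" ".join(notes), max_length=60, min_length=20)

# A one-paragraph overview a new doctor could read instead of every note.
print(summary[0]["summary_text"])
```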

Does AI have any implications for how you conduct your research into AI itself?
The pace of scientific output has accelerated so rapidly that you need to read 10 to 20 papers a day just to keep up with the newest developments. So, yes, an AI tool could help surface the most relevant updates and, in doing so, make it easier for people to ask creative research questions.
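
A toy version of that kind of relevance sorting might look like the sketch below (our illustration, with invented abstracts and interests), which ranks new papers against a stated research interest using simple text similarity:

```python
# An invented sketch of relevance sorting: rank new paper abstracts
# against a description of a researcher's interests using TF-IDF
# similarity. Real tools would likely use far stronger language models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

interests = "machine learning for clinical decision support and pathology"
abstracts = [
    "A new transformer model for summarizing electronic health records.",
    "Improved solvers for partial differential equations in fluid dynamics.",
    "Self-supervised learning for tumour detection in pathology slides.",
]

vectors = TfidfVectorizer().fit_transform([interests] + abstracts)
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()

# Print the most relevant papers first.
for score, abstract in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {abstract}")
```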

Just yesterday, Geoffrey Hinton, often called “the godfather of machine learning,” announced he’d left his position at Google to speak frankly about the dangers of AI, including its potential to spread misinformation and take people’s jobs. Do you share any of his concerns?
I do. We have seen that AI can lead to a net loss of available jobs, which is a problem. And there are now tools that can mimic the voices of public figures with alarming accuracy. What happens when misinformation circulates online in the form of audio messages where, say, the Minister of Health extols the virtues of celery as a cure for cancer? Trust in machine learning has always been a concern in this field, but there is a heightened urgency to contend with that issue now that these tools have been deployed into the public sphere, including in health care, en masse. As with all technology, there is good and bad potential. In this case, it will take global coordination to limit the bad.

A number of experts (and Elon Musk) recently penned an open letter urging AI labs to pause training of their most powerful systems for six months. Would that help?
I’m not convinced that any letter will pause development when there’s a monetary incentive to keep going. It’s tricky. We don’t want to halt everything entirely, but we do want to prevent the technology from being misused in ways that could harm people.


This interview has been edited for length and clarity.