What's the remedy for medical misinformation?

Written by Sharita Forrest

Kevin Leicht, a sociology professor and one of the science team leads at the U. of I. System's Discovery Partners Institute in Chicago, is co-leading the development of a software app that will alert clinicians to medical misinformation circulating on social media so they can address it with patients if they choose. The work is funded through a $100,000 grant from the Jump Applied Research for Community Health through Engineering and Simulation (Jump ARCHES) initiative in The Grainger College of Engineering.

Photo by L. Brian Stauffer

Editor’s note: Kevin Leicht is a professor of sociology at the University of Illinois Urbana-Champaign and one of the science team leads at the U. of I. System's Discovery Partners Institute in Chicago. He is co-leading the development of a software app that will alert clinicians to medical misinformation on social media. He is working with computer science professor ChengXiang Zhai of The Grainger College of Engineering; Dr. Mary Stapel, a physician at OSF HealthCare Pediatrics and a faculty member at the University of Illinois College of Medicine Peoria; and other researchers. The project is funded by a $100,000 grant through Jump ARCHES, a collaboration between OSF HealthCare, the University of Illinois Urbana-Champaign and the University of Illinois College of Medicine Peoria. Leicht spoke with News Bureau research editor Sharita Forrest.

How do you foresee clinicians using the app you’re developing?

My project with DPI and OSF HealthCare is designed to help clinicians in health care settings keep track of medical misinformation and, in a sense, get ahead of it so they aren’t overwhelmed by what comes in through the door when they diagnose people.

The app will track emerging trends in misinformation and incorporate them into the clinician’s workflow. When clinicians are talking to a patient about a specific disease, they can not only present the set of treatments the patient might want to consider, but also say, “Oh, by the way, here is some of the information kicking around on social media about this disease.”

And then they can decide whether they want to address that misinformation directly with the patient, or whether it’s just something to keep in the back of their mind in case the patient brings it up. But in either case they are not caught flat-footed by whatever the latest bad information or bogus cure for that disease is.

OSF HealthCare provides a testing ground among health care providers who can offer important feedback about usability. Co-lead investigator Dr. Mary Stapel of OSF HealthCare – who also teaches medical residents – says the app could support community health workers and health navigators who work to build trust in health care, particularly with underserved populations and community organizations.
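The interview stays away from implementation detail, so what follows is only a minimal sketch of the workflow Leicht describes: surface the misinformation claims trending for a given disease so the clinician sees them before a patient raises them. Everything in it is an illustrative assumption, from the Post structure to the keyword matching that stands in for whatever model the team actually builds.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Post:
    disease: str  # condition the post discusses
    text: str     # body of the social media post

# Illustrative patterns only; a real system would use a trained
# classifier, not a hand-written phrase list.
SUSPECT_PHRASES = {
    "reverses type 2 diabetes",
    "cures cancer",
    "doctors don't want you to know",
}

def flag_trends(posts: list[Post], disease: str, top_n: int = 3) -> list[tuple[str, int]]:
    """Count how often each suspect claim appears for one disease,
    so the most common claims can be attached to the clinical encounter."""
    hits = Counter()
    for post in posts:
        if post.disease == disease:
            for phrase in SUSPECT_PHRASES:
                if phrase in post.text.lower():
                    hits[phrase] += 1
    return hits.most_common(top_n)

if __name__ == "__main__":
    feed = [
        Post("type 2 diabetes", "This tea reverses Type 2 diabetes overnight!"),
        Post("type 2 diabetes", "New metformin dosing guidance published."),
        Post("type 2 diabetes", "It reverses Type 2 diabetes. Doctors don't want you to know!"),
    ]
    for claim, count in flag_trends(feed, "type 2 diabetes"):
        print(f'seen {count}x: "{claim}"')

In a deployed system, something like flag_trends would presumably feed the per-disease summary a clinician sees during the encounter, rather than printing to a console.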

With the sheer volume of misinformation that’s spread via social media, what’s your strategy for managing it?

Misinformation exists on every disease that’s out there. It’s difficult because we must figure out where to focus our energy. We’re starting by concentrating on a few diseases, and we’ll expand outward from there.

We’ll probably spend a lot of time on alleged cures for cancer and Type 2 diabetes. Those will probably be at the top of the list, along with vaccinations to some extent and COVID-19.

It is going to be a big undertaking, for sure. While medical misinformation has always existed, I suspect it is going to become a bigger problem as our population ages, in part because people will have a lot of different things wrong with them. And you’re not going to be able to effectively treat all those problems.

What’s currently available to promote clinicians’ awareness of misinformation?

When we investigated the software platforms that are currently available for identifying bad medical information on social media, we were at first a little surprised that there’s not a commercially available platform out there. But it’s a very complex undertaking.

During the height of the pandemic, some social media platforms like Twitter were criticized for spreading false information about COVID-19 and, later, about the side effects of vaccines. But social media platforms decided they didn’t want to get into the business of curating and moderating information because it’s so complicated and might be interpreted as infringing on freedom of speech.

Where are you likely to need human intervention for deciding which information to address?

Our app will be able to flag misinformation, and those flagged items will have to be vetted by content experts, who will decide, for example, which pieces of misinformation pose the greatest danger to public health. There will be some human intervention there before anything is put out for others to see.
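Again, the project’s actual design isn’t described here, but the human-in-the-loop step Leicht outlines could be sketched roughly like this: flagged claims wait in a queue ordered by an estimated risk score, and nothing reaches the clinician-facing side until an expert rules on it. The field names, the scoring, and the queue itself are assumptions for illustration.

import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FlaggedItem:
    priority: float  # negated risk score; lowest value pops first
    claim: str = field(compare=False)
    status: str = field(default="pending", compare=False)

class ReviewQueue:
    """Flagged claims wait here until a content expert rules on them."""

    def __init__(self) -> None:
        self._heap: list[FlaggedItem] = []

    def flag(self, claim: str, risk_score: float) -> None:
        # Negate the score so the highest-risk claim is reviewed first.
        heapq.heappush(self._heap, FlaggedItem(-risk_score, claim))

    def next_for_review(self) -> FlaggedItem | None:
        return heapq.heappop(self._heap) if self._heap else None

def expert_review(item: FlaggedItem, approve: bool) -> FlaggedItem:
    # Only expert-approved items would ever be surfaced to clinicians.
    item.status = "publish" if approve else "dismiss"
    return item

if __name__ == "__main__":
    queue = ReviewQueue()
    queue.flag("windmills cause cancer", risk_score=0.4)
    queue.flag("bleach cures COVID-19", risk_score=0.9)
    first = queue.next_for_review()  # highest-risk claim comes out first
    print(expert_review(first, approve=True))  # marked "publish" for clinician alerts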

What’s the time frame for getting the app up and running?

About two years. We will not have a working prototype until this coming summer, I would guess. There’s still going to have to be a good bit of human intervention to make it work.

Are you considering other industries as potential users of this technology?

We’re thinking about expanding it to include misinformation in areas other than health care. The biggest industries that might be interested are clean energy and electric vehicles. You don’t have to poke around too far on social media to come across posts claiming that electric windmills cause cancer if you live under them.

As these technologies evolve, there’s going to be some need for dealing with misinformation about them. We’re not sure those will pan out, but we’re confident that the health care industry will.

 



This story was published March 8, 2023.