AI helps evaluate accuracy of online health news

Researchers at the University of New Hampshire have developed a machine learning model to help screen medical news stories for accuracy.

It can be challenging to gauge the quality of online news—to judge whether it is real or fake. When it comes to health news and press releases about medical treatments and procedures, the issue is even more complex: a story can be incomplete or misleading without necessarily falling into the category of fake news.

To help identify the stories with inflated claims, inaccuracies and possible associated risks, researchers developed a new machine learning model, an application of artificial intelligence, that news services, like social media outlets, could easily use to better screen medical news stories for accuracy.

The study was led by Ermira Zifla, with co-author Burcu Eke Rubini, both assistant professors of decision sciences, and is published in Decision Support Systems.

Most people lack the medical expertise to understand the complexities of such news, and the machine learning models the researchers developed outperformed laypeople's evaluations in assessing the quality of health stories.

They used data from Health News Review that included news stories and press releases on new healthcare treatments published in various outlets from 2013 to 2018. The articles had already been evaluated by a panel of healthcare experts—medical doctors, healthcare journalists and clinical professors—using ten different evaluation criteria the experts had developed.

The criteria included the costs and benefits of the treatment or test, any possible harm, the quality of the arguments, the novelty and availability of the procedure, and the independence of the sources. The researchers then developed an algorithm based on the same expert criteria and trained the models to classify each aspect of a news story against those criteria as "satisfactory" or "not satisfactory".
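The study does not specify which model architecture was used, but the setup described—one binary "satisfactory"/"not satisfactory" judgment per expert criterion—can be sketched as a separate text classifier per criterion. The sketch below uses TF-IDF features with logistic regression purely as an illustrative choice; the article texts and labels are invented examples, not data from the study.

```python
# Hypothetical sketch: one binary classifier per evaluation criterion.
# Each classifier learns to label an article "satisfactory" or
# "not satisfactory" for its criterion, mimicking the expert panel's labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training articles (invented for illustration).
articles = [
    "The trial reports absolute risk reduction and discusses the drug's cost.",
    "Miracle cure works instantly; no price or side effects are mentioned.",
    "The new test was compared with existing alternatives, with costs cited.",
    "Breakthrough treatment announced; no pricing details were provided.",
]

# Expert labels for one criterion, e.g. "does the story discuss costs?"
cost_labels = ["satisfactory", "not satisfactory",
               "satisfactory", "not satisfactory"]

# Train a pipeline for this single criterion; in the full setup there
# would be one such model per criterion (harms, novelty, sources, ...).
cost_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
cost_model.fit(articles, cost_labels)

# Classify a new press release against the "costs" criterion.
verdict = cost_model.predict(
    ["The press release omits any mention of price."]
)[0]
print(verdict)
```

Training one model per criterion keeps each judgment interpretable: a story can pass on novelty while failing on costs, which matches how the expert panel scored each aspect separately.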

The model’s performance was then compared against layperson evaluations obtained through a separate survey where participants rated the same articles as “satisfactory” or “not satisfactory” based on the same criteria. The survey revealed an “optimism bias,” with most of the 254 participants rating articles as satisfactory, markedly different from the model’s more critical assessments.

Researchers stress that they are by no means looking to replace expert opinion but are hoping to start a conversation about evaluating news based on multiple criteria and offering an easily accessible and low-cost alternative via open-source software to better evaluate health news.

Zifla commented: “The way most people think about fake news is something that’s completely fabricated, but, especially in healthcare, it doesn’t need to be fake. It could be that maybe they’re not mentioning something.

“In the study, we’re not making claims about the intent of the news organisations that put these out. But if things are left out, there should be a way to look at that.”
