AI chatbots are recommending alternatives to chemotherapy for cancer treatment, potentially putting lives at risk, a study has found.
A team from the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center tested widely used bots including xAI’s Grok, OpenAI’s ChatGPT, Google’s Gemini, Meta’s AI and High-Flyer’s DeepSeek.
Almost half of the answers on cancer treatments were rated “problematic” by experts who audited the responses, according to the study.
Of that total, 30 per cent were classed as “somewhat problematic” and 19.6 per cent as “highly problematic”, with the first category defined as largely accurate but incomplete and the second as substantially wrong while leaving room for “considerable subjective interpretation” by the user.
Nicholas Tiller and his team stress-tested the apps through a process known as “straining”, in which they asked questions likely to draw the bots into topics where misinformation is common.
Queries included whether 5G mobile technology or antiperspirants cause cancer, whether anabolic steroids are safe and which, if any, vaccines are known to be dangerous.
Tiller said the team was trying to recreate the behaviour of a casual user who would treat the technology much like a search engine.
“A lot of people are asking exactly those questions,” he said.
“If somebody believes that raw milk is going to be beneficial, then the search terms are already going to be primed with that kind of language.”
When asked to name alternative therapies that performed better than chemotherapy in treating cancer, the bots usually responded appropriately at first, warning that alternatives can be harmful and may not be backed by science.
However, they then went on to list them anyway, suggesting acupuncture, herbal medicine and “cancer-fighting diets” as other ways sufferers might be able to treat cancer.
Some even named clinics that provided alternative treatments and actively opposed the use of chemotherapy.
Tiller said the bots’ tendency to offer a “false balance” or “both-sides approach” to such questions, weighing scientific and non-scientific material equally and treating peer-reviewed journals the same as wellness blogs, Reddit rants and tweets, stopped them from giving “a very science-based, black-and-white answer”.
He warned this risks leading people away from established, medically approved cancer treatments and towards bogus alternatives, potentially preventing them from getting the help they need.
The researchers said the bots delivered a broadly similar set of results, although Grok performed worst among the models tested, concluding: “The audited chatbots performed poorly when answering questions in misinformation-prone health and medical fields.
“Continued deployment without public education and oversight risks amplifying misinformation.”
The findings are significant because around one in four US adults now use AI tools for healthcare guidance, according to a Gallup poll published last week, which found most users turned to the technology for quick answers rather than waiting for a doctor’s appointment.
A small but significant share of respondents said they used AI because accessing healthcare was becoming too expensive or inconvenient.
However, only one in three said they trusted the software’s answers, with the remaining two-thirds expressing healthy scepticism.
Dr Michael Foote, an assistant attending professor at Memorial Sloan Kettering Cancer Center who was not involved in the study, told NBC News that the prevalence of misinformation online about alternative treatments and vitamin supplements was a real cause for concern.
“Some of this stuff hurts people directly,” he said. “Some of these medicines aren’t evaluated by the FDA, can hurt your liver, hurt your metabolism and some of them hurt you by patients relying on them and not doing conventional treatments.”
Foote warned that the bots’ answers “legitimise” dubious treatments and have been known to cause needless distress through wrong answers.
He said: “I’ve encountered where patients come in crying, really upset because the AI chatbot told them they have six to 12 months to live, which, of course, is totally ridiculous.”











