Meta-trained AI shut down for spewing “racism” & misinformation

A new Meta AI demo trained on more than 40 million science papers was pulled after just two days when it produced “racist” and inaccurate scientific literature, it has been reported.

The new tech, which was intended to help “organise science”, backfired in a big way when the newly trained AI began spewing out misinformation. Critics said the demo produced pseudoscience, was overhyped and was not ready for public use.

Meta’s AI division unveiled the Galactica demo with the intention that it would “store, combine and reason about scientific knowledge.”

The model was meant to aid scientific research by helping to write scientific literature quickly and accurately.

AI for science papers backfired 

While users found it could generate large amounts of realistic-sounding text, that text turned out to mean very little, if anything at all. Worse, the clearly flawed model went on to generate text that was scientifically inaccurate and, in certain cases, straightforwardly racist.

Pulled for misinformation…and racism

Some users found the model promising and potentially useful, but others soon discovered that anyone could type in racist or otherwise offensive prompts and generate authoritative-sounding content on those topics just as easily. The scale and nature of the errors led to the whole demo being pulled last Thursday.

For example, someone used it to author a wiki entry about a fictional research paper titled “The benefits of eating crushed glass.”

Meta’s Chief AI Scientist Yann LeCun tweeted, “Galactica demo is offline for now. It’s no longer possible to have some fun by casually misusing it.”
