"Fake news" AI deemed too dangerous for use

OpenAI has created an AI that generates writing so authentic it has been deemed too dangerous to release, for fear it could be misused to produce compelling "fake news" at scale.

20 Feb

Scientists at OpenAI have developed an advanced AI that generates such incredibly authentic synthetic text from writing prompts that they say it is too dangerous to release, in case it is misused to create "fake news".

OpenAI, a nonprofit AI research organization backed by Elon Musk which aims to develop friendly, safe AI in a way that benefits humanity as a whole, trained the machine-learning model, named "GPT-2", on 8 million web pages. The system uses the Transformer, a recently developed neural network architecture, to analyze a word or block of text before generating more copy in the same style. This means that when the algorithm is given a fake headline, it will generate a whole fake story.
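For readers curious what this prompt-and-continue workflow looks like in code, here is a minimal sketch using the open-source Hugging Face "transformers" library and the small, publicly released GPT-2 checkpoint. The library, model name, and sampling settings are illustrative assumptions on our part, not part of OpenAI's announcement, and this small checkpoint is not the withheld full model the article describes:

```python
# A minimal sketch of prompt-based generation with the small public GPT-2
# checkpoint via the Hugging Face "transformers" library. The library and
# sampling settings are illustrative assumptions; this is not the withheld
# full model described in the article.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A made-up headline serves as the prompt; the model continues in its style.
prompt = "Scientists discover a herd of unicorns in a remote Andean valley."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation one token at a time; top-k sampling trades a little
# determinism for more natural-sounding text.
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Run against the small checkpoint, the output reads like a fluent continuation of the headline, unconstrained by fact, which is precisely the property that worried OpenAI.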

The organization gave an example of its output:

"In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English," the researchers wrote.

The algorithm then continued: "The scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science."

While GPT-2 is able to generate coherent, well-structured sentences, it only models patterns in language; it cannot comprehend or verify facts, meaning that anything it writes is simply a well-crafted – and often elaborate – fabrication. This has fueled concerns that, in the wrong hands, it could be used to generate incredibly compelling "fake news".

Because of its dangerous potential, OpenAI has decided to keep it in the lab for now and continue experimenting to determine its future use.
