
Elon Musk's OpenAI has an algorithm that can generate weirdly believable fake news stories

By Koh Wanzi - on 15 Feb 2019, 11:33am


Artificial intelligence is getting pretty good at generating entire articles and stories, which raises troubling implications about its potential to mass-produce fake news. A program developed by a team at OpenAI, the non-profit research institute founded by Elon Musk and Sam Altman, can make up surprisingly believable stories from just a handful of words.

Here's a snippet of what it's capable of:

Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air.

Russia said it had “identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.” The White House said it was “extremely concerned by the Russian violation” of a treaty banning intermediate-range ballistic missiles.

The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine’s Crimea region and backed separatists in eastern Ukraine.

The AI came up with the entire story on its own, after simply being provided with the words "Russia has declared war on the United States after Donald Trump accidentally..." 

The researchers set out to develop a general-purpose language algorithm, trained on a huge amount of text from the web. The training data encompassed 45 million web pages, chosen via Reddit.
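To give a rough sense of how prompt-driven text generation works, here is a deliberately tiny sketch. It is nothing like OpenAI's actual system (which uses a large neural network), just a toy bigram model that learns which word tends to follow which in its training text, then extends a prompt one word at a time. The corpus and function names here are invented for illustration.

```python
import random

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    words = corpus.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, prompt, length=10, seed=0):
    """Extend the prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:  # no known continuation: stop early
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Toy training text, loosely echoing the example above.
corpus = ("russia has declared war on the united states . "
          "the united states said it was concerned . "
          "russia said it would respond .")
model = train_bigrams(corpus)
print(generate(model, "russia has", length=6, seed=1))
# prints "russia has declared war on the united states"
```

Real systems like OpenAI's operate on the same basic idea of predicting a likely continuation, but with vastly richer statistics learned from millions of pages rather than a few sentences.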

They originally intended for the program to be able to do things like translate text and answer questions, but it soon became clear that it was simply too good at generating convincing stories, and that this opened up great potential for abuse.

The program is an example of how AI could be used to automatically create fake news, social media posts, or other content that could be disseminated widely. This is especially concerning because fake news is already a problem, and it would be even harder to deal with if its production were automated. Such content could be used to sway public opinion, possibly affecting crucial elections and influencing other events.

"It's very clear that if this technology matures – and I'd give it one or two years – it could be used for disinformation and propaganda," said Jack Clark, policy director at OpenAI. One of the organization's goals is also to highlight the risks of AI and get ahead of them, so it's no surprise that OpenAI is currently trying to find a way to mitigate the risk of abuse. 

The algorithm is far from perfect, though, and the example above is one of the better ones. It still frequently produces text that turns out to be gibberish on closer inspection, or that is clearly lifted from online news sources, so discerning readers won't be fooled.

That said, OpenAI still thinks the program is too dangerous for public use, and will release only a simplified version of it publicly.

Source: MIT Technology Review

