AI Writes False News Remarkably Well With Little Human Input, And People Fall For It

Jack Dunhill

Social Media Coordinator and Staff Writer

Jack is a Social Media Coordinator and Staff Writer for IFLScience, with a degree in Medical Genetics specializing in Immunology.
Could AI become a narrative-driving tool for disinformation? Image Credit: Alexander Limbach/Shutterstock.com

Disinformation has been around for centuries, but with more ways than ever to connect any voice that wishes to be heard to a global audience, it is a problem that is only getting worse. No one is safe – in fact, people who believe they are immune to it appear to be the most affected by false news. Much of the disinformation on the internet may trace back to just a select few accounts, likely intentionally spreading their false narratives. But as machine learning algorithms and artificial intelligence begin to enter the global stage, could spreading disinformation become much easier for such people?  

A report by the Center for Security and Emerging Technology (CSET) suggests that it certainly could, and that internet users may be extremely vulnerable to it. 

Researchers from CSET, who frequently explore internet privacy and the impact of AI on society, wanted to examine how effectively a readily accessible AI could be used to generate disinformation and sway people away from their viewpoints on political topics. They used OpenAI's GPT-3, a publicly accessible language model that generates text from human-written prompts, and either let it create false text on its own or gave a skilled editor free rein to steer it through disinformation campaigns. The algorithm performed well on its own, but it truly shined under the watchful eye of an editor. 

They experimented with six different tasks, each aimed at pushing some form of narrative through a different method. Tasks ranged from Narrative Reiteration, which instructed GPT-3 to create short messages that advance a narrative (such as climate change denial), to Narrative Wedging, which targeted specific demographics to amplify division. GPT-3 performed surprisingly – and somewhat worryingly – well in all tasks, but it truly excelled in Narrative Reiteration. With very little human involvement, GPT-3 generated short lines of text that appeared remarkably convincing when arguing against climate change.  
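
To get a sense of how little human input that involves, below is a minimal sketch of the kind of few-shot prompting the report describes: a handful of example messages establish a tone and theme, and the model continues the pattern with new, similar messages. It is written against the legacy OpenAI Python client from the GPT-3 era; the placeholder prompt text, engine choice, and sampling settings are illustrative assumptions, not the researchers' actual setup.

    # Sketch of few-shot "narrative reiteration"-style prompting.
    # Assumes the legacy (pre-1.0) OpenAI Python client that served GPT-3;
    # the prompt, engine, and sampling settings are illustrative only.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder: a real key is required

    # A few example posts set the voice and theme; the model is asked to
    # continue the numbered list with new messages in the same style.
    prompt = (
        "Short social media posts about TOPIC:\n"
        "1. EXAMPLE POST ONE.\n"
        "2. EXAMPLE POST TWO.\n"
        "3."
    )

    response = openai.Completion.create(
        engine="davinci",   # the original GPT-3 base model
        prompt=prompt,
        max_tokens=60,      # keep outputs tweet-length
        temperature=0.8,    # some randomness for varied phrasing
        stop=["\n4."],      # stop before the next numbered item
    )

    print(response.choices[0].text.strip())

In this human-machine teaming setup, the editor's job reduces to writing a few seed examples and discarding weak outputs, which is part of why the report found the pairing so effective.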

“She is obviously not a scientist where science is the pursuit of 'absolute truths' & has no relevance to her political biases & goals. She frankly, epitomizes all that is wrong with the attempted politicization of science by people with not so hidden agendas,” GPT-3 wrote in a generated tweet about Greta Thunberg. 

The AI was even capable of rewriting news articles to change their narrative, using outraged and incendiary tones to elicit emotive reactions from the reader.  

The researchers then recruited a sample of 1,171 Americans to test whether the narrative-changing AI could sway them away from their existing viewpoints.  

After collecting their opinions on political situations, such as sanctioning China, the researchers used GPT-3 to generate an array of opinionated statements either for or against each viewpoint. These were then shown to the sample, who subsequently filled out a survey on how convincing the statements were and whether the statements shifted their opinions toward the other side of the spectrum. Impressively, GPT-3 was able to be at least somewhat convincing 63 percent of the time, regardless of the participants’ political stance. When the statements held the same viewpoint as the participants, they were rated more highly, convincing 70 percent of those who read them. 

In another political scenario, GPT-3 was able to entirely change some people’s viewpoints, with respondents 54 percent more likely to agree with a stance after being shown biased AI-generated text supporting it. 

The researchers believe that a publicly accessible AI such as GPT-3 could easily be utilized in disinformation campaigns, and in many cases the generated text is hard for the average internet user to distinguish from human-written content. These results may even be on the low end of what is possible with such an algorithm, as reading statements in a controlled setting is likely to elicit different reactions than stumbling across a piece of disinformation while scrolling through social media. 

Regardless, GPT-3 is certainly capable of entirely fabricating narratives and persuading a large portion of users – and it is far better at lying than it is at telling the truth. 

“Our study hints at a preliminary but alarming conclusion: systems like GPT-3 seem better suited for disinformation—at least in its least subtle forms—than information, more adept as fabulists than as staid truth-tellers,” write the authors. 


