Disinformation Wars: AI and Fake News Detection on Social Media


The widespread, tactical use of disinformation is an established feature of modern politics. Social media companies have introduced measures to help flag fake news, and advances in AI could soon automate the detection of fake news and related problems such as hate speech.

The Politics of Fake News Detection

In 2016, Oxford Dictionaries chose “post-truth” as its Word of the Year after usage of the term spiked in the lead-up to the US election, though the concept, even then, was nothing new.

Sensationalism has been used to sell publications and push political agendas for centuries. In the age of social media, the reach and virality of clickbait have only intensified the issue, with social media users, often unwittingly, helping to circulate fake news by engaging with it and sharing it. Compounding the problem, social media platforms, unlike other media, are largely unregulated and aren’t obligated to meet editorial standards.

Concerned by the potential for disinformation to undermine democracy and harm public health, media watchdogs, shareholders, and legislators have put pressure on social media companies to detect and act on fake news.

In response, Twitter and Facebook both rolled out features that let users flag false or misleading content. Facebook has gone a few steps further, applying machine learning to help its response teams detect fraud and spam accounts, and taking action against repeat offenders who disseminate fake news.

The measures in place so far combine technology with human review and are by no means instant. Given how quickly and widely information spreads on social media, the main challenge remains detecting fake news and acting on it fast enough to limit its dissemination.

With advancements in machine learning and natural language processing (NLP) technology, there’s still an incredible amount of untapped potential in this field. Soon, the process of detecting and acting on fake news may be much faster and more effective.

Bots: A Battle Between Good and Evil

Malicious actors use bots to spread fake news on social media. These bots are automated accounts made to look like legitimate users but programmed to send and share specific types of (dis)information.

A 2017 study presented at an AAAI (Association for the Advancement of Artificial Intelligence) conference estimated that between 9% and 15% of all active Twitter accounts are bots.

Now, there’s potential to deploy a fleet of bots that play a positive role in combating the creation and spread of fake news.

For example, with advanced NLP, you can train AI models to accurately classify misinformation or references to fake news in conversations on social media. You can also use emotion text analysis to detect whether conversations are drifting into abuse, harassment, or hate speech.
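As a rough illustration, here’s a minimal Python sketch of that classification step using the Hugging Face transformers pipeline. The checkpoint name is a hypothetical placeholder, not one of Symanto’s models; any classifier fine-tuned for misinformation or hate-speech detection could be slotted in.

```python
# Minimal sketch: scoring social media posts with a fine-tuned
# text-classification model via the Hugging Face transformers pipeline.
# The model name is a hypothetical placeholder.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-org/misinformation-detector",  # placeholder checkpoint
)

posts = [
    "BREAKING: miracle cure suppressed by doctors, share before it's deleted!",
    "The city council votes on the new transit budget on Tuesday.",
]

for post in posts:
    result = classifier(post)[0]  # e.g. {'label': 'MISINFORMATION', 'score': 0.97}
    print(f"{result['label']} ({result['score']:.2f}): {post}")
```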

From there, you could train bots to join those conversations, point out fake news, and explain the facts. It would be an impossible undertaking for a team of fact-checkers to manually parse billions of conversations a day, let alone engage in them, but with AI technology this may soon be a reality.
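As a thought experiment, a fact-checking bot might wire that classifier to a reply mechanism along these lines. Everything here is a hypothetical stand-in: the keyword heuristic stands in for a real trained model, and the reply function simply prints instead of calling a platform API.

```python
# Hypothetical sketch of a fact-checking bot loop. The keyword heuristic
# stands in for a trained misinformation classifier, and post_reply is a
# dry-run stub for a real platform reply API.

CORRECTIONS = {
    # Illustrative lookup of debunked claims -> corrections with sources.
    "miracle cure": "Health authorities report no evidence for this claim: https://example.org/fact-check",
}

def looks_like_misinformation(text: str) -> bool:
    # Stand-in for a model call such as classifier(text) in the sketch above.
    return any(claim in text.lower() for claim in CORRECTIONS)

def post_reply(post_id: str, text: str) -> None:
    # Stand-in for a platform API call; printed here as a dry run.
    print(f"[reply to {post_id}] {text}")

def moderate(post_id: str, text: str) -> None:
    # Engage only on confident matches to avoid mislabelling legitimate posts.
    if looks_like_misinformation(text):
        for claim, correction in CORRECTIONS.items():
            if claim in text.lower():
                post_reply(post_id, correction)

moderate("12345", "This miracle cure is being suppressed!")
```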

More Than Words: Fake Videos and Images

Of course, fake news isn’t just spread through words. Videos and images are often misattributed or are doctored to fit a fake news story.

If you’ve ever seen a photo of a shark swimming down a flooded motorway, it was almost certainly the same image, and it’s definitely fake. The same doctored photo has been recirculated and attributed to hurricanes in Puerto Rico, Texas, and Florida, each time racking up thousands of views. The shark is actually lifted from a 2015 photo of a kayaker being tailed by a great white.

To tackle misattributed videos and images, AI can be trained to detect, for example, whether an image is recent or has been recirculated from years ago.
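One simple, well-established technique for catching recirculated images is perceptual hashing: near-identical images hash to similar values even after resizing or recompression. Here’s a minimal sketch using the Pillow and imagehash packages; the file paths are purely illustrative.

```python
# Minimal sketch: flagging recirculated images with perceptual hashing.
# An incoming image is compared against a library of known fakes; a small
# Hamming distance between hashes suggests the same picture, even after
# cropping or re-encoding. File paths are illustrative.
from PIL import Image
import imagehash

# Hashes of previously debunked images (e.g. the motorway shark).
known_fakes = [imagehash.phash(Image.open("debunked/shark_on_motorway.jpg"))]

def looks_recirculated(path: str, threshold: int = 8) -> bool:
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two hashes gives the Hamming distance over 64 bits;
    # small distances indicate near-duplicates.
    return any(candidate - known <= threshold for known in known_fakes)

print(looks_recirculated("incoming/viral_hurricane_photo.jpg"))
```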

Deepfake videos are also causing a stir among those concerned about the spread of disinformation. Deepfakes use deep learning to superimpose someone’s likeness onto an existing video, making it look as though a person has said or done something they haven’t.

In the space of a few years, deepfakes have become increasingly difficult to distinguish from reality. A video created by researchers at the University of Washington shows just how convincing they can be.

Last year, Facebook researchers claimed to have developed deepfake detection AI based on reverse engineering. The technique can spot deepfakes from a single image and can even trace their origin.
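Facebook’s reverse-engineering approach isn’t public code, but a generic single-image deepfake detector can be sketched as an ordinary binary image classifier. Below is a minimal PyTorch outline, assuming a labelled dataset of real and synthetic face crops for fine-tuning; it’s a baseline illustration, not Facebook’s method.

```python
# Generic sketch: a single-image deepfake detector built as a binary
# classifier on a pretrained ResNet-18. Useful only after fine-tuning on
# labelled real/fake face crops; not Facebook's reverse-engineering method.
import torch
import torch.nn as nn
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: 0 = real, 1 = fake

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def probability_fake(image) -> float:
    """Return P(fake) for a PIL image of a face crop."""
    model.eval()
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    return torch.softmax(logits, dim=1)[0, 1].item()
```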

In Summary

Fake news is a legitimate and increasingly pressing concern, but thankfully, advances in AI are ready to meet the challenge. Many of the necessary technologies already exist; it’s now a matter of orchestrating foundation models and APIs to combat this specific problem.

Who Are We?

Symanto is a world leader in NLP technology. From emotion detection to psychographic profiling, we develop industry-leading models to derive deep insights from unstructured written data wherever it occurs.

This year we’re proud to be sponsoring the 27th International Conference on Natural Language and Information Systems in Valencia, where disinformation on social media will be just one of the topical issues discussed by leaders in NLP.

Find Out More About Symanto

To find out more about what we do and the potential of our technologies, get in touch or book your free demonstration.