AI’s dual function: detecting and creating misinformation

How Artificial Intelligence can both combat and contribute to the vast spread of fake news found across the internet. One of these articles was written by Kate Dieffenbacher and the other by ChatGPT. Can you tell which one is which?


A journalism professor leads a class discussion on fake news. (AP Newsroom)


Artificial Intelligence (AI) is changing the way we consume and interact with news. With the rise of social media, we have seen a surge in fake news, and AI has become an important tool in combating it. However, AI can also contribute to the spread of fake news. In this article, we will explore both sides of the argument and examine how AI combats and contributes to fake news, with quotes from experts in the field.

AI has the potential to combat fake news by detecting and filtering out false information. Through machine learning algorithms, AI can analyze large datasets and identify patterns that are indicative of fake news. AI-powered systems can scan news articles and social media posts to flag potentially fake news stories for further review.

According to Mark Skilton, a Professor of Practice in the Information Systems and Management Group at Warwick Business School, “AI can help to flag patterns of suspicious activity, such as bots or click farms, which are indicative of fake news campaigns.” Skilton further adds, “AI algorithms can also be trained to identify fake news stories based on linguistic features, such as the use of sensationalist language or false information.”
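As a toy illustration of the kind of linguistic-feature check Skilton describes, a detector might score a headline for sensationalist cues. The cue words, weights, and thresholds below are invented for illustration, not drawn from any real system:

```python
import re

# Hypothetical sensationalist cues and weights (illustrative assumptions only).
SENSATIONAL_CUES = {
    "shocking": 2,
    "unbelievable": 2,
    "you won't believe": 3,
    "secret": 1,
    "exposed": 1,
}

def sensationalism_score(headline):
    """Return a rough score: higher means more sensationalist cues found."""
    text = headline.lower()
    score = sum(w for cue, w in SENSATIONAL_CUES.items() if cue in text)
    # Excessive exclamation marks and all-caps words are also common cues.
    score += text.count("!")
    score += len(re.findall(r"\b[A-Z]{3,}\b", headline))
    return score

print(sensationalism_score("SHOCKING secret EXPOSED!!!"))   # scores high
print(sensationalism_score("City council approves budget")) # scores zero
```

A real system would learn such features from labeled data rather than a hand-written list, but the principle, scoring text on linguistic signals, is the same.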

AI can also assist in the fact-checking process by comparing news articles to trustworthy sources. This technology can identify false statements and inconsistencies in news stories and highlight them for further review. As pointed out by Francesca Rossi, a Professor of Computer Science at the University of Padova and a Research Scientist at IBM, “AI can provide a faster and more efficient fact-checking process, which is essential in combatting the spread of fake news.”

Moreover, AI-powered chatbots can engage with users on social media to provide them with accurate information. Chatbots can answer questions and provide links to reliable sources, helping to counteract the spread of fake news.

According to a report published by the Reuters Institute for the Study of Journalism, “Chatbots are a promising tool for delivering accurate information to users and countering the spread of fake news.”

Despite its potential to combat fake news, AI can also contribute to the spread of false information. One way AI can contribute to the spread of fake news is through deepfakes. Deepfakes are synthetic media that use AI to manipulate audio, video, or images to make it appear as if someone said or did something they did not. Deepfakes can be used to spread fake news and manipulate public opinion.

As pointed out by Paul Lewis, a Senior Correspondent at The Guardian, “The potential of deepfakes to manipulate public opinion is a serious concern.” Lewis further adds, “The ability to create convincing fake videos or audio recordings can be used to disseminate false information and create chaos.”

AI-generated text is another way in which AI can contribute to the spread of fake news. AI algorithms can generate convincing news articles or social media posts that are indistinguishable from those written by humans. This technology can be used to create fake news stories, fake social media accounts, and other forms of misinformation.

As noted by Dr. Nello Cristianini, a Professor of Artificial Intelligence at the University of Bristol, “The ability of AI algorithms to generate convincing fake news stories is a growing concern.” Dr. Cristianini further adds, “It is essential that we develop and deploy effective AI-based tools to detect and combat fake news, and educate the public on how to identify and avoid it.”

Furthermore, AI can be used to manipulate search results to promote fake news or to bury accurate information. By manipulating search engine algorithms, bad actors can ensure that their fake news stories appear at the top of search results, while accurate information is buried.


As new technology advances, so does fake news. Misinformation can be found almost anywhere: in political propaganda, conspiracy theories, news websites, and even commonly used search engines.

Long before ChatGPT, Bard, and Bing were created, machine learning algorithms were already being used to detect misinformation. Essentially, these artificial intelligence algorithms perform tasks by predicting output values based on the input data they receive.

One way artificial intelligence systems can be used is by checking how closely different pieces of content correspond to one another, such as a headline and the article it accompanies.

In “How artificial intelligence can detect – and create – fake news” from The Conversation, author Anjana Susarla wrote, “AI systems can evaluate how well a post’s text or a headline compares with the actual content of an article someone is sharing online.” The natural language processing techniques within these systems allow false content to be detected, helping journalists and researchers in need of reliable information.
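A minimal sketch of the headline-versus-article comparison Susarla describes might use simple word overlap (Jaccard similarity) as a crude stand-in for real NLP techniques; the example texts are invented:

```python
def jaccard_similarity(a, b):
    """Fraction of shared words between two texts (0.0 to 1.0)."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

headline = "mayor announces new park funding"
article = "the mayor announces funding for a new city park this year"

# A headline that genuinely summarizes the article shares many of its words;
# a clickbait headline with little relation to the body would score near zero.
print(round(jaccard_similarity(headline, article), 2))
```

Production systems use far richer representations (embeddings, entailment models), but the underlying question is the same: does the headline actually match the content it points to?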

Along with detecting clickbait and false headlines, natural language processing systems are used every day to filter spam emails out of people’s inboxes.

According to Zac Amos, author of “AI and Spam: How Artificial Intelligence Protects Your Inbox,” AI systems can filter spam emails by searching for suspicious keywords, inconsistent grammar or spelling, overuse of emojis or certain characters, and untrustworthy attachments.
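The cues Amos lists can be sketched as a toy rule-based filter. The keyword list, thresholds, and file extensions below are illustrative assumptions, not the rules of any real spam filter:

```python
# Hypothetical spam cues (illustrative only).
SPAM_KEYWORDS = {"free money", "act now", "winner", "claim your prize"}
RISKY_EXTENSIONS = (".exe", ".scr", ".js")

def looks_like_spam(subject, body, attachments):
    """Flag a message if it trips any of the rule-based cues."""
    text = (subject + " " + body).lower()
    # Suspicious keywords.
    if any(kw in text for kw in SPAM_KEYWORDS):
        return True
    # Overuse of certain characters, e.g. "$$$" or "!!!".
    if text.count("$") >= 3 or text.count("!") >= 3:
        return True
    # Untrustworthy attachment types.
    if any(name.lower().endswith(RISKY_EXTENSIONS) for name in attachments):
        return True
    return False

print(looks_like_spam("You are a WINNER", "Claim your prize now", []))    # True
print(looks_like_spam("Team meeting", "Agenda attached.", ["agenda.pdf"]))  # False
```

Modern filters combine such hand-written rules with statistical models trained on millions of labeled messages, but the rule-based layer still catches the most obvious cases cheaply.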

AI is embedded in our everyday lives, picking out false information spread online through social media. According to the Meta AI article “Here’s how we’re using AI to help detect misinformation,” the company uses AI to take down harmful content in order to protect users.

Facebook works to take down false claims and statements by detecting them through AI systems that act as fact-checkers, removing posts that repeat the same inaccurate content.

Clearly, AI has benefits in shielding internet users from false information, but these software systems can also create fake news themselves.

The article, “How AI Can Create And Detect Fake News” by Forbes journalist Indre Raviv defines deepfakes: “doctored or artificially generated videos and photos that can superimpose the physique and face of one person on another to make it seem like they carried out a certain action.” Some utilize deepfakes in order to joke around and be playful, but misusing this technology could create lasting consequences.

In 2018, a video went viral online of former President Barack Obama calling Donald Trump (the president at the time) inappropriate names. The video looked realistic, yet it was not real. In fact, the content was created through “FakeApp,” according to author Kaylee Fagan from Business Insider. The content was believable enough to fool thousands of internet users, sparking controversy.

It is evident that finding misinformation is not easy, especially when it comes to ChatGPT. This artificial intelligence system has been caught falsely quoting individuals, creating made-up sources, and stating unreliable information.

The I-Team, a group of investigative reporters at NBC News, tested the accuracy of the chatbot by asking ChatGPT to write an article about Michael Bloomberg’s endeavors since his time as mayor of New York City. The response attributed several quotes to the former mayor and to commentators; all of them were found to be entirely fabricated by ChatGPT.

In a recent NBC News article, “Fake News? ChatGPT Has a Knack for Making Up Phony Anonymous Sources,” Chris Glorioso discusses this incident and the response of OpenAI: “…a spokesperson for the firm sent a fact sheet that included a list of the AI technology’s limitations, including occasionally providing inaccurate responses, sometimes producing harmful or biased content, and having limited knowledge after 2021.”

The real question lies here: in what ways can we rely on ChatGPT to detect fake news if it is also a source of misinformation?