GenAI and the battle against misinformation

The rise of generative AI poses fresh challenges in the fight against online falsehoods

The rapid advancement of generative AI (GenAI) technology has brought significant benefits to many sectors by helping with content creation and software development, and by making AI tools more accessible. Using GenAI is comparatively easy and, unlike previous AI technologies, does not require users to learn programming languages. This ease of use accelerates its adoption and integration into various industries, enabling businesses to swiftly implement AI-driven solutions and enhance their operational efficiency. 

For example, in the customer service industry, businesses have leveraged GenAI to implement AI-powered chatbots capable of handling customer queries 24/7, improving efficiency and freeing up human resources. Marketing teams can now use GenAI tools to automatically generate personalized email campaigns or social media content, drastically reducing the time and effort needed
for manual content creation. And many major news organizations already use GenAI for basic factual and data-driven content creation, such as sports recaps or election results, while human journalists and editors ensure that the AI-generated output aligns with the publication’s standards and editorial voice.

However, GenAI has also presented major new challenges, particularly in its potential to create and disseminate misinformation both at a high volume and with remarkable velocity. An alarming example is GenAI’s ability to create highly convincing deepfake videos and audio recordings that mimic real people, often political figures, spreading false information or statements they never made. These can go viral on social media platforms within hours, reaching millions of users before fact-checking can occur. Deepfake scams deceive both individuals and businesses. A recent survey by finance software provider Medius reports that 53% of businesses in the US and UK have been targets of a financial scam powered by deepfakes, with 43% falling victim to such attacks. According to a report by Deloitte published this year, GenAI could enable fraud losses to reach $40 billion in the US by 2027.

As businesses, governments and society grapple with these remarkable advancements, it is evident that the rise of AI-generated misinformation poses unique risks that cannot be ignored.

In our recent research, we explored how AI-generated misinformation differs from human-generated misinformation, focusing on user engagement, credibility perceptions, and the implications for businesses. (See ‘Comparing the willingness to share for human-generated vs. AI-generated fake news’ (2024) by Amirsiavosh Bashardoust, Stefan Feuerriegel and Yash Raj Shrestha, available via arXiv.) We analyzed how users interacted with both types of content and found significant differences in how AI-driven content was perceived compared to human-generated material. AI-generated misinformation often exhibited a higher level of complexity and subtlety, which in some cases increased user engagement. That subtlety also made AI-generated content hard to distinguish from human-generated content. 

A better understanding of the impact of AI-generated content on the spread of misinformation is essential to mitigate related risks, particularly in light of the rapidly evolving capabilities of artificial intelligence and its widespread integration across various digital platforms. 

AI-generated and human-generated misinformation

GenAI has accelerated the production of misinformation by making it faster, more scalable, and easier to create. Unlike human-generated misinformation, which requires time and effort to ideate and write, AI can generate misleading content in mere seconds. It can then be swiftly disseminated across multiple social media platforms. Often, the sheer volume of such content overwhelms traditional fact-checking systems, making it increasingly difficult for people to verify the accuracy of the information they see online. 

Our research reveals that AI-generated misinformation is generally perceived as less credible than human-generated misinformation. One likely reason for this differential perception is the lack of emotional nuance and context in AI-generated content. For example, AI-generated text often lacks subtlety in tone and may present information too mechanically or in an overly perfect, uniform style, which can signal to readers that it was not crafted by a human. People are adept at detecting certain cues in language, such as emotional undertones, storytelling elements, or imperfections, which they subconsciously associate with authenticity. 

Human-generated misinformation may also involve personal anecdotes or specific cultural references that make it feel more relatable and thus more believable. In contrast, AI-generated content might rely on generic statements and a more formal tone, making it feel detached. As a result, while AI content might be more grammatically polished, people can sense when a message is ‘too perfect,’ leading them to question its credibility.

Yet despite these credibility concerns, the polished appearance and rapid dissemination of AI content result in it being shared just as frequently, contributing to its viral nature on social media platforms. We conducted an online experiment with nearly 1,000 participants and found that although AI-generated misinformation was perceived as less believable, participants were just as likely to share it on social media as they were human-created misinformation. 

A possible reason is that participants may share misinformation not because they fully believe its content, but rather to align with certain narratives or engage in ongoing conversations, regardless of the information’s credibility. 

This finding underscores the threat posed by AI-driven misinformation, especially when combined with the fast-paced nature of social media platforms. It also raises important questions about user behavior and the credibility of AI-generated content. 

Trust and user behavior

One of the most interesting aspects of the current discourse on misinformation is the role of human behavior in the dissemination of false information. Trust in AI should be considered not only in terms of the technology’s capabilities, but also in terms of how people interact with it. Whether content is generated by AI or humans, users often share it without verifying its authenticity, which perpetuates the spread of misinformation.

The key insight here is that users often do not distinguish between human- and AI-generated content. While AI-generated content may seem less credible, its structure, clarity, and flawless presentation can make it equally appealing to share. This highlights the importance of addressing not only the technological aspect of misinformation, but also the psychological and sociological factors that drive user engagement.

Cognitive biases and social pressures can exacerbate this issue. Cognitive biases, such as confirmation bias, play a significant part in why individuals are more likely to believe and share information that aligns with their pre-existing beliefs, regardless of its veracity. Social media platforms, where people are often influenced by their peer networks, amplify this effect, making it easier for misinformation to spread rapidly. Moreover, the emotional tone of misinformation – whether it incites fear, anger, or urgency – tends to elicit stronger reactions, encouraging users to share without critical evaluation. Addressing these behavioral tendencies is key to curbing the spread of misinformation. Technological solutions alone are not enough.

The implications for business

From a business perspective, the issue of AI-generated misinformation extends beyond political or societal concerns. The ability of AI to produce convincing yet false content presents clear risks for brands, especially if misinformation influences customer behavior, financial markets, or public opinion. Companies must be vigilant in monitoring misinformation that could negatively impact their reputation.

A striking example occurred when a finance worker at a multinational firm in Hong Kong was tricked into paying out $25 million to fraudsters who used deepfake technology to pose as the company’s chief financial officer – and several other colleagues – during a video conference call. As CNN reported, the worker believed he was interacting with real colleagues and so accepted the instruction to authorize a major transaction. The incident highlights the alarming potential of AI to generate highly convincing false content that can lead to severe financial losses and reputational damage. 

Indeed, according to a poll by Deloitte, over half of executives (51.6%) expect an increase in deepfake financial fraud targeting their organizations in the coming year, with 15.1% of companies already experiencing at least one deepfake attack in the past year. As GenAI tools become more accessible, cybersecurity experts warn that deepfake-related scams could worsen, making it critical for companies to bolster their defenses through employee education, policy changes, and advanced detection technologies.

Other high-profile cases, such as a deepfake video of Mark Zuckerberg and false claims involving Tesla and Wayfair, highlight how easily these manipulations can spread misinformation and cause severe reputational risks. PwC has noted that companies are especially vulnerable to deepfakes and disinformation campaigns during critical events like public offerings, mergers, acquisitions, and major organizational announcements, which provide opportunities for fraudsters to harm a company’s reputation.

Businesses should learn from these incidents by investing in tools and strategies to detect and mitigate the spread of AI-generated misinformation. For instance, enhancing AI literacy within organizations could help employees better identify AI-driven content, while advanced AI-based monitoring tools can flag suspicious content faster than traditional methods. Moreover, organizations should advocate for stricter regulations on the use of GenAI, especially on public-facing platforms. 

By taking a proactive approach, businesses can protect themselves from potential fallout related to AI-generated misinformation, while ensuring they remain trusted sources in an increasingly AI-driven content landscape.

Wider lessons for AI applications

The challenges posed by GenAI in the realm of misinformation have broader implications for other applications of AI. Whether in marketing, customer service, or content creation, businesses must consider how AI-generated content will be perceived and trusted by their audiences. 

The trust issue goes beyond misinformation and affects all types of AI interactions with consumers. As AI becomes more embedded in everyday business operations, ensuring transparency and accountability in AI-driven processes will be critical for building and maintaining trust. Companies must also be prepared to address ethical concerns related to AI use, as well as regulatory scrutiny, to safeguard their reputations and foster consumer confidence.

AI, by its nature, lacks human emotions and personal connections, which can sometimes make it appear more detached and less credible in certain contexts. This has important ramifications for industries such as healthcare or hospitality, where trust in information and recommendations is paramount. As AI becomes more integrated into various sectors, businesses must navigate the delicate balance between leveraging AI’s efficiency and maintaining human-like trust. To bridge this gap, companies may need to implement AI-human hybrid models, where AI handles repetitive tasks, while human staff manage more complex interactions requiring empathy and nuanced judgment, ultimately enhancing consumer satisfaction and trust.

A multifaceted challenge

Misinformation is a multifaceted issue that goes beyond just GenAI. Understanding the psychological and sociological elements is crucial, because people engage with misinformation due to a variety of cognitive biases, emotional triggers, and social influences. Factors such as confirmation bias, the desire for validation, and the emotional appeal of certain narratives can make individuals more likely to believe and share false information. Addressing misinformation requires not only technological solutions, such as improved fact-checking algorithms, but also a focus on the human element.

As GenAI becomes both more widespread and powerful, media literacy, training and technological tools will be critically important for helping users navigate the growing landscape of information, particularly on social media and other digital platforms. AI-driven misinformation poses unique threats that could lead to reputational damage, legal challenges, or financial loss if not properly addressed. Businesses must remain vigilant. 

Yash Raj Shrestha is assistant professor at the Department of Information Systems, Faculty of Business and Economics (HEC) at the University of Lausanne