
Synthetic Truth Decay: Can Society Survive the Era of AI-Enabled Disinformation?

Ivan Ruzic, Ph.D.

The advent of advanced artificial intelligence (AI) models capable of generating human-like text has given rise to a new era of technology-enabled misinformation. As these AI systems become more sophisticated at mimicking credible information sources, they threaten to undermine truth and trust in mass media and reporting.


Scale and Sophistication

Over the past year, monitoring groups have identified over 600 unreliable AI-generated news and information sites on the internet—a tenfold increase since 2022. These sites demonstrate little human oversight, publishing hundreds of fictitious articles on topics ranging from politics to business.


AI text-to-speech has also enabled misinformation videos that have amassed hundreds of millions of views on platforms like TikTok. These outputs exhibit a disturbing capacity to exploit human biases.


State actors have been particularly adept at this. Well-known examples include:

  • Venezuelan state media outlets have used AI-generated videos of fake news anchors to spread pro-government messages. The videos were created with Synthesia, a company whose technology can produce highly realistic custom deepfakes.

  • In the US, AI-manipulated videos and images of political leaders like President Biden have circulated on social media, depicting them making controversial statements.

  • Microsoft researchers have identified a network of Chinese-controlled social media accounts that used AI to create content aimed at influencing US voters in the lead-up to the 2024 election.

In addition, there are suspected cases of Chinese state-aligned influence operations using AI-generated avatars in videos to spread domestic political propaganda and shape online conversations within China.


And China is not the only state actor taking advantage of this technology. The ability of AI systems to generate highly realistic fake content is being exploited by several state actors to manipulate public opinion, interfere in foreign elections, censor dissent, and amplify propaganda campaigns.


Impacts: Truth Decay and Information Warfare

The proliferation of AI-generated misinformation is already having profound societal impacts. Manipulated narratives can shape news cycles, influence elections, exacerbate polarization, erode public trust, and enable “disinformation wars” both between states and within them.


This year, roughly one-third of the global population will vote, with the US election being pivotal for global security and the economy. Some 160 million Americans will decide an outcome that affects 8 billion people, with the result likely hinging on a few swing states.


Recently, AI-generated images and videos depicting controversial or inflammatory events spread quickly online and helped exacerbate existing political polarization. For example, AI-generated images appeared to show former President Trump violently resisting arrest, fueling further division between Trump supporters and opponents.


Researchers warn such tactics could "take campaign dirty tricks to a new low" by targeting specific groups. If left unchecked, this phenomenon threatens to undermine the integrity of democratic deliberation and policymaking.


America's political stability, fair elections, peaceful power transfers, and institutional checks and balances are at stake.


The Need for Multilateral Action to Promote Resilience

Tackling this challenge requires a coordinated, multistakeholder approach engaging governments, technology companies, academia, media, and civil society groups.


Researchers are rapidly developing AI systems to detect machine-generated text, but keeping pace with advances in generative models remains an immense technical challenge. Ultimately, some level of regulation is likely to be required, and policymakers will have to address complex questions around content moderation and free expression.


Fact-checking processes must evolve to account for synthetic media. Schools and other organizations should teach employees and students to identify manipulation techniques and biased narratives, equipping current and future generations with the skills to navigate an increasingly complex information environment.


Ultimately, inoculating society against AI-enabled deception will require a societal commitment to critical thinking and continuous learning, something for which the current education system is ill suited.


Combating Misinformation

There are practical short- to medium-term steps that both organizations and individuals can take to detect and help combat misinformation.


Invest in Detection Technologies

Investing in tools and technologies to detect and flag false information is a critical initial step in combating misinformation. AI algorithms and machine learning can be trained to identify patterns typical of fake news, such as sensationalist language or unreliable sources.
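To make the pattern-based idea concrete, here is a purely illustrative heuristic in Python. It is a toy sketch under our own assumptions, not any real detection product: it merely counts surface-level sensationalism markers, whereas real systems rely on trained machine-learning models rather than hand-written rules.

```python
import re

# Hypothetical heuristic scorer -- a toy sketch, NOT a production detector.
# Real tools train ML models on large labeled corpora; this only counts
# surface markers often associated with sensationalist fake news.

CLICKBAIT_PHRASES = [
    "you won't believe",
    "shocking",
    "they don't want you to know",
    "must see",
]

def sensationalism_score(text: str) -> int:
    score = 0
    score += text.count("!")                              # exclamation marks
    score += len(re.findall(r"\b[A-Z]{3,}\b", text))      # ALL-CAPS words
    lowered = text.lower()
    score += sum(2 for p in CLICKBAIT_PHRASES if p in lowered)  # clickbait phrases weigh more
    return score

def flag(text: str, threshold: int = 3) -> bool:
    """Flag text for human review when the score crosses a threshold."""
    return sensationalism_score(text) >= threshold
```

In practice, a score above the threshold would simply route the item to a human reviewer for verification, not label it false outright.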


For example, the AI tool FakerFact assesses the purpose and characteristics of online content, helping users discern the nature of the information they encounter. Another AI tool, developed by researchers at the University of Waterloo, uses deep-learning algorithms to determine whether claims made in posts or stories are supported by other credible stories, helping fact-checkers and social media networks weed out false stories.
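As a rough illustration of that corroboration idea, the sketch below checks whether a claim shares enough vocabulary with any article in a trusted corpus. This is a toy lexical-overlap heuristic under our own assumptions, not the Waterloo team's deep-learning approach, which models meaning rather than word overlap.

```python
import re

# Toy claim-corroboration sketch -- a hypothetical illustration only.
# Real stance-detection systems use trained deep-learning models; this
# simply measures how much of a claim's vocabulary appears in any
# article from a trusted corpus.

def tokens(text: str) -> set:
    """Lowercase word tokens from a piece of text."""
    return set(re.findall(r"[a-z']+", text.lower()))

def corroborated(claim: str, trusted_articles: list, min_overlap: float = 0.5) -> bool:
    """True if some trusted article covers enough of the claim's words."""
    claim_words = tokens(claim)
    if not claim_words:
        return False
    for article in trusted_articles:
        overlap = len(claim_words & tokens(article)) / len(claim_words)
        if overlap >= min_overlap:
            return True
    return False
```

A claim that finds no support in the trusted corpus would be escalated to a human fact-checker rather than automatically rejected.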


At the individual level, there are now numerous browser extensions (including FakerFact) that can also help assess fake news.


In addition, individuals can use dedicated fact-checking sites: Sites like FactCheck.org, PolitiFact, Snopes, and TruthOrFiction.com specialize in verifying claims and debunking misinformation.


Of course, individuals must be technologically literate and willing to use these tools.


Establish Response Guidelines

Organizations should establish internal misinformation response guidelines to effectively manage the spread of false information. This includes creating a crisis plan that outlines the steps to take when misinformation is identified.


For example, the Substance Abuse and Mental Health Services Administration provides solid guidelines for public officials, emphasizing the importance of correcting errors quickly and using social media effectively to address misinformation and rumors.

Establishing clear procedures for responding to misinformation and educating team members about the process is essential.


Promote Media Literacy Education

Media literacy education is also essential to equip individuals with the skills to critically evaluate information sources.


Creative methods to teach media literacy include recognizing fake news, using multiple sources, gauging tone and language, and questioning numbers and statistics. Educators can incorporate these skills into employee training and school curricula and use current events to make the lessons relatable and practical.


Teaching individuals to avoid instantly reacting to headlines and to check the source before sharing information is also important.


Leverage Advertising Policies

Leveraging advertising policies can deter profit-driven misinformation sites.


Brands can withhold advertising from platforms known to spread misinformation, thus cutting off a significant revenue stream for these sites. Social media policies can also limit the reach of posts containing misinformation or label them with additional information, often provided by third-party fact-checkers. This approach can make it financially unviable for fake news manufacturers to operate at scale.


Foster International Cooperation

Ultimately, fostering international cooperation will be necessary to address what has become a global challenge of misinformation. International partnerships can facilitate the sharing of best practices and the development of unified strategies to combat misinformation.


Cooperation between governments, companies, and researchers can lead to more effective detection tools and response strategies. The OECD emphasizes the role of public communication in responding to disinformation and the importance of transparency and trust.


The weaponization of AI for mass deception poses an existential threat to truth, trust, and social cohesion. But an informed, empowered public can develop resilience against this challenge. With vigilance, wisdom, and collective responsibility, we can mitigate the risks and forge a path toward a better-informed society.

