How to Spot Deepfakes in the 2024 Election Cycle
AI-generated images and cloned audio are flooding social media feeds as the 2024 election approaches. Voters are facing a wave of synthetic media designed to mislead or confuse. If you want to protect your vote and avoid falling for digital manipulation, you need the right tools and strategies to spot fakes.
The Rise of AI in the 2024 Campaign
The 2024 election cycle is the first major political season where generative AI is widely accessible to the public. Anyone with an internet connection can create realistic fake media in seconds. In January 2024, New Hampshire voters received a fake robocall featuring an AI-generated voice of President Joe Biden telling them to skip the primary. Before that, viral images created by the AI program Midjourney showed former President Donald Trump being arrested by New York police.
These events show that synthetic media has become an active, daily threat. Bad actors rely on tools like ElevenLabs to clone voices and Midjourney or DALL-E 3 to generate hyper-realistic photos. This flood of content makes it incredibly difficult for voters to trust what they see and hear on platforms like X, Facebook, and TikTok.
Visual Clues: Spotting AI-Generated Images
Even the most advanced AI image generators leave behind small mistakes. You can train your eyes to catch these anatomical and textural errors.
- Look at the hands and teeth. AI often struggles with complex human anatomy. You might see a politician with six fingers, oddly long thumbs, or teeth that blur together without clear separation.
- Check the text and background. Generative AI is notoriously bad at spelling. Protest signs, street signs, or police badges in the background of an image often contain gibberish, backward letters, or misspelled words.
- Examine the lighting and shadows. Shadows might fall in the wrong direction compared to the light source. You might also notice missing reflections in mirrors, windows, or glasses.
- Watch for the plastic effect. AI faces often look too smooth or heavily airbrushed. Skin texture might completely lack pores, wrinkles, or natural blemishes, giving the subject a shiny appearance.
Audio Clues: Identifying Synthetic Voices
Audio deepfakes are arguably more dangerous than images because they are cheaper to make and harder to detect by ear alone. However, synthetic voices still have a few telltale signs.
- Listen for unnatural breathing. Human speakers take natural pauses to breathe. AI voices often speak in long, uninterrupted streams without normal inhales or exhales.
- Notice the emotional tone. While AI can mimic the sound of a specific voice convincingly, it often fails to match the emotion to the words. A politician giving an aggressive, angry speech might sound strangely calm or flat.
- Pay attention to background noise. Fake audio files sometimes feature weird, robotic static. You might also notice sudden, unnatural shifts in room acoustics.
Top Tools to Detect Deepfakes
Relying on your senses is a good start, but technology offers a much stronger defense. Several organizations and software companies have launched specific tools to help voters and journalists spot fake political content.
- TrueMedia.org: Launched specifically for the 2024 election, this free web-based tool allows users to input links from TikTok, X, or YouTube. It scans the media and gives a fast probability score on whether the video, image, or audio is a deepfake.
- Hive Moderation: Hive offers a free Google Chrome extension that scans images and text directly in your web browser. It highlights content that was likely created by AI programs like Midjourney or ChatGPT.
- Reality Defender: This is an enterprise-level tool used by political campaigns and newsrooms. It uses multi-model detection to flag synthetic audio, video, and images. While built for professionals, its findings are frequently cited by fact-checkers debunking viral claims.
- Google SynthID: Google embeds a digital watermark directly into the pixels of AI images generated by its Gemini models. The watermark is invisible to the human eye, but detection tools can read it to confirm the image is synthetic.
Social Media Platforms and Watermarking
Tech companies are trying to keep up with the flood of fake content. Meta announced that any realistic AI-generated images, video, or audio posted on Facebook, Instagram, or Threads regarding the 2024 election must be labeled by the user. If users fail to disclose this, Meta will apply its own “Imagined with AI” labels when its detection systems catch the content.
TikTok outright bans synthetic media that shows fake scenes involving public figures like politicians. YouTube requires creators to check a disclosure box stating they used altered or synthetic media when uploading election-related videos. Despite these rules, platform enforcement is notoriously inconsistent. Voters must remain skeptical and actively verify explosive claims through trusted news outlets like the Associated Press or Reuters.
Frequently Asked Questions
What is the best free tool to check for deepfakes?
TrueMedia.org is currently one of the most accessible and effective free tools for the 2024 election. You can paste a link from a social media post, and the site will analyze the visual and audio elements for AI manipulation.
Is it illegal to create a political deepfake?
Federal law does not currently ban all political deepfakes outright. However, several states, including California, Texas, and Michigan, have passed laws restricting the use of materially deceptive AI media in the days immediately leading up to an election. Additionally, the FCC ruled in February 2024 that using AI-generated voices in robocalls is a violation of the Telephone Consumer Protection Act.
How good are AI audio clones?
They are incredibly realistic. Software from companies like ElevenLabs requires only a few seconds of a person’s real voice to create a highly accurate clone. This is why voters should always verify controversial audio clips with reliable news sources before sharing them online.