
How to spot and protect yourself from deepfake videos

The rise of deepfake technology has transformed our online world, blurring the lines between what’s real and what’s not. While deepfakes have introduced exciting possibilities for entertainment and visual effects, they’ve also opened the door to a troubling trend.



Deepfake videos are digitally manipulated media created using artificial intelligence (AI) to alter or replace a person's face, voice, or movements with those of another. By using machine learning algorithms, deepfake technology can produce highly realistic videos where individuals appear to say or do things they never actually did.


This technology is often used to imitate celebrities, politicians, or influencers, creating convincing but entirely fabricated content. While deepfakes have legitimate applications in entertainment, they are increasingly exploited in scams, misinformation campaigns, and fraudulent advertising, making it important for viewers to be cautious and aware.


Deepfake scam advertisements


These manipulated videos are used by scammers to deceive viewers, often impersonating trusted celebrities or brands, promoting fake products, or offering misleading investment advice.

Deepfake scam ads use advanced artificial intelligence to create highly realistic, often convincing videos of people saying things they never actually said.


For example, a deepfake ad might feature a celebrity enthusiastically promoting a fake investment scheme or endorsing a "miracle" product, without the celebrity’s knowledge or consent. These ads are then distributed on social media, websites, and even legitimate advertising platforms to lure unsuspecting viewers into parting with their money.


Common types of deepfake scam ads


  • Celebrity endorsements: Scammers use deepfake technology to create fake videos of celebrities endorsing products, services, or investment opportunities. You might see a video of a well-known actor or athlete supposedly promoting a “get rich quick” scheme or miracle cure, often with fake testimonials and doctored images for added credibility. David Beckham, Tom Hanks, Tom Cruise, Martin Lewis and Elon Musk are just some examples of well-known people to have been 'faked'.


  • Financial or crypto investment scams: Deepfake scam ads often feature financial experts or tech moguls, seemingly giving financial advice or endorsing cryptocurrency investments. These videos exploit the credibility and influence of trusted figures to make dubious investment opportunities appear legitimate.


  • Fake product demos: Deepfake technology is also used to generate fake product demos, showing "live" results that don’t actually occur. For instance, you might see a deepfake video of someone using a skin-care product that shows instant wrinkle reduction or a weight-loss supplement with unbelievable results.


  • Influencer giveaways and promotions: Influencers are often deepfaked in scam ads, advertising fake giveaways or promotions. These might include promises of expensive gadgets or cash prizes that lure viewers into providing personal information or clicking on suspicious links.




How to spot a deepfake scam ad


Detecting deepfake videos can be challenging because of the high level of sophistication used in their creation. However, there are signs you can look for:


  • Inconsistent lip syncing: In some deepfake videos, the subject’s lips may not perfectly match their words, especially in longer videos. If the audio seems slightly off or if the lips don’t move as expected, it could be a deepfake.


  • Unnatural facial movements and blinking: Deepfake technology has improved in recent years, but it still struggles to replicate natural facial behaviour, such as subtle shifts in expression, blinking, and head movement. Stiff or robotic movements are a red flag.


  • Poor quality or blurred frames: Scammers often create deepfake videos quickly, resulting in lower-quality visuals. If a video appears unusually blurry, with pixelated or distorted frames, particularly around the face, it may be a deepfake.


  • Dubious accents or an unrealistic voice: AI can mimic a person’s voice quite accurately, but listen carefully for tell-tale signs that the voice and the person do not match. For example, one deepfake of David Beckham showed him speaking with an American accent; although he lives in the US, he has not lost his East London accent.


  • Outlandish claims and offers: Be cautious of any video that promises extreme or unbelievable results, like doubling your money overnight or achieving instant weight loss. These are often classic signs of a scam, regardless of whether it’s a deepfake.


  • Verify celebrity endorsements on official channels: If a celebrity or influencer appears in a video endorsing a product, check their official social media pages or website to confirm the endorsement. Celebrities rarely promote products exclusively in obscure ads; they usually share genuine endorsements on their own platforms. The celebrity may also be aware they have been deepfaked and may be warning followers about it via their official channels.



How to protect yourself from deepfake scams


  • Be sceptical of viral ads and videos: Scammers rely on our tendency to share engaging content. Before sharing or acting on any video, especially if it promotes a product, investment, or giveaway, scrutinise the source.


  • Enable two-factor or multi-factor authentication: Many scammers use deepfake ads to harvest personal information. By enabling two-factor or multi-factor authentication on your accounts, you add an extra layer of protection, reducing the risk of unauthorised access.


  • Avoid clicking suspicious links or giving out personal information: Deepfake scams often direct viewers to links where they’re asked for personal or payment information. Avoid entering personal details unless you are certain of the source’s legitimacy.


  • Report suspicious content: If you come across a video that appears to be a deepfake scam, report it to the platform where you found it. Reporting helps platforms detect patterns and improve their defences against such scams.


  • Educate yourself on deepfake technology: The more familiar you are with how deepfakes work, the better prepared you’ll be to spot them. Many online resources, and even detection tools, are available to help identify manipulated media.


What are social media platforms doing about deepfake scams?


Social media and online platforms are aware of the deepfake threat and have taken steps to mitigate it. Platforms like Facebook, X, and YouTube have developed policies specifically targeting deepfake content, and they use AI-powered tools to identify and remove manipulated videos.


However, with the rapid advancement of AI and deepfake technology, it’s challenging for these platforms to keep up. Users still play a crucial role in reporting and flagging suspicious content.


The future of deepfake scams and staying vigilant


As AI technology continues to advance, deepfake scam ads are likely to become more prevalent and sophisticated. This means the responsibility to identify and avoid scams rests increasingly on individuals.


By familiarising yourself with deepfake detection methods, staying cautious online, and only trusting reputable sources, you can protect yourself and others from falling victim to these deceptive ads. Staying vigilant and spreading awareness can help build a more resilient online community where scam artists are less likely to thrive.


In a world where "seeing is believing" is no longer foolproof, staying informed is your best defence.




 

Reporting

Report all fraud and cybercrime to Action Fraud by calling 0300 123 2040 or online. Forward suspicious emails to report@phishing.gov.uk. Report SMS scams by forwarding the original message to 7726 (which spells SPAM on the keypad).

 



The contents of blog posts on this website are provided for general information only and are not intended to replace specific professional advice relevant to your situation. The intention of East Midlands Cyber Resilience Centre (EMCRC) is to encourage cyber resilience by raising issues and disseminating information on the experiences and initiatives of others. Articles on the website cannot by their nature be comprehensive and may not reflect most recent legislation, practice, or application to your circumstances. EMCRC provides affordable services and Trusted Partners if you need specific support. For specific questions please contact us by email.

 

EMCRC does not accept any responsibility for any loss which may arise from reliance on information or materials published on this blog. EMCRC is not responsible for the content of external internet sites that link to this site or which are linked from it.
