Deepfake technology endangers us all (2024)

Comment

23 May 2024

It's not just politicians and global celebrities who suffer the consequences of AI-generated images: they can affect anyone with a social media presence.

By Sarah Manavis


The past year has been a wake-up call about the prevalence and sophistication of deepfakes. Be it the fake pornography created using Taylor Swift’s likeness that spread across social media, or deepfake audio of Sadiq Khan speaking about the war in Gaza, AI-generated content is becoming more convincing – and dangerous. In what looks to be an election year in both the US and the UK, the threat such images pose to our democracy feels more tangible than ever (deepfakes of Joe Biden and Donald Trump are everywhere – both Rishi Sunak and Keir Starmer have already been targeted).

Politicians and global celebrities are the people we spend the most time saying are at risk of deepfakes. But another demographic is being targeted more than any other: social media influencers, particularly women. When the social media agency Twicsy conducted a survey of more than 22,000 influencer accounts on Twitch, TikTok, Instagram, YouTube and Twitter/X in March, it found that 84 per cent had been the victims of deepfake pornography at least once (89 per cent of the deepfakes found were of female influencers). These weren’t small accounts – each had a five-figure follower count. And in the space of just one month, some of these deepfakes had received more than 100 million views.

Influencers make good subjects for deepfake technology. They upload thousands of images and videos of themselves in short spaces of time, often from multiple angles (you only need one high-quality image to create a convincing deepfake). They speak in similar cadences to one another to fit algorithmic trends, meaning their voices can be straightforwardly mimicked. They might use filters that leave them looking smoother and more cyborg-esque than any person you’d encounter in real life. And there is a litany of apps available for anyone to download to create deepfakes – such as HeyGen and ElevenLabs – which only require users to upload a small number of images in order to make something that looks very real.

Influencers also make easier targets than your average celebrity. While there is also a wealth of images of, say, pop stars and athletes, these public figures typically have the money and resources to be litigious about deepfakes. Comparatively, influencers have limited means to do anything about the videos and images created using their likeness. Platforms, too, are far more likely to respond to celebrity deepfakes than to those of less famous individuals. When the pornographic deepfakes of Swift went viral on Twitter earlier this year, the site blocked all searches of her name, stemming the spread almost immediately. It’s difficult to imagine the reaction would have been the same for an influencer with only a few thousand followers.

Deepfake pornography is likely the most concerning problem for famous women. But this technology can be used for many other nefarious purposes beyond the creation of humiliating sexual content. Influencers’ likenesses are now increasingly used to create fake advertisements to sell dodgy products – such as erectile dysfunction supplements – and to push propagandist disinformation, such as the deepfake of a Ukrainian influencer praising Russia.

Even beyond using deepfakes of already-popular influencers in ads they never agreed to, we are also starting to see how – via a scrapbook of images of multiple media figures – tech entrepreneurs can build entirely new fake influencers, created wholly via generative AI. These accounts are full of hyper-realistic, computer-generated images in which the fake influencers talk about their fake hobbies and share their fake personality quirks, all while securing very real and lucrative brand deals. Some have gained hundreds of thousands of followers and generated thousands for their male creators every month. Entrepreneurs can also fabricate deepfake influencers who embody sexist stereotypes of the “perfect woman” to appeal directly to male audiences – and who could become more popular than the real, human influencers whose likenesses were used to make them.

Of course, this impacts many more people than just social influencers, affecting the livelihoods of anyone who does creative work – be it those who make music, art, act or write. Just this week, the actor Scarlett Johansson claimed OpenAI, the tech organisation that builds ChatGPT, asked to use her voice for a chatbot and, after she declined, mimicked her voice anyway. OpenAI pulled the voice but claimed it was not an imitation of Johansson.

It’s easy to vilify influencers as shallow and attention-seeking, and as promoters of over-consumption and narrow beauty standards. But this trend shows us the danger deepfakes (and other forms of technology that can be used misogynistically) present to all of us – especially women. Anyone who has shared any image of themselves online is now at risk of having a deepfake made of them by anyone with malicious intent and internet access. If there is any digital representation of you online – an image, something as common as a Facebook profile picture or a professional headshot for LinkedIn, a video, even your voice or your written work – then you are susceptible. This reality should help us see why deepfake technology needs immediate legislation: holistic, wide-reaching laws that address the risks deepfakes pose to all of us.





FAQs

What are the risks of deepfake technology?

Not only has this technology created confusion, scepticism and the spread of misinformation, but deepfakes also pose a threat to privacy and security. With the ability to convincingly impersonate anyone, cybercriminals can orchestrate phishing scams or identity-theft operations with alarming precision.

What is the abuse of deepfake technology?

Deepfake fraud is a prime example: deepfakes allow criminals to manipulate pictures, audio tracks and even videos to present themselves convincingly as other people.

Should we be worried about deepfakes?

Deepfakes are creating havoc across the globe: spreading fake news and pornography, being used to steal identities, exploiting celebrities, scamming ordinary people and even influencing elections.

Is deepfake a cyber threat?

Yes. Cybercriminals can use deepfake technology to create scams, false claims and hoaxes that undermine and destabilise organisations.

How do deepfakes affect us?

Non-consensual deepfake videos exploit and manipulate a person’s likeness for explicit or damaging content. This can cause severe harm, including loss of employment opportunities, public humiliation and damage to personal relationships.

Is making a deepfake a crime?

It depends. The punishment for posting a deepfake varies by jurisdiction and by the nature of the deepfake, and can range from monetary fines to imprisonment, especially in cases of revenge pornography or threats to national security.

How can you protect yourself from deepfakes?

Use trustworthy antivirus and anti-malware software that incorporates protection against phishing attacks and suspicious activity. Some software now also offers protection against identity theft and can alert you to potential deepfake scams.

Why is AI hurting society?

AI systems trained on skewed data can perpetuate gender, racial or socioeconomic biases, leading to discriminatory outcomes in areas such as hiring, lending and criminal justice. Addressing these biases is crucial not only to ensure fairness but also to preserve societal harmony and inclusivity.

Can deepfakes be banned?

Existing rules in the UK and some US states already ban the creation and/or dissemination of certain deepfakes. A proposed US Federal Trade Commission rule would make it illegal for AI platforms to create content that impersonates people, and would allow the agency to force scammers to return the money they made from such scams.

Are deepfakes morally wrong?

If you don't agree to your image being used or manipulated, then it's wrong for someone to do so. It's a line that can be (and has been) easily turned into law — if you deepfake someone without their consent, then you risk a criminal charge. The illegality would certainly limit (if not stop) its use.

What is an example of a deepfake?

One benign example is a video that appears to show soccer star David Beckham fluently speaking nine different languages, when he actually only speaks one. Another fake shows Richard Nixon giving the speech he prepared in the event that the Apollo 11 mission failed and the astronauts didn't survive.

Can you sue someone for deepfakes?

Defamation. Deepfakes may falsely portray someone in a way that harms their reputation. People harmed by deepfakes of themselves may be able to sue the creator of the deepfake for defamation.

What is the abuse of deepfakes?

Abusive intimate partners may use deepfakes to humiliate a victim in front of friends or family. Explicit deepfakes have also been known to endanger a person’s livelihood if they are shared widely. Deepfakes and other forms of image-based abuse are not always about revenge, though.

How are deepfakes misused?

Deepfakes can be used to damage reputations and put someone's safety at risk, such as content that creates fake evidence to accuse someone of a crime. They're being used to mimic voices of family members or business executives to extort money and personal information.

What are the disadvantages of deepfakes?

Deepfakes can be used to spread false information or fake news, potentially harming celebrities, leaders and cultural values. The continuous advancement of deepfake technology also increases security threats and crises of trust, necessitating preventive measures.

What are the challenges of deepfakes?

Regulatory challenges:
  • Regulating deepfakes in electoral campaigns is difficult because of rapid technological advancements and the global nature of online platforms.
  • Governments and election authorities struggle to keep pace with evolving AI techniques and may lack expertise in regulating AI-driven electoral activities.

What are the privacy concerns of deepfakes?

Deepfakes represent a significant threat to personal privacy because they can be manipulated for personal and financial gain. With the ability to convincingly alter digital media to depict individuals saying or doing things they never did, malicious actors can exploit deepfakes for a wide range of purposes.

Is deepfake technology safe?

No. Deepfakes are being used extensively for identity theft, with scammers impersonating individuals to obtain sensitive information or to carry out fraudulent transactions; experts suggest following basic online safety practices.
