
Opinion

How Meta enables deepfake financial scams, and why the EU AI Act isn't fixing it

Deepfakes are fake digital media generated using Artificial Intelligence (AI), including audio, video, and images that look real. Malicious deepfakes cause serious harm to people through non-consensual pornography, disinformation, and financial fraud.

In a recent example, a French woman was reportedly scammed out of $800,000 [€685,000] by someone posing as Brad Pitt. Using multiple deepfaked photos of the actor in a hospital bed, the scammer claimed that "Pitt" could not pay his medical bills because his money was tied up in a divorce. This is just one of the many ways fraudsters weaponise deepfake technology to exploit trust, emotion, and the public's fascination with celebrity.

Malicious deepfakes don't always come from the dark web

Deepfakes range from entirely synthetic videos and cloned voices to edited footage with fake vocal overlays; AI voice cloning makes it easy to impersonate almost anyone.

Malicious deepfakes are not only created with dark web tools but also with technology from legitimate providers.

Stability AI’s image generator, Stable Diffusion, has been used to produce child sexual abuse material, leading to criminal prosecution.

A 2024 report by the UN Office on Drugs and Crime found that financial fraudsters had combined Google's Face Mesh, an open-source system that maps facial expressions in real time, with other tools to generate deepfake scam videos.

Open-source software is particularly attractive to scammers as it is freely accessible and can produce harder-to-trace deepfakes.

Meta as enabler of deepfake fraud

Digital platforms essentially act as intermediaries between scammers and the public. Meta plays a major role in this, with its platforms — Instagram, Facebook, and WhatsApp — commonly used to conduct deepfake scams.

Deepfake ads reported on one Meta platform often reappear on the same or another platform, with scammers typically continuing to promote fraudulent schemes through WhatsApp conversations. Internal fraud data from the UK bank TSB for 2021–2022 revealed that 80 percent of fraud cases involved Meta-owned platforms.

Recently, a deepfake video of Dutch prime minister Dick Schoof appeared as a sponsored Facebook ad promoting a fake investment scheme. The video, which falsely claimed the scheme was backed by the Dutch Central Bank, reached 250,000 views.

Despite being reported to Meta by the Dutch national consumers' association, and despite Meta's promise to act, the ad remained active.

Similarly, a deepfake of former Fidelity International fund manager Anthony Bolton targeted retail investors, inviting them to join a WhatsApp group for stock tips.

Reported to have appeared on Instagram in May 2025, the video continued circulating across Meta’s platforms, with a Facebook post featuring it on 21 June 2025 garnering over 500,000 views.

The widespread misuse of Meta’s platforms for fraud led Danish TV presenters, whose images and voices were used in thousands of fake Facebook ads, to report the company to police in 2024.

Meta’s automated and human moderation systems are dysfunctional, with the company prioritising ad revenue over public safety. According to a 2025 Wall Street Journal investigation, Facebook and Instagram staff were instructed to tolerate up to 32 fraud “strikes” before acting, with enforcement against scams deliberately deprioritised to avoid losing ad revenue.

This is hardly surprising given that in 2024, $160.6bn of Meta’s $164.5bn revenue came from advertising.

Existing EU rules aren't making a significant difference

The EU AI Act requires AI systems to be designed so that people are informed when they are interacting with AI-generated content, unless this is obvious from the context.

Providers must also ensure such content is detectable through machine-readable markings and clearly labelled as AI-generated or manipulated.

Spain’s implementation of the act imposes fines of up to $38m or seven percent of a company’s turnover for violating these labelling rules.

However, non-compliant AI tools allow fraudsters to bypass the rules, rendering fines meaningless and making platform-centred regulation and enforcement the most effective way to tackle deepfake fraud.

The EU’s Digital Services Act (DSA) requires platforms to implement mechanisms for reporting and swiftly removing illegal content.

As a designated Very Large Online Platform (VLOP), Facebook has additional transparency obligations, including maintaining a searchable public repository of advertisements.

However, Meta’s DSA compliance has been criticised, among other things, for inadequate moderation staffing and a reporting system that is difficult to access.

Some EU member states, including France, have criminalised sharing non-consensual AI-generated content depicting a person’s words or image, unless it is clearly or explicitly marked as algorithmically generated.

The offence in France carries a maximum penalty of two years in prison and a €45,000 fine when committed via an online platform. But criminalisation alone does not prevent harms, and scammers are rarely traced by authorities.

In the UK, where comparable legislation exists in the form of the Online Safety Act, a recent report by parliament's Science, Innovation and Technology Committee concluded that platforms are either unable or unwilling to address harmful content, including fraudulent ads.

Criminalising the hosting of illegal content

Mounting evidence suggests that Meta actively monetises financial fraud by deliberately allowing scam ads to run on its platforms, amounting to effective complicity in crime.

There are already calls to hold Meta criminally liable for aiding and abetting organised crime through algorithmic profiling and the targeting of susceptible users for phishing and scams.

The only way to compel platforms like Meta to act responsibly is to criminalise and prosecute the intentional hosting of illegal content, including fraudulent deepfake ads.

