Introduction
Deepfakes are videos, photos, or audio recordings that have been altered using artificial intelligence, often convincingly enough that many people struggle to tell real media from fake. While deepfakes can be used for legitimate purposes, they are often misused by bad actors for spreading false information, blackmail, harassment, and financial scams. Many individuals and companies have already lost large amounts of money to deepfake scams.
This guide provides Deepfake Statistics and examples to show how serious the issues with deepfake technology can be. It also offers tips on how to spot deepfakes and keep your personal information safe.
Editor’s Choice
- Deepfake Statistics stated that around 70% of people are unsure if they can tell the difference between a real voice and a fake one.
- 40% of people in the same study said they would respond to a voicemail from their spouse asking for help, even if it seemed unusual.
- Criminals can use a short clip of someone’s voice to trick their family or friends into sending money, pretending they are in an emergency.
- One in 10 people say they have received a fake voice message, and 77% of them lost money in scams.
- Cybercriminals have many ways to get people’s voices.
- 53% of people share their voices online or in voice messages at least once a week. Sites like YouTube, social media, and podcasts are common places for audio to be shared.
- Google Trends shows that searches for “free voice cloning software” increased by 120% between July 2023 and 2024. These free apps don’t require much knowledge to create fake voices.
- Sometimes, only 3 seconds of audio is needed to make a cloned voice that is 85% similar to the original.
- DeepFaceLab, an open-source tool, is used to create over 95% of deepfake videos.
- Fake news, lies, and rumors spread much faster than the truth, which helps explain why deepfakes work so well: they stir up emotions and present new information. For example, the top 1% of rumors on Twitter (now X) reached 1,000 to 100,000 people, while real news rarely reached more than 1,000.
- The cost of a high-quality deepfake video ranges from $300 to $20,000 per minute, depending on how complex the video is and how famous the people in it are.
- Deepfake fraud grew by over 10 times from 2022 to 2023, according to Sumsub’s data. About 88% of these fraud cases happened in the crypto industry, and 8% were in fintech.
- CEO fraud targets at least 400 companies every day, hitting hardest at businesses that are unaware of the latest scams, phishing techniques, and deepfakes being used to steal their money.
- More than 10% of companies have experienced attempts or successful deepfake fraud, with damages from these attacks reaching up to 10% of their yearly profits.
- Deepfake Statistics stated that about 1 in 4 company leaders don’t know much about deepfake technology, which might explain why 31% of executives believe deepfakes don’t increase their company’s risk of fraud.
- 80% of companies don’t have a plan in place to deal with deepfake attacks.
- Over 50% of leaders admit their employees haven’t been trained to spot or handle deepfake attacks.
- Deepfake Statistics stated that only 5% of company leaders say they have solid protection against deepfake attacks, covering staff, communication, and company procedures.
What Is a Deepfake?
Deepfakes are images, videos, or audio that are altered or created using artificial intelligence. These can feature real people or fictional characters. They are a type of fake media and a modern way of tricking people.
(Source: statista.com)
Creating fake content isn’t new, but deepfakes are different because they use advanced tools like machine learning, facial recognition, and neural networks such as variational autoencoders (VAEs) and generative adversarial networks (GANs). Because of this, experts are working on ways to spot altered content.
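To make the GAN idea mentioned above concrete, here is a minimal sketch of a generator and a discriminator trained against each other. It uses toy one-dimensional data in place of face images, and every detail (network sizes, learning rates, the toy distribution) is an illustrative assumption, not the architecture of any real deepfake tool.

```python
# Minimal GAN sketch: a generator learns to mimic a "real" data distribution
# while a discriminator learns to tell real samples from generated ones.
# Toy 1-D data stands in for images; all hyperparameters are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" samples ~ N(3, 0.5)
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Train the discriminator: label real samples 1 and generated samples 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator: push the discriminator to label fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, generated samples cluster near the "real" mean of 3.0.
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```

The same adversarial training loop, scaled up to convolutional networks and face datasets, is the basic mechanism behind many GAN-based deepfake generators.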
Deepfakes have received a lot of attention for being used in harmful ways, like making child abuse material, fake celebrity videos, revenge porn, spreading false news, hoaxes, bullying, and financial scams.
Experts worry that deepfakes could be used to spread lies, promote hate, and mess with elections. The tech industry and governments are starting to come up with ways to detect and control their use.
From movies to video games, deepfake technology is getting more realistic and easier for the public to use. This is causing problems in the entertainment and media industries.
Deepfake Examples
Some examples of deepfakes include:
- Altering speeches or pictures of politicians to sway public opinion. For instance, deepfakes of Joe Biden and Slovak politician Michal Simecka have been used to mess with elections in their countries.
- Changing actors’ faces in movies. A well-known example is Fast & Furious 7, where CGI and deepfake-style technology recreated Paul Walker, who passed away in 2013 during filming. Other productions, such as Avatar: The Way of Water and The Mandalorian season 2 finale, have also used this technology to de-age or digitally recreate actors.
- Fake adult content created with AI. Deepfake porn of celebrities like Taylor Swift and Marvel actress Xochitl Gomez has been shared on the social media platform X. Still, deepfake porn doesn’t just hurt famous people—anyone who shares photos online could be affected.
Statistics on Company Losses Due to Deepfakes
- On average, businesses in different industries have lost almost $450,000 because of deepfakes.
- Deepfake Statistics stated that the largest group of businesses (28%) reported losing between $250,000 and $500,000.
- When comparing losses by country, Singapore and Germany show the biggest differences in how much was lost.
(Source: regulaforensics.com)
- These losses could include payments made to fraud victims who are customers of the businesses affected.
- Government agencies are now considering changes to the law that would make banks, telecom companies, and social media platforms responsible for protecting people from scams.
- In Singapore, a Shared Responsibility Framework (SRF) was proposed in 2023 to deal with online phishing scams, which can now be made worse by deepfakes.
- The SRF suggests that local businesses might have to cover the full loss caused by scams. For example, if a telecom company doesn’t stop fake SMS messages pretending to be from a bank, the company would have to compensate customers who mistakenly gave their bank account details to scammers.
- Interestingly, 45% of telecom companies reported losses between $250,000 and $500,000 from deepfake fraud schemes.
- When it comes to industry financial losses, there are some clear trends.
- Deepfake Statistics stated that most people in banking believe deepfake losses could go over $1,000,000, while healthcare companies usually face losses of up to $250,000.
- This may be because financial institutions are more focused on protecting their reputation and preventing the loss of customers, which they see as major business risks.
Industry | Reported Loss Range |
Telecoms (45%) | $250,000 to $499,999 |
Technology (29%) | $250,000 to $499,999 |
Law Enforcement (36%) | $250,000 to $499,999 |
Healthcare (24%) | Less than $99,999 to $249,999 |
Financial Services (23%) | More than $1,000,000 |
Crypto (37%) | $500,000 to $999,999 |
Aviation (43%) | $100,000 to $249,999 |
- Small businesses lose less money from fraud than larger ones. For example, 34% of people from companies with up to 100 employees reported losses of $99,000 or less. On the other hand, only 11% of companies with over 10,000 employees said the same, while 26% of big companies reported losses of over $1,000,000.
- When it comes to the costs deepfakes bring to businesses, the three biggest expenses are damage to reputation, disruption to business operations, and fines or penalties.
(Reference: regulaforensics.com)
- Different industries have different main concerns based on their operations. For example, reputational damage is the biggest worry for Telecom (51%) and Financial Services (44%), as both are very competitive fields.
- In Banking, a highly regulated industry, 38% of companies also worry about penalties and fines.
- Deepfake Statistics stated that half of IT companies that rely on stable systems see business disruptions as their biggest risk.
- In Aviation, where customer relationships are very important, losing current or potential customers (38%) is the top concern.
Deepfake Frauds and Threats Statistics
Even though deepfakes are a new technology, they have already caused a lot of harm. Here’s a quick overview of how powerful this technology can be:
(Source: regulaforensics.com)
- Deepfakes and other AI-powered scams are some of the most common types of identity theft.
- Email is the main way deepfake phishing attacks happen.
- In 2024, around 26% of people encountered a deepfake scam online, with 9% falling for them.
- Deepfake Statistics stated that 80% of Telegram channels have deepfake content.
- As deepfakes and AI technology grow, facial recognition is becoming a key tool for confirming identities.
- Unfortunately, 77% of people who were tricked by deepfake scams lost money.
- A third of deepfake victims lost over $1,000, while 7% lost up to $15,000 to fraudsters.
Deepfake Detection Statistics
Even though technology keeps improving, spotting deepfakes is still very difficult. As AI tools get better, deepfakes also continue to evolve, making it harder to tell what’s real and what’s fake. This raises concerns about how effective current detection methods are and if they can keep up with fast tech changes.
(Source: sphericalinsights.com)
- Deepfake Statistics stated that almost 57% of people can tell deepfake videos apart from real ones, while 43% cannot.
- Unfortunately, people only detect voice or speech deepfakes 73% of the time.
- The human brain can spot deepfakes on its own 54% of the time.
- People make mistakes often. In one test, 69% of real faces were wrongly thought to be fake.
- Another study tested 280 people and found that they could identify deepfakes correctly 62% of the time, with accuracy ranging from 30% to 85%.
- Deepfake Statistics stated that training only improved deepfake detection by 3.84% on average.
- Altered texts are hard to catch, with only a 57% detection rate. However, deepfake audio (74%) and video (82%) are easier to identify.
- Nearly half of the people tested (48.2%) couldn’t tell if a photo was real or deepfaked—just slightly better than random guessing.
- Surprisingly, people thought deepfaked faces were 7.7% more trustworthy than real ones.
- Deepfake Statistics stated that only 27% of people can tell if someone on the phone is real or a deepfake.
- In 2023, the development of AI-powered deepfake tools grew by 60%.
- The deepfake detection market was worth $5.5 billion in 2023 and could grow to $15.7 billion by 2026, with a 42% annual growth rate.
Deepfake Perception Worldwide Statistics
(Source: contentdetector.ai)
Region | Awareness of Deepfakes | Ability to Distinguish Deepfakes |
Germany | – | 57% cannot tell |
Spain | – | 75% do not know |
UK | 32% | – |
Mexico | 40% | 82% believe they can spot |
Global | 29% | 57% believe they can spot |
Generative AI Deepfake Statistics
- Whether we like it or not, deepfake content is here to stay. Deepfakes are images, sounds, and videos made using AI technology.
- Deepfake Statistics stated that the share of identity fraud cases involving deepfakes in the U.S. rose sharply in the first quarter of 2023, jumping from 0.2% to 2.6%.
- In Canada, the share went from 0.1% to 4.6%. While these numbers may seem small, they are likely to grow as the technology becomes more common and widely used.
(Reference: statista.com)
- In a survey from August 2023, 62% of adult women in the U.S. said they were concerned about the spread of AI-created video and audio deepfakes. Almost 60% of men also shared this worry.
- On the other hand, only 1% of women and 3% of men in the U.S. said they weren’t worried at all.
- The global market for AI-created deepfakes is expected to grow from $79.1 million in 2024 to $1.39 billion, at a huge annual growth rate of 37.6% (see the quick compound-growth check after this list).
- The number of deepfake videos went up by 550% from 2019 to 2024, reaching a total of 95,820 videos.
- In North America, the detection of deepfakes increased by 1,740%, while Asia Pacific and Europe saw growth of 1,530% and 780%, respectively.
(Source: artsmart.ai)
- In 2024, 500,000 voice and video deepfakes were shared on social media, with many of them targeting global politics.
- Deepfake Statistics stated that “Face swaps” grew by 704% in 2024.
- People can only identify deepfakes with 57% accuracy, much lower than the 84% accuracy of top AI detection tools.
- The financial losses from deepfakes are rising quickly, from $12.3 billion in 2024 to $40 billion by 2027.
(Source: grandviewresearch.com)
- This 32% yearly growth shows how quickly deepfake technology is being adopted for scams and fraud, with cybercriminals using AI to trick businesses and individuals and cause major financial harm. It underlines the urgent need for better fraud protection.
- In 2024, 26% of people came across a deepfake scam online, and 9% fell for it.
- The fact that over a quarter of people encountered deepfake scams in 2024 highlights how common this issue has become.
- Deepfake Statistics stated that with 9% of people actually tricked, deepfake scams are now a serious problem, underscoring the need for more awareness and better tools to detect and prevent these attacks.
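As a quick sanity check on the market figure quoted above, the snippet below compounds $79.1 million at 37.6% per year. The nine-year horizon is an illustrative assumption (the source figures above do not state a target year); the point is simply to show how fast compound growth reaches the billion-dollar range.

```python
# Back-of-the-envelope compound-growth check for the deepfake market figure
# quoted above: $79.1 million growing at 37.6% per year. The nine-year
# horizon is an illustrative assumption, not a figure from the cited source.
start_millions = 79.1
annual_growth = 0.376

value = start_millions
for year in range(1, 10):
    value *= 1 + annual_growth
    print(f"year {year}: ${value:,.0f} million")

# After roughly nine years at 37.6% annual growth, the figure passes $1.39 billion.
```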
Technological Advancements in Deepfakes Statistics
(Reference: contentdetector.ai)
Category | % |
User-Friendly Deepfake Tools | 42% |
Deepfake Creation Model Increase | 84% |
Deepfake Videos Made with DeepFaceLab | 95% |
Deepfake Detection Accuracy | 99% |
- Deepfake Statistics stated that more than 95% of all deepfake videos are made using DeepFaceLab.
- According to Skynet Today, some deepfake detection tools can detect deepfakes with over 99% accuracy.
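For context on what such detection tools do, here is a minimal sketch of how deepfake detectors are commonly framed: a binary "real vs. fake" image classifier built on a standard CNN backbone. This is not the method behind the 99% figure above; random tensors stand in for a labelled dataset, and the backbone choice and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a deepfake detector framed as binary classification.
# Random tensors stand in for labelled real/manipulated frames; in practice
# the backbone would start from pretrained weights and be trained on a large,
# curated deepfake dataset.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)            # pretrained weights would normally be loaded here
model.fc = nn.Linear(model.fc.in_features, 2)    # two classes: 0 = real, 1 = fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 8 RGB frames at 224x224 with random real/fake labels.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

# At inference time, the softmax over the two logits gives a "probability of fake".
model.eval()
with torch.no_grad():
    prob_fake = torch.softmax(model(frames), dim=1)[:, 1]
print(prob_fake)
```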
Conclusion
To sum up, deepfakes are becoming a big concern as they continue to grow in use and awareness. More and more people, both men and women, are worried about the spread of AI-made videos and audio. With the rise in deepfake scams and manipulated content, the risks to privacy, security, and reputation are clear.
As shown by the statistics, deepfakes are causing financial losses in many industries and highlighting the need for better tools to detect and prevent them.
It’s important to stay alert and take action to address these issues going forward. We hope this article has shed useful light on deepfake statistics.