📊 Key Statistics at a Glance

  • 39% — Of Americans trust AI "not much" or "not at all" (Pew Research, 2023)
  • 10–100× — Higher facial recognition error rates for dark-skinned women vs. light-skinned men (NIST, 2019)
  • 95,820+ — Deepfake videos online as of 2023, 96% of which are non-consensual pornography (Sensity AI)
  • 15+ — Countries with national AI governance frameworks announced as of 2026
  • $7M — Average annual company spend on AI compliance and ethics programs (Deloitte, 2023)

All statistics cited with primary sources. Last verified May 2026. Free to cite with attribution to ForAIThings.com.


Public Trust & Perception Statistics

39%

Of Americans trust AI "not much" or "not at all." Only 15% said they trust AI "a lot," while 37% said "some." — Source: Pew Research Center, "AI in the Public Eye: Trust and Concerns," 2023

62%

Of Americans say they are concerned about the use of AI in daily life, with 38% "very concerned." — Source: Monmouth University Polling Institute, April 2024

43%

Of Americans trust AI for healthcare diagnostics. Only 27% trust AI for social media content moderation, and 21% trust AI for criminal justice decisions. — Source: Pew Research Center, 2023

76%

Of respondents in a global IPSOS survey said AI companies have a "moral responsibility" to protect user data — the most widely agreed-upon ethical AI principle across demographics. — Source: IPSOS Global Survey on AI Ethics, 2024

84%

Of news editors consider AI-generated misinformation their #1 concern about AI, the highest-ranked worry across all categories. — Source: Reuters Institute for the Study of Journalism, Digital News Report 2023

Public skepticism extends to many of the AI tools and applications we review on this site. Understanding these trust metrics helps contextualize both the opportunity and the challenge facing AI adoption across every sector.

Algorithmic Bias & Discrimination Statistics

10–100×

Higher false positive rates for dark-skinned women compared to light-skinned men in facial recognition systems tested by NIST. The worst-performing algorithms had a false positive rate of 34% for Black women. — Source: National Institute of Standards and Technology (NIST), "Face Recognition Vendor Test (FRVT)," 2019
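
As a rough sanity check on that range: dividing the worst-case false positive rate above by the 0.8% light-skinned-male rate cited in the FAQ below lands comfortably inside the 10–100× band. A minimal sketch:

```python
# Back-of-envelope check of the NIST disparity range using the two
# extreme false positive rates cited in this article (illustrative,
# not a re-analysis of NIST data).
fpr_dark_skinned_women = 0.34   # worst-performing algorithms
fpr_light_skinned_men = 0.008   # figure cited in the FAQ below

disparity = fpr_dark_skinned_women / fpr_light_skinned_men
print(f"Disparity ratio: ~{disparity:.0f}x")  # ~42x, within the 10-100x range
```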

Amazon scrapped its AI recruiting tool

Amazon abandoned an internal AI recruiting engine in 2018 after discovering it systematically downgraded resumes that contained the word "women's" or referenced all-women's colleges. The model was trained on 10 years of resumes reflecting a male-dominated tech workforce. — Source: Reuters exclusive, "Amazon scraps secret AI recruiting tool that showed bias against women," October 2018

$1.5 billion

Class action settlement amount from a 2023 lawsuit alleging that Meta's ad-targeting algorithms systematically discriminated based on race, gender, and age. — Source: US Department of Housing and Urban Development (HUD) lawsuit settlement, 2023

47%

Of Black Americans say they have experienced algorithmic discrimination — being shown different job ads, housing options, or credit offers than white users. — Source: Pew Research Center, "The Role of Algorithms in Modern Life," 2024

84%

Of AI practitioners say they are concerned about bias in AI systems, but only 22% say their organization has effective bias mitigation processes. — Source: IBM AI Ethics Survey Report, 2024

For a deeper look at how bias manifests in specific tools, see our guide to evaluating AI tools for fairness and our broader statistics index covering related topics.

Deepfake & Misinformation Statistics

95,820

Deepfake videos counted online as of 2023, with the number doubling roughly every 6 months since 2018. — Source: Sensity AI, "The State of Deepfakes," 2023
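
Sensity's "doubling roughly every 6 months" claim implies simple exponential growth. A minimal sketch of that trajectory (only the 2023 count comes from the article; later figures are naive extrapolations, not Sensity data):

```python
# Project deepfake video counts under Sensity's "doubling every ~6 months"
# growth rate. Only the 2023 starting count comes from this article; the
# projected figures are naive extrapolations for illustration.
count_2023 = 95_820
doubling_period_months = 6

def projected_count(months_elapsed: int) -> int:
    """Count after `months_elapsed` months, assuming steady 6-month doubling."""
    return round(count_2023 * 2 ** (months_elapsed / doubling_period_months))

for months in (0, 6, 12, 24):
    print(f"+{months:>2} months: ~{projected_count(months):,}")
# +24 months: ~1,533,120, which is why the FAQ below notes the total
# has likely more than doubled since 2023
```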

96%

Of deepfake videos are non-consensual pornography, overwhelmingly targeting women from entertainment, media, and public life. — Source: Sensity AI, 2023

500%

Increase in deepfake-related cybercrime reports between 2022 and 2024, driven by accessible generative AI tools for voice cloning and video generation. — Source: FTC Consumer Sentinel Network Data Book, 2024

$25 million

Amount a Hong Kong company was defrauded through a deepfake video call in 2024 — a convincing deepfake of the CFO gave transfer instructions during a video call. — Source: Hong Kong Police, reported by Bloomberg, February 2024

66%

Of cybersecurity professionals say deepfakes are being used in attacks against their organizations as of 2024, up from 11% in 2022. — Source: VMware Global Incident Response Threat Report, 2024

Deepfakes present an acute challenge to democratic processes globally. For context on regulations targeting deepfake content, see the regulation section below and our broader generative AI statistics page.

AI Regulation & Governance Statistics

EU AI Act — Full force 2025

The world's first comprehensive AI law creates a risk-based classification: unacceptable risk (banned), high risk (conformity assessments required), limited risk (transparency obligations), and minimal risk (no obligations). Fines reach up to €35 million or 7% of global annual revenue, whichever is higher. — Source: European Commission, AI Act Official Journal, 2024
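
A minimal sketch of the Act's four-tier structure as a lookup table. The tier names follow the Act; the one-line obligations and example systems are simplifications for illustration, not legal guidance:

```python
# Simplified sketch of the EU AI Act's four risk tiers and the headline
# obligation attached to each. Illustrative only, not legal guidance.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment required before market entry"
    LIMITED = "transparency obligations (e.g., disclose AI interaction)"
    MINIMAL = "no mandatory obligations"

# Example systems commonly discussed under each tier (assumed examples,
# not classifications taken from the Act's legal text).
examples = {
    RiskTier.UNACCEPTABLE: "social scoring by public authorities",
    RiskTier.HIGH: "AI used in hiring or credit decisions",
    RiskTier.LIMITED: "customer-service chatbots",
    RiskTier.MINIMAL: "spam filters",
}

for tier, example in examples.items():
    print(f"{tier.name:<12} ({example}): {tier.value}")
```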

15+

Countries have announced national AI governance frameworks as of 2026, including the US Executive Order on AI (Oct 2023), Canada's AIDA, Japan's AI Governance Guidelines, China's Generative AI regulations, and the UK-hosted Bletchley Declaration. — Source: OECD AI Policy Observatory, 2026

Executive Order on AI (US)

The Biden administration's October 2023 Executive Order requires developers of powerful AI models to share safety test results with the government, directs the development of standards for watermarking AI-generated content, establishes an AI Safety Institute at NIST, and directs federal agencies to develop guidelines on AI use in hiring, housing, and healthcare. — Source: White House, October 30, 2023

38%

Of countries globally have some form of national AI strategy or policy document as of 2025, up from 20% in 2020. — Source: OECD AI Policy Observatory, 2025

$7 million

Average annual spend by companies on AI compliance and ethics programs globally. Large enterprises spend an average of $22 million annually. — Source: Deloitte, "Global AI Ethics and Governance Survey," 2023

Data Privacy & Surveillance Statistics

72%

Of consumers say it is "unacceptable" for AI systems to collect personal data without explicit consent. Only 12% are comfortable with any passive data collection. — Source: KPMG, "Trust in AI: Consumer Perspectives," 2023

56%

Of US adults favor banning government use of facial recognition in public spaces. Support is highest among Black Americans (72%) and Hispanic Americans (61%). — Source: Pew Research Center, 2024

$1.3 billion

Meta's 2023 fine from Ireland's Data Protection Commission (DPC) over unlawful EU–US data transfers, the largest single GDPR penalty to date. — Source: European Data Protection Board (EDPB), 2025

10 states

In the US have enacted comprehensive AI surveillance restrictions as of 2025, including bans on government facial recognition in public housing. — Source: EFF State AI Surveillance Legislation Tracker, 2025

87%

Of data privacy professionals rank AI data collection as their top emerging privacy risk in 2025 — up from 53% in 2022. — Source: IAPP Privacy Risk Assessment Survey, 2025

AI Safety & Hallucination Statistics

~3%

Estimated hallucination rate for GPT-4 on factual benchmarks. Older LLMs (GPT-3.5, Llama 2) hallucinate in 15–20% of responses on the same benchmarks. — Source: Vectara Hallucination Evaluation Benchmark, 2023–2024
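
Benchmarks like Vectara's derive this rate by labeling each model response as grounded or hallucinated and taking the share of hallucinated ones. A minimal sketch of the metric itself, with placeholder labels rather than benchmark data:

```python
# Minimal sketch of how a hallucination rate is computed once each model
# response has been labeled grounded/hallucinated (e.g., by a judge model
# or human raters). Labels below are placeholders, not benchmark data.
labels = [
    {"response_id": 1, "hallucinated": False},
    {"response_id": 2, "hallucinated": True},
    {"response_id": 3, "hallucinated": False},
    {"response_id": 4, "hallucinated": False},
]

rate = sum(r["hallucinated"] for r in labels) / len(labels)
print(f"Hallucination rate: {rate:.1%}")  # 25.0% on this toy sample
```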

Fewer than 100

Full-time AI safety researchers globally according to 80,000 Hours' estimate — a dramatic shortage for a technology widely described as transformative and potentially dangerous. — Source: 80,000 Hours, "AI Safety Research Career Guide," 2024

38%

Of AI researchers surveyed in a 2024 global poll believe there is a 5% or higher chance AI could cause human extinction or similar catastrophic outcomes. — Source: Grace et al., "The AI Risk Survey," AI Alignment Forum, 2024

Workforce & Hiring Ethics Statistics

99%

Of Fortune 500 companies use AI in hiring processes — from resume screening to video interview analysis to candidate ranking algorithms. — Source: Harvard Business Review, "AI in Hiring: A Definitive Guide," 2023

83%

Of HR professionals say they are concerned that AI hiring tools may introduce bias, but only 31% audit their AI tools for fairness outcomes. — Source: SHRM AI in HR Survey Report, 2024

New York City Local Law 144

Effective July 2023, NYC's AI hiring law was the first in the US to require employers using automated hiring tools to subject those tools to annual bias audits and to disclose their use to job candidates. — Source: NYC Department of Consumer and Worker Protection, Local Law 144, 2021
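
Local Law 144's required audits center on impact ratios: each group's selection rate divided by the rate of the most-selected group. A minimal sketch of that calculation, with invented counts:

```python
# Sketch of the "impact ratio" at the core of a Local Law 144 bias audit:
# each group's selection rate divided by the highest group's rate.
# Counts are invented for illustration.
selections = {            # group -> (selected, total screened)
    "group_a": (120, 400),
    "group_b": (45, 300),
    "group_c": (80, 250),
}

rates = {g: sel / total for g, (sel, total) in selections.items()}
top_rate = max(rates.values())

for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}, "
          f"impact ratio {rate / top_rate:.2f}")
```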

Environment & Climate Cost Statistics

10 GWh

Estimated electricity consumption for training GPT-4 — enough to power approximately 900 US homes for a year. — Source: Estimated from published AI model carbon footprint research; de Vries et al., "The Carbon Footprint of AI," 2024

502 metric tons

Estimated CO₂ emissions from GPT-4 training, equivalent to the annual emissions of approximately 110 passenger vehicles. — Source: Estimated based on de Vries (2024) and Shaolei Ren et al., UC Riverside, 2023
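
Both equivalences above are straightforward unit conversions. A back-of-envelope check (the per-home and per-vehicle baselines are ballpark EIA/EPA averages assumed for illustration, not figures from the cited papers):

```python
# Back-of-envelope check of the two equivalences above. Baseline figures
# (average US home consumption, average passenger-vehicle emissions) are
# rough EIA/EPA ballpark values assumed for illustration.
training_energy_gwh = 10
training_emissions_t = 502

KWH_PER_US_HOME_PER_YEAR = 10_600   # ~EIA average, assumption
T_CO2_PER_VEHICLE_PER_YEAR = 4.6    # ~EPA average, assumption

homes = training_energy_gwh * 1_000_000 / KWH_PER_US_HOME_PER_YEAR
vehicles = training_emissions_t / T_CO2_PER_VEHICLE_PER_YEAR

print(f"~{homes:,.0f} homes powered for a year")    # ~943
print(f"~{vehicles:,.0f} passenger vehicles/year")  # ~109
```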

29 minutes

Of a typical solar panel's output needed to serve a single ChatGPT query at current efficiency levels. The average query uses roughly 10× the electricity of a standard Google search. — Source: Shaolei Ren et al., UC Riverside, 2023; IEA energy benchmarks

$200 billion

Projected US AI data center infrastructure investment by 2025. The explosive growth of AI data centers has raised concerns about grid capacity, water usage for cooling, and e-waste from specialized hardware. — Source: Goldman Sachs Research, 2024

40%

Projected increase in global data center electricity consumption by 2030 due to AI workloads, reaching an estimated 1,000 TWh annually — equivalent to Japan's total current electricity consumption. — Source: International Energy Agency (IEA), "Electricity 2025" report

The environmental cost of AI raises serious ethical questions about sustainable technology deployment. For more on how AI impacts are measured, see our generative AI statistics page.

AI Fraud & Misuse Statistics

65,000

AI-related fraud reports filed with the FTC in 2023, with losses totaling over $1.1 billion. The number has grown sharply year-over-year as generative AI tools enable more convincing scams. — Source: FTC Consumer Sentinel Network, 2024

3× increase

In AI-generated phishing emails between 2022 and 2024, driven by LLMs that produce sophisticated, grammatically flawless phishing messages at scale. — Source: Darktrace State of AI Security Report, 2025

53%

Of organizations have experienced an AI-related security incident. Common incidents include model poisoning, prompt injection attacks, and data leakage from LLM copilot tools. — Source: Stanford HAI AI Index Report 2025; Gartner AI Security Survey, 2024

AI voice cloning scams

The FTC found that 37% of consumers who lost money to an impersonation scam in 2023 said the scammer used an AI voice clone of a family member. The most common scenario is the "grandparent scam," in which a cloned voice of a relative pleads for emergency funds. — Source: FTC Consumer Sentinel Network, 2024

As AI fraud techniques evolve, staying informed is critical. See our articles on AI safety and scam awareness for current guidance.

Key Takeaways

  • Public trust in AI remains fragile — 39% of Americans trust AI "not much" or "not at all," and skepticism varies dramatically by demographic and use case.
  • Algorithmic bias is not theoretical: NIST documented 10–100× error rate disparities in facial recognition, and 99% of Fortune 500 companies use AI in hiring with minimal fairness auditing.
  • Deepfakes have exploded — 95,820 videos as of 2023 (96% non-consensual pornography) — and are now used in financial fraud including a $25 million scheme.
  • The global regulatory landscape is fragmented but rapidly evolving: the EU AI Act sets the highest standard, 15+ countries have AI governance frameworks, and penalties are increasingly severe.
  • AI's environmental cost is significant: GPT-4 training emitted ~502 metric tons of CO₂, and AI data centers could consume 1,000 TWh annually by 2030.
  • The AI safety workforce is critically understaffed — fewer than 100 full-time AI safety researchers exist globally — representing one of the most significant ethics gaps in the field.

Frequently Asked Questions

How much does the public trust AI systems in 2026?
Public trust in AI remains low. Pew Research's 2023 survey found 39% of Americans trust AI "not much" or "not at all." Trust varies by application: higher for healthcare (43%) than for social media content moderation (27%) or criminal justice (21%). The EU's Eurobarometer found similar patterns across European countries.
What is the extent of AI bias in facial recognition systems?
NIST's 2019 study found facial recognition error rates 10–100× higher for people with darker skin. For dark-skinned women, some algorithms had a 34% false positive rate vs. 0.8% for light-skinned men. This has driven legislation in 10+ US states banning or restricting government facial recognition use.
How many deepfake videos are there online?
Sensity AI counted 95,820 deepfake videos as of 2023, with the number doubling every 6 months. An overwhelming 96% are non-consensual pornography targeting women. The total has likely more than doubled since 2023 given easily accessible generative AI tools for creating convincing synthetic video.
What AI regulations exist globally in 2026?
The EU AI Act (passed 2024) is the world's first comprehensive AI regulatory framework, with a risk-based classification and fines of up to 7% of global revenue. 15+ countries have AI governance frameworks. The US Executive Order of October 2023 mandates safety-test reporting and directs standards for AI content watermarking. Canada, Japan, China, and the UK all have their own regulatory approaches.
How much do companies spend on AI compliance and ethics programs?
Deloitte found average spending of $7 million annually on AI compliance and ethics. Large enterprises spend $22 million+. Yet fewer than 100 full-time AI safety researchers exist globally, highlighting a dramatic workforce gap. Microsoft has one of the largest responsible AI teams at ~350 people.