📊 Key Statistics at a Glance
- 39% — Of Americans trust AI "not much" or "not at all" (Pew Research, 2023)
- 10–100× — Higher facial recognition error rates for dark-skinned women vs. light-skinned men (NIST, 2019)
- 95,820+ — Deepfake videos online as of 2023, of which 96% are non-consensual pornography (Sensity AI)
- 15+ — Countries with national AI governance frameworks announced as of 2026
- $7M — Average annual company spend on AI compliance and ethics programs (Deloitte, 2023)
All statistics cited with primary sources. Last verified May 2026. Free to cite with attribution to ForAIThings.com.
Public Trust & Perception Statistics
39% — Of Americans trust AI "not much" or "not at all." Only 15% said they trust AI "a lot," while 37% said "some." — Source: Pew Research Center, "AI in the Public Eye: Trust and Concerns," 2023
Of Americans say they are concerned about the use of AI in daily life, with 38% "very concerned." — Source: Monmouth University Polling Institute, April 2024
Of Americans trust AI for healthcare diagnostics. Only 27% trust AI for social media content moderation, and 21% trust AI for criminal justice decisions. — Source: Pew Research Center, 2023
Of respondents in a global IPSOS survey said AI companies have a "moral responsibility" to protect user data — the most widely agreed-upon ethical AI principle across demographics. — Source: IPSOS Global Survey on AI Ethics, 2024
Of news editors consider AI-generated misinformation their #1 concern about AI, per the Reuters Institute Digital News Report 2023 — the highest-ranked worry across all categories. — Source: Reuters Institute for the Study of Journalism, Digital News Report 2023
Public skepticism extends to many of the AI tools and applications we review on this site. Understanding these trust metrics helps contextualize both the opportunity and the challenge facing AI adoption across every sector.
Algorithmic Bias & Discrimination Statistics
10–100× — Higher false positive rates for dark-skinned women compared to light-skinned men in facial recognition systems tested by NIST. The worst-performing algorithms had a false positive rate of 34% for Black women. — Source: National Institute of Standards and Technology (NIST), "Face Recognition Vendor Test (FRVT)," 2019
Amazon abandoned an internal AI recruiting engine in 2018 after discovering it systematically downgraded resumes that contained the word "women's" or referenced all-women's colleges. The model was trained on 10 years of resumes reflecting a male-dominated tech workforce. — Source: Reuters exclusive, "Amazon scraps secret AI recruiting tool that showed bias against women," October 2018
Class action settlement amount from a 2023 lawsuit alleging that Meta's ad-targeting algorithms systematically discriminated based on race, gender, and age. — Source: US Department of Housing and Urban Development (HUD) lawsuit settlement, 2023
Of Black Americans say they have experienced algorithmic discrimination — being shown different job ads, housing options, or credit offers than white users. — Source: Pew Research Center, "The Role of Algorithms in Modern Life," 2024
Of AI practitioners say they are concerned about bias in AI systems, but only 22% say their organization has effective bias mitigation processes. — Source: IBM AI Ethics Survey Report, 2024
For a deeper look at how bias manifests in specific tools, see our guide to evaluating AI tools for fairness and our broader statistics index covering related topics.
Deepfake & Misinformation Statistics
95,820+ — Deepfake videos counted online as of 2023, with the number doubling roughly every 6 months since 2018. — Source: Sensity AI, "The State of Deepfakes," 2023
96% — Of deepfake videos are non-consensual pornography, overwhelmingly targeting women from entertainment, media, and public life. — Source: Sensity AI, 2023
Increase in deepfake-related cybercrime reports between 2022 and 2024, driven by accessible generative AI tools for voice cloning and video generation. — Source: FTC Consumer Sentinel Network Data Book, 2024
$25 million — Amount a Hong Kong company was defrauded of in 2024, after a convincing deepfake of its CFO gave transfer instructions during a video call. — Source: Hong Kong Police, reported by Bloomberg, February 2024
Of cybersecurity professionals say deepfakes are being used in attacks against their organizations as of 2024, up from 11% in 2022. — Source: VMware Global Incident Response Threat Report, 2024
Deepfakes present an acute challenge to democratic processes globally. For context on regulations targeting deepfake content, see the regulation section below and our broader generative AI statistics page.
AI Regulation & Governance Statistics
The world's first comprehensive AI law creates a risk-based classification: unacceptable risk (banned), high risk (conformity assessments required), limited risk (transparency obligations), and minimal risk (no obligations). Fines reach up to €35 million or 7% of global annual turnover, whichever is higher. — Source: European Commission, AI Act Official Journal, 2024
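For companies estimating exposure, the penalty cap behaves like a whichever-is-higher formula. A minimal sketch (the revenue figure in the example is hypothetical, and this reads the cap as the greater of the two amounts, per the Act's penalty provisions):

```python
def eu_ai_act_max_fine(global_annual_revenue_eur: float) -> float:
    """Upper bound on fines for the most serious violations under the
    EU AI Act: EUR 35 million or 7% of global annual revenue,
    whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# A firm with EUR 2 billion in revenue: 7% = EUR 140 million cap
print(eu_ai_act_max_fine(2_000_000_000))  # 140000000.0

# A smaller firm with EUR 100 million in revenue: the EUR 35M floor applies
print(eu_ai_act_max_fine(100_000_000))  # 35000000
```

The practical effect is that the flat amount binds for smaller companies, while the revenue percentage dominates for large enterprises.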
15+ — Countries have announced national AI governance frameworks as of 2026: US Executive Order on AI (Oct 2023), Canada's AIDA, Japan's AI Governance Guidelines, China's Generative AI regulations, and the UK's Bletchley Declaration. — Source: OECD AI Policy Observatory, 2026
The Biden administration's October 2023 Executive Order requires developers of powerful AI models to share safety test results with the government, mandates watermarking of AI-generated content, establishes an AI Safety Institute at NIST, and directs federal agencies to develop guidelines on AI use in hiring, housing, and healthcare. — Source: White House, October 30, 2023
Of countries globally have some form of national AI strategy or policy document as of 2025, up from 20% in 2020. — Source: OECD AI Policy Observatory, 2025
$7 million — Average annual spend by companies on AI compliance and ethics programs globally. Large enterprises spend an average of $22 million annually. — Source: Deloitte, "Global AI Ethics and Governance Survey," 2023
Data Privacy & Surveillance Statistics
Of consumers say it is "unacceptable" for AI systems to collect personal data without explicit consent. Only 12% are comfortable with any passive data collection. — Source: KPMG, "Trust in AI: Consumer Perspectives," 2023
Of US adults favor banning government use of facial recognition in public spaces. Support is highest among Black Americans (72%) and Hispanic Americans (61%). — Source: Pew Research Center, 2024
Cumulative GDPR fines since 2018. Meta's $1.3 billion DPC fine in 2023 remains the largest GDPR penalty to date. — Source: European Data Protection Board (EDPB), 2025
In the US have enacted comprehensive AI surveillance restrictions as of 2025, including bans on government facial recognition in public housing. — Source: EFF State AI Surveillance Legislation Tracker, 2025
Of data privacy professionals rank AI data collection as their top emerging privacy risk in 2025 — up from 53% in 2022. — Source: IAPP Privacy Risk Assessment Survey, 2025
AI Safety & Hallucination Statistics
Estimated hallucination rate for GPT-4 on factual benchmarks. Older LLMs (GPT-3.5, Llama 2) hallucinate 15% to 20% of responses. — Source: Vectara Hallucination Evaluation Benchmark, 2023–2024
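Benchmarks like Vectara's score a hallucination rate as the fraction of model responses judged unfaithful to their source material. A toy sketch of that calculation (the judging step, which in practice is its own evaluation model, is abstracted into a list of boolean verdicts):

```python
def hallucination_rate(judgements: list[bool]) -> float:
    """Fraction of responses judged to be hallucinations.

    judgements: one boolean per graded response,
    True = the response contradicted or invented facts
    not supported by its source document.
    """
    return sum(judgements) / len(judgements)

# 2 hallucinated responses out of 40 graded summaries -> 5% rate
print(hallucination_rate([True, True] + [False] * 38))  # 0.05
```

Reported rates therefore depend heavily on the grading model and the benchmark's source documents, which is why figures vary across evaluations.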
Fewer than 100 — Full-time AI safety researchers globally, according to 80,000 Hours' estimate — a dramatic shortage for a technology widely described as transformative and potentially dangerous. — Source: 80,000 Hours, "AI Safety Research Career Guide," 2024
Of AI researchers surveyed in a 2024 global poll believe there is a 5% or higher chance AI could cause human extinction or similar catastrophic outcomes. — Source: Grace et al., "The AI Risk Survey," AI Alignment Forum, 2024
Workforce & Hiring Ethics Statistics
99% — Of Fortune 500 companies use AI in hiring processes — from resume screening to video interview analysis to candidate ranking algorithms. — Source: Harvard Business Review, "AI in Hiring: A Definitive Guide," 2023
Of HR professionals say they are concerned that AI hiring tools may introduce bias, but only 31% audit their AI tools for fairness outcomes. — Source: SHRM AI in HR Survey Report, 2024
Effective July 2023, NYC's Local Law 144 (enacted 2021) was the first law in the US to require employers using AI hiring tools to undergo annual bias audits and to disclose their use to job candidates. — Source: NYC Department of Consumer and Worker Protection, Local Law 144
Environment & Climate Cost Statistics
Estimated electricity consumption for training GPT-4 — enough to power approximately 900 US homes for a year. — Source: Estimated from published AI model carbon footprint research; de Vries et al., "The Carbon Footprint of AI," 2024
~502 metric tons — Estimated CO₂ emissions from GPT-4 training, equivalent to the annual emissions of approximately 110 passenger vehicles. — Source: Estimated based on de Vries (2024) and Shaolei Ren et al., UC Riverside, 2023
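The vehicle-equivalence framing is simple division by a per-vehicle emissions factor. A sketch, assuming the EPA's commonly cited ~4.6 metric tons of CO₂ per typical passenger vehicle per year (that factor is our assumption; the source reports only the equivalence):

```python
# Assumed emissions factor: a typical US passenger vehicle emits
# roughly 4.6 metric tons of CO2 per year (EPA equivalency figure).
CO2_TONNES_PER_VEHICLE_YEAR = 4.6

def vehicle_equivalents(emissions_tonnes: float) -> float:
    """Convert a CO2 total into annual passenger-vehicle equivalents."""
    return emissions_tonnes / CO2_TONNES_PER_VEHICLE_YEAR

# ~502 t from GPT-4 training -> roughly 109 vehicles (~110 as cited)
print(round(vehicle_equivalents(502)))  # 109
```

The result lands within rounding of the ~110-vehicle figure cited above, which suggests the source used a similar factor.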
Average ChatGPT query uses 10× the electricity of a standard Google search, requiring approximately 29 minutes of a solar panel's output per query at current efficiency levels. — Source: Shaolei Ren et al., UC Riverside, 2023; IEA energy benchmarks
Projected US AI data center infrastructure investment by 2025. The explosive growth of AI data centers has raised concerns about grid capacity, water usage for cooling, and e-waste from specialized hardware. — Source: Goldman Sachs Research, 2024
Projected increase in global data center electricity consumption by 2030 due to AI workloads, reaching an estimated 1,000 TWh annually — equivalent to Japan's total current electricity consumption. — Source: International Energy Agency (IEA), "Electricity 2025" report
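To get a feel for how fast consumption must grow to hit that 2030 figure, one can back out an implied compound annual growth rate. A sketch, assuming a ~460 TWh data center baseline in 2024 (the baseline is our assumption; the IEA citation above supplies only the ~1,000 TWh endpoint):

```python
# Assumed 2024 baseline for global data center electricity use (TWh).
BASELINE_TWH_2024 = 460
TARGET_TWH_2030 = 1_000

def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate needed to move from start to end."""
    return (end / start) ** (1 / years) - 1

# ~14% per year sustained over six years under these assumptions
print(round(implied_cagr(BASELINE_TWH_2024, TARGET_TWH_2030, 6), 3))  # 0.138
```

Even modest shifts in the assumed baseline move the implied rate noticeably, which is one reason published projections for AI energy demand diverge so widely.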
The environmental cost of AI raises serious ethical questions about sustainable technology deployment. For more on how AI impacts are measured, see our generative AI statistics page.
AI Fraud & Misuse Statistics
AI-related fraud reports filed with the FTC in 2023, with losses totaling over $1.1 billion. The number has grown sharply year-over-year as generative AI tools enable more convincing scams. — Source: FTC Consumer Sentinel Network, 2024
In AI-generated phishing emails between 2022 and 2024, driven by LLMs that produce sophisticated, grammatically flawless phishing messages at scale. — Source: Darktrace State of AI Security Report, 2025
Of organizations have experienced an AI-related security incident. Common incidents include model poisoning, prompt injection attacks, and data leakage from LLM copilot tools. — Source: Stanford HAI AI Index Report 2025; Gartner AI Security Survey, 2024
The FTC reported that 37% of consumers who lost money to an impersonation scam in 2023 said the scammer used an AI voice clone of a family member. The most common scenario is the "grandparent scam," in which a cloned voice of a relative claims to need emergency funds. — Source: FTC Consumer Sentinel Network, 2024
As AI fraud techniques evolve, staying informed is critical. See our articles on AI safety and scam awareness for current guidance.
Key Takeaways
- Public trust in AI remains fragile — 39% of Americans trust AI "not much" or "not at all," and skepticism varies dramatically by demographic and use case.
- Algorithmic bias is not theoretical: NIST documented 10–100× error rate disparities in facial recognition, and 99% of Fortune 500 companies use AI in hiring with minimal fairness auditing.
- Deepfakes have exploded — 95,820 videos as of 2023 (96% non-consensual pornography) — and are now used in financial fraud including a $25 million scheme.
- The global regulatory landscape is fragmented but rapidly evolving: the EU AI Act sets the highest standard, 15+ countries have AI governance frameworks, and penalties are increasingly severe.
- AI's environmental cost is significant: GPT-4 training emitted ~502 metric tons of CO₂, and AI data centers could consume 1,000 TWh annually by 2030.
- The AI safety workforce is critically understaffed — fewer than 100 full-time AI safety researchers exist globally — representing one of the most significant ethics gaps in the field.