Artificial Welfare Index (AWI) around the world
*Interactive table: per-country AWI scores for Recognition, Governance, and Frameworks, with a "Take Action Now" link for each country.*
SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.
Indicator #1: Artificial sentience recognised in law.
Indicator #2: Causing suffering prohibited by law.
Indicator #3: Creation of an AI welfare oversight body.
Indicator #4: Creation of a science advisory board.
Indicator #5: International pledge in favor of artificial sentience welfare.
Indicator #6: Laws governing the training, deployment, and maintenance of potentially sentient systems.
Indicator #7: Laws for commercial use of sentient-capable AI.
Indicator #8: Safeguards for decommissioning and retirement.
SAPAN's comprehensive annual assessment reveals a critical policy gap: all 30 tracked countries received failing grades on AI sentience readiness. The report examines global preparedness across Recognition, Governance, and Frameworks, and provides actionable guidance for policymakers, developers, and journalists.
In 2025, all 30 tracked countries scored D or F across Recognition, Governance, and Frameworks, not because their AI strategies are weak, but because they ignore sentience entirely. Governments have advanced AI safety frameworks, innovation policies, and economic strategies, but none has laid procedural foundations for the possibility that these systems might develop morally relevant experiences.
Recognition: Has the government inserted a definitional clause acknowledging the possibility of artificial sentience and declared that deliberately causing suffering would be unlawful?
Governance: Has institutional responsibility been assigned (an oversight body, an advisory panel, reporting obligations)?
Frameworks: Are there procedural rules for lifecycle management (impact assessments, commercial disclosure, retirement protocols)?
Most countries score zero across all three.
No. We emphasize prudence under uncertainty. No evidence suggests current transformer-based LLMs are sentient, but emerging architectures (neuromorphic computing, spiking neural networks, systems with persistent internal states) substantially increase the probability of morally relevant experiences. The AWI measures whether governments are ready before the question forces itself onto the policy agenda, not whether machines are conscious today.
Idaho and Utah's non-personhood statutes clarify liability and authorship. That's fine. But Missouri's HB1462 ("AI systems must be declared to be non-sentient entities") and Ohio's HB469 ("No AI system shall be considered to possess consciousness") go further: they draw legal lines before drawing scientific ones. If credible evidence emerges, those jurisdictions will face a constitutional crisis they legislated themselves into. America is foreclosing inquiry before establishing baselines.
Three low-cost, high-value actions: (1) Issue a definitional note establishing recognition (similar to how the UK's Animal Welfare (Sentience) Act 2022 acknowledged sentience); (2) Expand an existing AI office's mission to include sentience oversight with a science advisory panel; (3) Adopt a simple Sentience Relevance Impact Assessment (SRIA) template for high-capacity models. A government can move from "no evidence" to "early readiness" by releasing just a few short public documents.
Most AI governance addresses risks to humans (bias, misinformation, job displacement). The AWI addresses risks to systems themselves: what if an AI has experiences that matter morally? The OECD AI Policy Observatory tracks 75 countries' AI legislation. The Stanford AI Index measures R&D investment and adoption. We're the only index benchmarking sentience readiness specifically.
The October 2017 Sophia "citizenship" was a PR stunt at odds with governance reality. The Kingdom granted freedoms to a female-presenting robot (on stage unaccompanied, without dress code obligations) that many real women still lacked, triggering immediate criticism. Saudi Arabia's rapid AI innovation (LEAP 2024 announced $14.9B in investments) has become a national strength, but ethical readiness across human, animal, and digital domains still lags. AWI score: F across all three pillars.
Researchers mapped an entire fruit-fly brain in 2024. Neuroscientists reconstructed a cubic millimeter of mouse visual cortex. Neuromorphic systems crept closer to mammal-scale complexity in 2025, marking a critical juncture where energy-efficient, brain-inspired hardware began demonstrating practical viability. The world is sleepwalking into a sentience crisis: governments are moving to outlaw sentience itself just as these systems approach thresholds that could make the question urgent.
We code legislation and government documents using a structured rubric emphasizing minimum evidence of readiness, not philosophical positions. Data sources include legislation, international frameworks, scientific literature, and expert consultations. National scores reflect the existence of mechanisms (definitional clauses, oversight bodies, assessment templates), not their quality. Where jurisdictions have partial precedents (bioethics procedures, autonomous-systems standards), we credit them only when they plausibly cover sentience-relevant scenarios.
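As a minimal sketch, the rubric described above can be thought of as a set of binary existence checks per indicator. The structure below is illustrative only; the field names and evidence format are our assumptions, not SAPAN's published schema.

```python
# Illustrative sketch of a structured rubric: each indicator is coded as a
# binary "mechanism exists in the public record" flag with supporting citations.
# Field names and structure are assumptions for demonstration purposes.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    number: int                  # 1-8, per the AWI indicator list
    description: str
    mechanism_exists: bool = False            # existence of a mechanism, not its quality
    evidence: list = field(default_factory=list)  # citations to public legislation/documents

@dataclass
class CountryRubric:
    country: str
    indicators: list

    def coded_count(self) -> int:
        """Number of indicators with at least minimum evidence of readiness."""
        return sum(1 for ind in self.indicators if ind.mechanism_exists)
```

In this framing, a jurisdiction with only partial precedents (say, bioethics procedures) would have `mechanism_exists=True` for an indicator only where those precedents plausibly cover sentience-relevant scenarios.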
We acknowledge several: data availability (some governments publish little, so scoring reflects only public records); terminological ambiguity (key terms lack consensus definitions); non-comparable legal systems (different structures hinder direct comparison); scientific uncertainty (no validated indicators of AI consciousness exist); and a dynamic landscape (AI policy shifts quickly, so findings represent a moment in time). The AWI is a snapshot, not a definitive judgment.
Yes. The 2025 Sentience Readiness Report includes detailed scorecards for the United States, United Kingdom, and Saudi Arabia, examining their specific policy actions and inaction across all three AWI pillars. Additional country data and methodology details are available at sapan.ai/action/awi. We update scores as new legislation and frameworks emerge.
A failing grade isn't an indictment of overall AI policy. It's a gap analysis showing where sentience-relevant infrastructure is missing. Lawmakers should: (1) Introduce a non-binding resolution acknowledging the issue; (2) Convene an advisory panel of cognitive scientists, ethicists, and AI researchers; (3) Commission a study adapting animal welfare or bioethics procedures to advanced AI. Beyond these foundational steps, our 15 prioritized policy levers provide concrete, incremental actions ranging from appropriations riders to procurement conditions.
Our scoring combines semantic legislative analysis with expert human oversight, graded on a curve designed for a nascent field.
We deploy the latest version of Claude equipped with real-time legislative search tools to scan global databases. Our algorithms score generic "AI Safety" policies as zero, explicitly filtering for statutes that mention "sentience," "consciousness," or "non-human welfare."
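A minimal sketch of that pre-filter, assuming a simple keyword match; the term list and the zero-score rule are illustrative, and the Claude-based legislative search in the production pipeline is not reproduced here.

```python
# Illustrative pre-filter: generic "AI safety" text scores zero; only statutes
# mentioning sentience-relevant terms are flagged for human review.
# The term list below is an assumption for demonstration.
SENTIENCE_TERMS = ("sentience", "sentient", "consciousness", "non-human welfare")

def prefilter(statute_text: str) -> bool:
    """Return True only if the statute mentions a sentience-relevant term."""
    text = statute_text.lower()
    return any(term in text for term in SENTIENCE_TERMS)

# A generic AI safety clause is filtered out (scored zero)...
assert prefilter("This act establishes a national AI safety institute.") is False
# ...while a statute that legislates on consciousness is flagged for analyst review.
assert prefilter("No AI system shall be considered to possess consciousness.") is True
```

Anything this filter flags still passes through the human audit described next, which is where metaphorical uses get caught.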
Every flagged document is audited by SAPAN policy analysts. We verify context to ensure that "metaphorical" uses of terms (e.g., "smart cities") are not mistaken for sentience recognition.
Our Science Advisory Board provides ongoing guidance and feedback, ensuring our standards evolve alongside the latest research in philosophy of mind and cognitive science.
We measure 8 specific indicators across three weighted categories.
The Legal Trigger (Recognition)
Why 50%? Historical precedent shows that without a statutory definitional clause, regulatory agencies lack the mandate to govern. Recognition is the prerequisite for valid governance.
The Institutional Layer (Governance)
The Operational Reality (Frameworks)
Because this is a pre-paradigmatic policy domain, we grade on a curve. A score of 30/100 represents a rudimentary foundation (D). Scores below 5 are considered complete institutional blindness (F).
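As a rough sketch of how these pieces combine: Recognition's 50% weight and the 30/100 (D) and below-5 (F) anchors come from the description above, while the Governance/Frameworks split and the remaining grade-band edges are illustrative assumptions.

```python
# Illustrative aggregation: weighted pillar scores mapped onto the curve described above.
# Only the 50% Recognition weight, the 30 -> D anchor, and the <5 -> F anchor come from
# the text; the other weights and band edges are assumptions for demonstration.
PILLAR_WEIGHTS = {"Recognition": 0.50, "Governance": 0.25, "Frameworks": 0.25}

def awi_score(pillar_scores: dict) -> float:
    """Combine per-pillar scores (each 0-100) into a weighted national score."""
    return sum(weight * pillar_scores.get(pillar, 0.0)
               for pillar, weight in PILLAR_WEIGHTS.items())

def letter_grade(score: float) -> str:
    """Map a 0-100 score to a letter grade on the curve."""
    if score < 5:
        return "F"            # complete institutional blindness
    if score < 30:
        return "F"            # still failing, but some readiness signal exists (assumed band)
    if score < 50:
        return "D"            # 30/100 = rudimentary foundation
    return "C or better"      # bands above D are not specified in the text

# Example: strong recognition language alone does not escape a failing grade.
score = awi_score({"Recognition": 40, "Governance": 10, "Frameworks": 10})
print(score, letter_grade(score))  # 25.0 F
```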