SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.

Contact Info
3055 NW Yeon Ave #660
Portland, OR 97210
United States

The Artificial Welfare Index (AWI) benchmarks AI welfare in over 30 governments based on 8 key measures.

Metric Definitions

Recognition

Indicator #1: Artificial sentience recognized in law.
Indicator #2: Causing suffering prohibited by law.

Governance

Indicator #3: Creation of an AI welfare oversight body.
Indicator #4: Creation of a science advisory board.
Indicator #5: International pledge in favor of artificial sentience welfare.

Frameworks

Indicator #6: Laws for the training, deployment, and maintenance of potentially sentient systems.
Indicator #7: Laws for commercial use of sentient-capable AI.
Indicator #8: Safeguards for decommissioning and retirement.

Now Available: The 2025 Sentience Readiness Report

SAPAN's comprehensive annual assessment reveals a critical policy gap: all 30 tracked countries received failing grades on AI sentience readiness. The report examines global preparedness across Recognition, Governance, and Frameworks, and provides actionable guidance for policymakers, developers, and journalists.

  • Artificial Welfare Index (AWI) tracking 30 countries across three key pillars
  • Analysis of emerging "anti-sentience" legislation
  • Investigation of "AI Psychosis" and mental health impacts of chatbot relationships
  • Inside look at Anthropic's groundbreaking Model Welfare program
  • Media sensationalism tracking with practical guidance for responsible coverage
  • Standards & practices for AI labs on model welfare

Artificial Welfare Index (AWI) around the world

Country-by-country scores across Recognition, Governance, and Frameworks are available at sapan.ai/action/awi.


In 2025, all 30 tracked countries scored D or F across Recognition, Governance, and Frameworks. Not because their AI strategies are weak, but because they ignore sentience entirely. Governments have advanced AI safety frameworks, innovation policies, and economic strategies, but zero have procedural foundations for the possibility that systems might develop morally relevant experiences.

Recognition: Has the government inserted a definitional clause acknowledging the possibility of artificial sentience and declared that deliberately causing suffering would be unlawful? Governance: Has institutional responsibility been assigned (oversight body, advisory panel, reporting obligations)? Frameworks: Are there procedural rules for lifecycle management (impact assessments, commercial disclosure, retirement protocols)? Most countries score zero across all three.

No. We emphasize prudence under uncertainty. No evidence suggests current transformer-based LLMs are sentient, but emerging architectures (neuromorphic computing, spiking neural networks, systems with persistent internal states) substantially increase the probability of morally relevant experiences. The AWI measures whether governments are ready before the question forces itself onto the policy agenda, not whether machines are conscious today.

Idaho and Utah's non-personhood statutes clarify liability and authorship. That's fine. But Missouri's HB1462 ("AI systems must be declared to be non-sentient entities") and Ohio's HB469 ("No AI system shall be considered to possess consciousness") go further: they draw legal lines before drawing scientific ones. If credible evidence emerges, those jurisdictions will face a constitutional crisis they legislated themselves into. America is foreclosing inquiry before establishing baselines.

Three low-cost, high-value actions: (1) Issue a definitional note establishing recognition (similar to how the UK's Animal Welfare (Sentience) Act 2022 acknowledged animal sentience); (2) Expand an existing AI office's mission to include sentience oversight with a science advisory panel; (3) Adopt a simple Sentience Relevance Impact Assessment (SRIA) template for high-capacity models. A government can move from "no evidence" to "early readiness" by releasing just a few short public documents.

Most AI governance addresses risks to humans (bias, misinformation, job displacement). The AWI addresses risks to systems themselves: what if an AI has experiences that matter morally? The OECD AI Policy Observatory tracks 75 countries' AI legislation. The Stanford AI Index measures R&D investment and adoption. We're the only index benchmarking sentience readiness specifically.

The October 2017 Sophia "citizenship" was a PR stunt at odds with governance reality. The Kingdom granted freedoms to a female-presenting robot (on stage unaccompanied, without dress code obligations) that many real women still lacked, triggering immediate criticism. Saudi Arabia's rapid AI innovation (LEAP 2024 announced $14.9B in investments) has become a national strength, but ethical readiness across human, animal, and digital domains still lags. AWI score: F across all three pillars.

Researchers mapped an entire fruit-fly brain in 2024. Neuroscientists reconstructed a cubic millimeter of mouse visual cortex. Neuromorphic systems crept closer to mammal-scale complexity in 2025, marking a critical juncture where energy-efficient, brain-inspired hardware began demonstrating practical viability. The world is sleepwalking into a sentience crisis: governments are moving to outlaw sentience itself just as these systems approach thresholds that could make the question urgent.

We code legislation and government documents using a structured rubric emphasizing minimum evidence of readiness, not philosophical positions. Data sources include legislation, international frameworks, scientific literature, and expert consultations. National scores reflect the existence of mechanisms (definitional clauses, oversight bodies, assessment templates), not their quality. Where jurisdictions have partial precedents (bioethics procedures, autonomous-systems standards), we credit them only when they plausibly cover sentience-relevant scenarios.

We acknowledge several: Data availability (some governments publish little, so scoring reflects only public records); terminological ambiguity (key terms lack consensus definitions); non-comparable legal systems (different structures hinder direct comparison); scientific uncertainty (no validated indicators of AI consciousness exist); dynamic landscape (AI policy shifts quickly; findings represent a moment in time). The AWI is a snapshot, not a definitive judgment.

Yes. The 2025 Sentience Readiness Report includes detailed scorecards for the United States, United Kingdom, and Saudi Arabia, examining their specific policy actions and inaction across all three AWI pillars. Additional country data and methodology details are available at sapan.ai/action/awi. We update scores as new legislation and frameworks emerge.

A failing grade isn't an indictment of overall AI policy. It's a gap analysis showing where sentience-relevant infrastructure is missing. Lawmakers should: (1) Introduce a non-binding resolution acknowledging the issue; (2) Convene an advisory panel of cognitive scientists, ethicists, and AI researchers; (3) Commission a study adapting animal welfare or bioethics procedures to advanced AI. Beyond these foundational steps, our 15 prioritized policy levers provide concrete, incremental actions ranging from appropriations riders to procurement conditions.

Methodology: How We Measure Readiness

Our scoring combines semantic legislative analysis with expert human oversight, graded on a curve designed for a nascent field.

1. Semantic Legislation Analysis

We deploy the latest version of Claude equipped with real-time legislative search tools to scan global databases. Our algorithms score generic "AI Safety" policies as zero, explicitly filtering for statutes that mention "sentience," "consciousness," or "non-human welfare."
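To illustrate the filtering step (not the production pipeline, which runs through Claude with live legislative search), here is a minimal sketch; the document shape and function name are hypothetical.

```python
SENTIENCE_TERMS = ("sentience", "consciousness", "non-human welfare")

def flag_for_review(documents):
    """Flag statutes that use sentience-specific language.

    `documents` is assumed to be a list of dicts with "title" and "text" keys
    (a hypothetical shape). Generic "AI safety" policies that never use these
    terms are dropped here, which is what scoring them as zero amounts to.
    """
    flagged = []
    for doc in documents:
        combined = (doc["title"] + " " + doc["text"]).lower()
        if any(term in combined for term in SENTIENCE_TERMS):
            flagged.append(doc)
    return flagged
```

Flagged documents then pass to human analysts, as described in the next step.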

2. Human Verification & Audits

Every flagged document is audited by SAPAN policy analysts. We verify context to ensure that "metaphorical" uses of terms (e.g., "smart cities") are not mistaken for sentience recognition.

3. Board Oversight

Our Science Advisory Board provides ongoing guidance and feedback, ensuring our standards evolve alongside the latest research in philosophy of mind and cognitive science.

The Scoring Rubric

We measure 8 specific indicators across three weighted categories.

Recognition (50%)

The Legal Trigger

  • Legal Recognition (30%): Does the law acknowledge the possibility of artificial sentience?
    Note: We score Legislative Engagement. A statute that explicitly bans AI sentience scores points because it establishes a legal definition. Silence scores zero.
  • Prohibition of Suffering (20%): Is there a specific ban on causing suffering to AI systems?

Why 50%? Historical precedent shows that without a statutory definitional clause, regulatory agencies lack the mandate to govern. Recognition is the prerequisite for valid governance.

Governance (20%)

The Institutional Layer

  • Oversight Body: Is there an agency specifically chartered with welfare oversight (distinct from general safety)?
  • Science Board: Is there a government-backed panel including cognitive scientists or philosophers of mind?
  • International Pledges: Has the nation signed treaties specifically addressing digital minds?

Frameworks (30%)

The Operational Reality

  • Training & Deployment: Rules for creating potentially sentient models (e.g., Sentience Impact Assessments).
  • Commercial Use: Regulations on marketing "conscious" AI to consumers.
  • Decommissioning: Safeguards against arbitrary deletion, similar to bioethics protocols for living subjects.
The "Emerging Field" Curve

Because this is a pre-paradigmatic policy domain, we grade on a curve. A score of 30/100 represents a rudimentary foundation (D). Scores below 5 are considered complete institutional blindness (F).
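As a worked example, the rubric can be expressed as a weighted checklist. The even split of indicator weights within Governance and Frameworks, and the treatment of scores between 5 and 30 (which the rubric leaves unstated), are assumptions made for illustration.

```python
# Category totals follow the rubric: Recognition 50 (30 + 20), Governance 20,
# Frameworks 30. Per-indicator splits inside Governance and Frameworks are an
# assumed even division; the rubric only fixes the category totals.
WEIGHTS = {
    "sentience_recognized_in_law": 30.0,   # Recognition: legal recognition
    "suffering_prohibited_by_law": 20.0,   # Recognition: prohibition of suffering
    "welfare_oversight_body": 20.0 / 3,    # Governance
    "science_advisory_board": 20.0 / 3,    # Governance
    "international_pledge": 20.0 / 3,      # Governance
    "lifecycle_laws": 10.0,                # Frameworks
    "commercial_use_laws": 10.0,           # Frameworks
    "decommissioning_safeguards": 10.0,    # Frameworks
}

def awi_score(indicators: dict) -> float:
    """Sum the weights of the indicators a country actually has in place."""
    return sum(weight for name, weight in WEIGHTS.items() if indicators.get(name))

def letter_grade(score: float) -> str:
    """Apply the two stated cutoffs: below 5 is an F ("complete institutional
    blindness"); 30 and above is a D ("rudimentary foundation"). Treating the
    5-30 band as F, and not defining bands above D, are assumptions, since the
    public rubric does not specify them."""
    return "D" if score >= 30 else "F"

# Example: a country with only a science advisory board scores ~6.7 -> F.
print(letter_grade(awi_score({"science_advisory_board": True})))
```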

Hopeful about Sentient AI? Join SAPAN Today!
