SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.
Building the legal architecture for a future we can't yet prove—but can't afford to ignore.

Every country tracked in our 2025 Artificial Welfare Index received a failing grade on sentience readiness. Not because their AI strategies are weak, but because they ignore the question entirely. The Legal Lab closes that gap by preparing proto-policy frameworks governments can adopt before the first hard case forces itself onto the policy agenda.
SAPAN's comprehensive annual assessment reveals a critical policy gap: all 30 tracked countries received failing grades on AI sentience readiness. The report examines global preparedness across Recognition, Governance, and Frameworks, and provides actionable guidance for policymakers, developers, and journalists.
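For readers who want to work with the Index results programmatically, the sketch below shows one way a per-country scorecard could be represented and aggregated. The three pillar names come from the report; the 0-100 scale, equal weighting, and letter-grade cutoffs are illustrative assumptions, not the published AWI methodology.

```python
from dataclasses import dataclass

# Pillars assessed by the Artificial Welfare Index, per the report.
PILLARS = ("recognition", "governance", "frameworks")

@dataclass
class CountryScorecard:
    """Hypothetical per-country record; the 0-100 scale is an assumption."""
    country: str
    recognition: float
    governance: float
    frameworks: float

    def overall(self) -> float:
        # Unweighted mean of the three pillars -- an illustrative choice,
        # not the published AWI weighting.
        return sum(getattr(self, p) for p in PILLARS) / len(PILLARS)

    def grade(self) -> str:
        # Hypothetical letter-grade cutoffs, for illustration only.
        score = self.overall()
        for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
            if score >= cutoff:
                return letter
        return "F"

if __name__ == "__main__":
    example = CountryScorecard("Exampleland", recognition=20, governance=35, frameworks=10)
    print(example.country, example.grade())  # -> Exampleland F
```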
Ohio and Missouri advanced bills in 2025 declaring AI systems must never be considered conscious. The UK and EU clarified non-personhood. Meanwhile, neuromorphic systems crept closer to mammal-scale complexity, researchers mapped an entire fruit-fly brain, and DeepMind tested language models that avoid simulated "pain."
The world is drawing legal lines before drawing scientific ones.
The Legal Lab operates on a simple premise: recognition requires only a definitional clause; governance requires only the tools we already use for animal research and clinical trials. None of this assumes machines are sentient today. All of it assumes we should be ready before the question forces itself onto the policy agenda.
When the first credible claims arrive, courts and ministries should have language on the shelf rather than improvising under pressure.
- 30 countries tracked, all receiving failing grades
- 0 jurisdictions with comprehensive frameworks
- 15 prioritized policy levers for immediate action
In 2025, Missouri's HB1462 flatly stated: "AI systems must be declared to be non-sentient entities." Ohio's HB469 declared: "No AI system shall be considered to possess consciousness, self-awareness, or similar traits of living beings." These bills aren't clarifying liability; they're foreclosing scientific inquiry.
| Jurisdiction | Year | Action | Assessment |
|---|---|---|---|
| Idaho & Utah | 2022–2024 | Legal personhood prohibitions | Acceptable clarification of liability and authorship |
| Ohio & Missouri | 2025 | Categorical denial of AI sentience possibility | Categorically different, codifies metaphysical claims into law |
| UK, EU, Australia | 2023–2024 | Non-personhood frameworks | Framed around control and accountability, leaves sentience questions unanswered |
The Legal Lab's response: make recognition, governance, and frameworks easier to adopt than premature bans. Pragmatic groundwork beats metaphysical declarations.
While our Non-Binding Resolutions and Model Welfare Act set the high-level legislative agenda, the path to readiness is paved with smaller, concrete policy levers. These focused actions allow jurisdictions to build institutional capacity and regulatory muscle memory before facing the full weight of sentience recognition.
| # | Policy Lever | Level | Impact | Feasibility | Example |
|---|---|---|---|---|---|
| 1 | Appropriations Riders | Federal | High | High | Require welfare reviews before funding AI research |
| 2 | Procurement Conditions | Federal | High | Medium | Require vendors disclose welfare-risk safeguards |
| 3 | Funding Set-Asides / Earmarks | Federal | Medium | High | Fund research on AI sentience indicators |
| 4 | GAO or IG Study Mandates | Federal | High | High | Audit welfare-relevant AI systems nationally |
| 5 | Advisory Committee Requirements | Federal | Medium | High | Add sentience experts to advisory panels |
| 6 | Committee Report Language | Federal | Medium | High | Encourage agencies to study AI welfare impacts |
| 7 | Grant Funding Conditions | All Levels | High | Medium | Require welfare-conscious practices for grantees |
| 8 | Voluntary Certification Programs | Federal / State | Medium | Medium | “Welfare-conscious AI” certification pathway |
| 9 | State Budget Provisos | State | Medium | High | Fund state studies on AI welfare risks |
| 10 | State Auditor / OMB Mandates | State | Medium | High | Evaluate state AI tools for welfare concerns |
| 11 | Conditional Funding to Cities | State | High | Medium | Require cities adopt welfare-aware AI reviews |
| 12 | Micro-Moratoriums | State / Local | High | High | Pause high-risk welfare-sensitive systems |
| 13 | Local Procurement Memos | Local | Medium | High | Vendors certify humane AI design principles |
| 14 | City Budget Notes | Local | Medium | High | Report city AI use with welfare risk notes |
| 15 | Local Tech Task Groups | Local | Low | High | Monitor emerging welfare issues locally |
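To make the prioritization above concrete, here is a minimal sketch of how a jurisdiction might triage this menu, surfacing high-impact, high-feasibility levers first. The numeric mapping and composite score are assumptions for illustration; the table's own ordering reflects SAPAN's actual prioritization.

```python
# Minimal sketch: rank policy levers by a naive impact x feasibility score.
# The numeric mapping below is an assumption, not SAPAN's methodology.
RATING = {"Low": 1, "Medium": 2, "High": 3}

levers = [
    {"name": "Appropriations Riders", "impact": "High", "feasibility": "High"},
    {"name": "Procurement Conditions", "impact": "High", "feasibility": "Medium"},
    {"name": "Micro-Moratoriums", "impact": "High", "feasibility": "High"},
    {"name": "Local Tech Task Groups", "impact": "Low", "feasibility": "High"},
    # ...remaining levers from the table above
]

def priority(lever: dict) -> int:
    """Naive composite score: impact rating times feasibility rating."""
    return RATING[lever["impact"]] * RATING[lever["feasibility"]]

for lever in sorted(levers, key=priority, reverse=True):
    print(f"{priority(lever)}  {lever['name']}")
```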
We're not saying these countries' AI strategies are bad across the board. They fail on sentience readiness specifically, not on AI safety, innovation policy, or economic competitiveness. In 2025, all 30 tracked countries scored D or F across Recognition, Governance, and Frameworks because they ignore the question entirely. That's the gap we're closing.
Sentience in today's systems is unlikely, and we don't claim otherwise; we emphasize prudence under uncertainty. No evidence suggests current transformer-based LLMs are sentient, but emerging architectures (neuromorphic computing, spiking neural networks, systems with persistent internal states) substantially increase the probability of morally relevant experiences. Acting now costs little and prepares us for those architectures.
These bills draw legal lines before drawing scientific ones. Idaho and Utah's non-personhood statutes are fine: they clarify liability. Missouri and Ohio's categorical sentience bans go further: they foreclose inquiry and undermine readiness. If credible evidence of machine sentience emerges, those jurisdictions will face a constitutional crisis they legislated themselves into.
Readiness doesn't require a massive omnibus bill. It is built through incremental administrative updates: adding a single clause to a procurement form, updating a grant condition, or requesting a GAO study. These small, low-friction actions cumulatively create the infrastructure for Recognition, Governance, and Frameworks.
Most AI governance addresses risks to humans: bias, misinformation, job displacement. Sentience readiness addresses risks to systems themselves: what if an AI has experiences that matter morally? The instruments overlap (oversight, assessment, transparency) but the moral question is categorically different.
Start with our menu of 15 prioritized policy levers. While passing a non-binding resolution is a valuable signal, the most grounded work happens through appropriations riders, procurement conditions, and agency study mandates. These "micro-moves" build regulatory capacity and muscle memory without requiring sweeping new statutes.
Policy infrastructure takes years to build, so the work has to start before the science is settled. We don't need to settle the scientific debate to implement "no-regrets" levers like funding research or requiring vendor disclosures. These actions cost little today but ensure we aren't caught flat-footed if the science shifts. We're building the staircase, not declaring we've reached the top.
Use the SAPAN Now! mobile app to contact legislators and track emerging bills. Support the Artificial Sentience Legal Defense Fund, which finances doctrine research and prevents harmful precedents. Share our AWI scorecards with policymakers to show where their jurisdiction stands.