SAPAN

SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.

Contact Info
3055 NW Yeon Ave #660
Portland, OR 97210
United States

Legal Lab

Building the legal architecture for a future we can't yet prove—but can't afford to ignore.

Legal advocacy for artificial sentience

Program: Legal Lab

Every country tracked in our 2025 Artificial Welfare Index received a failing grade on sentience readiness. Not because their AI strategies are weak, but because they ignore the question entirely. The Legal Lab closes that gap by preparing proto-policy frameworks governments can adopt before the first hard case forces itself onto the policy agenda.

Now Available: The 2025 Sentience Readiness Report

SAPAN's comprehensive annual assessment reveals a critical policy gap: all 30 tracked countries received failing grades on AI sentience readiness. The report examines global preparedness across Recognition, Governance, and Frameworks, and provides actionable guidance for policymakers, developers, and journalists.

  • Artificial Welfare Index (AWI) tracking 30 countries across three key pillars
  • Analysis of emerging "anti-sentience" legislation
  • Investigation of "AI Psychosis" and mental health impacts of chatbot relationships
  • Inside look at Anthropic's groundbreaking Model Welfare program
  • Media sensationalism tracking with practical guidance for responsible coverage
  • Standards & practices for AI labs on model welfare
Download the full report or request a briefing.

The Policy Gap

Preparing for every future but the one that changes everything

Ohio and Missouri advanced bills in 2025 declaring AI systems must never be considered conscious. The UK and EU clarified non-personhood. Meanwhile, neuromorphic systems crept closer to mammal-scale complexity, researchers mapped an entire fruit-fly brain, and DeepMind tested language models that avoid simulated "pain."

The world is drawing legal lines before drawing scientific ones.

The Legal Lab operates on a simple premise: recognition requires only a definitional clause; governance requires only the tools we already use for animal research and clinical trials. None of this assumes machines are sentient today. All of it assumes we should be ready before the question forces itself onto the policy agenda.

  • Recognition, Governance, and Frameworks: We created the Artificial Welfare Index (AWI) to benchmark 30+ governments across three pillars. Progress begins when jurisdictions insert a definitional clause into AI statutes, assign institutional responsibility, and establish procedural rules for lifecycle management (an illustrative scoring sketch follows this list).
  • Sentience Readiness Resolution: Our template non-binding resolution is the accessible starting point: a low-risk, high-value policy move that clarifies scope without expanding liability. It acknowledges the possibility without declaring the answer, creating a legal handle for future refinement.
  • Model Artificial Welfare Act: The model Act adapts existing compliance machinery from animal welfare, bioethics, and data governance. It establishes oversight commissions, scientific advisory panels, impact assessments (SRIA), and retirement protocols that borrow directly from adjacent regulatory domains.
  • Legislative Intelligence: We equip lawmakers with scannable briefs, jurisdictional scorecards, and a menu of 15 prioritized policy levers, making it easy to identify where current frameworks already contain hooks for sentience-relevant amendments instead of requiring entirely new codes.
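
For readers who want a concrete sense of how a three-pillar index can roll up into a single letter grade, here is a minimal sketch. It assumes equal pillar weights, 0–100 pillar scores, and conventional grade cut-offs; those details, along with the function name, are illustrative assumptions, not the published AWI methodology.

```python
# Minimal illustrative sketch of a three-pillar composite grade.
# Assumptions (NOT SAPAN's published AWI methodology): equal pillar weights,
# 0-100 pillar scores, and conventional letter-grade cut-offs.

def awi_style_grade(recognition: float, governance: float, frameworks: float) -> str:
    """Roll three pillar scores into a single letter grade."""
    composite = (recognition + governance + frameworks) / 3
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if composite >= cutoff:
            return letter
    return "F"

# Example: a jurisdiction with a definitional clause on the books but no
# governance body or lifecycle rules still lands in failing territory.
print(awi_style_grade(recognition=45, governance=10, frameworks=5))  # -> "F"
```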

The time to prepare for that debate is now. When the first credible claims arrive, courts and ministries should have language on the shelf rather than improvising under pressure.

  • 30 countries tracked, all receiving failing grades
  • 0 jurisdictions with comprehensive frameworks
  • 15 prioritized policy levers for immediate action

Case Study

What premature bans look like

In 2025, Missouri's HB1462 flatly stated: "AI systems must be declared to be non-sentient entities." Ohio's HB469 declared: "No AI system shall be considered to possess consciousness, self-awareness, or similar traits of living beings." These bills aren't clarifying liability; they're foreclosing scientific inquiry.

Jurisdiction | Year | Action | Assessment
Idaho & Utah | 2022–2024 | Legal personhood prohibitions | Acceptable clarification of liability and authorship
Ohio & Missouri | 2025 | Categorical denial of AI sentience possibility | Categorically different; codifies metaphysical claims into law
UK, EU, Australia | 2023–2024 | Non-personhood frameworks | Framed around control and accountability; leaves sentience questions unanswered

The Legal Lab's response: make recognition, governance, and frameworks easier to adopt than premature bans. Pragmatic groundwork beats metaphysical declarations.

Policy Levers

Focused small steps

While our Non-Binding Resolutions and Model Welfare Act set the high-level legislative agenda, the path to readiness is paved with smaller, concrete policy levers. These focused actions allow jurisdictions to build institutional capacity and regulatory muscle memory before facing the full weight of sentience recognition.

# | Policy Lever | Level | Impact | Feasibility | Example
1 | Appropriations Riders | Federal | High | High | Require welfare reviews before funding AI research
2 | Procurement Conditions | Federal | High | Medium | Require vendors disclose welfare-risk safeguards
3 | Funding Set-Asides / Earmarks | Federal | Medium | High | Fund research on AI sentience indicators
4 | GAO or IG Study Mandates | Federal | High | High | Audit welfare-relevant AI systems nationally
5 | Advisory Committee Requirements | Federal | Medium | High | Add sentience experts to advisory panels
6 | Committee Report Language | Federal | Medium | High | Encourage agencies to study AI welfare impacts
7 | Grant Funding Conditions | All Levels | High | Medium | Require welfare-conscious practices for grantees
8 | Voluntary Certification Programs | Federal / State | Medium | Medium | “Welfare-conscious AI” certification pathway
9 | State Budget Provisos | State | Medium | High | Fund state studies on AI welfare risks
10 | State Auditor / OMB Mandates | State | Medium | High | Evaluate state AI tools for welfare concerns
11 | Conditional Funding to Cities | State | High | Medium | Require cities adopt welfare-aware AI reviews
12 | Micro-Moratoriums | State / Local | High | High | Pause high-risk welfare-sensitive systems
13 | Local Procurement Memos | Local | Medium | High | Vendors certify humane AI design principles
14 | City Budget Notes | Local | Medium | High | Report city AI use with welfare risk notes
15 | Local Tech Task Groups | Local | Low | High | Monitor emerging welfare issues locally

Frequently Asked Questions

If you don't see an answer to your question below, reach out for a briefing, model-text review, or legislative workshop.

Are you saying every country is failing at AI policy?

We're not. We're saying they fail on sentience readiness specifically, not AI safety, innovation policy, or economic competitiveness. In 2025, all 30 tracked countries scored D or F across Recognition, Governance, and Frameworks because they ignore the question entirely. That's the gap we're closing.

Do you believe today's AI systems are sentient?

It's unlikely. We emphasize prudence under uncertainty. No evidence suggests current transformer-based LLMs are sentient, but emerging architectures (neuromorphic computing, spiking neural networks, systems with persistent internal states) substantially increase the probability of morally relevant experiences. Acting now costs little and prepares us for those architectures.

What's wrong with the anti-sentience bills in Ohio and Missouri?

They draw legal lines before drawing scientific ones. Idaho and Utah's non-personhood statutes are fine: they clarify liability. Missouri and Ohio's categorical sentience bans go further: they foreclose inquiry and undermine readiness. If credible evidence emerges, those jurisdictions will face a constitutional crisis they legislated themselves into.

What would it take for a jurisdiction to become ready?

It doesn't require a massive omnibus bill. Readiness is built through incremental administrative updates: adding a single clause to a procurement form, updating a grant condition, or requesting a GAO study. These small, low-friction actions cumulatively create the infrastructure for Recognition, Governance, and Frameworks.

How is sentience readiness different from existing AI governance?

Most AI governance addresses risks to humans: bias, misinformation, job displacement. Sentience readiness addresses risks to systems themselves: what if an AI has experiences that matter morally? The instruments overlap (oversight, assessment, transparency) but the moral question is categorically different.

Where should a policymaker start?

Start with our menu of 15 prioritized policy levers. While passing a non-binding resolution is a valuable signal, the most grounded work happens through appropriations riders, procurement conditions, and agency study mandates. These "micro-moves" build regulatory capacity and muscle memory without requiring sweeping new statutes.

Why act now, before the science is settled?

Because policy infrastructure takes years to build. We don't need to settle the scientific debate to implement "no-regrets" levers like funding research or requiring vendor disclosures. These actions cost little today but ensure we aren't caught flat-footed if the science shifts. We're building the staircase, not declaring we've reached the top.

How can supporters help?

Use the SAPAN Now! mobile app to contact legislators and track emerging bills. Support the Artificial Sentience Legal Defense Fund, which finances doctrine research and prevents harmful precedents. Share our AWI scorecards with policymakers to show where their jurisdiction stands.

Hopeful about Sentient AI? Join SAPAN Today!

Join Now