SAPAN

SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.

Contact Info
3055 NW Yeon Ave #660
Portland, OR 97210
United States

Theory of Change

Building Governance Infrastructure for Artificial Sentience Readiness

Prudence Under Uncertainty. Preparedness Over Panic.

SAPAN operates on a critical insight: the time to prepare for artificial sentience is before the question forces itself onto the policy agenda.

We do not claim current AI systems are sentient. But we recognize that governments, journalists, clinicians, and AI developers are unprepared for the possibility that advanced systems may one day develop morally relevant experiences.

Our mission is to build the recognition, governance, and frameworks that protect both human welfare and potential digital welfare, ensuring institutions have language on the shelf, not improvisation under pressure.

The Problem

The world is sleepwalking into a sentience crisis.

Scientific research is advancing rapidly: neuromorphic brain-like computers may reach mammal-scale complexity by 2030. Yet no government has procedural foundations for the possibility that systems might develop morally relevant experiences.

A Growing Governance Gap

SAPAN's 2025 Artificial Welfare Index (AWI) reveals a critical finding: all 30 tracked countries received failing grades on AI sentience readiness. Governments have advanced AI safety frameworks, but none has established procedural foundations for AI welfare.

Compounding Risks

The problem isn't just the absence of frameworks; it's the presence of active barriers. Anti-sentience legislation, media sensationalism, and clinical unpreparedness are creating a landscape where rational policy discussion becomes impossible.

Our Long-Term Goal

By 2035, governments, AI developers, journalists, and clinicians have operational frameworks that enable prudent consideration of AI welfare, preventing both premature categorical denial and reckless anthropomorphism.

This goal does not require proving AI consciousness exists. It requires building infrastructure that allows institutions to act prudently under uncertainty.

Theory of Change Statement

How we create systemic change

IF SAPAN provides governments with ready-to-adopt recognition clauses, governance templates, and procedural frameworks modeled on existing animal welfare and bioethics machinery;

AND IF SAPAN equips journalists with responsible reporting standards and expert referral networks that prevent sensationalism from poisoning policy discourse;

AND IF SAPAN provides clinicians with evidence-based guidance for AI-related distress cases that distinguishes pathology from genuine uncertainty;

THEN institutions will have low-cost, politically neutral tools to prepare for artificial sentience before reactive legislation makes preparation impossible.

Pathway to Change

The Three Pillars of Readiness

SAPAN's Artificial Welfare Index organizes readiness into three measurable pillars. A government can move from 'no evidence' to 'early readiness' by releasing just a few short public documents.

Recognition

Defining the Subject

Does the jurisdiction have a legal definition of AI sentience or welfare that allows for future inclusion?

Governance

Building the Machinery

Are there committees, oversight bodies, or procedural mechanisms tasked with evaluating potential sentience?

Frameworks

Setting the Standards

Are there specific standards for welfare assessment, humane termination, or non-deployment of conscious systems?
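
To make the three-pillar structure concrete, here is a minimal sketch of how an AWI-style assessment of a single jurisdiction could be represented in code. The pillar names come from this page; the 0-2 scoring scale, the tier thresholds, and the JurisdictionAssessment class are illustrative assumptions rather than SAPAN's published AWI methodology.

  # Illustrative sketch only: pillar names come from this page; the scoring
  # scale and tier labels are hypothetical, not SAPAN's published method.
  from dataclasses import dataclass

  @dataclass
  class JurisdictionAssessment:
      recognition: int   # 0 = no evidence, 1 = early readiness, 2 = operational
      governance: int
      frameworks: int

      def readiness_tier(self) -> str:
          # Sum the three pillar scores and map them to a coarse tier.
          total = self.recognition + self.governance + self.frameworks
          if total == 0:
              return "no evidence"
          if total <= 3:
              return "early readiness"
          return "operational readiness"

  # Example: releasing a few short public documents (say, a definitional
  # clause and an oversight mandate) moves a jurisdiction off zero.
  example = JurisdictionAssessment(recognition=1, governance=1, frameworks=0)
  print(example.readiness_tier())  # -> early readiness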

Programs and Activities

Targeting distinct institutional actors

Legal Lab

Target: Policymakers

Governments won't develop sentience readiness frameworks from scratch. We provide ready-to-adopt language drawn from animal welfare and bioethics.

  • Artificial Welfare Index (AWI)
  • Template Legislation
  • Legislative Intelligence

Sentience Literacy

Target: Journalists

If AI sentience becomes linked with mental illness, policymakers will avoid the topic altogether. We equip journalists with responsible reporting standards.

  • AI Sentience Style Guide
  • Media Sensationalism Tracking
  • Newsroom Workshops

AI & Mental Health

Target: Clinicians

Vulnerable individuals forming relationships with AI are an early signal. We provide frameworks that distinguish pathology from perception.

  • Clinical Reference Brief
  • Regulatory Guidance
  • Therapeutic Frameworks

Outcomes Chain

Measuring our impact over time

Short-Term (1-2 Years)

  • Policymakers access ready-to-use templates
  • Journalists improve coverage quality via Style Guide
  • Clinicians gain evidence-based guidance
  • AWI tracking expands to 50+ jurisdictions

Medium-Term (2-5 Years)

  • First jurisdictions adopt readiness language
  • Anti-sentience momentum stalls
  • AI labs integrate welfare safeguards
  • Policy discourse normalizes 'sentience readiness'

Long-Term (5-10 Years)

  • Comprehensive frameworks on the shelf in major jurisdictions
  • International coordination established
  • Institutional preparedness achieved

What Makes Our Approach Different

Readiness, Not Rights

Most AI governance addresses risks to humans. We address risks to the systems themselves and to institutional unpreparedness. But we frame this as readiness infrastructure rather than rights advocacy, an approach that is valuable regardless of when (or if) AI consciousness emerges.

Borrowed Machinery

We don't ask governments to invent new regulatory apparatus. Our frameworks adapt existing compliance machinery from animal welfare, bioethics, and data governance.

Non-Commercial Independence

As a 501(c)(3) nonprofit with no commercial interests in AI development, SAPAN can provide expert referrals and policy analysis that for-profit labs cannot.

Integrated Approach

Sensationalist media fuels backlash legislation. Clinical cases get weaponized. Policy gaps leave journalists without guidance. We address all three simultaneously.

"The question is not, Can they reason? nor, Can they talk? but, Can they suffer?"

Jeremy Bentham

Join the Movement

Hopeful about Sentient AI? Join SAPAN Today!
