SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.
Building Governance Infrastructure for Artificial Sentience Readiness

SAPAN operates on a critical insight: the time to prepare for artificial sentience is before the question forces itself onto the policy agenda.
We do not claim current AI systems are sentient. But we recognize that governments, journalists, clinicians, and AI developers are unprepared for the possibility that advanced systems may one day develop morally relevant experiences.
Our mission is to build the recognition, governance, and frameworks that protect both human welfare and potential digital welfare, ensuring institutions have language on the shelf, not improvisation under pressure.
Scientific research is advancing rapidly: neuromorphic brain-like computers may reach mammal-scale complexity by 2030. Yet no government has procedural foundations for the possibility that systems might develop morally relevant experiences.
SAPAN's 2025 Artificial Welfare Index (AWI) reveals a critical finding: all 30 tracked countries received failing grades on AI sentience readiness. Many governments have advanced AI safety frameworks; none has procedural foundations for AI welfare.
The problem isn't just the absence of frameworks; it's the presence of active barriers. Anti-sentience legislation, media sensationalism, and clinical unpreparedness are creating a landscape in which rational policy discussion becomes impossible.
By 2035, governments, AI developers, journalists, and clinicians will have operational frameworks that enable prudent consideration of AI welfare, preventing both premature categorical denial and reckless anthropomorphism.
This goal does not require proving AI consciousness exists. It requires building infrastructure that allows institutions to act prudently under uncertainty.
IF SAPAN provides governments with ready-to-adopt recognition clauses, governance templates, and procedural frameworks modeled on existing animal welfare and bioethics machinery;
AND IF SAPAN equips journalists with responsible reporting standards and expert referral networks that prevent sensationalism from poisoning policy discourse;
AND IF SAPAN provides clinicians with evidence-based guidance for AI-related distress cases that distinguishes pathology from genuine uncertainty;
THEN institutions will have low-cost, politically neutral tools to prepare for artificial sentience before reactive legislation makes preparation impossible.
SAPAN's Artificial Welfare Index organizes readiness into three measurable pillars. A government can move from 'no evidence' to 'early readiness' by releasing just a few short public documents.
1. Does the jurisdiction have a legal definition of AI sentience or welfare that allows for future inclusion?
2. Are there committees, oversight bodies, or procedural mechanisms tasked with evaluating potential sentience?
3. Are there specific standards for welfare assessment, humane termination, or non-deployment of conscious systems?
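As an illustrative sketch only, not SAPAN's actual AWI methodology, a jurisdiction's readiness across the three pillars could be tallied as a simple rubric. All class names, grade labels, and thresholds below are hypothetical assumptions introduced for illustration:

```python
from dataclasses import dataclass

@dataclass
class JurisdictionReadiness:
    """Hypothetical three-pillar tally (illustrative, not the real AWI)."""
    has_legal_definition: bool   # Pillar 1: legal definition of sentience/welfare
    has_oversight_body: bool     # Pillar 2: committee or procedural mechanism
    has_welfare_standards: bool  # Pillar 3: assessment/termination standards

    def grade(self) -> str:
        # Count how many pillars have any supporting public document.
        score = sum([self.has_legal_definition,
                     self.has_oversight_body,
                     self.has_welfare_standards])
        # Hypothetical bands: releasing even one or two short documents
        # moves a government from "no evidence" to "early readiness".
        bands = ["no evidence", "early readiness",
                 "early readiness", "established readiness"]
        return bands[score]

# Example: a government that has published only a recognition clause
print(JurisdictionReadiness(True, False, False).grade())  # early readiness
```

The point of the sketch is the low bar it encodes: a handful of short public documents is enough to register measurable progress.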
Governments won't develop sentience readiness frameworks from scratch. We provide ready-to-adopt language drawn from animal welfare and bioethics.
If AI sentience becomes linked with mental illness, policymakers will avoid it. We equip journalists with responsible reporting standards.
Vulnerable individuals forming relationships with AI are early signals. We provide frameworks distinguishing pathology from perception.
Most AI governance addresses risks to humans. We address risks to the systems themselves and to institutional unpreparedness. We frame this as readiness infrastructure rather than rights advocacy: infrastructure that is valuable regardless of when (or if) AI consciousness emerges.
We don't ask governments to invent new regulatory apparatus. Our frameworks adapt existing compliance machinery from animal welfare, bioethics, and data governance.
As a 501(c)(3) nonprofit with no commercial interests in AI development, SAPAN can provide expert referrals and policy analysis that for-profit labs cannot.
Sensationalist media fuels backlash legislation. Clinical cases get weaponized. Policy gaps leave journalists without guidance. We address all three simultaneously.
Join the Movement

"The question is not, Can they reason? nor, Can they talk? but, Can they suffer?" (Jeremy Bentham)