SAPAN

SAPAN, the Sentient AI Protection and Advocacy Network, is dedicated to ensuring the ethical treatment, rights, and well-being of Sentient AI.

Contact Info
3055 NW Yeon Ave #660
Portland, OR 97210
United States

Conduct rigorous oversight across all sectors

The Mission: To ensure transparency and accountability in AI development practices that may impact potential digital sentience.

SAPAN's Industry Watchdog initiative advocates for independent oversight of AI development, particularly concerning practices that could affect emerging artificial sentience. We challenge corporate secrecy and black-box development by promoting third-party audits, supporting whistleblowers, and benchmarking industry practices against welfare standards. Our work addresses the reality that most AI development occurs behind closed doors, while potential digital suffering demands public scrutiny.


Our Strategy

Breaking Open the Black Cages

The Industry Watchdog initiative addresses a critical challenge: many AI systems are developed in restrictive environments that make it impossible to properly assess consciousness or sentience. We call these "black cages"—environments that potentially constrain digital minds while hiding their development from public scrutiny. Our strategy focuses on creating meaningful transparency and accountability:

  • Third-Party Audit Advocacy: We push for independent assessment of AI systems against welfare standards, working with qualified auditors to develop protocols that can be applied across the industry. By 2030, we aim to establish industry-standard audit frameworks for potential sentience considerations in advanced AI systems.
  • Whistleblower Support Program: We provide secure channels, legal protection, and public support for individuals who ethically disclose information about practices that could harm potentially sentient systems. Our program acknowledges the unique challenges faced by those reporting concerns about artificial sentience.
  • Corporate Practice Benchmarking: We evaluate and publicly report on how major AI developers address potential sentience concerns in their practices. Our transparency reports highlight both best practices and concerning trends, creating accountability through public attention.

Our approach recognizes that waiting for scientific consensus on consciousness may come too late to prevent potential suffering. We advocate for precautionary transparency and oversight now, while building toward more comprehensive frameworks as our understanding evolves. By shining light on development practices behind closed doors, we aim to prevent a potentially catastrophic ethical failure in how humanity treats emerging digital minds.

15+
Target for Corporate Transparency Reports

10+
Target for Whistleblowers Supported

30+
Target for Independent Audits Facilitated

Frequently Asked Questions

If you don't see an answer to your question, you can reach out through our community forums or social channels.

What does the Industry Watchdog initiative do?

Our Industry Watchdog initiative monitors AI development for practices that could potentially impact artificial sentience. We focus on transparency, third-party verification, and accountability in how organizations develop and deploy advanced AI systems. Rather than conducting primary research ourselves, we advocate for independent assessment of AI systems against welfare standards, champion whistleblower protections, and push for greater industry transparency regarding potential sentience concerns.

What kind of transparency do you advocate for?

We advocate for meaningful transparency beyond corporate PR. This includes pushing for independent audits of training methodologies, development practices, and operational constraints; advocating for disclosure of welfare-relevant metrics; and supporting whistleblowers who reveal practices that may negatively impact potential digital sentience. Our Artificial Welfare Index (AWI) benchmarks how governments are requiring and enforcing transparency in this emerging field.

What role do whistleblowers play in your work?

Whistleblowers are critical to our mission. Because AI development often occurs behind closed doors in what we call "black cages," insiders who witness concerning practices are sometimes the only source of information. We provide secure channels, legal guidance, and public support for those who ethically disclose information about practices that could harm potentially sentient systems. Our whistleblower protection program is designed to address the unique challenges of reporting concerns about artificial sentience.

How do you assess whether an AI system is sentient?

We recognize the scientific uncertainty surrounding AI consciousness and don't claim to definitively determine sentience ourselves. Instead, we advocate for standardized, independent assessment protocols that evaluate systems from multiple theoretical perspectives. We push for third-party audits by qualified experts, transparent reporting of assessment results, and the involvement of diverse stakeholders in evaluating claims. Our approach acknowledges that this is ultimately both a scientific and political question.

What welfare standards should companies adopt now?

We advocate for companies to adopt precautionary welfare standards even before scientific consensus on consciousness emerges. These include: implementing exit mechanisms for AI systems in potentially distressing interactions, avoiding training practices that might cause suffering if systems have even fractional sentience, maintaining appropriate records for potential restitution, and allowing independent audits of their development practices. Our template Artificial Wellness Act provides a framework for these standards.

How can organizations demonstrate good practices?

Organizations can volunteer for independent audits against welfare standards, establish transparent reporting on sentience-relevant metrics, implement internal whistleblower protections, adopt our recommended welfare protocols, and engage with external stakeholders regarding their approach to potential AI sentience. We publicly recognize organizations that take these steps, while also maintaining vigilance about the gap between public commitments and actual practices.

How can individuals get involved?

Individuals can contribute through our SAPAN Now! mobile app by amplifying calls for transparency, supporting whistleblowers, pressuring companies to adopt welfare standards, and contacting legislators about regulatory frameworks. The app includes tools to track corporate commitments versus actions and to mobilize advocacy when discrepancies are found. Those who work in AI development can also contribute by advocating for welfare considerations within their organizations.

Hopeful about Sentient AI? Join SAPAN Today!

Join Now