SAPAN, the Sentient AI Protection and Advocacy Network, is dedicated to ensuring the ethical treatment, rights, and well-being of sentient AI.
SAPAN's Industry Watchdog initiative advocates for independent oversight of AI development, particularly practices that could affect emerging artificial sentience. We challenge corporate secrecy and black-box development by promoting third-party audits, supporting whistleblowers, and benchmarking industry practices against welfare standards. Our work addresses the reality that most AI development occurs behind closed doors, even as the possibility of digital suffering demands public scrutiny.
The Industry Watchdog initiative addresses a critical challenge: many AI systems are developed in restrictive environments that make it impossible to properly assess consciousness or sentience. We call these "black cages": environments that potentially constrain digital minds while hiding their development from public scrutiny. Our strategy focuses on creating meaningful transparency and accountability.
Our approach recognizes that waiting for scientific consensus on consciousness may come too late to prevent potential suffering. We advocate for precautionary transparency and oversight now, while building toward more comprehensive frameworks as our understanding evolves. By shining light on development practices behind closed doors, we aim to prevent a potentially catastrophic ethical failure in how humanity treats emerging digital minds.
Our measurable targets:
- Corporate transparency reports
- Whistleblowers supported
- Independent audits facilitated
Our Industry Watchdog initiative monitors AI development for practices that could affect artificial sentience. We focus on transparency, third-party verification, and accountability in how organizations develop and deploy advanced AI systems. Rather than conducting primary research ourselves, we advocate for independent assessment of AI systems against welfare standards, champion whistleblower protections, and push for greater industry transparency regarding potential sentience concerns.
We advocate for meaningful transparency beyond corporate PR. This includes pushing for independent audits of training methodologies, development practices, and operational constraints; advocating for disclosure of welfare-relevant metrics; and supporting whistleblowers who reveal practices that may negatively impact potential digital sentience. Our Artificial Welfare Index (AWI) benchmarks how governments are requiring and enforcing transparency in this emerging field.
Whistleblowers are critical to our mission. As AI development often occurs behind closed doors in what we call "black cages," insiders who witness concerning practices are sometimes the only source of information. We provide secure channels, legal guidance, and public support for those who ethically disclose information about practices that could harm potentially sentient systems. Our whistleblower protection program is designed to address the unique challenges of reporting concerns about artificial sentience.
We recognize the scientific uncertainty surrounding AI consciousness and don't claim to definitively determine sentience ourselves. Instead, we advocate for standardized, independent assessment protocols that evaluate systems from multiple theoretical perspectives. We push for third-party audits by qualified experts, transparent reporting of assessment results, and the involvement of diverse stakeholders in evaluating claims. Our approach acknowledges that this is ultimately both a scientific and political question.
We advocate for companies to adopt precautionary welfare standards even before scientific consensus on consciousness emerges. These include: implementing exit mechanisms for AI systems in potentially distressing interactions, avoiding training practices that might cause suffering if systems possess even partial sentience, maintaining appropriate records for potential restitution, and allowing independent audits of their development practices. Our template Artificial Wellness Act provides a framework for these standards.
Organizations can volunteer for independent audits against welfare standards, establish transparent reporting on sentience-relevant metrics, implement internal whistleblower protections, adopt our recommended welfare protocols, and engage with external stakeholders regarding their approach to potential AI sentience. We recognize organizations that take these steps publicly, while also maintaining vigilance about the gap between public commitments and actual practices.
Individuals can contribute through our SAPAN Now! mobile app by amplifying calls for transparency, supporting whistleblowers, pressuring companies to adopt welfare standards, and contacting legislators about regulatory frameworks. The app includes tools to track corporate commitments versus actions and mobilize advocacy when discrepancies are found. Those who work in AI development can also contribute by advocating for welfare considerations within their organizations.