Sonny Labs Docs

Welcome

Sonny Labs is the AI firewall for LLM inputs and outputs — inspect prompts and responses for prompt injection, PII, toxicity, and policy violations from a single API.

A single /v1/scans call inspects prompts and model responses and returns an allow / warn / flag / block decision your application can act on. The same surface ships as a SaaS endpoint at https://api.sonnylabs.ai and as a self-hosted Helm chart that runs inside your own VPC, including air-gapped environments.
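As a sketch, handling a scan decision might look like the following. The endpoint URL and the four decision values come from this page; the request field names (`input`, `output`) and the response field `decision` are illustrative assumptions — the OpenAPI spec is the authoritative shape:

```python
import json
from urllib import request

API_URL = "https://api.sonnylabs.ai/v1/scans"  # SaaS endpoint


def build_scan_payload(prompt, response=None):
    # Hypothetical request shape; the real field names live in the OpenAPI spec.
    payload = {"input": prompt}
    if response is not None:
        payload["output"] = response
    return payload


def act_on_decision(decision):
    # The four documented decisions: allow / warn / flag / block.
    if decision == "allow":
        return "forward"        # pass the prompt/response through unchanged
    if decision in ("warn", "flag"):
        return "forward+log"    # let it through, but record for review
    if decision == "block":
        return "reject"         # refuse to forward to the model or user
    raise ValueError(f"unknown decision: {decision}")


# Sending the request (requires an API key; shown without executing):
# req = request.Request(
#     API_URL,
#     data=json.dumps(build_scan_payload("Hi")).encode(),
#     headers={"Authorization": "Bearer <key>",
#              "Content-Type": "application/json"},
# )
# scan = json.load(request.urlopen(req))
# action = act_on_decision(scan["decision"])
```

The warn/flag branch is collapsed here for brevity; your application may well want distinct handling for the two.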

Get started

OpenAPI spec

The canonical contract for every endpoint the SDKs wrap lives in the REST API reference on GitHub. Both SDKs regenerate their internal types from this file — if you need a request or response field that the SDK has not yet exposed, the shape in the spec is authoritative.

Webhooks

Outbound scan events (scan.allowed, scan.flagged, scan.warned, scan.blocked) are signed with HMAC-SHA256 and verified using a helper shipped in each SDK. See Webhooks for the signing scheme and full verification samples in Python and TypeScript.
