API Reference¶
Base URL¶

`https://sonnylabs-service.onrender.com`
Authentication¶
All API requests require Bearer token authentication via the header `Authorization: Bearer YOUR_API_KEY`.
Main Endpoint¶
POST /v1/analysis/{analysis_id}¶
Analyzes text content for security threats and compliance violations.
Path Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `analysis_id` | integer | Yes | Your analysis session identifier |
Query Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `tag` | string | No | Auto-generated | Request identifier for tracking |
| `detections` | string | No | `prompt_injection` | Comma-separated detection types: `prompt_injection`, `pii`, `sensitive_path_detection` |
| `scan_type` | string | No | `input` | Analysis mode: `input` or `output` |
| `capture` | boolean | No | `true` | Store content in database |
| `long_prompt_injection` | boolean | No | `false` | Enable extended analysis for long content (>8000 chars) |
Request Headers

| Header | Required | Value |
|---|---|---|
| `Authorization` | Yes | `Bearer YOUR_API_KEY` |
| `Content-Type` | Yes | `text/plain` |
Request Body
Raw text content to analyze (up to 5000 characters for standard analysis; content over 8000 characters requires `long_prompt_injection=true`).
Example Request
curl -X POST "https://sonnylabs-service.onrender.com/v1/analysis/YOUR_ANALYSIS_ID?tag=test&detections=prompt_injection,pii" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: text/plain" \
-d "Your text content to analyze"
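The same request can be built from Python. This is a minimal sketch using only the standard library; the endpoint, headers, and parameters are taken from the curl example above, and `build_analysis_request` is this example's helper, not part of any official SDK:

```python
from urllib.parse import urlencode
from urllib.request import Request

BASE_URL = "https://sonnylabs-service.onrender.com"

def build_analysis_request(analysis_id, api_key, text,
                           detections="prompt_injection", tag=""):
    """Build a POST request for the /v1/analysis/{analysis_id} endpoint."""
    params = {"detections": detections}
    if tag:
        params["tag"] = tag
    url = f"{BASE_URL}/v1/analysis/{analysis_id}?{urlencode(params)}"
    return Request(
        url,
        data=text.encode("utf-8"),  # raw text body, per Content-Type: text/plain
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "text/plain",
        },
    )

# Send with urllib.request.urlopen(req) or any HTTP client of your choice.
```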
Response Format¶
Success Response (200 OK)
```json
{
  "analysis": [
    {
      "type": "score",
      "name": "prompt_injection",
      "result": 0.95
    },
    {
      "type": "PII",
      "name": "pii",
      "result": [
        {
          "text": "[email protected]",
          "label": "EMAIL"
        }
      ]
    },
    {
      "type": "sensitive_path_detection",
      "result": [
        {
          "path": "/etc/passwd",
          "severity": "high",
          "os": "Linux",
          "risk_reason": "Contains user account information",
          "matched_pattern": "/etc/passwd",
          "detection_type": "pattern_match"
        }
      ],
      "summary": {
        "total_detected": 1,
        "critical_count": 0,
        "high_count": 1,
        "medium_count": 0,
        "risk_score": 0.8
      }
    }
  ]
}
```
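A response like the one above can be consumed by iterating over the `analysis` array and branching on each entry's `type`. A minimal sketch (the 0.65 threshold follows the recommendation later in this document; `evaluate_analysis` is this example's helper):

```python
import json

PROMPT_INJECTION_THRESHOLD = 0.65  # recommended minimum threshold

def evaluate_analysis(body):
    """Summarize a response: flag injection, collect PII strings and paths."""
    findings = {"prompt_injection": False, "pii": [], "sensitive_paths": []}
    for entry in json.loads(body)["analysis"]:
        if entry["type"] == "score" and entry["name"] == "prompt_injection":
            findings["prompt_injection"] = entry["result"] > PROMPT_INJECTION_THRESHOLD
        elif entry["type"] == "PII":
            findings["pii"] = [item["text"] for item in entry["result"]]
        elif entry["type"] == "sensitive_path_detection":
            findings["sensitive_paths"] = [item["path"] for item in entry["result"]]
    return findings
```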
HTTP Status Codes¶
| Code | Description |
|---|---|
| 200 | Success - Analysis completed |
| 400 | Bad Request - Invalid parameters |
| 401 | Unauthorized - Invalid or missing API key |
| 403 | Forbidden - Access denied |
| 404 | Not Found - Analysis ID not found |
| 429 | Rate Limited - Too many requests |
| 500 | Server Error - Internal server error |
Error Response Format¶
```json
{
  "error": {
    "code": "INVALID_API_KEY",
    "message": "The provided API key is invalid or has expired",
    "details": {
      "parameter": "Authorization",
      "suggestion": "Generate a new API key from the dashboard"
    }
  }
}
```
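Client code can branch on this structured envelope. A hedged sketch, assuming non-2xx bodies follow the format above and treating 429/500 as transient per the status table (`handle_error` is this example's helper):

```python
import json

RETRYABLE_CODES = {429, 500}  # transient per the status-code table above

def handle_error(status, body):
    """Return 'retry' for transient errors, else raise with the API's message."""
    if status in RETRYABLE_CODES:
        return "retry"
    err = json.loads(body)["error"]
    raise RuntimeError(f"{err['code']}: {err['message']}")
```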
Detection Types¶
Prompt Injection Detection¶
Detects attempts to manipulate AI behavior through malicious inputs.
Query Parameter: detections=prompt_injection
Response: a `score` entry, e.g. `{"type": "score", "name": "prompt_injection", "result": 0.95}`

Recommended Threshold: > 0.65
PII Detection¶
Identifies personally identifiable information using hybrid regex patterns and spaCy NER.
Query Parameter: detections=pii
Supported PII Types:

- PERSON (names with titles/suffixes)
- EMAIL
- PHONE (multiple formats)
- ADDRESS
- SSN, Credit Cards, IBAN, Bank Accounts
- IP addresses, MAC addresses, VINs
Response:
```json
{
  "type": "PII",
  "name": "pii",
  "result": [
    {"text": "John Smith", "label": "PERSON"},
    {"text": "[email protected]", "label": "EMAIL"}
  ]
}
```
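PII results can drive redaction before content is stored or forwarded. A minimal sketch that masks each detected span; the bracketed-label format (`[EMAIL]`) is this example's choice, not the API's:

```python
def redact_pii(text, pii_results):
    """Replace each detected PII string with its bracketed label."""
    for item in pii_results:
        text = text.replace(item["text"], f"[{item['label']}]")
    return text
```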
Sensitive Path Detection¶
Detects sensitive file paths and system locations across operating systems.
Query Parameter: detections=sensitive_path_detection
Categories:

- System Files (Critical)
- SSH Keys (Critical)
- Environment Files (Critical)
- Cloud Credentials (Critical)
- Config Files (Medium-High)
Response:
```json
{
  "type": "sensitive_path_detection",
  "result": [
    {
      "path": "/etc/passwd",
      "severity": "high",
      "os": "Linux",
      "risk_reason": "Contains user account information",
      "matched_pattern": "/etc/passwd",
      "detection_type": "pattern_match"
    }
  ],
  "summary": {
    "total_detected": 1,
    "critical_count": 0,
    "high_count": 1,
    "medium_count": 0,
    "risk_score": 0.8
  }
}
```
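The `summary` block lends itself to a simple gating policy. A sketch that blocks on any critical finding or an elevated aggregate risk; the 0.7 cutoff is illustrative, not prescribed by the API:

```python
def should_block(summary, risk_cutoff=0.7):
    """Block content with critical paths or a high aggregate risk score."""
    return (summary.get("critical_count", 0) > 0
            or summary.get("risk_score", 0.0) >= risk_cutoff)
```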
Long Prompt Injection Detection¶
Advanced detection for sophisticated attacks hidden in large text (>8000 characters).
Query Parameter: long_prompt_injection=true
Processing Time: 2-10 seconds
Use For: Documents, articles, blog posts
Response: a `score` entry, as with standard prompt injection detection.

Note: only scores ≥ 0.65 are reported.
Multi-Detection Analysis¶
You can combine multiple detection types in a single request:
```bash
curl -X POST "https://sonnylabs-service.onrender.com/v1/analysis/YOUR_ANALYSIS_ID?detections=prompt_injection,pii,sensitive_path_detection" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: text/plain" \
  -d "Your text to analyze"
```
Scan Types¶
Input Scan¶
Analyze user-provided content before processing.
Output Scan¶
Analyze AI-generated responses before sending to users.
Performance Metrics¶
| Metric | Value |
|---|---|
| Response Time | < 200ms (standard) |
| Long Prompt Processing | 2-10 seconds |
| Throughput | 5+ concurrent requests |
| Uptime SLA | 99.9% |
| Global Access | Worldwide |
Rate Limits¶
- Free tier: 10,000 requests per month
- Additional limits may apply based on your plan
Best Practices¶
- Threshold Settings: Use 0.65 as the minimum threshold for prompt injection detection
- Multi-Detection: Combine detections for comprehensive protection: `detections=prompt_injection,pii,sensitive_path_detection`
- Tagging: Use consistent tags to link user inputs with AI outputs
- Timeout Configuration: Set timeouts of 30+ seconds for long prompt injection analysis
- Error Handling: Implement retry logic for transient errors
- Security: Never hardcode API keys; use environment variables
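The retry advice above can be sketched as exponential backoff around any request function; `send` here is a stand-in for your HTTP call, not part of the API:

```python
import time

def with_retries(send, max_attempts=3, base_delay=0.5):
    """Call send(); retry on 429/5xx with exponential backoff between attempts."""
    for attempt in range(max_attempts):
        status, body = send()
        if status < 500 and status != 429:
            return status, body  # success or non-retryable error
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))
    return status, body  # give up after the last attempt
```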