Rakshak
Output guard
Scan model responses before users see them. Redact PII and suppress unsafe or policy-violating output.
Python SDK
result = client.sanitize(
    llm_response,
    allowed_domains=["yourcompany.com"],
)
if result.blocked:
    return "[Response blocked]"
return result.safe_text
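The bare return statements above assume the snippet lives inside a handler. A minimal sketch of that context, assuming client is an already-configured Rakshak SDK client (its construction is not shown here); only sanitize(), result.blocked, and result.safe_text are taken from the example.

def guard_output(client, llm_response: str) -> str:
    # Scan the raw model output before it reaches the user.
    result = client.sanitize(
        llm_response,
        allowed_domains=["yourcompany.com"],
    )
    if result.blocked:
        # Suppress the whole response rather than show unsafe content.
        return "[Response blocked]"
    # safe_text is assumed to carry the redacted, user-safe text.
    return result.safe_text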
REST API
POST /v1/scan/output
X-API-Key: rsk_your_key
Content-Type: application/json
{
"response": "Your Aadhaar number is 1234 5678 9012.",
"allowed_domains": ["yourcompany.com"],
"allowed_emails": ["support@yourcompany.com"]
}Response fields
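The same request can be issued from any HTTP client. A minimal Python sketch using the requests library; the base URL is a placeholder, while the path, headers, and body fields match the example above.

import requests

BASE_URL = "https://api.example.com"  # placeholder: substitute your Rakshak host

# Reproduce the example request against POST /v1/scan/output.
resp = requests.post(
    f"{BASE_URL}/v1/scan/output",
    headers={
        "X-API-Key": "rsk_your_key",
        "Content-Type": "application/json",
    },
    json={
        "response": "Your Aadhaar number is 1234 5678 9012.",
        "allowed_domains": ["yourcompany.com"],
        "allowed_emails": ["support@yourcompany.com"],
    },
    timeout=10,
)
resp.raise_for_status()
scan = resp.json()  # parsed response fields, described below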
verdict"pass" | "redact" | "block"Use the response as-is, use the sanitized response, or suppress it.sanitized_responsestring | nullSafe text after redaction. Null only for blocked responses.redactionsobject[]Each redaction includes type, original, and replacement.threatsstring[]Threat codes such as pii_email, pii_aadhaar, system_leak, or harmful_content.confidencenumberClassifier confidence from 0.0 to 1.0.language_detectedstringDetected language or script code.