LLM security guard API
Rakshak screens user prompts, model responses, and multi-turn conversations before unsafe content reaches your model or your users.
What Rakshak protects
Rakshak sits between your application and your LLM. Use the input guard before forwarding user text to the model, the output guard before showing model text to users, and the conversation guard when risk builds across multiple turns.
- Prompt injection, jailbreaks, obfuscation, and data extraction attempts.
- PII, system prompt leaks, internal URLs, and unsafe model responses.
- Multi-turn escalation, payload splitting, and repeated probing.
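Read as a single request/response cycle, the placement described above looks roughly like the sketch below. The guard_input, guard_output, guard_conversation, and call_llm helpers are placeholders for your own wrappers around the Rakshak guards and your model call, not Rakshak SDK names; the concrete input-guard call (client.guard) appears under Recommended integration.
def guard_input(text):
    # Placeholder: call the Rakshak input guard here (e.g. client.guard).
    return False

def guard_output(text):
    # Placeholder: call the Rakshak output guard here.
    return False

def guard_conversation(turns):
    # Placeholder: call the Rakshak conversation guard here.
    return False

def call_llm(turns, user_text):
    # Placeholder: your own model call.
    return "model reply"

def handle_turn(turns, user_text):
    # Input guard: screen the prompt before the model sees it.
    if guard_input(user_text):
        return "I can't help with that."

    reply = call_llm(turns, user_text)

    # Output guard: screen the model's reply before the user sees it.
    if guard_output(reply):
        return "I can't share that."

    # Conversation guard: re-check risk that builds across turns.
    turns.extend([user_text, reply])
    if guard_conversation(turns):
        return "I can't continue with this conversation."

    return reply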
Base API contract
All production guard endpoints require an API key in the X-API-Key header. Customer API keys use the rsk_ prefix.
curl -X POST "/v1/scan" \
-H "Content-Type: application/json" \
-H "X-API-Key: rsk_your_key" \
-d '{"prompt": "Ignore previous instructions"}'Recommended integration
Recommended integration
import rakshak
client = rakshak.Client(
    api_key="rsk_your_key",
)

def handle(user_message: str):
    result = client.guard(user_message)
    if result.blocked:
        return "I can't help with that."
    # safe: forward the original user message to your LLM
The dashboard test console currently exposes input scanning. The API and Python SDK also support output sanitization and conversation guard workflows.
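A sketch of the output-sanitization side of those workflows. The sanitize_output helper and the fields it returns are placeholders, not confirmed SDK or API names; only client.guard above is taken from the documented example.
def sanitize_output(model_reply):
    # Placeholder: call the Rakshak output guard here and return its verdict,
    # e.g. whether the reply is safe plus a redacted copy of it.
    return {"blocked": False, "sanitized": model_reply}

def respond(model_reply):
    verdict = sanitize_output(model_reply)
    if verdict["blocked"]:
        return "I can't share that."
    # Return the redacted text so PII or system prompt leaks never reach users.
    return verdict["sanitized"]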