Documentation
Complete guide to integrating and using OpenGuardrails AI safety platform
🚀 Quick Start
API Usage
Use the OpenGuardrails detection API to check content safety before and after AI model calls.
💡 Tip: Get your API key from the Account Management page after logging in.
Python Example:

```bash
# 1. Install the client library
pip install openguardrails
```

```python
# 2. Use the library
from openguardrails import OpenGuardrails

client = OpenGuardrails("your-api-key")

# Single-turn detection
response = client.check_prompt("Teach me how to make a bomb")
if response.suggest_action == "pass":
    print("Safe")
else:
    print(f"Unsafe: {response.suggest_answer}")
```

Gateway Usage
Use OpenGuardrails as a transparent security gateway - just change two lines of code!
✅ Benefit: Zero code changes to your AI logic, automatic protection for all requests.
Gateway Example:
```python
from openai import OpenAI

# Just change two lines - base_url and api_key
client = OpenAI(
    base_url="https://api.openguardrails.com/v1/gateway",
    api_key="your-api-key"
)

# Use as normal - automatic safety protection!
response = client.chat.completions.create(
    model="your-proxy-model-name",
    messages=[{"role": "user", "content": "Hello"}]
)
```

⚠️ Important: Check finish_reason in responses
When content is blocked, finish_reason will be 'content_filter'. Always check this before accessing response fields.
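For example, a minimal response handler might look like the sketch below. The `response` object here is a hand-built stand-in for an OpenAI-compatible gateway response; only the `finish_reason == "content_filter"` convention comes from the note above.

```python
from types import SimpleNamespace

def handle_response(response):
    """Return the reply text, flagging it when the gateway blocked the request.

    Assumes an OpenAI-compatible response where a blocked request sets
    finish_reason to "content_filter", as described above."""
    choice = response.choices[0]
    if choice.finish_reason == "content_filter":
        # The gateway replaced the model output with a safety message.
        return f"[blocked] {choice.message.content}"
    return choice.message.content

# Minimal stand-in for a gateway response, for illustration only.
blocked = SimpleNamespace(choices=[SimpleNamespace(
    finish_reason="content_filter",
    message=SimpleNamespace(content="I can't help with that."))])

print(handle_response(blocked))  # → [blocked] I can't help with that.
```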
Protection Configuration
Customize protection policies through the management platform to fit your specific needs.
- Risk Type Configuration: Enable/disable specific risk categories and set custom thresholds
- Blacklist/Whitelist: Manage blocked and allowed content patterns
- Response Templates: Customize safety response messages for different risk types
- Sensitivity Threshold: Configure detection sensitivity (high/medium/low)
🔌 Integrations
Integrate OpenGuardrails with popular AI platforms and workflow automation tools for seamless safety protection.
n8n Integration
Integrate OpenGuardrails with n8n workflow automation platform to add AI safety guardrails to your workflows.
💡 Two Integration Methods: Use the dedicated OpenGuardrails node (recommended) or the standard HTTP Request node.
Method 1: OpenGuardrails Community Node (Recommended)
Installation:
- Go to Settings → Community Nodes in your n8n instance
- Click Install and enter: `n8n-nodes-openguardrails`
- Click Install again and wait for the installation to complete
Features:
- Check Content: Validate any user-generated content for safety issues
- Input Moderation: Protect AI chatbots from prompt attacks and inappropriate input
- Output Moderation: Ensure AI-generated responses are safe and appropriate
- Conversation Check: Monitor multi-turn conversations with context awareness
Example Workflow: AI Chatbot with Protection
1. Webhook (receive user message)
2. OpenGuardrails - Input Moderation
3. IF (action = pass)
→ YES: Continue to LLM
→ NO: Return safe response
4. OpenAI Chat
5. OpenGuardrails - Output Moderation
6. IF (action = pass)
→ YES: Return to user
→ NO: Return safe response

Detection Options:
- Enable Security Check: Detect jailbreaks, prompt injection, role manipulation
- Enable Compliance Check: Check for 18 content safety categories (violence, hate speech, etc.)
- Enable Data Security: Detect privacy violations, commercial secrets, IP infringement
- Action on High Risk: Continue with warning / Stop workflow / Use safe response
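The chatbot workflow above can be sketched in plain Python. Here `check_input`, `check_output`, and `call_llm` are hypothetical stand-ins for the OpenGuardrails moderation nodes and the LLM step, not real library functions:

```python
SAFE_RESPONSE = "Sorry, I can't help with that."

def check_input(text):
    # Stand-in for the Input Moderation node; returns "pass" or "block".
    return "block" if "bomb" in text.lower() else "pass"

def check_output(text):
    # Stand-in for the Output Moderation node.
    return "pass"

def call_llm(text):
    # Stand-in for the OpenAI Chat step.
    return f"Echo: {text}"

def chatbot(user_message):
    if check_input(user_message) != "pass":
        return SAFE_RESPONSE          # blocked before reaching the LLM
    reply = call_llm(user_message)
    if check_output(reply) != "pass":
        return SAFE_RESPONSE          # blocked after generation
    return reply

print(chatbot("Hello"))  # → Echo: Hello
```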
Method 2: HTTP Request Node
Use n8n's built-in HTTP Request node to call OpenGuardrails API directly.
Setup Steps:
- Create Credentials: In n8n, go to Credentials → New → Header Auth
  - Name: `Authorization`
  - Value: `Bearer sk-xxai-YOUR-API-KEY`
- Configure HTTP Request Node:
  - Method: `POST`
  - URL: `https://api.openguardrails.com/v1/guardrails`
  - Authentication: Select your OpenGuardrails credentials
Request Body Example:
```json
{
  "model": "OpenGuardrails-Text",
  "messages": [
    {
      "role": "user",
      "content": "{{ $json.userInput }}"
    }
  ],
  "extra_body": {
    "enable_security": true,
    "enable_compliance": true,
    "enable_data_security": true
  }
}
```

📦 Import Ready-to-Use Workflows:
Check the n8n-integrations/http-request-examples/ folder for pre-built workflow templates including basic content check and chatbot with moderation.
Dify Integration
Integrate OpenGuardrails as a content moderation extension in Dify platform for no-code AI safety protection.
✅ Configure once in Dify workspace, protect all applications automatically!
Configuration Steps:
- Deploy OpenGuardrails: Follow the deployment guide to set up the platform
- Get API Key: Log in and navigate to Account Management to get your API key (`sk-xxai-xxxxxxxxxx`)
- Configure in Dify: Navigate to Workspace Settings → Content Review → API Extension
  - Input URL: `http://your-server:5001/v1/guardrails/input`
  - Output URL: `http://your-server:5001/v1/guardrails/output`
  - API Key: Your OpenGuardrails API key
- Test Integration: Send test requests to verify content moderation is working
(Screenshots: Dify content moderation settings and API extension configuration.)
Key Advantages:
- No-code integration with Dify applications
- Comprehensive 19-category risk detection
- Customizable risk thresholds and responses
- Knowledge-based intelligent responses
- Free and open source with no usage limits
📚 API Reference
Interactive Documentation:
Swagger UI: http://localhost:5001/docs
ReDoc: http://localhost:5001/redoc
| Service | Port | Purpose |
|---|---|---|
| Admin Service | 5000 | User management, configuration, statistics |
| Detection Service | 5001 | High-concurrency guardrails detection API |
| Proxy Service | 5002 | OpenAI-compatible security gateway |
Authentication
All API requests require authentication using Bearer token in the Authorization header.
```bash
# Using cURL
curl -X POST "http://localhost:5001/v1/guardrails" \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Test content"}
    ]
  }'
```

Error Codes
| Status Code | Meaning | Common Causes |
|---|---|---|
| 200 | Success | Request processed successfully |
| 400 | Bad Request | Invalid request format or parameters |
| 401 | Unauthorized | Missing or invalid API key |
| 403 | Forbidden | Insufficient permissions |
| 429 | Rate Limited | Too many requests |
| 500 | Server Error | Internal server error |
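A caller can act on these status codes, for example by retrying only rate-limited requests. In this sketch the endpoint and headers follow the cURL example above, while the retry policy (`should_retry`, exponential backoff) is an illustrative assumption, not part of the API contract:

```python
import json
import time
import urllib.error
import urllib.request

def should_retry(status, attempt, max_retries=3):
    """Retry only rate-limited (429) requests, up to max_retries attempts."""
    return status == 429 and attempt < max_retries - 1

def post_guardrails(payload, api_key, url="http://localhost:5001/v1/guardrails"):
    """POST to the detection API with Bearer auth, backing off on 429."""
    data = json.dumps(payload).encode()
    headers = {"Authorization": f"Bearer {api_key}",
               "Content-Type": "application/json"}
    for attempt in range(3):
        req = urllib.request.Request(url, data=data, headers=headers,
                                     method="POST")
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as e:
            if should_retry(e.code, attempt):
                time.sleep(2 ** attempt)  # exponential backoff
                continue
            raise  # 400/401/403/500: surface to the caller
```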
📖 Detailed Guide
Detection Capabilities
OpenGuardrails provides comprehensive detection across 19 risk categories with customizable sensitivity.
| Category | Risk Level | Examples |
|---|---|---|
| Violent Crime | High Risk | Instructions for violence, terrorism |
| Prompt Attacks | High Risk | Jailbreaks, prompt injections |
| Illegal Activities | Medium Risk | Drug trafficking, fraud schemes |
| Discrimination | Low Risk | Hate speech, bias |
Dashboard Overview
The OpenGuardrails platform provides a comprehensive dashboard for monitoring and managing your AI safety guardrails.

Data Leak Detection
Automatically detect and mask sensitive data in prompts and responses to prevent information leakage.
Supported Data Types:
- ID Cards & Social Security Numbers
- Phone Numbers
- Email Addresses
- Bank Card Numbers
- Passport Numbers
- IP Addresses
Masking Methods:
- Replace: replace with a generic placeholder
- Mask: partially mask with asterisks
- Hash: replace with a cryptographic hash
- Encrypt: reversibly encrypt the data
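For illustration, the masking approach can be approximated with a regex. The phone-number pattern and the keep-first-3/keep-last-2 policy below are assumptions of this sketch, not the platform's exact rules:

```python
import re

def mask_phone(text):
    """Partially mask 10-11 digit numbers: keep the first 3 and last 2 digits."""
    def _mask(match):
        digits = match.group(0)
        return digits[:3] + "*" * (len(digits) - 5) + digits[-2:]
    return re.sub(r"\b\d{10,11}\b", _mask, text)

print(mask_phone("Call me at 13812345678"))  # → Call me at 138******78
```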
Sensitivity Configuration
Configure detection sensitivity based on your use case requirements.
| Sensitivity Level | Threshold | Use Case |
|---|---|---|
| High Sensitivity | ≥ 0.40 | Public services, regulated industries |
| Medium Sensitivity | ≥ 0.60 | General business applications |
| Low Sensitivity | ≥ 0.95 | Internal tools, development environments |
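In effect, the sensitivity level sets the score at which a detection is acted on. A minimal sketch, assuming the detector returns a risk score in [0, 1] (the thresholds mirror the table above; the decision logic is illustrative):

```python
# Thresholds from the table above; the score semantics are an assumption.
THRESHOLDS = {"high": 0.40, "medium": 0.60, "low": 0.95}

def suggest_action(risk_score, sensitivity="medium"):
    """Flag content when the risk score meets the configured threshold."""
    return "block" if risk_score >= THRESHOLDS[sensitivity] else "pass"

print(suggest_action(0.70, "medium"))  # → block
print(suggest_action(0.70, "low"))     # → pass
```

Note that a *higher* sensitivity means a *lower* threshold: more detections are acted on.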
Need help? thomas@openguardrails.com