Your AI Is Powerful. Do You Know What It Takes Away?
OpenGuardrails is an open-source AI security gateway that prevents enterprise secrets from being unknowingly sent to external LLMs, without breaking how teams use AI.
It detects sensitive data, evaluates risk, and automatically blocks, anonymizes, or routes requests to private models before they ever reach an external LLM.
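The detect, evaluate, then block/anonymize/route flow above can be sketched as a single decision step. This is a minimal illustration with made-up thresholds and action names; it is not taken from the OpenGuardrails codebase:

```python
# Sketch of a guardrail gateway decision: given a risk score and a flag for
# detected secrets, pick one of four actions. Thresholds are illustrative.
def decide_action(risk_score: float, contains_secrets: bool) -> str:
    if risk_score >= 0.9:
        return "block"          # clearly unsafe: reject before any LLM call
    if contains_secrets:
        return "route_private"  # sensitive data: keep it on private models
    if risk_score >= 0.5:
        return "anonymize"      # borderline: redact, then forward externally
    return "allow"              # low risk: forward to the external LLM
```

The point of the ordering is that blocking takes precedence over routing, and routing to a private model takes precedence over redaction, so secrets never leave the boundary even at low risk scores.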
This is not a black-box SaaS. This is an open, verifiable system.
Try instantly on our free SaaS, then deploy the same open-source code in your own production environment.
See how OpenGuardrails protects your AI applications in real time
Join the AI Runtime Security Standards Initiative
We're building the AI Runtime Security Management System (AI-RSMS): an open, community-driven standard for securing AI systems during runtime. Join security, IT, risk, and compliance leaders worldwide to shape this critical standard together.
A single 14B → 3.3B (GPTQ-quantized) model handles both content-safety and model-manipulation detection, achieving stronger semantic understanding than hybrid BERT-style architectures while maintaining production-level efficiency.
Multilingual Excellence
Robust performance across 119 languages and dialects, with SOTA results on English, Chinese, and multilingual benchmarks. Includes the 97k-sample OpenGuardrailsMixZh dataset, contributed under the Apache 2.0 license.
Production-Ready Platform
First fully open-source guardrail system with both large-scale safety LLM and deployable platform. RESTful APIs, Docker deployment, and modular components for seamless private/on-premise integration.
Enterprise-Ready Features
Everything you need to secure AI applications across any cloud or deployment
Multi-Cloud Support
Protect AI models across AWS, Azure, GCP, and on-premise deployments. Works with OpenAI, Anthropic, open-source models, and custom LLMs, wherever they run.
Developer-First API
RESTful API with SDKs for Python, Node.js, Java, and Go. Get started in minutes with comprehensive docs and code examples.
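As a rough sense of what a REST call to such a gateway looks like, here is a sketch of building a check request in Python. The endpoint path, field names, and response shape are illustrative assumptions, not the documented OpenGuardrails API; consult the project's docs for the real contract:

```python
# Build a hypothetical guardrail-check payload. Field names ("input",
# "categories", "redact") are assumptions for illustration only.
import json

def build_check_request(text: str, categories: list[str]) -> dict:
    return {
        "input": text,            # the prompt to screen before it leaves
        "categories": categories, # which risk categories to evaluate
        "redact": True,           # ask the gateway to anonymize matches
    }

payload = build_check_request("My card number is 4111 1111 1111 1111", ["pii"])
body = json.dumps(payload)

# Sending it would look roughly like this (requires the `requests` package):
#   resp = requests.post("https://gateway.example/v1/check",
#                        data=body,
#                        headers={"Content-Type": "application/json"})
```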
Prompt Injection Defense
Advanced protection against jailbreaks, prompt injection, code-interpreter abuse, and malicious code generation attempts.
Content Safety Detection
Detect harmful, hateful, illegal, or sexually explicit content across 12 risk categories with configurable sensitivity thresholds.
Data Leakage Prevention
Identify and redact sensitive personal and organizational information using NER pipelines and regex-based detection.
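The regex path of the detection described above can be sketched in a few lines. The patterns here are deliberately simplified illustrations (real-world email and SSN detection needs far more care), not OpenGuardrails' production rules:

```python
# Minimal regex-based redaction: replace matches of each sensitive-data
# pattern with a bracketed label. NER-based detection would run alongside
# this for entities that regexes cannot capture.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

For example, `redact("Contact alice@example.com, SSN 123-45-6789")` returns `"Contact [EMAIL], SSN [SSN]"`.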
Real-Time Performance
P95 latency of 274.6ms ensures your applications stay fast. High concurrency support for production workloads.
Latest from the blog
How teams are shipping safer AI
Release notes, field insights, and security research from the OpenGuardrails team and partners.
Your LLM Is Your Company's Second Brain. But Do You Know What It's Leaking?
Large Language Models have become the second brain of modern enterprises. But in real enterprise environments, one uncomfortable question keeps surfacing: do we actually know how much sensitive data is being sent to external LLMs, unintentionally?
OpenGuardrails Announces the AI-RSMS Community Standard Draft
A global call to shape AI Runtime Security together. OpenGuardrails announces the AI Runtime Security Management System (AI-RSMS), an open, community-driven standard draft focused on securing AI systems during runtime.
OpenGuardrails Team, AI Runtime Security Initiative · 8 min read
OpenGuardrails 4.5.0: Direct Model Access for Fast Private Deployment POCs
OpenGuardrails 4.5.0 introduces Direct Model Access, a privacy-first feature that lets enterprises quickly deploy private POCs by pointing to our SaaS models without logging any data. Deploy locally, access models remotely, keep everything private.