🚀 -> Project on GitHub <-

CrawlLama Logo CrawlLama

Python Version Platform Platform License Status Ask DeepWiki

📚 Documentation 🚀 Quickstart 🔌 API Guide 🤖 Adaptive Hops 🔒 Security 📝 Changelog

๐ŸŒ Project Website

Production-Ready AI Research Agent with OSINT & Multi-Hop Reasoning

<div align="left">

Current Version: 1.4.6 – Security Fixes

</div>

๐Ÿค Contribute to CrawlLama!

We welcome your ideas, bug reports, and feature requests!

Contribute Badge

📖 Table of Contents

A fully local, production-ready AI system with advanced intelligence features:

✨ Features

🎯 Core Features

🔍 OSINT Features

🔒 Security & Performance


🆕 Release Highlights v1.4.5 (2025-10-29) (Optional Cloud LLM)

☁️ Cloud LLM & Provider-Based Config:

🆕 Release Highlights v1.4.4 (2025-10-28)

🤖 Adaptive Agent Hopping System

🆕 Release Highlights v1.4.3 (2025-10-27)

🌐 Complete English Translation:

🆕 Release Highlights v1.4.2 (2025-10-26)

Major Changes:

Forget Command Syntax:

forget email:test@example.com        # Delete specific email
forget phone:+491234567890           # Delete phone number
forget ip:192.168.1.1                # Delete IP address
forget username:johndoe              # Delete username
forget category:emails               # Delete all emails
forget category:phones               # Delete all phone numbers
forget all:true                      # Delete entire memory store
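
The command semantics above can be mirrored in a few lines of Python. This is a standalone sketch only: CrawlLama's real memory store persists data to disk, and the class and method names below are illustrative, not taken from the project's source.

```python
# Standalone sketch of the remember/recall/forget semantics shown above.
# The real CrawlLama store persists to disk; names here are illustrative.

class MemoryStore:
    def __init__(self):
        # category -> set of stored values, e.g. "emails" -> {"test@example.com"}
        self.data = {}

    def remember(self, category: str, value: str) -> None:
        self.data.setdefault(category, set()).add(value)

    def recall(self, category: str) -> list:
        return sorted(self.data.get(category, set()))

    def forget(self, category: str = None, value: str = None,
               everything: bool = False) -> None:
        if everything:                          # forget all:true
            self.data.clear()
        elif value is None:                     # forget category:emails
            self.data.pop(category, None)
        else:                                   # forget email:test@example.com
            self.data.get(category, set()).discard(value)

store = MemoryStore()
store.remember("emails", "test@example.com")
store.remember("emails", "user@domain.com")
store.forget("emails", "test@example.com")
```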

Start API Server:

# Windows
run_api.bat

# Linux/macOS
./run_api.sh

# Or manually
python app.py

Then open in browser: http://localhost:8000/docs

๐Ÿฅ Health Monitoring Dashboard

The integrated health module offers a unified dashboard with two modes:

Usage:

# Windows
health-dashboard.bat

# Linux/macOS
./health-dashboard.sh

# Directly with Python (Interactive Menu)
python health-dashboard.py

# Directly to Live Monitor
python health-dashboard.py --monitor

# Directly to Test Dashboard
python health-dashboard.py --tests
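
The flag handling above (`--monitor`, `--tests`, or an interactive menu when neither is given) can be sketched with `argparse`. This is a hedged illustration of the dispatch logic; the actual `health-dashboard.py` may implement it differently.

```python
# Sketch of the mode dispatch implied by the CLI flags above;
# not the actual health-dashboard.py implementation.
import argparse

def parse_mode(argv: list) -> str:
    parser = argparse.ArgumentParser(prog="health-dashboard.py")
    group = parser.add_mutually_exclusive_group()
    group.add_argument("--monitor", action="store_true", help="Live system monitor")
    group.add_argument("--tests", action="store_true", help="Test dashboard GUI")
    args = parser.parse_args(argv)
    if args.monitor:
        return "monitor"
    if args.tests:
        return "tests"
    return "menu"          # no flag: start the interactive menu
```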

📊 Mode 1: Live System Monitor

Real-time monitoring with rich terminal UI:

🧪 Mode 2: Test Dashboard (GUI)

Tkinter-based GUI for test management:

See: Health Monitoring Guide for details and programmatic usage

OSINT Usage:

# Email intelligence
email:test@example.com

# Phone intelligence
phone:"+49 151 12345678"

# IP intelligence
ip:8.8.8.8

# Batch processing (NEW in v1.4.1!)
email:test@example.com user@domain.com admin@site.com
phone:+491234567890 +441234567890 +331234567890

# Memory Store (NEW in v1.4.2!)
remember email:test@example.com      # Store email
recall emails                        # Retrieve all emails
forget email:test@example.com        # Delete specific email
forget category:emails               # Delete all emails
forget all:true                      # Delete entire memory store

# Advanced search
site:github.com inurl:python filetype:md

# Combined operators
email:john@example.com site:linkedin.com inurl:profile
See: OSINT Usage Guide OSINT Module README
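
Queries like the ones above mix `operator:value` pairs with free-text terms. The following is a hedged sketch of how such a query string might be tokenized; CrawlLama's real parser is more elaborate, and only the operator names come from the examples in this README.

```python
# Hedged sketch: tokenizing operator-prefixed queries such as
# "email:test@example.com site:github.com machine learning".
# Operator names are taken from the examples above; the real parser differs.
import shlex

OPERATORS = {"email", "phone", "ip", "username", "site", "inurl", "filetype", "intext"}

def parse_query(query: str):
    """Split a query into (operator, value) pairs and free-text terms."""
    ops, terms = [], []
    for token in shlex.split(query):    # shlex keeps quoted values together
        key, sep, value = token.partition(":")
        if sep and key in OPERATORS:
            ops.append((key, value))
        else:
            terms.append(token)
    return ops, terms
```

Quoting matters for values with spaces: `phone:"+49 151 12345678"` survives as a single token because `shlex` honors the quotes.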

🔒 Security & Robustness

📸 Images

Health Dashboard - Live System Monitor

Real-time monitoring with rich terminal UI displaying system metrics, component health, and performance tracking.

Health Dashboard

Interactive CLI Interface

CrawlLama's adaptive intelligence system with automatic agent selection and interactive commands.

CLI Interface

Test Dashboard GUI

Tkinter-based test management interface with automatic test detection and real-time progress tracking.

Test Dashboard

🚀 Quickstart

📦 Downloads

Pre-built Releases (recommended for quick start):

| Version | Download | VirusTotal Check |
|---------|----------|------------------|
| v1.4 Preview | Crawllama-1.4-preview.zip | 🔒 VirusTotal Scan |

✅ All downloads are virus-free - VirusTotal scans confirm no malware
📦 Plug & Play - Simply extract and start (Ollama + Python required)

📦 Installation

Windows:

  1. Download Crawllama-1.4-preview.zip
  2. Extract to any folder (e.g., C:\Crawllama)
  3. Install Ollama from ollama.ai/download
  4. Start Ollama and load model:
    ollama serve
    ollama pull qwen3:4b
    
  5. In the Crawllama folder:
    setup.bat
    run.bat
    

Linux/macOS:

  1. Download and extract:
    wget https://github.com/arn-c0de/Crawllama/releases/download/v.1.4_Preview/Crawllama-1.4-preview.zip
    unzip Crawllama-1.4-preview.zip
    cd Crawllama-1.4
    
  2. Install Ollama:
    curl -fsSL https://ollama.ai/install.sh | sh
    ollama serve &
    ollama pull qwen3:4b
    
  3. Setup and start:
    chmod +x setup.sh run.sh
    ./setup.sh
    ./run.sh
    

Windows:

setup.bat

Linux/macOS:

chmod +x setup.sh
./setup.sh

Note: During the initial setup, you must select at least one LLM model. If a model is already installed, you can skip this step; otherwise, selection is required to avoid errors in the test program.

The setup script:

โš ๏ธ Note for initial installation:

When running pip install -r requirements.txt for the first time within the newly created virtual environment, installing all dependenciesโ€”especially packages like torch, sentence-transformers, and scientific librariesโ€”may take 5โ€“10 minutes (or longer, depending on connection and hardware). Please wait until the process completes; afterward, the virtual environment is ready for use.

Note on disk space: After installation (including venv), the project typically requires about 1.2โ€“1.5 GB of free disk space (v1.4: ~1.23 GB). This value may vary significantly depending on the operating system, Python packages (e.g., larger PyTorch/CUDA wheels), and additional models. Plan for ample additional space if storage is limited.

Model download sizes (approximate):

Note: Model sizes vary significantly depending on the provider, format (FP16, INT8 quantization, etc.), and additional assets. Quantized models (e.g., INT8) can significantly reduce size, while FP32/FP16 or models with additional tokenizer/vocab files require more space. Plan for sufficient additional storage if using larger models or multiple models simultaneously.

Option 2: Manual Installation

Prerequisites:

Windows - Step by Step:

# 1. Clone repository
git clone https://github.com/arn-c0de/Crawllama.git
cd Crawllama

# 2. Create virtual environment
python -m venv venv
venv\Scripts\activate

# 3. Install dependencies (takes 5-10 min)
pip install -r requirements.txt

# 4. Create directories
mkdir data\cache data\embeddings data\history logs plugins

# 5. Configuration
copy .env.example .env
notepad .env  # Optional: Add API keys

# 6. Start Ollama (separate terminal)
ollama serve

# 7. Load model (separate terminal)
ollama pull qwen3:4b

# 8. Start Crawllama
python main.py --interactive

Linux/macOS - Step by Step:

# 1. Clone repository
git clone https://github.com/arn-c0de/Crawllama.git
cd Crawllama

# 2. Create virtual environment
python3 -m venv venv
source venv/bin/activate

# 3. Install dependencies (takes 5-10 min)
pip install -r requirements.txt

# 4. Create directories
mkdir -p data/cache data/embeddings data/history logs plugins

# 5. Configuration
cp .env.example .env
nano .env  # Optional: Add API keys

# 6. Install and start Ollama
curl -fsSL https://ollama.ai/install.sh | sh
ollama serve &

# 7. Load model
ollama pull qwen3:4b

# 8. Start Crawllama
python main.py --interactive

Troubleshooting Installation:

| Problem | Solution |
|---------|----------|
| `python` not found | Install Python 3.10+: python.org |
| `pip install` fails | Run `python -m pip install --upgrade pip` |
| `ollama: command not found` | Install Ollama: ollama.ai/download |
| Connection refused (Ollama) | Start Ollama: `ollama serve` |
| `ModuleNotFoundError` | Activate virtual environment: `venv\Scripts\activate` (Win) or `source venv/bin/activate` (Linux) |
| Disk space full | Ensure at least 5 GB free for venv + model |

Option 3: Git Clone (Quick Installation)

# 1. Clone
git clone https://github.com/arn-c0de/Crawllama.git
cd Crawllama

# 2. Virtual Environment
python -m venv venv
source venv/bin/activate  # Linux/macOS
venv\Scripts\activate     # Windows

# 3. Dependencies
pip install -r requirements.txt

# 4. Directories
mkdir -p data/cache data/embeddings data/history logs plugins

# 5. Config
cp .env.example .env

Ollama Setup

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh  # Linux/macOS
# or from https://ollama.ai/download           # Windows

# Start Ollama
ollama serve

# Load model
ollama pull qwen3:4b
# Alternative: deepseek-r1:8b, llama3:7b, mistral

💡 Usage

Note:
The first start may take significantly longer than subsequent starts!
Initialization, dependency installation, and model downloads may take several minutes, depending on hardware and internet connection.
After the first successful start, all subsequent starts are significantly faster.

1. CLI - Interactive Mode

python main.py --interactive

# Or with setup script
run.bat           # Windows
./run.sh          # Linux/macOS
╭──────────────────────────────────────────────────────────────╮
│ CrawlLama - Local Search and Response Agent                  │
│ Commands:                                                    │
│   clear       - Reset session (history + cache)              │
│   clear-cache - Clear cache only                             │
│   save        - Manually save session                        │
│   load        - Reload session                               │
│   stats       - Display statistics                           │
│   status      - Show context usage                           │
│   settings    - Show/edit settings                           │
│   restart     - Restart agent (reload config)                │
│   exit, quit  - Exit                                         │
╰──────────────────────────────────────────────────────────────╯

โฏ What is Machine Learning?

New Commands:

2. Health Monitoring Dashboard

# Windows
health-dashboard.bat

# Linux/macOS
python health-dashboard.py

The dashboard displays:

Interactive commands:

3. How does intelligent search work?

The agent automatically decides when and how to search:

🤖 Automatic Decision

❯ Who is the current German Chancellor?

1. LLM analyzes: "Requires current info" ✓
2. Agent performs web search
3. LLM processes search results
4. Agent delivers up-to-date response
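
The four steps above can be sketched as a small decision loop. This is an illustration of the flow, not CrawlLama's actual code: the `llm` and `search` callables stand in for the Ollama client and the web search provider.

```python
# Illustrative sketch of the decision flow described above, not the real
# implementation: the LLM first classifies whether the query needs fresh
# web data, then the agent either searches and summarizes or answers
# from model knowledge alone.

def needs_web_search(query: str, llm) -> bool:
    verdict = llm("Does this question require current information? "
                  "Answer YES or NO.\n\nQuestion: " + query)
    return verdict.strip().upper().startswith("YES")

def answer(query: str, llm, search):
    if needs_web_search(query, llm):                  # step 1
        results = search(query)                       # step 2
        context = "\n".join(r["snippet"] for r in results)
        return llm("Using these results:\n" + context # steps 3-4
                   + "\n\nAnswer: " + query)
    return llm(query)                                 # offline knowledge only
```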

๐Ÿ” Search Operators for Targeted Searches

OSINT Search Operators:

# Domain-specific search
❯ site:github.com machine learning

# Email Intelligence
❯ email:john.doe@company.com

# Phone Intelligence
❯ phone:"+49 151 12345678"

# IP Intelligence (NEW!)
❯ ip:8.8.8.8
❯ 192.168.1.1  # Auto-detects as IP

# Social Media Intelligence (12 Platforms)
❯ username:elonmusk
❯ @microsoft
❯ github  # Auto-detects as username

# File format search
❯ site:example.com filetype:pdf

# URL filter
❯ inurl:documentation python

# Text in content
❯ intext:"contact email" site:example.com

Combined Searches:

# Multiple operators
❯ site:linkedin.com inurl:profile "software engineer"

# Exclusion with minus
❯ python programming -java

# OR conjunction
❯ site:github.com OR site:gitlab.com "machine learning"

See OSINT Usage Guide for all features.

4. CLI - Direct Queries

# Standard query (agent decides automatically if web search is needed)
python main.py "What is Python?"

# Multi-Hop Reasoning (for complex queries)
python main.py --multihop "Compare Python and JavaScript for web development"

# Offline mode (no web search, only LLM knowledge)
python main.py --no-web "Explain photosynthesis"

# OSINT search with search operators
python main.py "site:github.com python projects"
python main.py "email:contact@example.com"

# With specific model
python main.py --model llama3:7b "What did Einstein discover?"

5. FastAPI Server

# Start server
python app.py

# Or with starter scripts
run_api.bat      # Windows
./run_api.sh     # Linux/macOS

# Or manually
uvicorn app:app --host 0.0.0.0 --port 8000

API Documentation: http://localhost:8000/docs

Available Endpoints:

Query & Reasoning:

Memory Store (CRUD):

Session Management:

Cache:

Configuration:

Plugins & Tools:

System:

🔒 API Security (v1.4.2+):

The API is protected by default with multiple security features:

Setup:

# 1. Set API key in .env
CRAWLLAMA_API_KEY=your_secure_api_key_min_32_chars

# 2. For local development (without API key)
CRAWLLAMA_DEV_MODE=true

# 3. Adjust rate limit (optional)
RATE_LIMIT=100

# 4. Configure CORS origins (optional)
ALLOWED_ORIGINS=http://localhost:3000,http://localhost:8080

Usage with API Key:

# With API key header
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your_api_key_here" \
  -d '{"query": "test"}'

# Or in dev mode (without API key)
export CRAWLLAMA_DEV_MODE=true
python app.py

Example Requests:

# Standard query (agent uses web search automatically if needed)
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{
    "query": "What is Machine Learning?",
    "use_multihop": false
  }'

# Multi-hop query (for complex analyses)
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Compare Python and JavaScript",
    "use_multihop": true,
    "max_hops": 3
  }'

# OSINT search with search operators
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{
    "query": "site:github.com python machine-learning",
    "use_multihop": false
  }'

# Retrieve statistics
curl http://localhost:8000/stats

# List plugins
curl http://localhost:8000/plugins

# Load plugin
curl -X POST http://localhost:8000/plugins/example_plugin/load

📋 CLI Commands & Options

Basic Options

| Option | Description |
|--------|-------------|
| `--interactive` | Interactive mode |
| `--debug` | Enable debug logging |
| `--no-web` | Offline mode (no web search) |
| `--model MODEL` | Choose Ollama model |
| `--stats` | Display system statistics |
| `--clear-cache` | Clear cache |

Advanced Options (v1.1)

| Option | Description |
|--------|-------------|
| `--multihop` | Enable multi-hop reasoning |
| `--max-hops N` | Max reasoning steps (1-5) |
| `--api` | Start API server |
| `--plugins` | List available plugins |
| `--load-plugin NAME` | Load plugin |
| `--help-extended` | Show extended help |
| `--examples` | Show usage examples |
| `--setup-keys` | Securely set up API keys |

Interactive Commands

| Command | Description |
|---------|-------------|
| `exit`, `quit` | Exit program |
| `clear` | Clear screen |
| `stats` | Display statistics |
| `help` | Show help |

🚀 REST API

CrawlLama provides a complete REST API for integration into custom applications.

Start API Server

Windows:

run_api.bat

Linux/macOS:

./run_api.sh

Or manually:

uvicorn app:app --host 0.0.0.0 --port 8000

Quickstart

1. Start API Server

run_api.bat

2. Open API Documentation

3. Send Query

curl -X POST http://localhost:8000/query \
  -H "X-API-Key: your-key" \
  -H "Content-Type: application/json" \
  -d '{"query": "What is Python?", "use_tools": false}'

Key Endpoints

Authentication

Set API key in .env:

CRAWLLAMA_API_KEY=your-secret-key-here

Or for testing:

CRAWLLAMA_DEV_MODE=true

Full Documentation

📖 API Usage Guide - Complete API documentation with examples

🏗️ Project Structure

👉 The complete and up-to-date project structure can be found here: docs/development/PROJECT_STRUCTURE.md

⚙️ Configuration

config.json

{
  "llm": {
    "base_url": "http://127.0.0.1:11434",
    "model": "qwen3:8b",
    "temperature": 0.7,
    "max_tokens": 10000,
    "stream": true
  },
  "search": {
    "provider": "duckduckgo",
    "max_results": 5,
    "timeout": 10
  },
  "rag": {
    "enabled": true,
    "batch_size": 100,
    "max_workers": 4
  },
  "cache": {
    "enabled": true,
    "ttl_hours": 24,
    "max_size_mb": 500,
    "clear_on_startup": false
  },
  "osint": {
    "max_results": 20,
    "email_search_limit": 50,
    "phone_search_limit": 50,
    "general_osint_limit": 100
  },
  "multihop": {
    "enabled": true,
    "max_hops": 3,
    "confidence_threshold": 0.7,
    "enable_critique": true
  },
  "plugins": {
    "example_plugin": {
      "enabled": true
    }
  },
  "security": {
    "rate_limit": 1.0,
    "max_context_length": 8000,
    "check_robots_txt": true
  }
}
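
Reading values out of this file is straightforward. A minimal sketch, assuming `config.json` sits in the project root; the real loader may validate the schema and merge defaults, and `max_hops` here is just an illustrative accessor.

```python
# Minimal sketch of loading config.json as shown above; the real loader
# may validate and merge defaults. Helper names are illustrative.
import json

def load_config(path: str = "config.json") -> dict:
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def max_hops(config: dict) -> int:
    # Fall back to a safe default if the section is missing
    return config.get("multihop", {}).get("max_hops", 3)
```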

Recommended max_tokens Settings:

| GPU/Hardware | Recommended max_tokens | Model |
|--------------|------------------------|-------|
| RTX 3080+ (10GB+) | 10,000 - 16,000 | qwen3:8b, deepseek-r1:8b |
| RTX 3060/3070 (8GB) | 6,000 - 8,000 | qwen3:4b, llama3:7b |
| CPU Only | 2,000 - 4,000 | qwen3:4b |

💡 Tip: Use the status command to monitor your token usage in real-time!

.env (Optional)

# API Keys (optional)
BRAVE_API_KEY=your_brave_api_key
SERPER_API_KEY=your_serper_api_key

# Proxy (optional)
HTTP_PROXY=http://proxy:port
HTTPS_PROXY=https://proxy:port

🧪 Testing

# All tests
pytest tests/ -v

# With coverage
pytest --cov=core --cov=tools --cov=utils tests/

# Specific tests
pytest tests/test_multihop_reasoning.py -v
pytest tests/test_error_simulation.py -v

# With debug output
pytest tests/ -v --log-cli-level=INFO

🔌 Plugin Development

Creating a Simple Plugin

# plugins/my_plugin.py

from core.plugin_manager import Plugin, PluginMetadata

class MyPlugin(Plugin):
    def get_metadata(self) -> PluginMetadata:
        return PluginMetadata(
            name="MyPlugin",
            version="1.0.0",
            description="My custom plugin",
            author="Your Name",
            dependencies=[]
        )

    def get_tools(self):
        return [self.my_tool]

    def my_tool(self, input: str) -> str:
        return f"Processed: {input}"

See: Plugin Tutorial for details

๐Ÿ› ๏ธ Technology Stack

Core

Backend

Utils

Development

📚 Documentation

User Guides

Developer Docs

API Documentation

🌟 Roadmap

Phase 1: Core ✅ (Completed)

Phase 2: Robustness ✅ (Completed)

Phase 3: Intelligence ✅ (Completed - v1.1)

Phase 4: Production ✅ (Completed - v1.1)

Phase 5: Future 📅 (Planned)

๐Ÿค Contributing

Contributions are welcome!

Development Workflow:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Create a pull request

Coding Standards:

📊 Performance

Benchmarks (on i7-8700K, 32GB RAM)

| Operation | Average | Notes |
|-----------|---------|-------|
| Standard Query | 2-5s | Without web search |
| Query with Web Search | 5-10s | 3-5 results |
| Multi-Hop (3 Hops) | 15-30s | Complex |
| RAG Search | <1s | 5 results |
| API Request | <100ms | Without tools |

Resources

Web Scraping

Data Privacy

API Keys

🆘 Troubleshooting

Ollama not reachable

# Check status
curl http://127.0.0.1:11434/api/tags

# Start Ollama
ollama serve
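
The curl check above can also be scripted. A standard-library sketch that queries Ollama's real `/api/tags` endpoint and lists the installed models; the `ollama_status` helper name is illustrative.

```python
# Programmatic version of the curl check above: ping Ollama's /api/tags
# endpoint and report the installed models. Standard library only.
import json
import urllib.error
import urllib.request

def ollama_status(base_url: str = "http://127.0.0.1:11434") -> list:
    """Return the installed model names, or raise if Ollama is unreachable."""
    try:
        with urllib.request.urlopen(base_url + "/api/tags", timeout=5) as resp:
            data = json.loads(resp.read())
    except (urllib.error.URLError, OSError) as exc:
        raise RuntimeError("Ollama not reachable - run 'ollama serve'") from exc
    return [m["name"] for m in data.get("models", [])]

if __name__ == "__main__":
    print(ollama_status())
```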

Import errors

# Reinstall dependencies
pip install -r requirements.txt

# Or re-run setup
./setup.sh  # or setup.bat

ChromaDB errors

# Delete embeddings
rm -rf data/embeddings/

# Restart
python main.py

API rate limits

# Adjust in config.json
"security": {
  "rate_limit": 2.0  # 2 req/s
}
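
What a `rate_limit` of 2.0 means in practice: at most two requests per second, i.e. a minimum interval of 0.5 s between consecutive requests. The following is an illustrative limiter, not CrawlLama's actual implementation.

```python
# Illustrative sketch of what "rate_limit": 2.0 (two requests per second)
# implies: a minimum interval between consecutive requests.
# Not CrawlLama's actual limiter implementation.
import time

class RateLimiter:
    def __init__(self, requests_per_second: float):
        self.min_interval = 1.0 / requests_per_second
        self.last_request = 0.0

    def wait(self) -> None:
        """Sleep just long enough to honor the configured rate."""
        elapsed = time.monotonic() - self.last_request
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_request = time.monotonic()

limiter = RateLimiter(2.0)   # matches "rate_limit": 2.0 in config.json
```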

💬 Support & Community

📝 License

Crawllama License (Non-Commercial) - Free for use and development, but no commercial sale allowed.

✅ Allowed:

❌ Not Allowed:

See LICENSE for full details.

๐Ÿ™ Credits

Built with:

📚 Further Documentation

Last Updated: 2025-10-27