CrawLlama now supports cloud-based LLM providers in addition to local Ollama models.
| Provider | Models | Speed | Cost | API Key Required |
|----------|--------|-------|------|------------------|
| Ollama (default) | Qwen, Llama, Mistral, etc. | Fast (local) | Free | No |
| OpenAI | GPT-3.5, GPT-4, GPT-4-Turbo | Fast | Paid | Yes |
| Anthropic | Claude 3 (Opus, Sonnet, Haiku) | Fast | Paid | Yes |
| Groq | Mixtral, LLaMA 2, Gemma | Very Fast | Free tier available | Yes |
Uncomment the desired providers in `requirements.txt`, or install them directly:

```bash
# For OpenAI
pip install "openai>=1.54.0"

# For Anthropic
pip install "anthropic>=0.40.0"

# For Groq
pip install "groq>=0.15.0"

# Or install all at once
pip install openai anthropic groq
```

(The version specifiers are quoted so the shell does not interpret `>=` as a redirection.)
Copy `.env.example` to `.env` and add your API keys:
```bash
# OpenAI
OPENAI_API_KEY=sk-proj-your_key_here

# Anthropic
ANTHROPIC_API_KEY=sk-ant-your_key_here

# Groq
GROQ_API_KEY=gsk_your_key_here
```
Edit the `llm` section in `config.json`:
```json
{
  "llm": {
    "provider": "openai",
    "model": "gpt-4-turbo",
    "temperature": 0.7,
    "max_tokens": 4096
  }
}
```
Available Models:
- `gpt-3.5-turbo` - Fast, cost-effective
- `gpt-4` - Most capable (slower, expensive)
- `gpt-4-turbo` - Fast GPT-4 variant
- `gpt-4-turbo-preview` - Latest preview

Pricing: See OpenAI Pricing
```json
{
  "llm": {
    "provider": "anthropic",
    "model": "claude-3-sonnet-20240229",
    "temperature": 0.7,
    "max_tokens": 4096
  }
}
```
Available Models:
- `claude-3-opus-20240229` - Most capable (expensive)
- `claude-3-sonnet-20240229` - Balanced (recommended)
- `claude-3-haiku-20240307` - Fast, cost-effective

Pricing: See Anthropic Pricing
```json
{
  "llm": {
    "provider": "groq",
    "model": "mixtral-8x7b-32768",
    "temperature": 0.7,
    "max_tokens": 4096
  }
}
```
Available Models:
- `mixtral-8x7b-32768` - Mixtral 8x7B (32K context)
- `llama2-70b-4096` - LLaMA 2 70B (4K context)
- `gemma-7b-it` - Google Gemma 7B

Pricing: Free tier available! See Groq Console
```json
{
  "llm": {
    "provider": "ollama",
    "base_url": "http://127.0.0.1:11434",
    "model": "qwen2.5:3b",
    "temperature": 0.7,
    "max_tokens": 4096
  }
}
```
No API key required - runs 100% locally.
```python
from core.cloud_llm_client import get_llm_client

# Get OpenAI client
client = get_llm_client("openai", model="gpt-4")
response = client.generate("What is the capital of France?")
print(response)

# Get Anthropic client
client = get_llm_client("anthropic", model="claude-3-sonnet-20240229")
response = client.chat([
    {"role": "user", "content": "Hello, Claude!"}
])
print(response)

# Get Groq client
client = get_llm_client("groq", model="mixtral-8x7b-32768")
response = client.generate("Explain quantum computing")
print(response)
```
```python
from core.cloud_llm_client import OpenAIClient, AnthropicClient, GroqClient

# OpenAI
openai_client = OpenAIClient(
    api_key="sk-proj-...",  # Or from .env
    model="gpt-4",
    temperature=0.7
)

# Anthropic
anthropic_client = AnthropicClient(
    api_key="sk-ant-...",
    model="claude-3-opus-20240229",
    temperature=0.7
)

# Groq
groq_client = GroqClient(
    api_key="gsk_...",
    model="mixtral-8x7b-32768",
    temperature=0.7
)
```
Controls randomness in responses:
```json
{
  "llm": {
    "temperature": 0.0  // Deterministic (0.0) to Creative (2.0)
  }
}
```
```json
{
  "llm": {
    "max_tokens": 4096  // Maximum response length
  }
}
```
Note: Different models have different context windows; see the comparison table below.
Run tests for cloud LLM clients:
```bash
# Run all cloud LLM tests
python -m pytest tests/unit/test_cloud_llm_client.py -v

# Run specific provider tests
python -m pytest tests/unit/test_cloud_llm_client.py::TestOpenAIClient -v
python -m pytest tests/unit/test_cloud_llm_client.py::TestAnthropicClient -v
python -m pytest tests/unit/test_cloud_llm_client.py::TestGroqClient -v
```
Never commit `.env` to Git:

```bash
# .gitignore already includes .env
echo ".env" >> .gitignore
```
Alternatively, export the keys as environment variables:

```bash
export OPENAI_API_KEY="sk-proj-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GROQ_API_KEY="gsk_..."
```
| Provider | Speed | Cost (per 1M tokens) | Context Window |
|----------|-------|----------------------|----------------|
| Ollama | 1-5s (local) | Free | 4K-128K (model dependent) |
| OpenAI GPT-4 | 2-10s | $30 (input) / $60 (output) | 8K-128K |
| Anthropic Claude 3 | 2-8s | $15 (Sonnet) | 200K |
| Groq Mixtral | 0.5-2s | Free tier | 32K |
Recommendation:
```bash
# Check if .env file exists and contains the key
grep API_KEY .env

# Ensure .env is loaded
python -c "from dotenv import load_dotenv; load_dotenv(); import os; print(os.getenv('OPENAI_API_KEY'))"
```
```bash
# Install missing provider library
pip install openai anthropic groq
```
- OpenAI keys start with `sk-proj-...`
- Anthropic keys start with `sk-ant-...`
- Groq keys start with `gsk_...`
- Check key validity in the provider dashboard
Don't wrap keys in quotes in `.env`:

```bash
# BAD
OPENAI_API_KEY="sk-proj-..."

# GOOD
OPENAI_API_KEY=sk-proj-...
```
Found a bug or want to add support for more providers? See CONTRIBUTING.md
Potential future providers: