# Cerewro in local mode: private AI without the cloud
Cerewro can be configured to use local AI models (via Ollama, LM Studio, or Jan) instead of sending data to the cloud. Your conversations, files, and commands never leave your machine.
## Configuring a local model
```yaml
# In Cerewro's configuration file
provider: ollama
model: llama3.1:8b   # note: Ollama's llama3.2 tags are 1b/3b; llama3.1 provides the 8b tag
base_url: http://localhost:11434/v1
```
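With this configuration, Cerewro talks to an OpenAI-compatible endpoint served locally. As a minimal sketch of what such a request looks like (the helper name `build_chat_request` is hypothetical, not part of Cerewro; the endpoint path is Ollama's standard OpenAI-compatible route):

```python
import json

# Hypothetical helper: builds an OpenAI-compatible chat-completion request
# for a local provider such as Ollama. No network call is made here.
def build_chat_request(base_url, model, prompt):
    """Return (url, payload) for a POST to the provider's /v1 endpoint."""
    url = base_url.rstrip("/") + "/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request a single JSON response, not an event stream
    }
    return url, payload

url, payload = build_chat_request(
    "http://localhost:11434/v1", "llama3.1:8b", "Summarize this file."
)
print(url)  # http://localhost:11434/v1/chat/completions
print(json.dumps(payload, indent=2))
```

You could send this payload with any HTTP client (or the official `openai` Python package pointed at the local `base_url`); no API key is required by Ollama.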
## Compatible local providers
| Provider | Default URL | Recommended models |
|---|---|---|
| Ollama | http://localhost:11434 | llama3.2, mistral, phi4 |
| LM Studio | http://localhost:1234 | Any GGUF model |
| Jan | http://localhost:1337 | llama3, gemma2 |
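Since each provider listens on a distinct default port, you can check which one is running with a simple TCP probe. This is an illustrative sketch, not a Cerewro feature; the function names are assumptions:

```python
import socket

# Default ports from the table above.
PROVIDER_PORTS = {"Ollama": 11434, "LM Studio": 1234, "Jan": 1337}

def is_listening(host, port, timeout=0.25):
    """Return True if a TCP server accepts connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

def detect_providers(host="127.0.0.1"):
    """Return the names of providers whose default port is open."""
    return [name for name, port in PROVIDER_PORTS.items()
            if is_listening(host, port)]

print(detect_providers())  # e.g. ["Ollama"] if only Ollama is running
```

An open port only shows that something is listening there; a stricter check would also query the provider's API (for example, Ollama's version endpoint).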
## Advantages of local mode
- Total privacy: no data leaves your machine
- No usage fees: local inference incurs no per-token API costs (model context limits still apply)
- Works offline: no internet connection required
- Full control: use any open-source model