Cerewro can be configured to use local AI models (Ollama, LM Studio, or Jan) instead of sending data to the cloud. Your conversations, files, and commands stay on your machine.
```yaml
# In Cerewro's configuration file
provider: ollama
model: llama3.2:3b
base_url: http://localhost:11434/v1
```
| Provider | Default URL | Recommended models |
|---|---|---|
| Ollama | http://localhost:11434 | llama3.2, mistral, phi4 |
| LM Studio | http://localhost:1234 | Any GGUF model |
| Jan | http://localhost:1337 | llama3, gemma2 |
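All three servers expose an OpenAI-compatible HTTP API under their base URL, which is why the `base_url` above ends in `/v1`. As a rough sketch of what a client like Cerewro sends, the snippet below builds a chat-completion request against Ollama's endpoint (the URL, model tag, and `build_chat_request` helper are illustrative assumptions, not Cerewro internals):

```python
import json
from urllib import request

BASE_URL = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint


def build_chat_request(prompt: str, model: str = "llama3.2:3b") -> request.Request:
    """Build the kind of HTTP request a local-model client sends.

    The payload shape (model + messages) follows the OpenAI chat
    completions format that Ollama, LM Studio, and Jan all accept.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_chat_request("Summarize this file.")
print(req.full_url)
```

With a server running locally, passing `req` to `urllib.request.urlopen` would return a JSON completion; no bytes leave the machine, since the host is `localhost`. Swapping the port (1234 for LM Studio, 1337 for Jan) targets the other providers.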