# Model Configuration
Cody CLI supports multiple AI models via the AINative gateway. You can select a model per session or set a default.
## Available Models

| Model ID | Parameters | Best For | Plan |
|---|---|---|---|
| qwen-coder-32b | 32B | Default; best code quality | Pro+ |
| qwen-coder-7b | 7B | Fast iteration, simple tasks | Free |
| nouscoder-14b | 14B | Balanced code model | Pro+ |
| gemma-9b | 9B | General text and reasoning | Free |
| deepseek-r1-distill-qwen-7b | 7B | Step-by-step reasoning | Free |
## Selecting a Model

### Per-session flag

```shell
cody --model qwen-coder-7b
```

### Interactive model picker

Run `/model` inside an interactive session to switch models on the fly.

### Set a default in settings

Add the following to `~/.cody/settings.json`:

```json
{
  "model": "qwen-coder-32b"
}
```
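If the file does not exist yet, you can create it from the shell. Note this is a sketch that overwrites the file wholesale; if you already have other settings, merge the key by hand instead.

```shell
# Create ~/.cody/settings.json with qwen-coder-32b as the default model.
# Caution: this overwrites the file; merge manually if it already exists.
mkdir -p ~/.cody
cat > ~/.cody/settings.json <<'EOF'
{
  "model": "qwen-coder-32b"
}
EOF
```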
## Context Limits

Each model has a maximum context window. Cody automatically manages context to stay within that limit, summarizing older turns when needed.
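Exact window sizes are not listed here, but a rough character-count heuristic (about four characters per token, an approximation rather than Cody's actual tokenizer) can help you judge whether a large file will fit before pasting it into a session:

```shell
# Rough token estimate for a file (~4 characters per token; an approximation,
# not the model's real tokenizer). Useful for sanity-checking large inputs.
estimate_tokens() {
  local chars
  chars=$(wc -c < "$1")
  echo $(( chars / 4 ))
}
```

For example, `estimate_tokens big_module.py` prints an approximate token count you can compare against the model's window.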
## Tool Support

Not all models support tool use (file read/write, bash, search). Reasoning models (`deepseek-r1-*`) do not support tools; use a `qwen-coder-*` or `nouscoder-*` model for agentic tasks.
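In a script, you can make this constraint explicit by choosing the model from the task type. The helper below is a hypothetical sketch; only the model IDs and the `--model` flag come from this page.

```shell
# Hypothetical helper: pick a tool-capable model for agentic tasks and a
# reasoning model otherwise. Model IDs are taken from the table above.
pick_model() {
  case "$1" in
    agentic)   echo "qwen-coder-32b" ;;              # supports tool use
    reasoning) echo "deepseek-r1-distill-qwen-7b" ;; # no tool support
    *)         echo "qwen-coder-7b" ;;               # fast default
  esac
}

# Usage: cody --model "$(pick_model agentic)"
```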
## Fast Mode

Toggle fast mode with `/fast` in an interactive session, or pass the flag at launch:

```shell
cody --fast
```

Fast mode routes requests to the fast model (`qwen-coder-7b` by default) for quicker responses.