Model providers
To run cagent, you need a model provider. You can either use a cloud provider with an API key or run models locally with Docker Model Runner.
This guide covers cloud providers. For the local alternative, see Local models with Docker Model Runner.
Supported providers
cagent supports these cloud model providers:
- Anthropic - Claude models
- OpenAI - GPT models
- Google - Gemini models
Anthropic
Anthropic provides the Claude family of models, including Claude Sonnet and Claude Opus.
To get an API key:
- Go to console.anthropic.com.
- Sign up or sign in to your account.
- Navigate to the API Keys section.
- Create a new API key.
- Copy the key.
Set your API key as an environment variable:
$ export ANTHROPIC_API_KEY=your_key_here
Use Anthropic models in your agent configuration:
agents:
  root:
    model: anthropic/claude-sonnet-4-5
    instruction: You are a helpful coding assistant

Available models include:

- anthropic/claude-sonnet-4-5
- anthropic/claude-opus-4-5
- anthropic/claude-haiku-4-5
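Once the key is exported, you can point cagent at a configuration file that uses one of these models. As a quick sketch, assuming the configuration above is saved as agent.yaml:

$ cagent run agent.yaml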
OpenAI
OpenAI provides the GPT family of models, including GPT-5 and GPT-5 mini.
To get an API key:
- Go to platform.openai.com/api-keys.
- Sign up or sign in to your account.
- Navigate to the API Keys section.
- Create a new API key.
- Copy the key.
Set your API key as an environment variable:
$ export OPENAI_API_KEY=your_key_here
Use OpenAI models in your agent configuration:
agents:
  root:
    model: openai/gpt-5
    instruction: You are a helpful coding assistant

Available models include:

- openai/gpt-5
- openai/gpt-5-mini
Google Gemini
Google provides the Gemini family of models.
To get an API key:
- Go to aistudio.google.com/apikey.
- Sign in with your Google account.
- Create an API key.
- Copy the key.
Set your API key as an environment variable:
$ export GOOGLE_API_KEY=your_key_here
Use Gemini models in your agent configuration:
agents:
  root:
    model: google/gemini-2.5-flash
    instruction: You are a helpful coding assistant

Available models include:

- google/gemini-2.5-flash
- google/gemini-2.5-pro
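Because the provider is part of the model reference, agents in the same configuration file can use different providers, as long as each provider's API key is set. The sketch below is illustrative: the researcher agent and the sub_agents wiring are assumptions, not something this guide defines; see the configuration reference for the exact multi-agent fields.

agents:
  root:
    model: anthropic/claude-sonnet-4-5
    instruction: You are a helpful coding assistant. Delegate research questions to the researcher.
    sub_agents: ["researcher"] # assumed wiring; check the configuration reference
  researcher:
    model: google/gemini-2.5-flash
    instruction: You are a research assistant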
OpenAI-compatible providers
You can use the openai provider type to connect to any model or provider that
implements the OpenAI API specification. This includes services like Azure
OpenAI, local inference servers, and other compatible endpoints.
Configure an OpenAI-compatible provider by specifying the base URL:
agents:
  root:
    model: openai/your-model-name
    instruction: You are a helpful coding assistant
    provider:
      base_url: https://your-provider.example.com/v1

By default, cagent uses the OPENAI_API_KEY environment variable for
authentication. If your provider uses a different variable, specify it with
token_key:
agents:
  root:
    model: openai/your-model-name
    instruction: You are a helpful coding assistant
    provider:
      base_url: https://your-provider.example.com/v1
      token_key: YOUR_PROVIDER_API_KEY
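The same pattern works for self-hosted inference servers that expose the OpenAI API. In the sketch below, the base URL and model name are placeholders for whatever your local server (for example, vLLM or llama.cpp) actually serves:

agents:
  root:
    model: openai/llama-3.1-8b-instruct # placeholder; use the model name your server reports
    instruction: You are a helpful coding assistant
    provider:
      base_url: http://localhost:8000/v1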
What's next
- Follow the tutorial to build your first agent
- Learn about local models with Docker Model Runner as an alternative to cloud providers
- Review the configuration reference for advanced model settings