Models
Cohere
Leverage Cohere’s powerful command models and more.
Cohere offers a wide range of models and is well suited to fine-tuning. See their library of models here.
We recommend experimenting to find the best-suited model for your use-case. Here are some general recommendations:
- `command` model is good for most basic use-cases.
- `command-light` model is good for smaller tasks and faster inference.
- `command-r7b-12-2024` model is good with RAG tasks, complex reasoning and multi-step tasks.
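Switching between these only changes the `id` you pass when constructing the model. A minimal sketch; the import path below is an assumption, so adjust it to match your installation:

```python
from agno.models.cohere import Cohere  # assumed import path; adjust to your installation

general = Cohere(id="command")                 # most basic use-cases
light = Cohere(id="command-light")             # smaller tasks, faster inference
reasoning = Cohere(id="command-r7b-12-2024")   # RAG, complex reasoning, multi-step tasks
```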
Cohere also supports fine-tuning models. Here is a guide on how to do it.
Cohere has tier-based rate limits. See the docs for more information.
Authentication
Set your `CO_API_KEY` environment variable. Get your key from here.
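The key is picked up from the environment, and you can also pass it explicitly via the `api_key` parameter listed under Params below. A minimal sketch, with an assumed import path:

```python
import os

from agno.models.cohere import Cohere  # assumed import path; adjust to your installation

# Either rely on the CO_API_KEY environment variable, or pass the key explicitly.
model = Cohere(id="command-r-plus", api_key=os.getenv("CO_API_KEY"))
```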
Example
Use `Cohere` with your `Agent`:
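A minimal sketch of an agent backed by a Cohere model. The import paths and the `print_response` helper are assumptions based on a typical setup; adjust them to your installation:

```python
from agno.agent import Agent           # assumed import path
from agno.models.cohere import Cohere  # assumed import path

# Build an agent that uses a Cohere command model for responses.
agent = Agent(
    model=Cohere(id="command-r-plus"),
    markdown=True,
)

# Send a prompt and stream the reply to the terminal.
agent.print_response("Share a 2 sentence horror story.", stream=True)
```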
View more examples here.
Params
Parameter | Type | Default | Description |
---|---|---|---|
id | str | "command-r-plus" | The specific model ID used for generating responses. |
name | str | "cohere" | The name identifier for the agent. |
provider | str | "Cohere" | The provider of the model. |
temperature | Optional[float] | None | The sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. |
max_tokens | Optional[int] | None | The maximum number of tokens to generate in the response. |
top_k | Optional[int] | None | The number of highest probability vocabulary tokens to keep for top-k-filtering. |
top_p | Optional[float] | None | Nucleus sampling parameter. The model considers the results of the tokens with top_p probability mass. |
frequency_penalty | Optional[float] | None | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
presence_penalty | Optional[float] | None | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
request_params | Optional[Dict[str, Any]] | None | Additional parameters to include in the request. |
add_chat_history | bool | False | Whether to add chat history to the Cohere messages instead of using the conversation_id. |
api_key | Optional[str] | None | The API key for authenticating requests to the Cohere service. |
client_params | Optional[Dict[str, Any]] | None | Additional parameters for client configuration. |
cohere_client | Optional[CohereClient] | None | A pre-configured instance of the Cohere client. |
structured_outputs | bool | False | Whether to use structured outputs with this Model. |
supports_structured_outputs | bool | True | Whether the Model supports structured outputs. |
add_images_to_message_content | bool | True | Whether to add images to the message content. |
override_system_role | bool | True | Whether to override the system role. |
system_message_role | str | "system" | The role to map the system message to. |
`Cohere` is a subclass of the `Model` class and has access to the same params.
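For example, the sampling and length controls from the table above can be set directly on the model. A sketch, again with an assumed import path:

```python
from agno.models.cohere import Cohere  # assumed import path; adjust to your installation

model = Cohere(
    id="command-r-plus",
    temperature=0.2,   # lower values give more focused, deterministic output
    max_tokens=512,    # cap the number of tokens generated in the response
    top_p=0.9,         # nucleus sampling over the top_p probability mass
)
```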