Mistral is a platform that provides API endpoints for large language models. See their library of models here.

We recommend experimenting to find the best-suited model for your use-case. Here are some general recommendations:

  • The `codestral` model is good for code generation and editing.
  • The `mistral-large-latest` model is good for most use-cases.
  • `open-mistral-nemo` is a free model that is good for most use-cases.

Mistral has tier-based rate limits. See the docs for more information.

Authentication

Set your `MISTRAL_API_KEY` environment variable. Get your key here.
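
If you prefer not to rely on the environment variable, the model also accepts an explicit `api_key` (see Params below). A minimal sketch; the import path follows Agno's layout and is an assumption, so adjust it to your installed version:

```python
import os

from agno.models.mistral import MistralChat  # import path is an assumption

# Read the key from the environment; you can also pass a literal string.
model = MistralChat(api_key=os.getenv("MISTRAL_API_KEY"))
```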

Example

Use Mistral with your Agent:
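
A minimal sketch; the import paths below follow Agno's layout and are an assumption, so adjust them to your installed version:

```python
from agno.agent import Agent  # assumed import path
from agno.models.mistral import MistralChat  # assumed import path

# Requires MISTRAL_API_KEY to be set in the environment.
agent = Agent(
    model=MistralChat(id="mistral-large-latest"),
    markdown=True,
)

# Print a streamed response in the terminal.
agent.print_response("Share a 2 sentence horror story.", stream=True)
```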

View more examples here.

Params

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `id` | `str` | `"mistral-large-latest"` | The ID of the model. |
| `name` | `str` | `"MistralChat"` | The name of the model. |
| `provider` | `str` | `"Mistral"` | The provider of the model. |
| `temperature` | `Optional[float]` | `None` | Controls randomness in output generation. |
| `max_tokens` | `Optional[int]` | `None` | Maximum number of tokens to generate. |
| `top_p` | `Optional[float]` | `None` | Controls diversity of output generation. |
| `random_seed` | `Optional[int]` | `None` | Seed for random number generation. |
| `safe_mode` | `bool` | `False` | Enables content filtering. |
| `safe_prompt` | `bool` | `False` | Applies content filtering to prompts. |
| `response_format` | `Optional[Union[Dict[str, Any], ChatCompletionResponse]]` | `None` | Specifies the desired response format. |
| `request_params` | `Optional[Dict[str, Any]]` | `None` | Additional request parameters. |
| `api_key` | `Optional[str]` | `None` | Your Mistral API key. |
| `endpoint` | `Optional[str]` | `None` | Custom API endpoint URL. |
| `max_retries` | `Optional[int]` | `None` | Maximum number of API call retries. |
| `timeout` | `Optional[int]` | `None` | Timeout for API calls in seconds. |
| `client_params` | `Optional[Dict[str, Any]]` | `None` | Additional client parameters. |
| `mistral_client` | `Optional[MistralClient]` | `None` | Custom Mistral client instance. |
| `store` | `Optional[bool]` | `None` | Whether or not to store the output of this chat completion request for use in the model distillation or evals products. |
| `frequency_penalty` | `Optional[float]` | `None` | A number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
| `logit_bias` | `Optional[Any]` | `None` | A JSON object that modifies the likelihood of specified tokens appearing in the completion by mapping token IDs to bias values between -100 and 100. |
| `logprobs` | `Optional[bool]` | `None` | Whether to return log probabilities of the output tokens. |
| `presence_penalty` | `Optional[float]` | `None` | A number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
| `stop` | `Optional[Union[str, List[str]]]` | `None` | Up to 4 sequences where the API will stop generating further tokens. |
| `top_logprobs` | `Optional[int]` | `None` | The number of top log probabilities to return for each generated token. |
| `user` | `Optional[str]` | `None` | A unique identifier representing your end-user, helping to monitor and detect abuse. |

`MistralChat` is a subclass of the `Model` class and has access to the same params.
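
For instance, the sampling-related params above can be set at construction time. A sketch using param names from the table (the import path is again an assumption):

```python
from agno.models.mistral import MistralChat  # assumed import path

# Configure sampling behavior via the params documented above.
model = MistralChat(
    id="mistral-large-latest",
    temperature=0.3,   # lower randomness
    max_tokens=512,    # cap the response length
    top_p=0.9,         # nucleus sampling
    random_seed=42,    # reproducible outputs
)
```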