Models
Gemini
The Gemini model provides access to Google's Gemini models, through either Google AI Studio or Vertex AI.
Parameter | Type | Default | Description |
---|---|---|---|
id | str | "gemini-2.0-flash-exp" | The specific Gemini model ID to use. |
name | str | "Gemini" | The name of this Gemini model instance. |
provider | str | "Google" | The provider of the model. |
function_declarations | Optional[List[FunctionDeclaration]] | None | List of function declarations for the model. |
generation_config | Optional[Any] | None | Configuration for text generation. |
safety_settings | Optional[Any] | None | Safety settings for the model. |
generative_model_kwargs | Optional[Dict[str, Any]] | None | Additional keyword arguments for the generative model. |
grounding | bool | False | Whether to ground responses in Google Search results. |
search | bool | False | Whether to enable the Google Search tool. |
grounding_dynamic_threshold | Optional[float] | None | Confidence threshold for dynamic retrieval when grounding is enabled. |
api_key | Optional[str] | None | API key for authentication. |
vertexai | bool | False | Whether to use Vertex AI instead of Google AI Studio. |
project_id | Optional[str] | None | Google Cloud project ID for Vertex AI. |
location | Optional[str] | None | Google Cloud region for Vertex AI. |
client_params | Optional[Dict[str, Any]] | None | Additional parameters for the client. |
client | Optional[GeminiClient] | None | The underlying generative model client. |
temperature | Optional[float] | None | Controls randomness in the output. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic. |
top_p | Optional[float] | None | Nucleus sampling parameter. Only consider tokens whose cumulative probability exceeds this value. |
top_k | Optional[int] | None | Only consider the top k tokens for text generation. |
max_output_tokens | Optional[int] | None | The maximum number of tokens to generate in the response. |
stop_sequences | Optional[List[str]] | None | List of sequences where the model should stop generating further tokens. |
logprobs | Optional[bool] | None | Whether to return log probabilities of the output tokens. |
presence_penalty | Optional[float] | None | Penalizes new tokens based on whether they appear in the text so far. |
frequency_penalty | Optional[float] | None | Penalizes new tokens based on their frequency in the text so far. |
seed | Optional[int] | None | Random seed for deterministic text generation. |
request_params | Optional[Dict[str, Any]] | None | Additional parameters for the request. |
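To show how the client-related parameters above fit together, here is a minimal sketch of the two authentication routes as plain keyword dictionaries. The exact class name and import path are not part of this table, so the snippet stops at the keyword arguments; the project ID, region, and API key values are placeholders.

```python
# Keyword arguments mirroring the table above. The values for project_id,
# location, and api_key are placeholders, not real credentials.

# Route 1: Vertex AI (vertexai=True requires a project and region).
vertex_kwargs = dict(
    id="gemini-2.0-flash-exp",  # default model ID from the table
    vertexai=True,              # use Vertex AI instead of Google AI Studio
    project_id="my-project",    # placeholder Google Cloud project ID
    location="us-central1",     # placeholder Google Cloud region
)

# Route 2: Google AI Studio (the default, vertexai=False), which
# authenticates with an API key instead.
aistudio_kwargs = dict(
    id="gemini-2.0-flash-exp",
    api_key="YOUR_API_KEY",     # placeholder API key
)
```

When `vertexai` is left at its default of `False`, only `api_key` is needed; `project_id` and `location` matter only on the Vertex AI route.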
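To make the sampling parameters concrete, the toy sketch below illustrates the standard top-k and top-p (nucleus) filtering technique that `top_k` and `top_p` refer to. This is not the model's implementation, just the general idea applied to a made-up four-token distribution.

```python
def filter_candidates(probs, top_k=None, top_p=None):
    """Return the (token, prob) pairs that survive top-k / top-p filtering.

    probs: dict mapping token -> probability, assumed to sum to 1.
    """
    # Rank candidates from most to least likely.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)

    if top_k is not None:
        # top_k: keep only the k most likely tokens.
        ranked = ranked[:top_k]

    if top_p is not None:
        # top_p: keep the smallest prefix of tokens whose cumulative
        # probability reaches top_p (nucleus sampling).
        kept, total = [], 0.0
        for token, p in ranked:
            kept.append((token, p))
            total += p
            if total >= top_p:
                break
        ranked = kept

    return ranked


# Made-up next-token distribution for illustration only.
vocab = {"the": 0.5, "a": 0.3, "cat": 0.15, "dog": 0.05}

print(filter_candidates(vocab, top_k=3))   # drops "dog"
print(filter_candidates(vocab, top_p=0.7)) # keeps "the" and "a"
```

Lower `top_p` / `top_k` values narrow the candidate pool and make output more deterministic, which is the same direction as lowering `temperature`.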