- `llama3.3` models are good for most basic use-cases.
- `qwen` models perform particularly well with tool use.
- `deepseek-r1` models have strong reasoning capabilities.
- `phi4` models are powerful, while being really small in size.
## Set up a model
Install LM Studio, download the model you want to use, and run it.

After you have the model locally, use the `LMStudio` model class to access it.

### Example
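The snippet below is a minimal sketch, assuming the `agno` package with an `agno.models.lmstudio.LMStudio` model class and a local LM Studio server running on its default port; the import path and prompt are assumptions, so adjust them to your setup:

```python
from agno.agent import Agent
from agno.models.lmstudio import LMStudio

# Point the agent at the model served by the local LM Studio server.
# The id below is the default from the params table; swap in the model
# you downloaded in LM Studio.
agent = Agent(
    model=LMStudio(id="lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF"),
    markdown=True,
)

# Send a prompt to the locally running model and print the response
agent.print_response("Write a haiku about local inference.")
```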
View more examples here.
## Params
| Parameter | Type | Default | Description |
|---|---|---|---|
| `id` | `str` | `"lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF"` | The id of the LM Studio model to use. |
| `name` | `str` | `"LMStudio"` | The name of the model. |
| `provider` | `str` | `"LMStudio"` | The provider of the model. |
| `api_key` | `Optional[str]` | `None` | The API key for LM Studio (usually not needed for a local server). |
| `base_url` | `str` | `"http://localhost:1234/v1"` | The base URL for the local LM Studio server. |
LM Studio exposes an OpenAI-compatible server, so the params of OpenAI models are also supported.
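As an illustration, here is a hedged sketch of passing OpenAI-style params such as `temperature` and `max_tokens` through the `LMStudio` class, along with the `base_url` and `id` params from the table above; the exact keyword support depends on your installed version:

```python
from agno.agent import Agent
from agno.models.lmstudio import LMStudio

# base_url and api_key mirror the params table above; temperature and
# max_tokens are OpenAI-style params forwarded to the local server.
model = LMStudio(
    id="lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF",
    base_url="http://localhost:1234/v1",
    temperature=0.2,
    max_tokens=512,
)

agent = Agent(model=model)
agent.print_response("Summarize the benefits of running models locally.")
```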