Models
LM Studio
Run Large Language Models locally with LM Studio
LM Studio is a fantastic tool for running models locally.
LM Studio supports multiple open-source models. See the library here.
We recommend experimenting to find the best-suited model for your use-case. Here are some general recommendations:
- `llama3.3` models are good for most basic use-cases.
- `qwen` models perform particularly well with tool use.
- `deepseek-r1` models have strong reasoning capabilities.
- `phi4` models are powerful, while being really small in size.
Set up a model
Install LM Studio, download the model you want to use, and run it.
Example
After you have the model locally, use the LM Studio model class to access it.
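Under the hood, the model class talks to LM Studio's OpenAI-compatible HTTP API at the default base URL. A minimal sketch of that request, assuming LM Studio is running locally on its default port (the `build_chat_request` and `chat` helper names here are hypothetical, for illustration only):

```python
import json
import urllib.request

# LM Studio serves an OpenAI-compatible API at this base URL by default.
BASE_URL = "http://127.0.0.1:1234/v1"


def build_chat_request(model_id: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for LM Studio."""
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(model_id: str, prompt: str) -> str:
    """Send the payload to a locally running LM Studio server."""
    payload = build_chat_request(model_id, prompt)
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style response: first choice's message content.
    return body["choices"][0]["message"]["content"]
```

Calling `chat("qwen2.5-7b-instruct-1m", "Hello")` requires LM Studio to be running with that model loaded; the model class handles this plumbing for you.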
View more examples here.
Params
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `id` | `str` | `"qwen2.5-7b-instruct-1m"` | The id of the LM Studio model to use. |
| `name` | `str` | `"LM Studio"` | The name of this chat model instance. |
| `provider` | `str` | `"LM Studio " + id` | The provider of the model. |
| `base_url` | `str` | `"http://127.0.0.1:1234/v1"` | The base URL for API requests. |
LM Studio also supports the params of OpenAI.
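Because LM Studio's server is OpenAI-compatible, OpenAI-style sampling parameters pass through in the same request payload. A sketch, assuming the standard OpenAI parameter names (`temperature`, `max_tokens`):

```python
# OpenAI-style params sit alongside "model" and "messages" in the
# same chat-completions payload sent to LM Studio's local server.
payload = {
    "model": "qwen2.5-7b-instruct-1m",
    "messages": [{"role": "user", "content": "Summarize this in one line."}],
    "temperature": 0.7,   # OpenAI-compatible sampling temperature
    "max_tokens": 256,    # OpenAI-compatible completion-length cap
}
```

Whether a given parameter affects generation depends on the model loaded in LM Studio; unsupported parameters are typically ignored by the server.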