Run Large Language Models locally with LM Studio

LM Studio is a desktop application for downloading and running open-source language models on your own machine.

LM Studio supports multiple open-source models. See the library here.

We recommend experimenting to find the best-suited model for your use-case. Here are some general recommendations:

  • llama3.3 models are good for most basic use-cases.
  • qwen models perform particularly well at tool use.
  • deepseek-r1 models have strong reasoning capabilities.
  • phi4 models are powerful despite their small size.

Set up a model

Install LM Studio, download the model you want to use, and run it.
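Once a model is loaded and running, LM Studio serves it on a local OpenAI-compatible endpoint (port 1234 by default). The following stdlib-only sketch checks that the server is reachable and lists the model ids it exposes; the `/v1/models` route follows the OpenAI API convention, and the `list_local_models` helper is illustrative, not part of LM Studio or Agno:

```python
import json
from urllib import request, error

BASE_URL = "http://127.0.0.1:1234/v1"  # LM Studio's default local server address

def list_local_models(base_url: str = BASE_URL) -> list[str]:
    """Return the ids of models the local LM Studio server currently exposes."""
    try:
        with request.urlopen(f"{base_url}/models", timeout=5) as resp:
            data = json.loads(resp.read())
        return [m["id"] for m in data.get("data", [])]
    except error.URLError:
        # Server not running or unreachable
        return []

print(list_local_models())
```

If this prints an empty list, make sure the LM Studio server is started and the model is loaded before moving on.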

Example

After you have the model locally, use the LM Studio model class to access it:

from agno.agent import Agent
from agno.models.lmstudio import LMStudio

agent = Agent(
    model=LMStudio(id="qwen2.5-7b-instruct-1m"),
    markdown=True
)

# Print the response in the terminal
agent.print_response("Share a 2 sentence horror story.")
View more examples here.

Params

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `id` | `str` | `"qwen2.5-7b-instruct-1m"` | The id of the LM Studio model to use. |
| `name` | `str` | `"LM Studio"` | The name of this chat model instance. |
| `provider` | `str` | `"LM Studio " + id` | The provider of the model. |
| `base_url` | `str` | `"http://127.0.0.1:1234/v1"` | The base URL for API requests. |
LM Studio also supports the params of OpenAI, since its local server exposes an OpenAI-compatible API.
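Because the server speaks the OpenAI chat-completions protocol, OpenAI-style parameters such as `temperature` and `max_tokens` pass straight through in the request body. The sketch below builds such a request with the stdlib only; the endpoint path and payload shape follow the OpenAI convention, and the specific parameter values are illustrative:

```python
import json
from urllib import request

BASE_URL = "http://127.0.0.1:1234/v1"  # LM Studio's default local server

# OpenAI-style chat-completions payload; temperature and max_tokens
# are examples of pass-through OpenAI params (values are illustrative).
payload = {
    "model": "qwen2.5-7b-instruct-1m",
    "messages": [{"role": "user", "content": "Share a 2 sentence horror story."}],
    "temperature": 0.7,
    "max_tokens": 128,
}

req = request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Sending the request requires a running LM Studio server:
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

In practice you would set these params on the `LMStudio` model class and let Agno build the request for you; the raw form is shown only to make the pass-through explicit.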