Integrate LM Studio with Jan
LM Studio enables you to explore, download, and run local Large Language Models (LLMs). You can integrate Jan with LM Studio using two methods:
- Integrate the LM Studio server with the Jan UI
- Migrate your downloaded models from LM Studio to Jan
This guide demonstrates the first method: integrating the LM Studio server with the Jan UI. We'll use the Phi 2 - GGUF model from Hugging Face as our example.
Step 1: Server Setup
- Access the Local Inference Server within LM Studio.
- Select your desired model.
- Start the server after configuring the port and options.
- Navigate back to Jan.
- Navigate to Settings > Extensions.
- In the OpenAI Inference Engine section, add the full web address of the LM Studio server. Replace (port) with your chosen port number; the default is 1234.
- Leave the API Key field blank.
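Once the server is running, Jan talks to it through LM Studio's OpenAI-compatible API. As a rough sketch of the address format (assuming the default port 1234 and a localhost URL with a `/v1` path, which you should adjust to match your own server settings), a chat request could be constructed like this:

```python
import json

def build_chat_request(port: int, model: str, prompt: str):
    """Build the URL and payload for an OpenAI-compatible chat request.

    The localhost host and /v1 path are assumptions based on LM Studio's
    OpenAI-compatible server; adjust them to your configuration.
    """
    url = f"http://localhost:{port}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return url, payload

# With the default port, the base address Jan needs ends at /v1.
url, payload = build_chat_request(1234, "phi-2", "Hello!")
print(url)            # http://localhost:1234/v1/chat/completions
print(json.dumps(payload))
```

This only builds the request; sending it (or letting Jan send it) requires the LM Studio server to be running on the chosen port.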
Step 2: Model Configuration
- Navigate to the Hub.
- We will use the phi-2 model in this example. Insert the https://huggingface.co/TheBloke/phi-2-GGUF link into the search bar.
- Select and download the model you want to use.
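If you prefer to fetch a model file manually rather than through the Hub, files in a Hugging Face repository are served from a `resolve/<revision>/` path. A minimal sketch, assuming that URL pattern (the filename below is a hypothetical quantization choice; check the repo's file list for the variants that actually exist):

```python
def gguf_download_url(repo: str, filename: str) -> str:
    """Construct a direct download URL for a file in a Hugging Face repo.

    Hugging Face serves repository files from the resolve/<revision>/ path;
    "main" is the default branch.
    """
    return f"https://huggingface.co/{repo}/resolve/main/{filename}"

# The filename is a hypothetical quantization choice for illustration only.
url = gguf_download_url("TheBloke/phi-2-GGUF", "phi-2.Q4_K_M.gguf")
print(url)
```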
Step 3: Start the Model
- Proceed to the Threads section.
- Select the phi-2 model and configure the model parameters.
- Start chatting with the model.
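The model parameters you configure in this step correspond to the standard sampling settings of OpenAI-style APIs. As a hedged illustration (the names follow the OpenAI chat completions convention, and the values are common defaults, not Jan's actual settings):

```python
# Illustrative sampling parameters; names follow the OpenAI-style API
# convention and values are typical defaults, not prescribed settings.
model_parameters = {
    "temperature": 0.7,   # randomness: lower values are more deterministic
    "top_p": 0.95,        # nucleus sampling cutoff
    "max_tokens": 512,    # cap on the length of each reply
    "stream": True,       # stream tokens back as they are generated
}

for name, value in model_parameters.items():
    print(f"{name}: {value}")
```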