
LM Studio

Integrate LM Studio with Jan

LM Studio enables you to explore, download, and run local Large Language Models (LLMs). You can integrate Jan with LM Studio using two methods:

  1. Integrate the LM Studio server with Jan UI.
  2. Migrate your downloaded model from LM Studio to Jan.

This guide will demonstrate how to connect Jan to LM Studio using the first method, integrating the LM Studio server with Jan UI. We'll use the Phi 2 - GGUF model from Hugging Face as our example. To integrate LM Studio with Jan, follow the steps below:

Step 1: Server Setup

  1. Access the Local Inference Server within LM Studio.
  2. Select your desired model.
  3. Start the server after configuring the port and options.
  4. Navigate back to Jan.
  5. Navigate to Settings > Extensions.
  6. In the OpenAI Inference Engine section, add the full web address of the LM Studio server, for example http://localhost:(port)/v1/chat/completions.

Server Setup

  • Replace (port) with your chosen port number. The default is 1234.
  • Leave the API Key field blank.
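Before moving on, you can check that the LM Studio server is reachable. The snippet below is a minimal sketch, assuming the default port of 1234 and LM Studio's OpenAI-compatible /v1/models endpoint; it simply lists the models the server reports:

```python
import requests

# Assumes the LM Studio server is running locally on the default port (1234)
# and exposes the OpenAI-compatible /v1/models endpoint.
BASE_URL = "http://localhost:1234/v1"

response = requests.get(f"{BASE_URL}/models", timeout=10)
response.raise_for_status()

# Print the identifier of each model the server reports.
for model in response.json().get("data", []):
    print(model["id"])
```

If the request fails, double-check that the server is started in LM Studio and that the port matches the one you configured.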

Step 2: Model Configuration

  1. Navigate to the Hub.
  2. We will use the Phi-2 model in this example. Insert the https://huggingface.co/TheBloke/phi-2-GGUF link into the search bar.
  3. Select and download the model you want to use.

Download Model
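Jan's Hub handles the download for you, but if you prefer to fetch the GGUF file yourself, here is a minimal sketch using the huggingface_hub package. The filename phi-2.Q4_K_M.gguf is an assumption; the TheBloke/phi-2-GGUF repository offers several quantization variants, so pick the one you want:

```python
from huggingface_hub import hf_hub_download

# Download one GGUF quantization of Phi-2 from Hugging Face.
# The filename is an assumption; substitute any .gguf file listed
# in the TheBloke/phi-2-GGUF repository.
path = hf_hub_download(
    repo_id="TheBloke/phi-2-GGUF",
    filename="phi-2.Q4_K_M.gguf",
)
print(f"Model downloaded to: {path}")
```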

Step 3: Start the Model

  1. Proceed to the Threads section.
  2. Select the phi-2 model and configure the model parameters.
  3. Start chatting with the model.

Start Model
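You can also talk to the LM Studio server directly over its OpenAI-compatible API. The sketch below assumes the server is running on the default port of 1234 with a model already loaded; the model field is omitted because LM Studio answers with whichever model is loaded:

```python
import requests

# Send a chat request to LM Studio's OpenAI-compatible endpoint.
# Assumes the server runs on the default port (1234) with a model loaded;
# the "model" field is omitted since LM Studio uses the loaded model.
response = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Explain what a GGUF file is in one sentence."}
        ],
        "temperature": 0.7,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```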