Integrate Ollama with Jan

Ollama provides you with large language models that you can run locally. There are two methods to integrate Ollama with Jan:

  1. Integrate the Ollama server with Jan.
  2. Migrate a model downloaded with Ollama to Jan.

This tutorial shows how to integrate Ollama with Jan using the first method, with the llama2 model as an example.

Step 1: Start the Ollama Server

  1. Choose your model from the Ollama library.
  2. Run your model with this command:

ollama run <model-name>
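
For example, to run the llama2 model used in this tutorial:

ollama run llama2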

  3. According to the Ollama documentation on OpenAI compatibility, you can connect to the Ollama server at http://localhost:11434/v1/chat/completions. To point Jan at it, edit the openai.json file in the ~/jan/engines folder so that it contains the Ollama server's full URL:

{
  "full_url": "http://localhost:11434/v1/chat/completions"
}

Step 2: Model Configuration

  1. Navigate to the ~/jan/models folder.
  2. Create a folder named after the model id, for example, llama2.
  3. Create a model.json file inside the folder with the following configuration:
  • Set the id property to the Ollama model name.
  • Set the format property to api.
  • Set the engine property to openai.
  • Set the state property to ready.

"sources": [
"filename": "llama2",
"url": ""
"id": "llama2",
"object": "model",
"name": "Ollama - Llama2",
"version": "1.0",
"description": "Llama 2 is a collection of foundation language models ranging from 7B to 70B parameters.",
"format": "api",
"settings": {},
"parameters": {},
"metadata": {
"author": "Meta",
"tags": ["General", "Big Context Length"]
"engine": "openai"

For more details regarding the model.json settings and parameters fields, see the model.json documentation.

Step 3: Start the Model

  1. Restart Jan and navigate to the Hub.
  2. Locate your model and click the Use button.