Generic OpenAI-compatible API

This guide outlines how to configure Jan as a client for a remote or local OpenAI-compatible API server, using the mistral-ins-7b-q4 model and Jan's own API server as the example endpoint.

Currently, you can only connect to one OpenAI-compatible endpoint at a time.

Step 1: Configure a Client Connection

  1. Navigate to the ~/jan/settings/@janhq/inference-openai-extension folder.
  2. Modify the settings.json file.

Please note that the code supporting OpenAI-compatible endpoints currently reads only the engine/openai.json file; it does not search any other files in this directory.

  3. Configure the full_url property with the endpoint of the server you want to connect to. For example, to communicate with Jan's API server, configure it as follows:
~/jan/settings/@janhq/inference-openai-extension/settings.json

{
  // "full_url": "https://<server-ip-address>:<port>/v1/chat/completions"
  "full_url": "https://<server-ip-address>:1337/v1/chat/completions"
  // Skip api_key if your local server does not require authentication
  // "api_key": "sk-<your key here>"
}
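Before restarting Jan, it can help to sanity-check that the full_url value is a well-formed chat-completions endpoint. The helper below is an illustrative sketch, not part of Jan, and the example addresses are placeholders:

```python
from urllib.parse import urlparse

def looks_like_chat_endpoint(full_url: str) -> bool:
    """Heuristic check: an http(s) URL whose path ends with the chat-completions route."""
    parsed = urlparse(full_url)
    return parsed.scheme in ("http", "https") and parsed.path.endswith("/v1/chat/completions")

# Illustrative values; substitute your own server address and port.
print(looks_like_chat_endpoint("https://localhost:1337/v1/chat/completions"))  # True
print(looks_like_chat_endpoint("https://localhost:1337"))                      # False
```

A check like this catches the most common mistake: pointing full_url at the server root instead of the full /v1/chat/completions path.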

Step 2: Create a Model JSON

  1. In ~/jan/models, create a folder named mistral-ins-7b-q4.

  2. In this folder, add a file named model.json with the following configuration:

  • id matching the folder name.
  • format set to api.
  • engine set to openai.
  • state set to ready.
~/jan/models/mistral-ins-7b-q4/model.json

{
  "sources": [
    {
      "filename": "janai",
      "url": "https://jan.ai"
    }
  ],
  "id": "mistral-ins-7b-q4",
  "object": "model",
  "name": "Mistral Instruct 7B Q4 on Jan API Server",
  "version": "1.0",
  "description": "Jan integration with remote Jan API server",
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "MistralAI, The Bloke",
    "tags": ["remote", "awesome"]
  },
  "engine": "openai"
}
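The constraints above (id matching the folder name, format set to api, engine set to openai) can be verified with a short script before restarting Jan. This validator is a sketch, not part of Jan, and the demo writes a throwaway copy of the folder layout rather than touching your real ~/jan/models directory:

```python
import json
import tempfile
from pathlib import Path

def validate_model_json(path):
    """Return a list of problems with a remote-model model.json; an empty list means it looks OK."""
    p = Path(path)
    cfg = json.loads(p.read_text())
    problems = []
    if cfg.get("id") != p.parent.name:
        problems.append("id should match the folder name")
    if cfg.get("format") != "api":
        problems.append('format should be "api"')
    if cfg.get("engine") != "openai":
        problems.append('engine should be "openai"')
    return problems

# Demo against a temporary copy of the folder layout described above.
root = Path(tempfile.mkdtemp())
folder = root / "mistral-ins-7b-q4"
folder.mkdir()
(folder / "model.json").write_text(json.dumps(
    {"id": "mistral-ins-7b-q4", "format": "api", "engine": "openai"}
))
print(validate_model_json(folder / "model.json"))  # []
```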

Step 3: Start the Model

  1. Restart Jan and navigate to the Hub.
  2. Locate your model and click the Use button.
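Once the model is active, requests sent to the configured endpoint follow the OpenAI chat-completions schema, with the model field set to the id from Step 2. A minimal sketch of such a request body (the message content is illustrative):

```python
import json

# The "model" field must match the id chosen in model.json.
payload = {
    "model": "mistral-ins-7b-q4",
    "messages": [{"role": "user", "content": "Hello!"}],
}
body = json.dumps(payload)
print(body)
```

This is the JSON that would be POSTed to the full_url configured in Step 1.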

If you have questions or want more preconfigured GGUF models, please join our Discord community for support, updates, and discussions.

Troubleshooting

If you encounter any issues during the integration process or while using an OpenAI-compatible endpoint with Jan, consider the following troubleshooting steps:

  • Double-check your API credentials to ensure they are correct.
  • Check for error messages or logs that may provide insight into the issue.
  • Reach out to the endpoint provider's API support for assistance if needed.