
OpenAI API

How to Integrate OpenAI API with Jan

This guide provides step-by-step instructions for integrating the OpenAI API with Jan, so you can use OpenAI models directly from Jan's conversational interface.

Integration Steps

Step 1: Configure OpenAI API Key

  1. Obtain an API key from your OpenAI Platform dashboard.
  2. Copy the API key and the endpoint URL you want to use.
  3. Navigate to the Jan app > Settings.
  4. Select the OpenAI Inference Engine.
  5. Insert the API key and the endpoint URL into their respective fields.

You can also manually edit the JSON file in ~/jan/settings/@janhq/inference-openai-extension.
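
For reference, that settings file is plain JSON holding the same API key and endpoint values you enter in the UI. The sketch below is illustrative only: the exact file name and field keys depend on your Jan version, so mirror the structure of the file Jan generates rather than copying this example verbatim.

    [
      {
        "key": "openai-api-key",
        "title": "API Key",
        "description": "Your personal OpenAI API key.",
        "controllerProps": {
          "value": "<your-openai-api-key>"
        }
      },
      {
        "key": "chat-completions-endpoint",
        "title": "Chat Completions Endpoint",
        "controllerProps": {
          "value": "https://api.openai.com/v1/chat/completions"
        }
      }
    ]

After editing the file, you may need to restart Jan so the extension picks up the new values.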

Step 2: Select Model

  1. Navigate to the Hub section.
  2. Ensure you have downloaded the OpenAI model you want to use.

The OpenAI Inference Engine is a default extension of the Jan application, and the OpenAI models are installed automatically when you install Jan.

Step 3: Start the Model

  1. Navigate to the Thread section.
  2. Under the Model section, click Remote.
  3. Select the OpenAI model you want to use.
  4. Start chatting with the OpenAI model.

OpenAI Models

You can also use specific OpenAI models that are not listed in the Hub section by customizing a model.json file under ~/jan/models/. Follow the steps in Manage Models to add a model manually, or start from the sketch below.
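
As a rough template, a remote OpenAI model entry is a single JSON object whose id matches the OpenAI model name and whose engine field points at the OpenAI extension. The layout below is a sketch of that format; field names can vary between Jan releases, so compare it against an existing file under ~/jan/models/ before saving.

    {
      "sources": [
        { "url": "https://openai.com" }
      ],
      "id": "gpt-4-turbo",
      "object": "model",
      "name": "OpenAI GPT-4 Turbo",
      "version": "1.0",
      "description": "GPT-4 Turbo served through the OpenAI API.",
      "format": "api",
      "settings": {},
      "parameters": {
        "max_tokens": 4096,
        "temperature": 0.7
      },
      "metadata": {
        "author": "OpenAI",
        "tags": ["General"]
      },
      "engine": "openai"
    }

Typically the file lives in its own folder named after the model id, for example ~/jan/models/gpt-4-turbo/model.json, and the entry then appears under Remote in the Thread view.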

Troubleshooting

If you encounter any issues during the integration process or while using OpenAI with Jan, consider the following troubleshooting steps:

  • Double-check your API credentials to ensure they are correct.
  • Check for error messages or logs that may provide insight into the issue; an invalid key usually shows up as an error response like the example after this list.
  • Reach out to OpenAI API support for assistance if needed.
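
As a rough guide to reading those logs, an invalid or revoked key is usually returned by the OpenAI API as an error object similar to the following (the exact message text varies):

    {
      "error": {
        "message": "Incorrect API key provided: sk-abc***xyz. You can find your API key at https://platform.openai.com/account/api-keys.",
        "type": "invalid_request_error",
        "param": null,
        "code": "invalid_api_key"
      }
    }

If you see this, generate a new key in the OpenAI dashboard and update it in Jan's settings.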