Mistral AI API

How to Integrate Mistral AI with Jan

This guide provides step-by-step instructions for integrating the Mistral AI API with Jan, so you can use Mistral's models from Jan's conversational interface.

Before proceeding, ensure you have the following:

  • Access to the Jan Application
  • Mistral API credentials

Integration Steps

Step 1: Configure Mistral API Key

  1. Obtain your Mistral API Key from the Mistral dashboard.
  2. Copy your Mistral API Key and the endpoint URL you want to use.
  3. Navigate to the Jan app > Settings.
  4. Select the MistralAI Inference Engine.
  5. Insert the API Key and the endpoint URL into their respective fields. (A quick way to verify the key outside Jan is sketched below.)
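
Before relying on the key inside Jan, you can confirm it works against Mistral's API directly. The following is a minimal Python sketch (independent of Jan) that queries Mistral's models endpoint with the requests library; the URL and response shape follow Mistral's public API reference, and MISTRAL_API_KEY is a placeholder environment variable.

```python
import os

import requests

# Placeholder: set MISTRAL_API_KEY in your environment before running.
api_key = os.environ["MISTRAL_API_KEY"]

# Mistral's models endpoint; adjust the base URL if you configured a
# different endpoint in Jan.
resp = requests.get(
    "https://api.mistral.ai/v1/models",
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=30,
)
resp.raise_for_status()

# List the model IDs you can later select inside Jan.
for model in resp.json().get("data", []):
    print(model["id"])
```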

  • You can also manually edit the JSON file in ~/jan/settings/@janhq/inference-mistral-extension (see the sketch after this list).
  • Mistral AI offers various endpoints. Refer to their endpoint documentation to select the one that fits your requirements.
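
For manual edits, the extension settings are stored as plain JSON. The sketch below is illustrative only: it assumes the file inside that directory is named settings.json and uses hypothetical api_key and endpoint field names, so mirror whatever keys the file on your machine actually contains.

```python
import json
from pathlib import Path

# Settings directory from the note above; the file name "settings.json" is an
# assumption -- check the directory for the file Jan actually created.
settings_path = (
    Path.home() / "jan" / "settings" / "@janhq" / "inference-mistral-extension" / "settings.json"
)

settings = json.loads(settings_path.read_text())

# Hypothetical field names, for illustration only; keep the keys that already
# exist in your file and change their values.
settings["api_key"] = "YOUR_MISTRAL_API_KEY"
settings["endpoint"] = "https://api.mistral.ai/v1/chat/completions"

settings_path.write_text(json.dumps(settings, indent=2))
print(f"Updated {settings_path}")
```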

Step 2: Select Model

  1. Navigate to the Hub section.
  2. Ensure you have downloaded the Mistral model you want to use.

The MistralAI Inference Engine is a default extension of the Jan application; all Mistral models are installed automatically along with it.

Step 3: Start the Model

  1. Navigate to the Thread section.
  2. Under the Model section, click Remote.
  3. Select the Mistral model you want to use.
  4. Start the conversation with the Mistral API. (A sketch of the underlying request follows.)
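
Under the hood, Jan relays your messages to Mistral's chat completions endpoint. The sketch below shows the kind of request made on your behalf; it is not a copy of Jan's internal code, and mistral-small-latest is only an example model ID.

```python
import os

import requests

api_key = os.environ["MISTRAL_API_KEY"]

# Example chat completion request against Mistral's API; swap in whichever
# Mistral model you selected in Jan's Thread view.
resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "mistral-small-latest",
        "messages": [
            {"role": "user", "content": "Give me a one-line summary of Jan."}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```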

Troubleshooting

If you encounter any issues during the integration process or while using Mistral with Jan, consider the following troubleshooting steps:

  • Double-check that your API credentials are correct (a quick check is sketched below).
  • Check for error messages or logs that may provide insight into the issue.
  • Reach out to Mistral API support for assistance if needed.
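
To narrow down credential problems, a direct request is often quicker than reading the UI. The sketch below reuses the same models endpoint as earlier; an HTTP 401 response generally means the key itself is wrong or missing, while other status codes point at the endpoint URL or a service-side issue.

```python
import os

import requests

api_key = os.environ.get("MISTRAL_API_KEY", "")

resp = requests.get(
    "https://api.mistral.ai/v1/models",
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=30,
)

# 401 -> the key is rejected; re-enter it in Jan's settings.
# Other errors -> check the endpoint URL or Mistral's service status.
if resp.status_code == 401:
    print("Credentials rejected: re-check the API key configured in Jan.")
else:
    print(f"HTTP {resp.status_code}: {resp.text[:200]}")
```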