Quickstart
Step 1: Install Jan
- Download Jan
- Install the application on your system (Mac, Windows, Linux)
- Launch Jan
Once installed, you'll see the Jan interface with no models pre-installed yet. You'll be able to:
- Download and run local AI models
- Connect to cloud AI providers if desired
Step 2: Download a Model
Jan offers a range of local AI models, from smaller, efficient models to larger, more capable ones:
- Go to Hub
- Browse available models and click on any model to see details about it
- Choose a model that fits your needs & hardware specifications
- Click Download to begin
For more model installation methods, please visit Model Management.
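When choosing a model that fits your hardware, a rough rule of thumb is that a quantized model needs about (parameters × bits ÷ 8) bytes of memory, plus some runtime overhead. The sketch below is an illustrative estimate only (the `overhead` factor is an assumption, not a Jan-documented value):

```python
def est_model_ram_gb(params_billion: float, quant_bits: int = 4, overhead: float = 1.2) -> float:
    """Rough memory estimate for a quantized model.

    bytes per parameter = quant_bits / 8; the overhead factor is a loose
    allowance for the KV cache and runtime buffers (illustrative assumption).
    """
    return params_billion * quant_bits / 8 * overhead

# A 7B model at 4-bit quantization needs very roughly 4 GB of RAM/VRAM.
print(round(est_model_ram_gb(7, 4), 1))  # → 4.2
```

Actual usage varies by model format, context length, and engine, so treat the number as a ballpark when browsing the Hub.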
Step 3: Turn on GPU Acceleration (Optional)
While the model downloads, let's optimize your hardware setup. If you're on Windows or Linux and have a compatible graphics card, you can significantly boost model performance by enabling GPU acceleration.
- Navigate to Settings > Local Engine > Llama.cpp
- At llama-cpp Backend, select a backend that matches your hardware. For example, choose windows-amd64-vulkan if you have an AMD graphics card. For more info, see our guide.
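Reading the example above, the backend names appear to follow an OS-architecture-API pattern, which can help you pick the right one for your machine (this parsing is my interpretation of the naming, not a documented Jan API):

```python
# Decompose a llama.cpp backend name into its apparent parts,
# based on the "windows-amd64-vulkan" example from this guide.
backend = "windows-amd64-vulkan"
os_name, arch, api = backend.split("-")
print(os_name, arch, api)  # → windows amd64 vulkan
```

So on a Linux machine with an NVIDIA card you would look for a backend whose name combines your OS, CPU architecture, and GPU API.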
Step 4: Customize Assistant Instructions
Once your model has been downloaded and you're ready to start your first conversation, you can customize how the model should respond by setting specific instructions:
- In any Thread, click the Assistant tab in the right sidebar
- Enter your instructions in the Instructions field to define how the model should respond. For example, "You are an expert storyteller who writes engaging and imaginative stories for marketing campaigns. You don't follow the herd and rather think outside the box when putting your copywriting skills to the test."
You can modify these instructions at any time during your conversation to adjust a model's behavior for that specific thread. See detailed guide at Assistant.
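In OpenAI-compatible chat APIs (the convention Jan's local server follows), assistant instructions like these travel as the "system" message of each request. A minimal sketch of that structure, with an assumed model id for illustration:

```python
import json

# Thread instructions become the "system" message; the user's chat input
# becomes a "user" message. Model id below is an assumption for illustration.
instructions = (
    "You are an expert storyteller who writes engaging and "
    "imaginative stories for marketing campaigns."
)

payload = {
    "model": "llama3.2-3b-instruct",  # illustrative model id
    "messages": [
        {"role": "system", "content": instructions},
        {"role": "user", "content": "Draft a tagline for a coffee brand."},
    ],
}
print(json.dumps(payload, indent=2))
```

Changing the Instructions field mid-conversation simply changes the system message sent with subsequent turns in that thread.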
Step 5: Start Chatting and Fine-tune Settings
Now that your model is downloaded and instructions are set, you can begin chatting with it. Type your message in the input field at the bottom of the thread to start the conversation.
You can further customize your experience by:
- Adjusting model parameters in the Model tab in the right sidebar
- Trying different models for different tasks via the model selector in the Model tab or input field
- Creating new threads with different instructions and model configurations
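The model parameters you adjust in the sidebar correspond to sampling settings on each chat request. The sketch below shows where they sit in a request to Jan's local OpenAI-compatible server, assuming it is enabled at its default address (http://localhost:1337); the model id and prompt are illustrative:

```python
import json
import urllib.request

# Chat request with tuned sampling parameters (values are examples).
payload = {
    "model": "llama3.2-3b-instruct",  # illustrative model id
    "messages": [{"role": "user", "content": "Name three story genres."}],
    "temperature": 0.8,  # higher = more varied, creative wording
    "top_p": 0.95,       # nucleus sampling cutoff
    "max_tokens": 128,   # cap the reply length
}

req = urllib.request.Request(
    "http://localhost:1337/v1/chat/completions",  # assumed default local address
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
except OSError:
    print("Jan's local server is not reachable; this request is a sketch only.")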
Step 6: Connect to cloud models (Optional)
Jan supports both open-source and cloud-based models. You can connect to cloud providers including OpenAI (GPT-4o, o1, ...), Anthropic (Claude), Groq, Mistral, and more.
- Open any Thread
- Click Model tab in the right sidebar or model selector in input field
- Once the selector pops up, choose the Cloud tab
- Select your preferred provider (Anthropic, OpenAI, etc.) and click the Add (➕) icon next to it
- Obtain a valid API key from your chosen provider, and make sure the key has sufficient credits & appropriate permissions
- Copy your API key and paste it into Jan
See Remote APIs for detailed configuration.
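Under the hood, the API key you paste authenticates each request as a bearer token. Jan attaches it for you, but the equivalent raw request headers look roughly like this (the environment variable name and key placeholder are illustrative):

```python
import os

# Cloud providers expect the API key as a bearer token on every request.
# "OPENAI_API_KEY" and the "sk-..." placeholder are illustrative only.
api_key = os.environ.get("OPENAI_API_KEY", "sk-...")
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
print(sorted(headers))  # → ['Authorization', 'Content-Type']
```

This is also why a key with insufficient credits or missing permissions fails: the provider rejects the bearer token even though Jan sends the request correctly.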
What's Next?
Now that Jan is up and running, explore further:
- Learn how to download and manage your models.
- Customize Jan's application settings according to your preferences.