Run DeepSeek R1 locally on your device (Beginner-Friendly Guide)

DeepSeek R1 running locally in Jan AI interface, showing the chat interface and model settings

DeepSeek R1 is one of the best open-source models on the market right now, and you can run it on your own computer!

New to running AI models locally? Check out the guide on running AI models locally first. It covers essential concepts that will help you better understand this DeepSeek R1 guide.

DeepSeek R1 needs data-center-grade hardware to run at its full size, so we'll use a smaller distilled version that works well on regular computers.

Why use an optimized version?

  • Runs efficiently on standard hardware
  • Downloads and loads faster
  • Takes up far less disk space
  • Keeps most of the original model's capabilities

Quick Steps at a Glance

  1. Download Jan
  2. Select a model version
  3. Choose settings
  4. Set up the prompt template & start using DeepSeek R1

Let's walk through each step with detailed instructions.

Step 1: Download Jan

Jan is an open-source application that lets you run AI models locally. It's available for Windows, Mac, and Linux, and it's the easiest way for beginners to get started.

Jan AI interface, showing the download button

  1. Visit jan.ai
  2. Download the appropriate version for your operating system
  3. Install the app

Step 2: Choose Your DeepSeek R1 Version

To run AI models like DeepSeek R1 on your computer, you'll need something called VRAM (Video Memory). Think of VRAM as your computer's special memory for handling complex tasks like gaming or, in our case, running AI models. It's different from regular RAM - VRAM is part of your graphics card (GPU).

Running AI models locally is like running a very sophisticated video game - it needs dedicated memory to process all the AI's "thinking." The more VRAM you have, the larger and more capable AI models you can run.

Let's first check how much VRAM your computer has. Don't worry if it's not much - DeepSeek R1 has versions for all kinds of computers!

Finding your VRAM is simple:

  • On Windows: Press Windows + R, type dxdiag, hit Enter, and look under the "Display" tab
  • On Mac: Click the Apple menu, select "About This Mac", then "More Info", and check under "Graphics/Displays"
  • On Linux: Open Terminal and type nvidia-smi for NVIDIA GPUs, or lspci -v | grep -i vga for other graphics cards
💡 No dedicated graphics card? That's okay! You can still run the smaller versions of DeepSeek R1 - they're light enough to run on your CPU and regular RAM, just with slower responses.
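
If you're comfortable with a terminal, here's an optional Python cross-check for NVIDIA GPUs. It just wraps the same nvidia-smi query mentioned above, so treat it as a convenience sketch rather than a required step:

```python
# Optional: query total VRAM via nvidia-smi (NVIDIA GPUs only).
import subprocess

try:
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print("Total VRAM:", result.stdout.strip())  # e.g. "8192 MiB"
except (FileNotFoundError, subprocess.CalledProcessError):
    # No NVIDIA tooling found - use dxdiag (Windows) or About This Mac instead.
    print("nvidia-smi unavailable; check VRAM with your OS tools.")
```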

Once you know your VRAM, here's what version of DeepSeek R1 will work best for you. If you have:

  • 6GB VRAM: Go for the 1.5B version - it's fast and efficient
  • 8GB VRAM: You can run the 7B or 8B versions, which offer great capabilities
  • 16GB or more VRAM: You can run the larger 14B and 32B versions (the 70B model needs 48GB+)
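
As a quick sanity check, those recommendations boil down to the mapping below - a sketch of this guide's suggestions, not an official requirements list:

```python
def recommend_version(vram_gb: float) -> str:
    """Map available VRAM to the DeepSeek R1 distill suggested in this guide."""
    if vram_gb >= 48:
        return "Llama 70B"
    if vram_gb >= 16:
        return "Qwen 14B or Qwen 32B"
    if vram_gb >= 8:
        return "Qwen 7B or Llama 8B"
    if vram_gb >= 6:
        return "Qwen 1.5B"
    return "Qwen 1.5B on CPU (slower, but it works)"

print(recommend_version(8))  # -> Qwen 7B or Llama 8B
```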

Available versions and basic requirements for DeepSeek R1 distills:

| Version | Model Link | Required VRAM |
|---|---|---|
| Qwen 1.5B | DeepSeek-R1-Distill-Qwen-1.5B-GGUF | 6GB+ |
| Qwen 7B | DeepSeek-R1-Distill-Qwen-7B-GGUF | 8GB+ |
| Llama 8B | DeepSeek-R1-Distill-Llama-8B-GGUF | 8GB+ |
| Qwen 14B | DeepSeek-R1-Distill-Qwen-14B-GGUF | 16GB+ |
| Qwen 32B | DeepSeek-R1-Distill-Qwen-32B-GGUF | 16GB+ |
| Llama 70B | DeepSeek-R1-Distill-Llama-70B-GGUF | 48GB+ |

To download your chosen model:

  1. Launch Jan and navigate to Jan Hub using the sidebar:

Jan AI interface, showing the model library

  2. Input the model link in this field:

Jan AI interface, showing the model link input field
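
Prefer the command line? The same GGUF files live on Hugging Face, so you can also fetch one with the huggingface_hub package. Note that the repo id and filename below are illustrative assumptions - check the model's page for the exact names:

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Illustrative repo id and filename - confirm both on the model's page.
path = hf_hub_download(
    repo_id="bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF",
    filename="DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf",
)
print("Model saved to:", path)
```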

Step 3: Configure Model Settings

When configuring your model, you'll encounter quantization options. Quantization trades a little precision for much lower memory and storage use:

  • Q4 (4-bit): Recommended for most users - the best balance of quality and resource usage
  • Q8 (8-bit): Higher precision, but needs roughly twice the memory and storage of Q4
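
To see why Q4 is the usual recommendation, here's a rough back-of-the-envelope size estimate. Actual GGUF files vary a bit because some layers stay at higher precision, so treat these as ballpark figures:

```python
# Rough estimate of model weight size at a given quantization level.
def approx_model_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight size in GB: parameters x bits per weight, in bytes."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for name, params in [("Qwen 1.5B", 1.5), ("Qwen 7B", 7), ("Llama 8B", 8),
                     ("Qwen 14B", 14), ("Qwen 32B", 32), ("Llama 70B", 70)]:
    q4 = approx_model_size_gb(params, 4.5)  # Q4 variants use ~4.5 bits/weight
    q8 = approx_model_size_gb(params, 8.5)  # Q8 uses ~8.5 bits/weight
    print(f"{name}: ~{q4:.1f} GB at Q4, ~{q8:.1f} GB at Q8")
```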

Step 4: Configure Prompt Template

Final configuration step:

  1. Access Model Settings via the sidebar
  2. Locate the Prompt Template configuration
  3. Use this exact format:

<|User|>{prompt}<|Assistant|>

⚠️ This template marks where your message ends and the model's reply begins - without it, DeepSeek R1 may not respond properly.
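
Conceptually, Jan substitutes your message into the {prompt} placeholder before the text reaches the model. A minimal sketch of that substitution:

```python
# Conceptual sketch: how the prompt template wraps your message.
TEMPLATE = "<|User|>{prompt}<|Assistant|>"

user_message = "Explain quantization in one sentence."
model_input = TEMPLATE.format(prompt=user_message)
print(model_input)
# -> <|User|>Explain quantization in one sentence.<|Assistant|>
```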

You're now ready to interact with DeepSeek R1:

Jan interface, showing DeepSeek R1 running locally

Need Assistance?

Join our Discord community for support and discussions about running AI models locally.