# How to Run DeepSeek-R1 Locally with Ollama

This guide explains how to set up and run **DeepSeek-R1** locally using **Ollama**. Follow the steps below to download the necessary software, choose the right model version, and run the DeepSeek-R1 model.

---

## Step 1: Download and Install Ollama

To start, you need to download and install **Ollama**, which allows you to run language models locally.

1. Go to the [Ollama Download Page](https://ollama.com/download).
2. Select the version that matches your operating system.

![Ollama download](https://github.com/user-attachments/assets/3a7c8a20-1e6f-4473-98f2-f0092f057768)
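
If you prefer the command line, Ollama can also be installed from a terminal on Linux or macOS. This is a minimal sketch; check the download page for the current instructions for your platform:

```bash
# Linux: install using the official script from ollama.com
curl -fsSL https://ollama.com/install.sh | sh

# macOS (Homebrew users): install the Ollama package instead
brew install ollama

# Verify the installation succeeded
ollama --version
```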

---

## Step 2: Choose the Model Version

After installing Ollama, you'll need to select the appropriate version of DeepSeek-R1.

1. Visit the [DeepSeek-R1 Library](https://ollama.com/library/deepseek-r1).
2. Select the model version that fits your system's capabilities.
- Available versions: **1.5b, 7b, 8b, 14b**, etc.
- Choose a model based on your computer's performance; larger models require more memory and processing power.

![Model Selection](https://github.com/user-attachments/assets/88154691-f290-4766-9ef7-6a9c2f403e3f)
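
Optionally, you can pre-download your chosen version before running it, which keeps the (potentially large) download separate from the first run. A short sketch, using the **8b** tag purely as an example:

```bash
# Download the chosen DeepSeek-R1 tag without starting a chat session
ollama pull deepseek-r1:8b

# List locally available models to confirm the download
ollama list
```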

---

## Step 3: Run DeepSeek-R1 on Ollama

Once you have installed Ollama and selected your model, you can run **DeepSeek-R1** locally.

1. Open your terminal or command prompt.
2. Run the following command, replacing `n` with the version you chose (for example, `ollama run deepseek-r1:7b`):
```bash
ollama run deepseek-r1:nb
```

![image](https://github.com/user-attachments/assets/24bac0af-ffa5-4de7-979e-034d26378629)
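
While the model is running, Ollama also serves a local HTTP API (by default at `http://localhost:11434`). Here is a hedged sketch of sending a single prompt with `curl`; the **8b** tag is just an example, so use whichever tag you pulled:

```bash
# Send one non-streaming prompt to the locally running model
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Explain what a binary search tree is in one paragraph.",
  "stream": false
}'
```

---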

## Step 4: Interact with the Model

Once DeepSeek-R1 is running, you can start interacting with it.

1. In your terminal, simply type your questions or commands.
2. The model will respond to your input accordingly.

![image](https://github.com/user-attachments/assets/f3c8bdda-0d1e-44b3-8f5c-64d983e51a43)
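
You can also pass a one-off prompt directly on the command line instead of starting an interactive session. A small sketch (the tag is again only an example):

```bash
# Ask a single question and print the answer without entering the chat loop
ollama run deepseek-r1:8b "What is the capital of France?"

# Inside an interactive session, type /bye to exit
```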