https://github.com/lasithaamarasinghe/deepseek-r1-local
Simple Guide to Run DeepSeek-R1 Locally with Ollama
- Host: GitHub
- URL: https://github.com/lasithaamarasinghe/deepseek-r1-local
- Owner: LasithaAmarasinghe
- Created: 2025-02-04T11:31:34.000Z (3 months ago)
- Default Branch: main
- Last Pushed: 2025-02-05T02:21:32.000Z (3 months ago)
- Last Synced: 2025-02-05T03:22:43.386Z (3 months ago)
- Topics: deepseek-r1
- Homepage:
- Size: 5.86 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# How to Run DeepSeek-R1 Locally with Ollama
This guide explains how to set up and run **DeepSeek-R1** locally using **Ollama**. Follow the steps below to download the necessary software, choose the right model version, and run the DeepSeek-R1 model.
---
## Step 1: Download and Install Ollama
To start, you need to download and install **Ollama**, which allows you to run language models locally.
1. Go to the [Ollama Download Page](https://ollama.com/download).
2. Select the version that matches your operating system.
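Once installed, you can confirm that the `ollama` executable is available before moving on. A minimal sketch (the check is generic; `ollama` will legitimately be absent until this step is done):

```python
import shutil

def ollama_installed() -> bool:
    """Return True if the `ollama` executable is found on PATH."""
    return shutil.which("ollama") is not None

if __name__ == "__main__":
    if ollama_installed():
        print("Ollama is installed and ready.")
    else:
        print("Ollama not found - install it from https://ollama.com/download")
```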
---
## Step 2: Choose the Model Version
After installing Ollama, you'll need to select the appropriate version of DeepSeek-R1.
1. Visit the [DeepSeek-R1 Library](https://ollama.com/library/deepseek-r1).
2. Select the model version that fits your system's capabilities:
- Available versions: **1.5b, 7b, 8b, 14b**, etc.
- Choose a model based on your computer's performance; larger models require more memory and processing power.
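As a rough rule of thumb, the model should fit comfortably in your system's RAM. The sketch below maps available memory to a DeepSeek-R1 tag; the thresholds are illustrative assumptions, not official requirements, so check each tag's size on the Ollama library page:

```python
def pick_model(ram_gb: float) -> str:
    """Suggest a DeepSeek-R1 tag for the given amount of system RAM.

    The thresholds here are rough, illustrative estimates only.
    """
    if ram_gb >= 32:
        return "deepseek-r1:14b"
    if ram_gb >= 16:
        return "deepseek-r1:8b"
    if ram_gb >= 8:
        return "deepseek-r1:7b"
    return "deepseek-r1:1.5b"

print(pick_model(16))  # deepseek-r1:8b
```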
---
## Step 3: Run DeepSeek-R1 on Ollama
Once you have installed Ollama and selected your model, you can run **DeepSeek-R1** locally.
1. Open your terminal or command prompt.
2. Run the following command, replacing `<version>` with the tag you chose (for example, `deepseek-r1:7b`):
```bash
ollama run deepseek-r1:<version>
```
---
## Step 4: Interact with the Model
Once DeepSeek-R1 is running, you can start interacting with it.
1. In your terminal, type your questions or prompts and press Enter.
2. The model will respond to your input directly in the terminal.
3. To exit the session, type `/bye`.
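Besides the interactive terminal, Ollama also serves a local HTTP API (by default on port 11434) that you can script against. A minimal sketch, assuming the Ollama server is running and the `8b` tag has been pulled; the model tag and prompt are just examples:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a single, non-streaming generation request."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send one prompt to the local Ollama server and return its response text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (requires a running server with the tag already pulled):
# print(ask("deepseek-r1:8b", "Explain recursion in one sentence."))
```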