[GH-ISSUE #9105] Ollama 0.5.9 run deepseek-r1:671b #67983

Closed
opened 2026-05-04 12:11:25 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @MartinDong on GitHub (Feb 14, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9105

What is the issue?

Hello, I'm encountering some issues while deploying the deepseek-r1:671b model via `ollama run`. Could you help me figure out how to resolve them?
How can I load and run the 671b DeepSeek-R1 model using Ollama's Docker container? My current device runs on a Linux system with the following hardware configuration:
CPU: 384 cores
Memory: 2304 GB
GPU: 8 × NVIDIA H20
Data Disk: 1 × 1500 GiB SSD cloud disk
Storage: 1 × 2048 GiB SSD cloud disk, 1 × 500 GiB SSD cloud disk
Ollama version: ollama/ollama:0.5.9


[ollama7.1.txt](https://github.com/user-attachments/files/18800808/ollama7.1.txt)
[ollama7.txt](https://github.com/user-attachments/files/18800809/ollama7.txt)
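
For reference, one common way to run Ollama's Docker image on a multi-GPU host like this is sketched below. This is an illustrative setup, not confirmed against the reporter's environment: it assumes the NVIDIA Container Toolkit is installed, and the container name, volume name, and port are the defaults from Ollama's Docker instructions.

```shell
# Start the Ollama 0.5.9 container with all GPUs visible to it.
# The "ollama" volume must have room for the model blobs; the 671b
# download is several hundred GB, which may matter given the disk
# sizes listed above.
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:0.5.9

# Pull and run the 671b model inside the container.
docker exec -it ollama ollama run deepseek-r1:671b
```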

Relevant log output


OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.9

GiteaMirror added the bug label 2026-05-04 12:11:25 -05:00

@rick-github commented on GitHub (Feb 14, 2025):

llama.cpp:11968: The current context does not support K-shift

https://github.com/ollama/ollama/issues/5975
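
This error indicates the runner needed to shift the KV cache (context shift) when the prompt outgrew the context window, but the current configuration does not support that operation. A commonly suggested mitigation, discussed in the linked issue, is to raise the context window so a shift is never triggered; the value 8192 below is illustrative, not a confirmed fix for this setup.

```shell
# Interactively: raise num_ctx inside an ollama run session.
ollama run deepseek-r1:671b
# >>> /set parameter num_ctx 8192

# Or per-request via the API, passing num_ctx in "options":
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:671b",
  "prompt": "Hello",
  "options": { "num_ctx": 8192 }
}'
```

A larger num_ctx increases KV-cache memory use per GPU, so the workable value depends on available VRAM across the 8 cards.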

Reference: github-starred/ollama#67983