[GH-ISSUE #8602] Deepseek-R1 671B - Segmentation Fault Bug #31324

Closed
opened 2026-04-22 11:40:47 -05:00 by GiteaMirror · 5 comments

Originally created by @Notbici on GitHub (Jan 27, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8602

What is the issue?

Hi,

I've been using the Deepseek-R1 671B model from Ollama on my 8x H100 machine and keep running into a segmentation fault. I've noticed that the segfault happens more frequently the larger the context becomes.

I'm using the latest Ollama release.
Hardware Specs:

  • 8x H100 - 80GB SXM
  • Xeon Platinum 8468 (160c)
  • Micron 7450 SSD
  • 1548 GB of RAM
  • OS: Ubuntu 22.04
  • CUDA: 12.6
  • NVIDIA driver: 560.35.05

Happy to test params or gather more data; I'm having a hard time working around this. The distilled models, like the DeepSeek Llama 70B, work just fine.

[server.err.log](https://github.com/user-attachments/files/18554764/server.err.log)

Any advice is appreciated.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-22 11:40:47 -05:00

@rick-github commented on GitHub (Jan 27, 2025):

https://github.com/ollama/ollama/issues/5975


@Notbici commented on GitHub (Jan 27, 2025):

> [#5975](https://github.com/ollama/ollama/issues/5975)

Hey, thanks. That thread, along with other adjacent threads, sparks a few headaches. Could we get some idea of where the point of contention is with the solution? Can we donate for developer time specific to this issue, or is it a complicated issue with the llama.cpp implementation?

I see that some workarounds exist where cloning the model file and changing 2 params could ease the symptoms. Can we deploy that workaround to the [ollama model](https://ollama.com/library/deepseek-r1:671b) provided with the box until llama.cpp/ollama work on a concrete solution?


@Notbici commented on GitHub (Jan 28, 2025):

For now this worked:

Modelfile:

```
FROM deepseek-r1:671b
PARAMETER num_ctx 24576
PARAMETER num_predict 8192
```

> ollama create deepseek-r1-fixed -f Modelfile
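If rebuilding the model is inconvenient, the same two limits can also be sent per request through the options field of Ollama's HTTP API. A minimal sketch in Python, assuming a local server on the default port 11434 and the requests package installed:

```python
import requests

# Override num_ctx / num_predict for a single request instead of baking
# them into a cloned Modelfile (assumes Ollama is listening locally on
# its default port, 11434).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:671b",
        "prompt": "Why is the sky blue?",
        "stream": False,
        "options": {"num_ctx": 24576, "num_predict": 8192},
    },
)
print(resp.json()["response"])
```

The two routes should be equivalent here; the Modelfile clone just makes the safer limits the default for everyone using the tag.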


@daniel-adriano commented on GitHub (Feb 1, 2025):

Running into the same problem with both deepseek-r1:671b and deepseek-r1:70b (the machine has 16x V100-32GB).

For deepseek-r1:70b I seem to get away with modifying it to pretty large values:

num_ctx=131072
num_predict=32768

But for deepseek-r1:671b, anything near those values is prohibitive in VRAM. So, I have tried:

PARAMETER num_ctx 4096
PARAMETER num_predict 2048

... but, predictably, I still run into the same problem pretty quickly, seemingly due to the long responses that deepseek-r1 frequently produces.

Is there a better fix? Like a setting that prevents the context from growing beyond the allocated num_ctx instead of crashing the model? Anything else?


@ice6 commented on GitHub (Feb 27, 2025):

@daniel-adriano check here: https://github.com/ggml-org/llama.cpp/issues/8862

> num_predict must be less than or equal to num_ctx / process count.
> If not, then when the context grows beyond what's been provisioned, the program hard exits.
> That feels like a bug to me. But apparently it's as designed.
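
A minimal sketch of that provisioning rule, assuming (as the quote above suggests) that Ollama divides num_ctx evenly across its parallel request slots (OLLAMA_NUM_PARALLEL; the slot counts below are illustrative):

```python
# Sketch of the rule quoted above, assuming Ollama splits num_ctx
# evenly across OLLAMA_NUM_PARALLEL request slots.
def max_safe_num_predict(num_ctx: int, num_parallel: int) -> int:
    # Each slot is provisioned num_ctx // num_parallel tokens of context;
    # num_predict should not exceed that, or generation can overrun the
    # provisioned context and hard-exit.
    return num_ctx // num_parallel

# With the values from the workaround earlier in this thread: 4 parallel
# slots leave only 6144 tokens per slot, so num_predict=8192 could still
# overflow a slot.
print(max_safe_num_predict(24576, 4))  # -> 6144
print(max_safe_num_predict(24576, 1))  # -> 24576
```

Under that reading, the num_ctx 24576 / num_predict 8192 workaround only stays within budget with three or fewer parallel slots.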
