[GH-ISSUE #1139] Error while loading Nous-Capybara-34B #47084

Closed
opened 2026-04-28 03:01:20 -05:00 by GiteaMirror · 2 comments

Originally created by @eramax on GitHub (Nov 15, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1139

Originally assigned to: @BruceMacD on GitHub.

I tried to run https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF
using this modelfile

```
FROM ./nous-capybara-34b.Q3_K_S.gguf
TEMPLATE """USER: {{ .Prompt }} ASSISTANT:"""
PARAMETER num_ctx 200000
PARAMETER stop "USER"
PARAMETER stop "ASSISTANT"
```

I got this error

```
Error: llama runner process has terminated
```

I have 32 GB of RAM and a 1050 Ti GPU with 4 GB of VRAM.
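For context, a back-of-the-envelope estimate suggests why `num_ctx 200000` alone is fatal on this hardware. The sketch below assumes a Yi-34B-style configuration (60 layers, 8 grouped-query KV heads, head dimension 128, fp16 cache) — these numbers are assumptions, not from the thread; check the actual GGUF metadata for the real values:

```python
# Rough KV-cache size estimate for num_ctx 200000 on a 34B model.
# Layer/head counts are assumed Yi-34B-style values (not from the
# original report); verify against the GGUF metadata.
n_layers = 60        # assumed transformer layers
n_kv_heads = 8       # assumed grouped-query KV heads
head_dim = 128       # assumed per-head dimension
n_ctx = 200_000      # from the Modelfile above
bytes_per_elem = 2   # fp16 cache entries

# Two cached tensors (K and V) per layer, each n_ctx * n_kv_heads * head_dim.
kv_bytes = 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elem
print(f"KV cache alone: ~{kv_bytes / 2**30:.1f} GiB")
```

Under these assumptions the KV cache alone is roughly 46 GiB, well beyond 32 GB of system RAM plus 4 GB of VRAM, before the model weights are even loaded.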

GiteaMirror added the bug label 2026-04-28 03:01:20 -05:00

@MostlyKIGuess commented on GitHub (Nov 18, 2023):

I think it might be a segfault: a 4GB GPU isn't really enough to handle a 34B model. Does a 7B model run fine for you?

If so, that's almost certainly the cause. If not, try removing this model and re-creating it with a different configuration, basically creating a custom model.

  • If you want any help, just ping me; I'd help you out in creating a model.
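One concrete way to follow this suggestion is to rebuild the model with a much smaller context window. This is a sketch, not from the thread: the `num_ctx` value of 4096 and the model tag `nous-capybara-test` are illustrative choices:

```shell
# Write a Modelfile with a reduced context window (4096 is an
# illustrative value, not from the original report).
cat > Modelfile <<'EOF'
FROM ./nous-capybara-34b.Q3_K_S.gguf
TEMPLATE """USER: {{ .Prompt }} ASSISTANT:"""
PARAMETER num_ctx 4096
PARAMETER stop "USER"
PARAMETER stop "ASSISTANT"
EOF

# Then build and run it (requires the GGUF file and ollama installed):
#   ollama create nous-capybara-test -f Modelfile
#   ollama run nous-capybara-test
```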

@BruceMacD commented on GitHub (Mar 11, 2024):

Thanks for the report. This was an issue with memory allocation in that version of Ollama. Newer versions fall back to the CPU when there is not enough memory available on the GPU, so if anyone sees a similar error, try updating to the latest release.

Resolving this one for now as I believe it is solved.


Reference: github-starred/ollama#47084