[GH-ISSUE #14615] Error in ollama using open-notebook #35231

Closed
opened 2026-04-22 19:37:01 -05:00 by GiteaMirror · 4 comments

Originally created by @ScaryBeats01 on GitHub (Mar 4, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14615

What is the issue?

When I use open-notebook, it always gives an error 500 after just 30 seconds when creating an insight.
But with the "ollama run" feature, only command-r7b:latest gives this error. I tried qwen3.5:4b, granite3.1-dense:2b, granite4:tiny-h and lfm2.5-thinking without errors.
Look:
PS C:\Users\user> ollama run command-r7b:latest "Summarize this text in 3 bullet points:
>>
>> # INPUT
>> Neural networks are computing systems inspired by biological neural networks..."
Error: 500 Internal Server Error: llama runner process has terminated: CUDA error
PS C:\Users\user> ollama run command-r7b:latest "Summarize this text in 3 bullet points:
>>
>> # INPUT
>> Neural networks are computing systems inspired by biological neural networks..."^C
PS C:\Users\user> ollama rm command-r7b:latest
deleted 'command-r7b:latest'
PS C:\Users\user> ollama pull command-r7b:latest
pulling manifest
pulling b32d935e114c: 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 5.1 GB
pulling 0d8282caa612: 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 7.2 KB
pulling 945eaa8b1428: 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 13 KB
pulling d8455b5dce0b: 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 110 B
pulling 574fdc7616e8: 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 491 B
verifying sha256 digest
writing manifest
success
PS C:\Users\user> ollama run command-r7b:latest "Summarize this text in 3 bullet points:
>>
>> # INPUT
>> Neural networks are computing systems inspired by biological neural networks..."
Error: 500 Internal Server Error: llama runner process has terminated: CUDA error
PS C:\Users\user> ollama ps
NAME ID SIZE PROCESSOR CONTEXT UNTIL

Relevant log output

print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 1B
print_info: model params     = 6.94 B
print_info: general.name     = Granite 4.0 H Tiny
print_info: f_embedding_scale = 12.000000
print_info: f_residual_scale  = 0.220000
print_info: f_attention_scale = 0.007813
print_info: n_ff_shexp        = 1024
print_info: vocab type       = BPE
print_info: n_vocab          = 100352
print_info: n_merges         = 100000
print_info: BOS token        = 100257 '<|end_of_text|>'
print_info: EOS token        = 100257 '<|end_of_text|>'
print_info: EOT token        = 100257 '<|end_of_text|>'
print_info: UNK token        = 100269 '<|unk|>'
print_info: PAD token        = 100256 '<|pad|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 100258 '<|fim_prefix|>'
print_info: FIM SUF token    = 100260 '<|fim_suffix|>'
print_info: FIM MID token    = 100259 '<|fim_middle|>'
print_info: FIM PAD token    = 100261 '<|fim_pad|>'
print_info: EOG token        = 100257 '<|end_of_text|>'
print_info: EOG token        = 100261 '<|fim_pad|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 10 repeating layers to GPU
load_tensors: offloaded 10/41 layers to GPU
load_tensors:          CPU model buffer size =   120.59 MiB
load_tensors:        CUDA0 model buffer size =  1003.36 MiB
load_tensors:    CUDA_Host model buffer size =  3028.19 MiB
time=2026-03-04T09:37:10.728+01:00 level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server not responding"
ggml_cuda_host_malloc: failed to allocate 1.00 MiB of pinned memory: out of memory
CUDA error: out of memory
  current device: 0, in function ggml_backend_cuda_device_event_new at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:4957
  cudaEventCreateWithFlags(&event, 0x02)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:94: CUDA error
time=2026-03-04T09:37:23.287+01:00 level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
time=2026-03-04T09:37:24.385+01:00 level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server not responding"
time=2026-03-04T09:37:26.735+01:00 level=ERROR source=server.go:303 msg="llama runner terminated" error="exit status 1"
time=2026-03-04T09:37:26.902+01:00 level=INFO source=sched.go:518 msg="Load failed" model=C:\Users\user\.ollama\models\blobs\sha256-491ba81786c46a345a5da9a60cdb9f9a3056960c8411dd857153c194b1f91313 error="llama runner process has terminated: CUDA error"
[GIN] 2026/03/04 - 09:37:27 | 500 |   29.7846277s |       127.0.0.1 | POST     "/api/chat"

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.17.5

GiteaMirror added the bug label 2026-04-22 19:37:01 -05:00

@ScaryBeats01 commented on GitHub (Mar 4, 2026):

It occurs in 0.17.6 too.


@rick-github commented on GitHub (Mar 4, 2026):

CUDA error: out of memory

It helps if you provide the full log. Here are some general methods for dealing with OOMs: https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288
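
For Windows specifically, a minimal sketch of both steps (the log location is per the Ollama troubleshooting docs; num_ctx is the usual first knob, since it directly shrinks the KV cache):

# Pull the tail of the full server log, where the CUDA OOM details land:
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 200

# Then, inside an interactive `ollama run command-r7b:latest` session,
# shrink the context window before sending the first prompt:
#   /set parameter num_ctx 2048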


@danigixshi commented on GitHub (Mar 4, 2026):

There's no mystical mystery or paranormal bug here. The log is belting out the truth like drunken karaoke: you are running out of GPU memory. CUDA complains and the process dies, which is why Ollama returns the error 500.

The key line in the log is this one:

ggml_cuda_host_malloc: failed to allocate 1.00 MiB of pinned memory: out of memory

Translated into plain human terms: the GPU tries to allocate memory and there isn't even 1 MB free, so the llama.cpp runner crashes.

Let's break down what's happening.

The command-r7b model weighs in at ~5 GB, but that's not the whole story. LLMs need extra memory for:

- the KV cache (memory for the conversation)
- compute buffers
- the layers loaded onto the GPU
- "pinned" memory shared with the CPU

Typical result for a 7B:

- 5-6 GB model
- +2-4 GB buffers
- +KV cache

Real total: easily 8-10 GB of VRAM.
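
If you want to sanity-check how much of that budget is actually free, ask the card directly before loading the model (a quick sketch; nvidia-smi ships with the NVIDIA driver):

# Total, used, and free VRAM in MiB; if memory.free is well under
# 8-10 GB, a 7B at this quantization cannot sit fully on the GPU.
nvidia-smi --query-gpu=name,memory.total,memory.used,memory.free --format=csv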

If your GPU has:

- 6 GB → impossible
- 8 GB → very tight
- 12 GB → should work

That's why the other models do work:

- qwen3.5:4b
- granite tiny
- 2b

They're much smaller.

On top of that, your log shows:

offloaded 10/41 layers to GPU
CUDA0 model buffer size = 1003 MB
CUDA_Host buffer = 3028 MB
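
An easy way to confirm that VRAM is really the culprit is to take the GPU out of the equation for one request by offloading zero layers (a sketch using the standard /api/generate options field; slow, but if it completes, the diagnosis holds):

# Force CPU-only inference for a single request; num_gpu = 0 offloads no layers.
curl.exe http://localhost:11434/api/generate -d '{
  "model": "command-r7b:latest",
  "prompt": "Summarize this text in 3 bullet points: ...",
  "options": { "num_gpu": 0 }
}'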


@ScaryBeats01 commented on GitHub (Mar 7, 2026):

Now the same command works.


Reference: github-starred/ollama#35231