[GH-ISSUE #10808] devstral:24b-small-2505-q8_0 immediately fails to load. #7097

Closed
opened 2026-04-12 19:03:06 -05:00 by GiteaMirror · 10 comments

Originally created by @SingularityMan on GitHub (May 22, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10808

What is the issue?

This is the error I get, even after restarting the server on Ollama `0.7.0`:

`Error: llama runner process has terminated: GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed`

I have 48 GB of VRAM available. I simply ran `ollama run devstral:24b-small-2505-q8_0` with no context length set anywhere, and I instantly get OOMs.
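(Note: `GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2)` appears to be ggml's element-count invariant for a tensor reshape, e.g. `ggml_reshape_3d`, so the crash reads as a shape mismatch while building the graph rather than a literal out-of-memory condition. A quick, hedged way to double-check that VRAM really isn't the constraint, using standard tools rather than anything confirmed in this thread:

```console
REM Hypothetical sanity checks; command names are standard, values are examples.
REM 1. Confirm free VRAM before the load -- the scheduler estimates ~25 GiB here.
nvidia-smi --query-gpu=memory.total,memory.used,memory.free --format=csv
REM 2. Inspect the model metadata (architecture, context length, quantization).
ollama show devstral:24b-small-2505-q8_0
```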

Relevant log output

[GIN] 2025/05/21 - 22:54:03 | 500 |   18.4269744s |       127.0.0.1 | POST     "/api/generate"
time=2025-05-21T22:54:03.031-04:00 level=DEBUG source=sched.go:364 msg="runner expired event received" runner.name=registry.ollama.ai/library/devstral:24b-small-2505-q8_0 runner.inference=cuda runner.devices=1 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=11052 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 runner.num_ctx=8192
time=2025-05-21T22:54:03.031-04:00 level=DEBUG source=sched.go:379 msg="got lock to unload expired event" runner.name=registry.ollama.ai/library/devstral:24b-small-2505-q8_0 runner.inference=cuda runner.devices=1 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=11052 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 runner.num_ctx=8192
time=2025-05-21T22:54:03.031-04:00 level=DEBUG source=sched.go:391 msg="starting background wait for VRAM recovery" runner.name=registry.ollama.ai/library/devstral:24b-small-2505-q8_0 runner.inference=cuda runner.devices=1 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=11052 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 runner.num_ctx=8192
time=2025-05-21T22:54:03.031-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.1 GiB" before.free_swap="98.6 GiB" now.total="127.1 GiB" now.free="88.9 GiB" now.free_swap="98.6 GiB"
time=2025-05-21T22:54:03.052-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:03.054-04:00 level=DEBUG source=server.go:1023 msg="stopping llama server" pid=11052
time=2025-05-21T22:54:03.054-04:00 level=DEBUG source=sched.go:396 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=11052 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697
time=2025-05-21T22:54:03.304-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="88.9 GiB" before.free_swap="98.6 GiB" now.total="127.1 GiB" now.free="88.9 GiB" now.free_swap="98.7 GiB"
time=2025-05-21T22:54:03.331-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:03.554-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="88.9 GiB" before.free_swap="98.7 GiB" now.total="127.1 GiB" now.free="89.1 GiB" now.free_swap="98.7 GiB"
time=2025-05-21T22:54:03.565-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:03.804-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.1 GiB" before.free_swap="98.7 GiB" now.total="127.1 GiB" now.free="89.1 GiB" now.free_swap="98.7 GiB"
time=2025-05-21T22:54:03.829-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:04.054-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.1 GiB" before.free_swap="98.7 GiB" now.total="127.1 GiB" now.free="89.1 GiB" now.free_swap="98.7 GiB"
time=2025-05-21T22:54:04.076-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:04.305-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.1 GiB" before.free_swap="98.7 GiB" now.total="127.1 GiB" now.free="89.1 GiB" now.free_swap="98.7 GiB"
time=2025-05-21T22:54:04.327-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:04.555-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.1 GiB" before.free_swap="98.7 GiB" now.total="127.1 GiB" now.free="89.1 GiB" now.free_swap="98.7 GiB"
time=2025-05-21T22:54:04.576-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:04.804-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.1 GiB" before.free_swap="98.7 GiB" now.total="127.1 GiB" now.free="89.1 GiB" now.free_swap="98.7 GiB"
time=2025-05-21T22:54:04.827-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:05.055-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.1 GiB" before.free_swap="98.7 GiB" now.total="127.1 GiB" now.free="89.1 GiB" now.free_swap="98.7 GiB"
time=2025-05-21T22:54:05.075-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:05.304-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.1 GiB" before.free_swap="98.7 GiB" now.total="127.1 GiB" now.free="89.1 GiB" now.free_swap="98.7 GiB"
time=2025-05-21T22:54:05.322-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:05.555-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.1 GiB" before.free_swap="98.7 GiB" now.total="127.1 GiB" now.free="89.1 GiB" now.free_swap="98.7 GiB"
time=2025-05-21T22:54:05.571-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:05.805-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.1 GiB" before.free_swap="98.7 GiB" now.total="127.1 GiB" now.free="89.0 GiB" now.free_swap="98.7 GiB"
time=2025-05-21T22:54:05.823-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:06.055-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.0 GiB" before.free_swap="98.7 GiB" now.total="127.1 GiB" now.free="89.0 GiB" now.free_swap="98.6 GiB"
time=2025-05-21T22:54:06.073-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:06.304-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.0 GiB" before.free_swap="98.6 GiB" now.total="127.1 GiB" now.free="89.0 GiB" now.free_swap="98.6 GiB"
time=2025-05-21T22:54:06.323-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:06.554-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.0 GiB" before.free_swap="98.6 GiB" now.total="127.1 GiB" now.free="89.0 GiB" now.free_swap="98.6 GiB"
time=2025-05-21T22:54:06.571-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:06.804-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.0 GiB" before.free_swap="98.6 GiB" now.total="127.1 GiB" now.free="89.0 GiB" now.free_swap="98.6 GiB"
time=2025-05-21T22:54:06.821-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:07.054-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.0 GiB" before.free_swap="98.6 GiB" now.total="127.1 GiB" now.free="89.0 GiB" now.free_swap="98.6 GiB"
time=2025-05-21T22:54:07.071-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:07.305-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.0 GiB" before.free_swap="98.6 GiB" now.total="127.1 GiB" now.free="89.0 GiB" now.free_swap="98.6 GiB"
time=2025-05-21T22:54:07.321-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:07.554-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.0 GiB" before.free_swap="98.6 GiB" now.total="127.1 GiB" now.free="89.0 GiB" now.free_swap="98.6 GiB"
time=2025-05-21T22:54:07.572-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:07.804-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.0 GiB" before.free_swap="98.6 GiB" now.total="127.1 GiB" now.free="89.0 GiB" now.free_swap="98.6 GiB"
time=2025-05-21T22:54:07.822-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:08.054-04:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0237484 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=11052 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697
time=2025-05-21T22:54:08.054-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.0 GiB" before.free_swap="98.6 GiB" now.total="127.1 GiB" now.free="89.0 GiB" now.free_swap="98.6 GiB"
time=2025-05-21T22:54:08.054-04:00 level=DEBUG source=sched.go:399 msg="sending an unloaded event" runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=11052 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697
time=2025-05-21T22:54:08.055-04:00 level=DEBUG source=sched.go:312 msg="ignoring unload event with no pending requests"
time=2025-05-21T22:54:08.070-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:08.304-04:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2735259 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=11052 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697
time=2025-05-21T22:54:08.304-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="89.0 GiB" before.free_swap="98.6 GiB" now.total="127.1 GiB" now.free="89.0 GiB" now.free_swap="98.6 GiB"
time=2025-05-21T22:54:08.320-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="915.5 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="477.5 MiB"
releasing nvml library
time=2025-05-21T22:54:08.555-04:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5240002 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=11052 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697

OS

Windows 10

GPU

NVIDIA RTX 8000 Quadro 48GB

CPU

Ryzen 7950x

Ollama version

Ollama 0.7.0

GiteaMirror added the bug label 2026-04-12 19:03:06 -05:00

@rick-github commented on GitHub (May 22, 2025):

$ ollama run devstral:24b-small-2505-q8_0 hello
Hello! How can I assist you today?

A full log may make diagnosis easier.
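For a default Windows install, a minimal sketch of collecting that log (paths and variables follow Ollama's troubleshooting docs; adjust if Ollama is installed elsewhere):

```console
REM Quit Ollama from the tray icon first, then relaunch with debug logging:
set OLLAMA_DEBUG=1
ollama serve
REM After reproducing the failure, the app's server logs are here:
explorer %LOCALAPPDATA%\Ollama
REM Attach server.log (and any rotated server-*.log) to the issue.
```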


@SingularityMan commented on GitHub (May 22, 2025):

> $ ollama run devstral:24b-small-2505-q8_0 hello
> Hello! How can I assist you today?
> A full log may make diagnosis easier.

The output I provided is all I get. It doesn't even let me send a message.


@rick-github commented on GitHub (May 22, 2025):

Logs from before the failure point.


@SingularityMan commented on GitHub (May 22, 2025):

> Logs from before the failure point.

time=2025-05-22T09:06:15.653-04:00 level=INFO source=routes.go:1205 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:3 OLLAMA_MAX_QUEUE:2 OLLAMA_MODELS:H:\\ai\\ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-05-22T09:06:15.656-04:00 level=INFO source=images.go:463 msg="total blobs: 30"
time=2025-05-22T09:06:15.657-04:00 level=INFO source=images.go:470 msg="total unused blobs removed: 0"
time=2025-05-22T09:06:15.657-04:00 level=INFO source=routes.go:1258 msg="Listening on 127.0.0.1:11434 (version 0.7.0)"
time=2025-05-22T09:06:15.657-04:00 level=DEBUG source=sched.go:108 msg="starting llm scheduler"
time=2025-05-22T09:06:15.658-04:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-05-22T09:06:15.658-04:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-05-22T09:06:15.658-04:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=16 efficiency=0 threads=32
time=2025-05-22T09:06:15.658-04:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-05-22T09:06:15.658-04:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvml.dll
time=2025-05-22T09:06:15.659-04:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvml.dll C:\\ProgramData\\anaconda3\\condabin\\nvml.dll C:\\Program Files\\Common Files\\Oracle\\Java\\javapath\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\bin\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\libnvvp\\nvml.dll C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\java8path\\nvml.dll C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath\\nvml.dll C:\\Program Files\\Microsoft\\jdk-11.0.16.101-hotspot\\bin\\nvml.dll C:\\Windows\\system32\\nvml.dll C:\\Windows\\nvml.dll C:\\Windows\\System32\\Wbem\\nvml.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll C:\\Windows\\System32\\OpenSSH\\nvml.dll C:\\Program Files\\Graphviz\\bin\\nvml.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\nvml.dll C:\\Users\\carlo\\AppData\\Roaming\\nvm\\nvml.dll C:\\Program Files\\nodejs\\nvml.dll C:\\Program Files\\Git\\cmd\\nvml.dll C:\\ProgramData\\anaconda3\\nvml.dll C:\\ProgramData\\anaconda3\\Scripts\\nvml.dll C:\\ProgramData\\anaconda3\\Library\\bin\\nvml.dll C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.1.0\\nvml.dll C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR\\nvml.dll C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.42.34433\\bin\\Hostx64\\x64\\nvml.dll C:\\Program Files\\dotnet\\nvml.dll C:\\Program Files\\MongoDB\\Server\\8.0\\bin\\nvml.dll C:\\data\\mongosh-2.3.8-win32-x64\\bin\\nvml.dll C:\\Users\\carlo\\AppData\\Roaming\\Python\\Python312\\Scripts\\nvml.dll C:\\xampp\\php\\nvml.dll C:\\ProgramData\\ComposerSetup\\bin\\nvml.dll C:\\Program Files\\Docker\\Docker\\resources\\bin\\nvml.dll C:\\Users\\carlo\\.cargo\\bin\\nvml.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\nvml.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\bin\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\libnvvp\\nvml.dll C:\\Users\\carlo\\scoop\\shims\\nvml.dll C:\\Users\\carlo\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\nvml.dll C:\\Users\\carlo\\AppData\\Local\\GitHubDesktop\\bin\\nvml.dll C:\\Users\\carlo\\PycharmProjects\\engshell\\engshell\\nvml.dll C:\\Program Files (x86)\\Nmap\\nvml.dll C:\\MinGW\\bin\\nvml.dll C:\\adb\\platform-tools_r34.0.4-windows\\platform-tools\\nvml.dll C:\\Users\\carlo\\AppData\\Roaming\\nvm\\nvml.dll C:\\Program Files\\nodejs\\nvml.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\nvml.dll C:\\Users\\carlo\\nvml.dll C:\\Users\\carlo\\.lmstudio\\bin\\nvml.dll C:\\Users\\carlo\\AppData\\Roaming\\Composer\\vendor\\bin\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-05-22T09:06:15.659-04:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll"
time=2025-05-22T09:06:15.661-04:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[C:\\Windows\\system32\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-05-22T09:06:15.676-04:00 level=DEBUG source=gpu.go:111 msg="nvidia-ml loaded" library=C:\Windows\system32\nvml.dll
time=2025-05-22T09:06:15.676-04:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvcuda.dll
time=2025-05-22T09:06:15.676-04:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvcuda.dll C:\\ProgramData\\anaconda3\\condabin\\nvcuda.dll C:\\Program Files\\Common Files\\Oracle\\Java\\javapath\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\bin\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\libnvvp\\nvcuda.dll C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\java8path\\nvcuda.dll C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath\\nvcuda.dll C:\\Program Files\\Microsoft\\jdk-11.0.16.101-hotspot\\bin\\nvcuda.dll C:\\Windows\\system32\\nvcuda.dll C:\\Windows\\nvcuda.dll C:\\Windows\\System32\\Wbem\\nvcuda.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll C:\\Windows\\System32\\OpenSSH\\nvcuda.dll C:\\Program Files\\Graphviz\\bin\\nvcuda.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\nvcuda.dll C:\\Users\\carlo\\AppData\\Roaming\\nvm\\nvcuda.dll C:\\Program Files\\nodejs\\nvcuda.dll C:\\Program Files\\Git\\cmd\\nvcuda.dll C:\\ProgramData\\anaconda3\\nvcuda.dll C:\\ProgramData\\anaconda3\\Scripts\\nvcuda.dll C:\\ProgramData\\anaconda3\\Library\\bin\\nvcuda.dll C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.1.0\\nvcuda.dll C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR\\nvcuda.dll C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.42.34433\\bin\\Hostx64\\x64\\nvcuda.dll C:\\Program Files\\dotnet\\nvcuda.dll C:\\Program Files\\MongoDB\\Server\\8.0\\bin\\nvcuda.dll C:\\data\\mongosh-2.3.8-win32-x64\\bin\\nvcuda.dll C:\\Users\\carlo\\AppData\\Roaming\\Python\\Python312\\Scripts\\nvcuda.dll C:\\xampp\\php\\nvcuda.dll C:\\ProgramData\\ComposerSetup\\bin\\nvcuda.dll C:\\Program Files\\Docker\\Docker\\resources\\bin\\nvcuda.dll C:\\Users\\carlo\\.cargo\\bin\\nvcuda.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\nvcuda.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\bin\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\libnvvp\\nvcuda.dll C:\\Users\\carlo\\scoop\\shims\\nvcuda.dll C:\\Users\\carlo\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\nvcuda.dll C:\\Users\\carlo\\AppData\\Local\\GitHubDesktop\\bin\\nvcuda.dll C:\\Users\\carlo\\PycharmProjects\\engshell\\engshell\\nvcuda.dll C:\\Program Files (x86)\\Nmap\\nvcuda.dll C:\\MinGW\\bin\\nvcuda.dll C:\\adb\\platform-tools_r34.0.4-windows\\platform-tools\\nvcuda.dll C:\\Users\\carlo\\AppData\\Roaming\\nvm\\nvcuda.dll C:\\Program Files\\nodejs\\nvcuda.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll C:\\Users\\carlo\\nvcuda.dll C:\\Users\\carlo\\.lmstudio\\bin\\nvcuda.dll C:\\Users\\carlo\\AppData\\Roaming\\Composer\\vendor\\bin\\nvcuda.dll c:\\windows\\system*\\nvcuda.dll]"
time=2025-05-22T09:06:15.677-04:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll"
time=2025-05-22T09:06:15.678-04:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[C:\Windows\system32\nvcuda.dll]
initializing C:\Windows\system32\nvcuda.dll
dlsym: cuInit - 00007FFA5C2A4D20
dlsym: cuDriverGetVersion - 00007FFA5C2A4DC0
dlsym: cuDeviceGetCount - 00007FFA5C2A55B6
dlsym: cuDeviceGet - 00007FFA5C2A55B0
dlsym: cuDeviceGetAttribute - 00007FFA5C2A4F10
dlsym: cuDeviceGetUuid - 00007FFA5C2A55C2
dlsym: cuDeviceGetName - 00007FFA5C2A55BC
dlsym: cuCtxCreate_v3 - 00007FFA5C2A5634
dlsym: cuMemGetInfo_v2 - 00007FFA5C2A5736
dlsym: cuCtxDestroy - 00007FFA5C2A5646
calling cuInit
calling cuDriverGetVersion
raw version 0x2f26
CUDA driver version: 12.7
calling cuDeviceGetCount
device count 1
time=2025-05-22T09:06:15.713-04:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=C:\Windows\system32\nvcuda.dll
[GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6] CUDA totalMem 49151mb
[GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6] CUDA freeMem 47759mb
[GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6] Compute Capability 7.5
time=2025-05-22T09:06:15.823-04:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 library=cuda compute=7.5 driver=12.7 name="Quadro RTX 8000" overhead="750.9 MiB"
time=2025-05-22T09:06:15.831-04:00 level=DEBUG source=amd_hip_windows.go:88 msg=hipDriverGetVersion version=60241512
time=2025-05-22T09:06:15.831-04:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\rocm"
time=2025-05-22T09:06:15.831-04:00 level=DEBUG source=amd_common.go:44 msg="detected ROCM next to ollama executable C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\rocm"
time=2025-05-22T09:06:15.832-04:00 level=DEBUG source=amd_windows.go:73 msg="detected hip devices" count=1
time=2025-05-22T09:06:15.832-04:00 level=DEBUG source=amd_windows.go:93 msg="hip device" id=0 name="AMD Radeon(TM) Graphics" gfx=gfx1036
time=2025-05-22T09:06:16.170-04:00 level=INFO source=amd_windows.go:127 msg="unsupported Radeon iGPU detected skipping" id=0 total="48.2 GiB"
releasing cuda driver library
releasing nvml library
time=2025-05-22T09:06:16.173-04:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 library=cuda variant=v12 compute=7.5 driver=12.7 name="Quadro RTX 8000" total="48.0 GiB" available="46.6 GiB"
[GIN] 2025/05/22 - 09:06:20 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2025-05-22T09:06:20.776-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-22T09:06:20.785-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/22 - 09:06:20 | 200 |     21.2416ms |       127.0.0.1 | POST     "/api/show"
time=2025-05-22T09:06:20.798-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-22T09:06:20.799-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.6 GiB" before.free_swap="128.0 GiB" now.total="127.1 GiB" now.free="113.5 GiB" now.free_swap="127.8 GiB"
time=2025-05-22T09:06:20.820-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:20.830-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-22T09:06:20.839-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-22T09:06:20.839-04:00 level=DEBUG source=sched.go:228 msg="loading first model" model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697
time=2025-05-22T09:06:20.839-04:00 level=DEBUG source=memory.go:111 msg=evaluating library=cuda gpu_count=1 available="[46.6 GiB]"
time=2025-05-22T09:06:20.840-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=llama.vision.block_count default=0
time=2025-05-22T09:06:20.840-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.5 GiB" before.free_swap="127.8 GiB" now.total="127.1 GiB" now.free="113.5 GiB" now.free_swap="127.8 GiB"
time=2025-05-22T09:06:20.851-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:20.852-04:00 level=INFO source=sched.go:777 msg="new model will fit in available VRAM in single GPU, loading" model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 parallel=2 available=50078941184 required="25.0 GiB"
time=2025-05-22T09:06:20.852-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.5 GiB" before.free_swap="127.8 GiB" now.total="127.1 GiB" now.free="113.5 GiB" now.free_swap="127.8 GiB"
time=2025-05-22T09:06:20.866-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:20.867-04:00 level=INFO source=server.go:135 msg="system memory" total="127.1 GiB" free="113.5 GiB" free_swap="127.8 GiB"
time=2025-05-22T09:06:20.867-04:00 level=DEBUG source=memory.go:111 msg=evaluating library=cuda gpu_count=1 available="[46.6 GiB]"
time=2025-05-22T09:06:20.867-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=llama.vision.block_count default=0
time=2025-05-22T09:06:20.867-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.5 GiB" before.free_swap="127.8 GiB" now.total="127.1 GiB" now.free="113.5 GiB" now.free_swap="127.8 GiB"
time=2025-05-22T09:06:20.882-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:20.883-04:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[46.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="25.0 GiB" memory.required.partial="25.0 GiB" memory.required.kv="640.0 MiB" memory.required.allocations="[25.0 GiB]" memory.weights.total="22.7 GiB" memory.weights.repeating="22.0 GiB" memory.weights.nonrepeating="680.0 MiB" memory.graph.full="568.0 MiB" memory.graph.partial="801.0 MiB"
time=2025-05-22T09:06:20.883-04:00 level=INFO source=server.go:211 msg="enabling flash attention"
time=2025-05-22T09:06:20.884-04:00 level=DEBUG source=server.go:284 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
time=2025-05-22T09:06:20.903-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-22T09:06:20.903-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-05-22T09:06:20.903-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=llama.rope.freq_scale default=1
time=2025-05-22T09:06:20.908-04:00 level=DEBUG source=server.go:360 msg="adding gpu library" path=C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
time=2025-05-22T09:06:20.908-04:00 level=DEBUG source=server.go:367 msg="adding gpu dependency paths" paths=[C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12]
time=2025-05-22T09:06:20.908-04:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model H:\\ai\\ollama\\models\\blobs\\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 --ctx-size 8192 --batch-size 512 --n-gpu-layers 41 --threads 16 --flash-attn --kv-cache-type q8_0 --no-mmap --parallel 2 --port 11835"
time=2025-05-22T09:06:20.908-04:00 level=DEBUG source=server.go:432 msg=subprocess CUDA_HOME="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4" CUDA_PATH="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4" CUDA_PATH_V12_4="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4" CUDA_VISIBLE_DEVICES=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 OLLAMA_CUDA=1 OLLAMA_DEBUG=1 OLLAMA_FLASH_ATTENTION=1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_KV_CACHE_TYPE=q8_0 OLLAMA_MAX_LOADED_MODELS=3 OLLAMA_MAX_QUEUE=2 OLLAMA_MODELS=H:\ai\ollama\models OLLAMA_NEW_ENGINE=true OLLAMA_NUM_GPU_LAYERS=65 OLLAMA_NUM_PARALLEL=2 OLLAMA_NUM_THREADS=0 OLLAMA_TIMEOUT=1000000 PATH="C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\ProgramData\\anaconda3\\condabin;C:\\Program Files\\Common Files\\Oracle\\Java\\javapath;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\libnvvp;C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\java8path;C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath;C:\\Program Files\\Microsoft\\jdk-11.0.16.101-hotspot\\bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Graphviz\\bin;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\bin;C:\\Users\\carlo\\AppData\\Roaming\\nvm;C:\\Program Files\\nodejs;C:\\Program Files\\Git\\cmd;C:\\ProgramData\\anaconda3;C:\\ProgramData\\anaconda3\\Scripts;C:\\ProgramData\\anaconda3\\Library\\bin;C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.1.0\\;C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR;C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.42.34433\\bin\\Hostx64\\x64;C:\\Program Files\\dotnet\\;C:\\Program Files\\MongoDB\\Server\\8.0\\bin;C:\\data\\mongosh-2.3.8-win32-x64\\bin;C:\\Users\\carlo\\AppData\\Roaming\\Python\\Python312\\Scripts;C:\\xampp\\php;C:\\ProgramData\\ComposerSetup\\bin;C:\\Program Files\\Docker\\Docker\\resources\\bin;C:\\Users\\carlo\\.cargo\\bin;C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\libnvvp;C:\\Users\\carlo\\scoop\\shims;C:\\Users\\carlo\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\Scripts;C:\\Users\\carlo\\AppData\\Local\\GitHubDesktop\\bin;C:\\Users\\carlo\\PycharmProjects\\engshell\\engshell;C:\\Program Files (x86)\\Nmap;C:\\MinGW\\bin;C:\\adb\\platform-tools_r34.0.4-windows\\platform-tools;C:\\Users\\carlo\\AppData\\Roaming\\nvm;C:\\Program Files\\nodejs;C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama;;C:\\Users\\carlo\\.lmstudio\\bin;C:\\Users\\carlo\\AppData\\Roaming\\Composer\\vendor\\bin;C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama" OLLAMA_LIBRARY_PATH=C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
time=2025-05-22T09:06:20.910-04:00 level=INFO source=sched.go:472 msg="loaded runners" count=1
time=2025-05-22T09:06:20.910-04:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-05-22T09:06:20.911-04:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-05-22T09:06:20.938-04:00 level=INFO source=runner.go:836 msg="starting ollama engine"
time=2025-05-22T09:06:20.938-04:00 level=INFO source=runner.go:899 msg="Server listening on 127.0.0.1:11835"
time=2025-05-22T09:06:20.957-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-22T09:06:20.957-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.description default=""
time=2025-05-22T09:06:20.957-04:00 level=INFO source=ggml.go:73 msg="" architecture=llama file_type=Q8_0 name="Devstral Small 2505" description="" num_tensors=363 num_key_values=41
time=2025-05-22T09:06:20.957-04:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama
load_backend: loaded CPU backend from C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
time=2025-05-22T09:06:20.968-04:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: Quadro RTX 8000, compute capability 7.5, VMM: yes
load_backend: loaded CUDA backend from C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-05-22T09:06:21.052-04:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-05-22T09:06:21.162-04:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
time=2025-05-22T09:06:21.166-04:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA0 size="22.7 GiB"
time=2025-05-22T09:06:21.167-04:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CPU size="680.0 MiB"
time=2025-05-22T09:06:21.412-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.03"
time=2025-05-22T09:06:21.913-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.06"
time=2025-05-22T09:06:22.163-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.10"
time=2025-05-22T09:06:22.414-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.14"
time=2025-05-22T09:06:22.664-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.18"
time=2025-05-22T09:06:22.914-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.22"
time=2025-05-22T09:06:23.165-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.26"
time=2025-05-22T09:06:23.415-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.30"
time=2025-05-22T09:06:23.665-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.34"
time=2025-05-22T09:06:23.915-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.37"
time=2025-05-22T09:06:24.165-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.41"
time=2025-05-22T09:06:24.416-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.45"
time=2025-05-22T09:06:24.666-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.49"
time=2025-05-22T09:06:24.916-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.53"
time=2025-05-22T09:06:25.167-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.57"
time=2025-05-22T09:06:25.417-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.61"
time=2025-05-22T09:06:25.667-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.64"
time=2025-05-22T09:06:25.918-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.68"
time=2025-05-22T09:06:26.169-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.72"
time=2025-05-22T09:06:26.419-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.76"
time=2025-05-22T09:06:26.669-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.80"
time=2025-05-22T09:06:26.919-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.84"
time=2025-05-22T09:06:27.170-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.88"
time=2025-05-22T09:06:27.421-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.92"
time=2025-05-22T09:06:27.671-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.95"
time=2025-05-22T09:06:27.921-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.99"
time=2025-05-22T09:06:28.074-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-05-22T09:06:28.074-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=llama.rope.freq_scale default=1
ggml.c:3081: GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed
time=2025-05-22T09:06:28.212-04:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-05-22T09:06:28.233-04:00 level=ERROR source=server.go:457 msg="llama runner terminated" error="exit status 0xc0000409"
time=2025-05-22T09:06:28.462-04:00 level=ERROR source=sched.go:478 msg="error loading llama server" error="llama runner process has terminated: GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed"
time=2025-05-22T09:06:28.462-04:00 level=DEBUG source=sched.go:480 msg="triggering expiration for failed load" runner.name=registry.ollama.ai/library/devstral:24b-small-2505-q8_0 runner.inference=cuda runner.devices=1 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 runner.num_ctx=8192
[GIN] 2025/05/22 - 09:06:28 | 500 |    7.6747794s |       127.0.0.1 | POST     "/api/generate"
time=2025-05-22T09:06:28.463-04:00 level=DEBUG source=sched.go:364 msg="runner expired event received" runner.name=registry.ollama.ai/library/devstral:24b-small-2505-q8_0 runner.inference=cuda runner.devices=1 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 runner.num_ctx=8192
time=2025-05-22T09:06:28.463-04:00 level=DEBUG source=sched.go:379 msg="got lock to unload expired event" runner.name=registry.ollama.ai/library/devstral:24b-small-2505-q8_0 runner.inference=cuda runner.devices=1 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 runner.num_ctx=8192
time=2025-05-22T09:06:28.463-04:00 level=DEBUG source=sched.go:391 msg="starting background wait for VRAM recovery" runner.name=registry.ollama.ai/library/devstral:24b-small-2505-q8_0 runner.inference=cuda runner.devices=1 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 runner.num_ctx=8192
time=2025-05-22T09:06:28.463-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.5 GiB" before.free_swap="127.8 GiB" now.total="127.1 GiB" now.free="113.2 GiB" now.free_swap="127.6 GiB"
time=2025-05-22T09:06:28.481-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:28.482-04:00 level=DEBUG source=server.go:1023 msg="stopping llama server" pid=31272
time=2025-05-22T09:06:28.482-04:00 level=DEBUG source=sched.go:396 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697
time=2025-05-22T09:06:28.732-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.2 GiB" before.free_swap="127.6 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:28.742-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:28.982-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:28.992-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:29.233-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:29.256-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:29.483-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:29.506-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:29.733-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:29.755-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:29.982-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:30.004-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:30.233-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:30.252-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:30.483-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:30.501-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:30.732-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:30.749-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:30.983-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:31.001-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:31.232-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:31.249-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:31.483-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:31.497-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:31.732-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:31.747-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:31.983-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:31.996-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:32.233-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:32.246-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:32.483-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:32.496-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:32.733-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:32.746-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:32.983-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:32.993-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:33.233-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:33.256-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:33.482-04:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0188961 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697
time=2025-05-22T09:06:33.482-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:33.482-04:00 level=DEBUG source=sched.go:399 msg="sending an unloaded event" runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697
time=2025-05-22T09:06:33.483-04:00 level=DEBUG source=sched.go:312 msg="ignoring unload event with no pending requests"
time=2025-05-22T09:06:33.504-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:33.732-04:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2688569 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697
time=2025-05-22T09:06:33.732-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB"
time=2025-05-22T09:06:33.755-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:33.982-04:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5186237 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697

That happens right after I do: `ollama run devstral:24b-small-2505-q8_0`

`Error: llama runner process has terminated: GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed`
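
For context, this does not appear to be an out-of-memory failure, even though it looks like one from the client side: the 22.7 GiB of weights had already been placed in VRAM and the load had reached 0.99 when the runner aborted. The assertion text matches the element-count check ggml performs on a 3-D reshape (`ggml_reshape_3d` requires the source tensor to contain exactly `ne0*ne1*ne2` elements), which suggests a shape disagreement between the compute graph and a tensor loaded from this GGUF. A minimal sketch against the public ggml C API trips the same abort; the 6-element tensor and the 2x2x2 target shape below are hypothetical, chosen only to violate the check:

```c
// Minimal sketch, assuming only the public ggml C API.
// The shapes here are made up purely to trip the same assertion;
// they are not the tensors involved in the devstral load.
#include "ggml.h"

int main(void) {
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ true,   // metadata only, no tensor data allocated
    };
    struct ggml_context * ctx = ggml_init(params);

    // Source tensor with 6 elements...
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 6);

    // ...reshaped to 2*2*2 = 8 elements: ggml aborts with
    //   GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed
    struct ggml_tensor * b = ggml_reshape_3d(ctx, a, 2, 2, 2);
    (void) b;

    ggml_free(ctx);
    return 0;
}
```

When reproducing, note the runner flags from the `starting llama server` line in the logs: `--flash-attn --kv-cache-type q8_0 --ctx-size 8192 --parallel 2`. Any of these could plausibly be part of the trigger.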

<!-- gh-comment-id:2901171037 --> @SingularityMan commented on GitHub (May 22, 2025):

> Logs from before the failure point.

```
time=2025-05-22T09:06:15.653-04:00 level=INFO source=routes.go:1205 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:3 OLLAMA_MAX_QUEUE:2 OLLAMA_MODELS:H:\\ai\\ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-05-22T09:06:15.656-04:00 level=INFO source=images.go:463 msg="total blobs: 30"
time=2025-05-22T09:06:15.657-04:00 level=INFO source=images.go:470 msg="total unused blobs removed: 0"
time=2025-05-22T09:06:15.657-04:00 level=INFO source=routes.go:1258 msg="Listening on 127.0.0.1:11434 (version 0.7.0)"
time=2025-05-22T09:06:15.657-04:00 level=DEBUG source=sched.go:108 msg="starting llm scheduler"
time=2025-05-22T09:06:15.658-04:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-05-22T09:06:15.658-04:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-05-22T09:06:15.658-04:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=16 efficiency=0 threads=32
time=2025-05-22T09:06:15.658-04:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-05-22T09:06:15.658-04:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvml.dll
time=2025-05-22T09:06:15.659-04:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvml.dll C:\\ProgramData\\anaconda3\\condabin\\nvml.dll C:\\Program Files\\Common Files\\Oracle\\Java\\javapath\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\bin\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\libnvvp\\nvml.dll C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\java8path\\nvml.dll C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath\\nvml.dll C:\\Program Files\\Microsoft\\jdk-11.0.16.101-hotspot\\bin\\nvml.dll C:\\Windows\\system32\\nvml.dll C:\\Windows\\nvml.dll C:\\Windows\\System32\\Wbem\\nvml.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll C:\\Windows\\System32\\OpenSSH\\nvml.dll C:\\Program Files\\Graphviz\\bin\\nvml.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\nvml.dll C:\\Users\\carlo\\AppData\\Roaming\\nvm\\nvml.dll C:\\Program Files\\nodejs\\nvml.dll C:\\Program Files\\Git\\cmd\\nvml.dll C:\\ProgramData\\anaconda3\\nvml.dll C:\\ProgramData\\anaconda3\\Scripts\\nvml.dll C:\\ProgramData\\anaconda3\\Library\\bin\\nvml.dll C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.1.0\\nvml.dll C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR\\nvml.dll C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.42.34433\\bin\\Hostx64\\x64\\nvml.dll C:\\Program Files\\dotnet\\nvml.dll C:\\Program Files\\MongoDB\\Server\\8.0\\bin\\nvml.dll C:\\data\\mongosh-2.3.8-win32-x64\\bin\\nvml.dll C:\\Users\\carlo\\AppData\\Roaming\\Python\\Python312\\Scripts\\nvml.dll C:\\xampp\\php\\nvml.dll C:\\ProgramData\\ComposerSetup\\bin\\nvml.dll C:\\Program Files\\Docker\\Docker\\resources\\bin\\nvml.dll C:\\Users\\carlo\\.cargo\\bin\\nvml.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\nvml.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\bin\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\libnvvp\\nvml.dll C:\\Users\\carlo\\scoop\\shims\\nvml.dll C:\\Users\\carlo\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\nvml.dll C:\\Users\\carlo\\AppData\\Local\\GitHubDesktop\\bin\\nvml.dll C:\\Users\\carlo\\PycharmProjects\\engshell\\engshell\\nvml.dll C:\\Program Files (x86)\\Nmap\\nvml.dll C:\\MinGW\\bin\\nvml.dll C:\\adb\\platform-tools_r34.0.4-windows\\platform-tools\\nvml.dll C:\\Users\\carlo\\AppData\\Roaming\\nvm\\nvml.dll C:\\Program Files\\nodejs\\nvml.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\nvml.dll C:\\Users\\carlo\\nvml.dll C:\\Users\\carlo\\.lmstudio\\bin\\nvml.dll C:\\Users\\carlo\\AppData\\Roaming\\Composer\\vendor\\bin\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-05-22T09:06:15.659-04:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll"
time=2025-05-22T09:06:15.661-04:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[C:\\Windows\\system32\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-05-22T09:06:15.676-04:00 level=DEBUG source=gpu.go:111 msg="nvidia-ml loaded" library=C:\Windows\system32\nvml.dll
time=2025-05-22T09:06:15.676-04:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvcuda.dll
time=2025-05-22T09:06:15.676-04:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvcuda.dll C:\\ProgramData\\anaconda3\\condabin\\nvcuda.dll C:\\Program Files\\Common Files\\Oracle\\Java\\javapath\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\bin\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\libnvvp\\nvcuda.dll C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\java8path\\nvcuda.dll C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath\\nvcuda.dll C:\\Program Files\\Microsoft\\jdk-11.0.16.101-hotspot\\bin\\nvcuda.dll C:\\Windows\\system32\\nvcuda.dll C:\\Windows\\nvcuda.dll C:\\Windows\\System32\\Wbem\\nvcuda.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll C:\\Windows\\System32\\OpenSSH\\nvcuda.dll C:\\Program Files\\Graphviz\\bin\\nvcuda.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\nvcuda.dll C:\\Users\\carlo\\AppData\\Roaming\\nvm\\nvcuda.dll C:\\Program Files\\nodejs\\nvcuda.dll C:\\Program Files\\Git\\cmd\\nvcuda.dll C:\\ProgramData\\anaconda3\\nvcuda.dll C:\\ProgramData\\anaconda3\\Scripts\\nvcuda.dll C:\\ProgramData\\anaconda3\\Library\\bin\\nvcuda.dll C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.1.0\\nvcuda.dll C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR\\nvcuda.dll C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.42.34433\\bin\\Hostx64\\x64\\nvcuda.dll C:\\Program Files\\dotnet\\nvcuda.dll C:\\Program Files\\MongoDB\\Server\\8.0\\bin\\nvcuda.dll C:\\data\\mongosh-2.3.8-win32-x64\\bin\\nvcuda.dll C:\\Users\\carlo\\AppData\\Roaming\\Python\\Python312\\Scripts\\nvcuda.dll C:\\xampp\\php\\nvcuda.dll C:\\ProgramData\\ComposerSetup\\bin\\nvcuda.dll C:\\Program Files\\Docker\\Docker\\resources\\bin\\nvcuda.dll C:\\Users\\carlo\\.cargo\\bin\\nvcuda.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\nvcuda.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\bin\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\libnvvp\\nvcuda.dll C:\\Users\\carlo\\scoop\\shims\\nvcuda.dll C:\\Users\\carlo\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\nvcuda.dll C:\\Users\\carlo\\AppData\\Local\\GitHubDesktop\\bin\\nvcuda.dll C:\\Users\\carlo\\PycharmProjects\\engshell\\engshell\\nvcuda.dll C:\\Program Files (x86)\\Nmap\\nvcuda.dll C:\\MinGW\\bin\\nvcuda.dll C:\\adb\\platform-tools_r34.0.4-windows\\platform-tools\\nvcuda.dll C:\\Users\\carlo\\AppData\\Roaming\\nvm\\nvcuda.dll C:\\Program Files\\nodejs\\nvcuda.dll C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll C:\\Users\\carlo\\nvcuda.dll C:\\Users\\carlo\\.lmstudio\\bin\\nvcuda.dll C:\\Users\\carlo\\AppData\\Roaming\\Composer\\vendor\\bin\\nvcuda.dll c:\\windows\\system*\\nvcuda.dll]"
time=2025-05-22T09:06:15.677-04:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll"
time=2025-05-22T09:06:15.678-04:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[C:\Windows\system32\nvcuda.dll]
initializing C:\Windows\system32\nvcuda.dll
dlsym: cuInit - 00007FFA5C2A4D20
dlsym: cuDriverGetVersion - 00007FFA5C2A4DC0
dlsym: cuDeviceGetCount - 00007FFA5C2A55B6
dlsym: cuDeviceGet - 00007FFA5C2A55B0
dlsym: cuDeviceGetAttribute - 00007FFA5C2A4F10
dlsym: cuDeviceGetUuid - 00007FFA5C2A55C2
dlsym: cuDeviceGetName - 00007FFA5C2A55BC
dlsym: cuCtxCreate_v3 - 00007FFA5C2A5634
dlsym: cuMemGetInfo_v2 - 00007FFA5C2A5736
dlsym: cuCtxDestroy - 00007FFA5C2A5646
calling cuInit
calling cuDriverGetVersion
raw version 0x2f26
CUDA driver version: 12.7
calling cuDeviceGetCount
device count 1
time=2025-05-22T09:06:15.713-04:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=C:\Windows\system32\nvcuda.dll
[GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6] CUDA totalMem 49151mb
[GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6] CUDA freeMem 47759mb
[GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6] Compute Capability 7.5
time=2025-05-22T09:06:15.823-04:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 library=cuda compute=7.5 driver=12.7 name="Quadro RTX 8000" overhead="750.9 MiB"
time=2025-05-22T09:06:15.831-04:00 level=DEBUG source=amd_hip_windows.go:88 msg=hipDriverGetVersion version=60241512
time=2025-05-22T09:06:15.831-04:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\rocm"
time=2025-05-22T09:06:15.831-04:00 level=DEBUG source=amd_common.go:44 msg="detected ROCM next to ollama executable C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\rocm"
time=2025-05-22T09:06:15.832-04:00 level=DEBUG source=amd_windows.go:73 msg="detected hip devices" count=1
time=2025-05-22T09:06:15.832-04:00 level=DEBUG source=amd_windows.go:93 msg="hip device" id=0 name="AMD Radeon(TM) Graphics" gfx=gfx1036
time=2025-05-22T09:06:16.170-04:00 level=INFO source=amd_windows.go:127 msg="unsupported Radeon iGPU detected skipping" id=0 total="48.2 GiB"
releasing cuda driver library
releasing nvml library
time=2025-05-22T09:06:16.173-04:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 library=cuda variant=v12 compute=7.5 driver=12.7 name="Quadro RTX 8000" total="48.0 GiB" available="46.6 GiB"
[GIN] 2025/05/22 - 09:06:20 | 200 | 0s | 127.0.0.1 | HEAD "/"
time=2025-05-22T09:06:20.776-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-22T09:06:20.785-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/22 - 09:06:20 | 200 | 21.2416ms | 127.0.0.1 | POST "/api/show"
time=2025-05-22T09:06:20.798-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-22T09:06:20.799-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.6 GiB" before.free_swap="128.0 GiB" now.total="127.1 GiB" now.free="113.5 GiB" now.free_swap="127.8 GiB"
time=2025-05-22T09:06:20.820-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:20.830-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-22T09:06:20.839-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-22T09:06:20.839-04:00 level=DEBUG source=sched.go:228 msg="loading first model" model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697
time=2025-05-22T09:06:20.839-04:00 level=DEBUG source=memory.go:111 msg=evaluating library=cuda gpu_count=1 available="[46.6 GiB]"
time=2025-05-22T09:06:20.840-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=llama.vision.block_count default=0
time=2025-05-22T09:06:20.840-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.5 GiB" before.free_swap="127.8 GiB" now.total="127.1 GiB" now.free="113.5 GiB" now.free_swap="127.8 GiB"
time=2025-05-22T09:06:20.851-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:20.852-04:00 level=INFO source=sched.go:777 msg="new model will fit in available VRAM in single GPU, loading" model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 parallel=2 available=50078941184 required="25.0 GiB"
time=2025-05-22T09:06:20.852-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.5 GiB" before.free_swap="127.8 GiB" now.total="127.1 GiB" now.free="113.5 GiB" now.free_swap="127.8 GiB"
time=2025-05-22T09:06:20.866-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:20.867-04:00 level=INFO source=server.go:135 msg="system memory" total="127.1 GiB" free="113.5 GiB" free_swap="127.8 GiB"
time=2025-05-22T09:06:20.867-04:00 level=DEBUG source=memory.go:111 msg=evaluating library=cuda gpu_count=1 available="[46.6 GiB]"
time=2025-05-22T09:06:20.867-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=llama.vision.block_count default=0
time=2025-05-22T09:06:20.867-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.5 GiB" before.free_swap="127.8 GiB" now.total="127.1 GiB" now.free="113.5 GiB" now.free_swap="127.8 GiB"
time=2025-05-22T09:06:20.882-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB"
releasing nvml library
time=2025-05-22T09:06:20.883-04:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[46.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="25.0 GiB" memory.required.partial="25.0 GiB" memory.required.kv="640.0 MiB" memory.required.allocations="[25.0 GiB]" memory.weights.total="22.7 GiB" memory.weights.repeating="22.0 GiB" memory.weights.nonrepeating="680.0 MiB" memory.graph.full="568.0 MiB" memory.graph.partial="801.0 MiB"
time=2025-05-22T09:06:20.883-04:00 level=INFO source=server.go:211 msg="enabling flash attention"
time=2025-05-22T09:06:20.884-04:00 level=DEBUG source=server.go:284 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
time=2025-05-22T09:06:20.903-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-22T09:06:20.903-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-05-22T09:06:20.903-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=llama.rope.freq_scale default=1
time=2025-05-22T09:06:20.908-04:00 level=DEBUG source=server.go:360 msg="adding gpu library" path=C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
time=2025-05-22T09:06:20.908-04:00 level=DEBUG source=server.go:367 msg="adding gpu dependency paths" paths=[C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12]
time=2025-05-22T09:06:20.908-04:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model H:\\ai\\ollama\\models\\blobs\\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 --ctx-size 8192 --batch-size 512 --n-gpu-layers 41 --threads 16 --flash-attn --kv-cache-type q8_0 --no-mmap --parallel 2 --port 11835"
Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4" CUDA_PATH="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4" CUDA_PATH_V12_4="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4" CUDA_VISIBLE_DEVICES=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 OLLAMA_CUDA=1 OLLAMA_DEBUG=1 OLLAMA_FLASH_ATTENTION=1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_KV_CACHE_TYPE=q8_0 OLLAMA_MAX_LOADED_MODELS=3 OLLAMA_MAX_QUEUE=2 OLLAMA_MODELS=H:\ai\ollama\models OLLAMA_NEW_ENGINE=true OLLAMA_NUM_GPU_LAYERS=65 OLLAMA_NUM_PARALLEL=2 OLLAMA_NUM_THREADS=0 OLLAMA_TIMEOUT=1000000 PATH="C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\ProgramData\\anaconda3\\condabin;C:\\Program Files\\Common Files\\Oracle\\Java\\javapath;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\libnvvp;C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\java8path;C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath;C:\\Program Files\\Microsoft\\jdk-11.0.16.101-hotspot\\bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Graphviz\\bin;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\bin;C:\\Users\\carlo\\AppData\\Roaming\\nvm;C:\\Program Files\\nodejs;C:\\Program Files\\Git\\cmd;C:\\ProgramData\\anaconda3;C:\\ProgramData\\anaconda3\\Scripts;C:\\ProgramData\\anaconda3\\Library\\bin;C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.1.0\\;C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR;C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.42.34433\\bin\\Hostx64\\x64;C:\\Program Files\\dotnet\\;C:\\Program Files\\MongoDB\\Server\\8.0\\bin;C:\\data\\mongosh-2.3.8-win32-x64\\bin;C:\\Users\\carlo\\AppData\\Roaming\\Python\\Python312\\Scripts;C:\\xampp\\php;C:\\ProgramData\\ComposerSetup\\bin;C:\\Program Files\\Docker\\Docker\\resources\\bin;C:\\Users\\carlo\\.cargo\\bin;C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.4\\libnvvp;C:\\Users\\carlo\\scoop\\shims;C:\\Users\\carlo\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\carlo\\AppData\\Local\\Programs\\Python\\Python310\\Scripts;C:\\Users\\carlo\\AppData\\Local\\GitHubDesktop\\bin;C:\\Users\\carlo\\PycharmProjects\\engshell\\engshell;C:\\Program Files (x86)\\Nmap;C:\\MinGW\\bin;C:\\adb\\platform-tools_r34.0.4-windows\\platform-tools;C:\\Users\\carlo\\AppData\\Roaming\\nvm;C:\\Program Files\\nodejs;C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama;;C:\\Users\\carlo\\.lmstudio\\bin;C:\\Users\\carlo\\AppData\\Roaming\\Composer\\vendor\\bin;C:\\Users\\carlo\\AppData\\Local\\Programs\\Ollama\\lib\\ollama" OLLAMA_LIBRARY_PATH=C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12 time=2025-05-22T09:06:20.910-04:00 level=INFO source=sched.go:472 msg="loaded runners" count=1 time=2025-05-22T09:06:20.910-04:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding" time=2025-05-22T09:06:20.911-04:00 level=INFO 
source=server.go:625 msg="waiting for server to become available" status="llm server error" time=2025-05-22T09:06:20.938-04:00 level=INFO source=runner.go:836 msg="starting ollama engine" time=2025-05-22T09:06:20.938-04:00 level=INFO source=runner.go:899 msg="Server listening on 127.0.0.1:11835" time=2025-05-22T09:06:20.957-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32 time=2025-05-22T09:06:20.957-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.description default="" time=2025-05-22T09:06:20.957-04:00 level=INFO source=ggml.go:73 msg="" architecture=llama file_type=Q8_0 name="Devstral Small 2505" description="" num_tensors=363 num_key_values=41 time=2025-05-22T09:06:20.957-04:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama load_backend: loaded CPU backend from C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll time=2025-05-22T09:06:20.968-04:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: Quadro RTX 8000, compute capability 7.5, VMM: yes load_backend: loaded CUDA backend from C:\Users\carlo\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll time=2025-05-22T09:06:21.052-04:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang) time=2025-05-22T09:06:21.162-04:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model" time=2025-05-22T09:06:21.166-04:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA0 size="22.7 GiB" time=2025-05-22T09:06:21.167-04:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CPU size="680.0 MiB" time=2025-05-22T09:06:21.412-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.03" time=2025-05-22T09:06:21.913-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.06" time=2025-05-22T09:06:22.163-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.10" time=2025-05-22T09:06:22.414-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.14" time=2025-05-22T09:06:22.664-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.18" time=2025-05-22T09:06:22.914-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.22" time=2025-05-22T09:06:23.165-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.26" time=2025-05-22T09:06:23.415-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.30" time=2025-05-22T09:06:23.665-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.34" time=2025-05-22T09:06:23.915-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.37" time=2025-05-22T09:06:24.165-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.41" time=2025-05-22T09:06:24.416-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.45" time=2025-05-22T09:06:24.666-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.49" time=2025-05-22T09:06:24.916-04:00 
level=DEBUG source=server.go:636 msg="model load progress 0.53" time=2025-05-22T09:06:25.167-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.57" time=2025-05-22T09:06:25.417-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.61" time=2025-05-22T09:06:25.667-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.64" time=2025-05-22T09:06:25.918-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.68" time=2025-05-22T09:06:26.169-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.72" time=2025-05-22T09:06:26.419-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.76" time=2025-05-22T09:06:26.669-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.80" time=2025-05-22T09:06:26.919-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.84" time=2025-05-22T09:06:27.170-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.88" time=2025-05-22T09:06:27.421-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.92" time=2025-05-22T09:06:27.671-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.95" time=2025-05-22T09:06:27.921-04:00 level=DEBUG source=server.go:636 msg="model load progress 0.99" time=2025-05-22T09:06:28.074-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+" time=2025-05-22T09:06:28.074-04:00 level=DEBUG source=ggml.go:154 msg="key not found" key=llama.rope.freq_scale default=1 ggml.c:3081: GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed time=2025-05-22T09:06:28.212-04:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error" time=2025-05-22T09:06:28.233-04:00 level=ERROR source=server.go:457 msg="llama runner terminated" error="exit status 0xc0000409" time=2025-05-22T09:06:28.462-04:00 level=ERROR source=sched.go:478 msg="error loading llama server" error="llama runner process has terminated: GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed" time=2025-05-22T09:06:28.462-04:00 level=DEBUG source=sched.go:480 msg="triggering expiration for failed load" runner.name=registry.ollama.ai/library/devstral:24b-small-2505-q8_0 runner.inference=cuda runner.devices=1 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 runner.num_ctx=8192 [GIN] 2025/05/22 - 09:06:28 | 500 | 7.6747794s | 127.0.0.1 | POST "/api/generate" time=2025-05-22T09:06:28.463-04:00 level=DEBUG source=sched.go:364 msg="runner expired event received" runner.name=registry.ollama.ai/library/devstral:24b-small-2505-q8_0 runner.inference=cuda runner.devices=1 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 runner.num_ctx=8192 time=2025-05-22T09:06:28.463-04:00 level=DEBUG source=sched.go:379 msg="got lock to unload expired event" runner.name=registry.ollama.ai/library/devstral:24b-small-2505-q8_0 runner.inference=cuda runner.devices=1 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 runner.num_ctx=8192 
time=2025-05-22T09:06:28.463-04:00 level=DEBUG source=sched.go:391 msg="starting background wait for VRAM recovery" runner.name=registry.ollama.ai/library/devstral:24b-small-2505-q8_0 runner.inference=cuda runner.devices=1 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 runner.num_ctx=8192 time=2025-05-22T09:06:28.463-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.5 GiB" before.free_swap="127.8 GiB" now.total="127.1 GiB" now.free="113.2 GiB" now.free_swap="127.6 GiB" time=2025-05-22T09:06:28.481-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:28.482-04:00 level=DEBUG source=server.go:1023 msg="stopping llama server" pid=31272 time=2025-05-22T09:06:28.482-04:00 level=DEBUG source=sched.go:396 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 time=2025-05-22T09:06:28.732-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.2 GiB" before.free_swap="127.6 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:28.742-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:28.982-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:28.992-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:29.233-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:29.256-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:29.483-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:29.506-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" 
overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:29.733-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:29.755-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:29.982-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:30.004-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:30.233-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:30.252-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:30.483-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:30.501-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:30.732-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:30.749-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:30.983-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:31.001-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library 
time=2025-05-22T09:06:31.232-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:31.249-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:31.483-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:31.497-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:31.732-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:31.747-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:31.983-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:31.996-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:32.233-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:32.246-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:32.483-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:32.496-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:32.733-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" 
before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:32.746-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:32.983-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:32.993-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:33.233-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:33.256-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:33.482-04:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0188961 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 time=2025-05-22T09:06:33.482-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:33.482-04:00 level=DEBUG source=sched.go:399 msg="sending an unloaded event" runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 time=2025-05-22T09:06:33.483-04:00 level=DEBUG source=sched.go:312 msg="ignoring unload event with no pending requests" time=2025-05-22T09:06:33.504-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:33.732-04:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2688569 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 time=2025-05-22T09:06:33.732-04:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="127.1 GiB" before.free="113.3 GiB" before.free_swap="127.5 GiB" now.total="127.1 GiB" now.free="113.3 GiB" now.free_swap="127.5 GiB" time=2025-05-22T09:06:33.755-04:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" 
gpu=GPU-86c1a0d8-a857-7035-6d12-957836f9d5d6 name="Quadro RTX 8000" overhead="750.9 MiB" before.total="48.0 GiB" before.free="46.6 GiB" now.total="48.0 GiB" now.free="46.6 GiB" now.used="642.1 MiB" releasing nvml library time=2025-05-22T09:06:33.982-04:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5186237 runner.size="25.0 GiB" runner.vram="25.0 GiB" runner.parallel=2 runner.pid=31272 runner.model=H:\ai\ollama\models\blobs\sha256-716e71486001082af395e305779d97bc4cef966b1f9158cdee24d3d8d1e41697 ``` That happens right after I do: `ollama run devstral:24b-small-2505-q8_0` `Error: llama runner process has terminated: GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed`
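
For anyone trying to capture the same scheduler detail while reproducing, a minimal sketch (assumes a shell-managed install where you can restart the server yourself; `OLLAMA_DEBUG=1` is what enables the debug-level lines shown above):

```
# Stop any running server, then relaunch it with debug logging.
OLLAMA_DEBUG=1 ollama serve &

# In a second shell, trigger the failure and collect the log output.
ollama run devstral:24b-small-2505-q8_0
```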

@hideaki-t commented on GitHub (May 23, 2025):

I have the same problem, and it seems to happen when OLLAMA_NEW_ENGINE=1 is set.
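
For anyone wanting to confirm that correlation, a sketch of an A/B check (assumes a shell-managed server; the only variable changed between runs is `OLLAMA_NEW_ENGINE`):

```
# Run A: let ollama pick the engine (variable unset).
ollama serve &
ollama run devstral:24b-small-2505-q8_0

# Stop the first server, then Run B: force the new engine --
# the configuration associated with the assert in this thread.
kill %1
OLLAMA_NEW_ENGINE=1 ollama serve &
ollama run devstral:24b-small-2505-q8_0
```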

@rick-github commented on GitHub (May 23, 2025):

Don't force models onto the new engine; let ollama choose the appropriate one for the model. As the migration progresses, they will be moved over.
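
In practice that means making sure the override isn't set anywhere before restarting the server; a sketch for a shell-managed install (systemd or Docker setups would instead drop the variable from the unit file or container environment):

```
unset OLLAMA_NEW_ENGINE   # clear the override in the shell that launches the server
ollama serve
```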

@SingularityMan commented on GitHub (May 23, 2025):

> Don't force models onto the new engine; let ollama choose the appropriate one for the model. As the migration progresses, they will be moved over.

I've done both approaches. Still same issue. Log looks exactly the same as what I provided.

@sempervictus commented on GitHub (May 23, 2025):

@rick-github - can confirm the same on the V100s I'm using (SXM2 setup, driver 570, CUDA 12.8, all dockerized):

time=2025-05-23T15:20:59.024Z level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-b3a2c9a8fef9be8d2ef951aecca36a36b9ea0b70abe9359eab4315bf4cd9be01 --ctx-size 131072 --batch-size 512 --n-gpu-layers 41 --threads 40 --parallel 1 --tensor-split 11,10,10,10 --multiuser-cache --port 40803"
time=2025-05-23T15:20:59.025Z level=INFO source=sched.go:472 msg="loaded runners" count=2
time=2025-05-23T15:20:59.025Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-05-23T15:20:59.025Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
time=2025-05-23T15:20:59.047Z level=INFO source=runner.go:836 msg="starting ollama engine"
time=2025-05-23T15:20:59.048Z level=INFO source=runner.go:899 msg="Server listening on 127.0.0.1:40803"
time=2025-05-23T15:20:59.128Z level=INFO source=ggml.go:73 msg="" architecture=llama file_type=Q4_K_M name="Devstral Small 2505" description="" num_tensors=363 num_key_values=41
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-skylakex.so
time=2025-05-23T15:20:59.276Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 4 CUDA devices:
  Device 0: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes
  Device 1: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes
  Device 2: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes
  Device 3: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2025-05-23T15:20:59.757Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 CUDA.3.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.3.USE_GRAPHS=1 CUDA.3.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-05-23T15:20:59.892Z level=INFO source=ggml.go:299 msg="model weights" buffer=CPU size="360.0 MiB"
time=2025-05-23T15:20:59.892Z level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA0 size="3.5 GiB"
time=2025-05-23T15:20:59.892Z level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA1 size="3.0 GiB"
time=2025-05-23T15:20:59.892Z level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA2 size="3.0 GiB"
time=2025-05-23T15:20:59.892Z level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA3 size="3.4 GiB"
ggml.c:3081: GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed
SIGSEGV: segmentation violation
PC=0x7fbdfa1aae47 m=65 sigcode=1 addr=0x213a03ef0
signal arrived during cgo execution

goroutine 102 gp=0xc000582a80 m=65 mp=0xc00150c008 [syscall]:
runtime.cgocall(0x55bb02cbef40, 0xc000517a08)
	runtime/cgocall.go:167 +0x4b fp=0xc0005179e0 sp=0xc0005179a8 pc=0x55bb02015ecb
github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_reshape_3d(0x7fb8dc004580, 0x7fbdd407da70, 0xa0, 0x20, 0x200)
	_cgo_gotypes.go:1571 +0x4b fp=0xc000517a08 sp=0xc0005179e0 pc=0x55bb0241798b
github.com/ollama/ollama/ml/backend/ggml.(*Tensor).Reshape.func3(...)
	github.com/ollama/ollama/ml/backend/ggml/ggml.go:966
github.com/ollama/ollama/ml/backend/ggml.(*Tensor).Reshape(0xc000010b58, {0x55bb03362988?, 0xc00219f2f0?}, {0xc00134ec78?, 0xc000010b40?, 0x18?})
	github.com/ollama/ollama/ml/backend/ggml/ggml.go:966 +0x33e fp=0xc000517af0 sp=0xc000517a08 pc=0x55bb0242257e
github.com/ollama/ollama/model/models/llama.(*SelfAttention).Forward(0xc001196780, {0x55bb03362988, 0xc00219f2f0}, {0x55bb0336c8c8, 0xc000010b40}, {0x55bb0336c8c8, 0xc000010198}, {0x55bb03361d00, 0xc001b32000}, 0xc001386ea0)
	github.com/ollama/ollama/model/models/llama/model.go:86 +0x156 fp=0xc000517bb0 sp=0xc000517af0 pc=0x55bb024bc056
github.com/ollama/ollama/model/models/llama.(*Layer).Forward(0xc000517cd8, {0x55bb03362988, 0xc00219f2f0}, {0x55bb0336c8c8, 0xc0000107b0}, {0x55bb0336c8c8, 0xc000010198}, {0x0, 0x0}, {0x55bb03361d00, ...}, ...)
	github.com/ollama/ollama/model/models/llama/model.go:129 +0xd9 fp=0xc000517c30 sp=0xc000517bb0 pc=0x55bb024bc739
github.com/ollama/ollama/model/models/llama.(*Model).Forward(0xc000226070, {0x55bb03362988, 0xc00219f2f0}, {{0x55bb0336c8c8, 0xc001c76960}, {0x0, 0x0, 0x0}, {0xc00186b000, 0x200, ...}, ...})
	github.com/ollama/ollama/model/models/llama/model.go:167 +0x2ce fp=0xc000517d20 sp=0xc000517c30 pc=0x55bb024bcbce
github.com/ollama/ollama/runner/ollamarunner.(*Server).reserveWorstCaseGraph(0xc0002daea0)
	github.com/ollama/ollama/runner/ollamarunner/runner.go:751 +0x325 fp=0xc000517ea8 sp=0xc000517d20 pc=0x55bb024f2be5
github.com/ollama/ollama/runner/ollamarunner.(*Server).loadModel(0xc0002daea0, {0x55bb0335ab10?, 0xc000621950?}, {0x7ffc1f50cbf6?, 0x0?}, {0xc000619b30, 0x28, 0x0, 0x29, {0xc0003e01b0, ...}, ...}, ...)
	github.com/ollama/ollama/runner/ollamarunner/runner.go:799 +0x273 fp=0xc000517f20 sp=0xc000517ea8 pc=0x55bb024f30d3
github.com/ollama/ollama/runner/ollamarunner.Execute.gowrap1()
	github.com/ollama/ollama/runner/ollamarunner/runner.go:872 +0xbd fp=0xc000517fe0 sp=0xc000517f20 pc=0x55bb024f43bd
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc000517fe8 sp=0xc000517fe0 pc=0x55bb02020901
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
	github.com/ollama/ollama/runner/ollamarunner/runner.go:872 +0xa2b
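
For context on what the assert is guarding: `ggml_reshape_3d` requires that the tensor's element count equal the product of the requested dimensions, and the cgo frame above shows that requested shape in hex (0xa0, 0x20, 0x200). A quick check of the numbers (the shape comes straight from the trace; everything else here is just arithmetic):

```
# ggml_reshape_3d(ctx, a, ne0, ne1, ne2) asserts ggml_nelements(a) == ne0*ne1*ne2.
# The stack trace requests ne0=0xa0, ne1=0x20, ne2=0x200:
printf 'requested elements: %d\n' $((0xa0 * 0x20 * 0x200))   # 160 * 32 * 512 = 2621440
```

The assert firing means `ggml_nelements(a)` differed from that product, i.e. the tensor the new engine handed in did not match the shape the llama attention code expected.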

@rick-github commented on GitHub (May 23, 2025):

This is a bit of a Heisenbug. I was able to reproduce the failure using the new engine. It throws at this location:
https://github.com/ollama/ollama/blob/884d26093c80491a3fe07f606fc04851dc317199/ml/backend/ggml/ggml/src/ggml.c#L3081

However, when I wrapped it in some tracing, it stopped failing. I rolled back the changes and it continues to not fail. So investigation continues.

@SingularityMan commented on GitHub (May 24, 2025):

> This is a bit of a Heisenbug. I was able to reproduce the failure using the new engine. It throws at this location:
>
> [ollama/ml/backend/ggml/ggml/src/ggml.c](https://github.com/ollama/ollama/blob/884d26093c80491a3fe07f606fc04851dc317199/ml/backend/ggml/ggml/src/ggml.c#L3081)
>
> Line 3081 in [884d260](https://github.com/ollama/ollama/commit/884d26093c80491a3fe07f606fc04851dc317199)
>
> `GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2);`
>
> However, when I wrapped it in some tracing, it stopped failing. I rolled back the changes and it continues to not fail. So investigation continues.

0.7.1 fixed it. Closing now.
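
For anyone hitting this on an older build: upgrade, then verify (a sketch; the exact upgrade step depends on how ollama was installed):

```
ollama -v   # should report version 0.7.1 or newer
ollama run devstral:24b-small-2505-q8_0
```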

Reference: github-starred/ollama#7097