[GH-ISSUE #8689] Error LLama runner process has terminated: %!w(<nil>) #5631

Closed
opened 2026-04-12 16:54:39 -05:00 by GiteaMirror · 6 comments

Originally created by @Saatvik-droid on GitHub (Jan 30, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8689

What is the issue?

Sometimes when running inference against ollama using the Python module I get this error. After retrying a couple of times it works; the failures look random to me.
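
As a workaround while the crash is intermittent, a small client-side retry can ride out the 500 responses. This is a minimal sketch, assuming the official `ollama` Python package; the model name, retry count, and delay are placeholders, not taken from the thread.

```python
# Minimal client-side retry sketch (workaround, not a fix for the runner crash).
# Assumes the official `ollama` Python package; the model name is a placeholder.
import time

import ollama


def chat_with_retry(messages, model="llama3.2-vision", retries=3, delay=2.0):
    last_err = None
    for _ in range(retries):
        try:
            return ollama.chat(model=model, messages=messages)
        except ollama.ResponseError as err:
            # The server answers 500 when the runner dies; a retry usually
            # succeeds once the runner loads cleanly.
            last_err = err
            time.sleep(delay)
    raise last_err


if __name__ == "__main__":
    reply = chat_with_retry([{"role": "user", "content": "Hello"}])
    print(reply["message"]["content"])
```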

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.5.7

GiteaMirror added the bug, needs more info labels 2026-04-12 16:54:39 -05:00

@rick-github commented on GitHub (Jan 30, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.
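
For reference, a quick way to grab the tail of the server log for a report. This is a sketch assuming a default Windows install, where Ollama writes server.log under %LOCALAPPDATA%\Ollama per the troubleshooting doc linked above; adjust the path if your install differs.

```python
# Sketch: print the last lines of the Ollama server log on Windows.
# Assumes the default log location %LOCALAPPDATA%\Ollama\server.log
# described in the troubleshooting doc; adjust if your install differs.
import os
from pathlib import Path


def tail(path: Path, n: int = 200) -> str:
    """Return the last n lines of a text file."""
    lines = path.read_text(encoding="utf-8", errors="replace").splitlines()
    return "\n".join(lines[-n:])


if __name__ == "__main__":
    log_path = Path(os.environ["LOCALAPPDATA"]) / "Ollama" / "server.log"
    print(tail(log_path))
```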


@Saatvik-droid commented on GitHub (Feb 3, 2025):

time=2025-02-03T05:52:46.700Z level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
time=2025-02-03T05:52:46.784Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=D:\ollama\blobs\sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 gpu=GPU-51b689d3-02bf-257b-2546-d85dab7b0097 parallel=1 available=15813967872 required="11.3 GiB"
time=2025-02-03T05:52:46.793Z level=INFO source=server.go:104 msg="system memory" total="15.7 GiB" free="11.6 GiB" free_swap="13.2 GiB"
time=2025-02-03T05:52:46.799Z level=INFO source=memory.go:356 msg="offload to cuda" projector.weights="1.8 GiB" projector.graph="2.8 GiB" layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[14.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.3 GiB" memory.required.partial="11.3 GiB" memory.required.kv="656.2 MiB" memory.required.allocations="[11.3 GiB]" memory.weights.total="5.5 GiB" memory.weights.repeating="5.1 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="669.5 MiB"
time=2025-02-03T05:52:46.805Z level=INFO source=server.go:376 msg="starting llama server" cmd="D:\ollama\lib\ollama\runners\cuda_v12\ollama_llama_server.exe runner --model D:\ollama\blobs\sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 --ctx-size 2048 --batch-size 512 --n-gpu-layers 41 --mmproj D:\ollama\blobs\sha256-ece5e659647a20a5c28ab9eea1c12a1ad430bc0f2a27021d00ad103b3bf5206f --threads 2 --no-mmap --parallel 1 --port 50687"
time=2025-02-03T05:52:47.394Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-03T05:52:47.394Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-03T05:52:47.404Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-03T05:52:47.927Z level=INFO source=runner.go:941 msg="starting go runner"
time=2025-02-03T05:52:48.026Z level=INFO source=runner.go:942 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(clang)" threads=4
Listen error: listen tcp 127.0.0.1:8080: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
time=2025-02-03T05:52:48.156Z level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: %!w(<nil>)"
[GIN] 2025/02/03 - 05:52:48 | 500 | 1.4883586s | ::1 | POST "/api/chat"
time=2025-02-03T05:52:53.166Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0099036 model=D:\ollama\blobs\sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068
time=2025-02-03T05:52:53.416Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2598836 model=D:\ollama\blobs\sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068
time=2025-02-03T05:52:53.666Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5099232 model=D:\ollama\blobs\sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068
[GIN] 2025/02/03 - 05:53:05 | 200 | 2.2954ms | 100.88.128.103 | GET "/api/tags"
time=2025-02-03T05:53:22.319Z level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
time=2025-02-03T05:53:22.388Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=D:\ollama\blobs\sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 gpu=GPU-51b689d3-02bf-257b-2546-d85dab7b0097 parallel=1 available=15813967872 required="11.3 GiB"
time=2025-02-03T05:53:22.397Z level=INFO source=server.go:104 msg="system memory" total="15.7 GiB" free="11.6 GiB" free_swap="13.1 GiB"
time=2025-02-03T05:53:22.402Z level=INFO source=memory.go:356 msg="offload to cuda" projector.weights="1.8 GiB" projector.graph="2.8 GiB" layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[14.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.3 GiB" memory.required.partial="11.3 GiB" memory.required.kv="656.2 MiB" memory.required.allocations="[11.3 GiB]" memory.weights.total="5.5 GiB" memory.weights.repeating="5.1 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="669.5 MiB"
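
The notable line above is the bind failure: the runner's listen on 127.0.0.1:8080 was refused by Windows ("forbidden by its access permissions") and the process exited, which is what the scheduler then surfaces as "llama runner process has terminated". One possible cause on Windows, not confirmed anywhere in this thread, is the port sitting inside a reserved/excluded TCP port range (for example one reserved by Hyper-V). The sketch below checks the ports that appear in the log against netsh's excluded ranges; treat it as a diagnostic assumption, not a conclusion.

```python
# Diagnostic sketch (assumption, not a confirmed root cause): check whether a
# port falls inside a Windows excluded/reserved TCP port range via netsh.
import re
import subprocess


def excluded_ranges():
    out = subprocess.run(
        ["netsh", "interface", "ipv4", "show", "excludedportrange", "protocol=tcp"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Rows look like "      49152       49251" (optionally with a trailing '*').
    return [(int(a), int(b)) for a, b in re.findall(r"^\s*(\d+)\s+(\d+)", out, re.M)]


def is_excluded(port: int) -> bool:
    return any(lo <= port <= hi for lo, hi in excluded_ranges())


if __name__ == "__main__":
    for port in (8080, 50687):  # ports seen in the log above
        print(port, "excluded" if is_excluded(port) else "not excluded")
```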


@Saatvik-droid commented on GitHub (Feb 17, 2025):

Hi @rick-github, any update on this? Is there anything more that I can provide?


@rick-github commented on GitHub (Feb 17, 2025):

Could you provide a full log? There are earlier log entries that may contain useful info.


@Saatvik-droid commented on GitHub (Mar 13, 2025):

[20250313.053440.routes.go1215.INFO.txt](https://github.com/user-attachments/files/19226833/20250313.053440.routes.go1215.INFO.txt)


@rick-github commented on GitHub (Mar 13, 2025):

There's no `%!w(<nil>)` error in this log. It's also from a different version of ollama than the one you were running when you created this issue. Is it still happening?

Reference: github-starred/ollama#5631