[PR #12888] [MERGED] logs: catch rocm errors #13991

Opened 2026-04-13 00:42:03 -05:00 by GiteaMirror

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/12888
Author: @dhiltgen
Created: 10/31/2025
Status: Merged
Merged: 10/31/2025
Merged by: @dhiltgen

Base: main ← Head: rocm_error


📝 Commits (1)

ce67c78 logs: catch rocm errors

📊 Changes

1 file changed (+1 addition, -0 deletions)

📝 llm/status.go (+1 -0)

📄 Description

This will help bubble up more crash errors with their details instead of a generic message.
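The change itself is a single added pattern in llm/status.go. As a minimal sketch of the mechanism, assuming the runner's status writer scans stderr for known error substrings (the names `errorPrefixes` and `lastError` below are illustrative, not copied from the actual file), adding a "ROCm error" entry lets the detailed crash line be captured and reported:

```go
package main

import (
	"bytes"
	"fmt"
)

// Hypothetical list of substrings that mark fatal backend errors in the
// runner's stderr. The PR's +1 line amounts to adding a "ROCm error"
// entry so AMD crashes are matched the same way CUDA ones already are.
var errorPrefixes = []string{
	"error loading model",
	"CUDA error",
	"ROCm error", // the new pattern
}

// lastError returns the most recent line of output matching a known
// error pattern, so the scheduler can report it to the client instead
// of a generic "connection forcibly closed" message.
func lastError(output []byte) string {
	var msg string
	for _, prefix := range errorPrefixes {
		if _, after, ok := bytes.Cut(output, []byte(prefix)); ok {
			line, _, _ := bytes.Cut(after, []byte("\n"))
			msg = prefix + string(bytes.TrimSpace(line))
		}
	}
	return msg
}

func main() {
	stderr := []byte("ROCm error: invalid argument\n  current device: 0, in function ggml_cuda_op_mul_mat ...")
	fmt.Println(lastError(stderr)) // ROCm error: invalid argument
}
```

With a pattern like this in place, the "Load failed" line in the server logs below would carry the ROCm message rather than the opaque wsarecv error.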

Example scenario on Windows before this change. The client sees:

llm_image_test.go:74: failed to load model qwen3-vl:8b: 500 Internal Server Error: do load request: Post "http://127.0.0.1:57617/load": read tcp 127.0.0.1:57621->127.0.0.1:57617: wsarecv: An existing connection was forcibly closed by the remote host.

Server logs (most likely an OOM):

time=2025-10-30T19:19:12.705-07:00 level=DEBUG source=vocabulary.go:52 msg="adding bos token to prompt" id=0
ROCm error: invalid argument
  current device: 0, in function ggml_cuda_op_mul_mat at C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:1718
  hipMemsetAsync(dev[id].src0_dd + nbytes_data, 0, nbytes_padding, stream)
C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:88: ROCm error
time=2025-10-30T19:19:12.811-07:00 level=INFO source=device.go:212 msg="model weights" device=ROCm0 size="5.3 GiB"
time=2025-10-30T19:19:12.811-07:00 level=INFO source=device.go:217 msg="model weights" device=CPU size="292.4 MiB"
time=2025-10-30T19:19:12.811-07:00 level=INFO source=device.go:223 msg="kv cache" device=ROCm0 size="224.0 MiB"
time=2025-10-30T19:19:12.811-07:00 level=INFO source=device.go:234 msg="compute graph" device=ROCm0 size="2.0 GiB"
time=2025-10-30T19:19:12.811-07:00 level=INFO source=device.go:239 msg="compute graph" device=CPU size="16.8 MiB"
time=2025-10-30T19:19:12.811-07:00 level=INFO source=device.go:244 msg="total memory" size="7.8 GiB"
time=2025-10-30T19:19:12.811-07:00 level=INFO source=sched.go:446 msg="Load failed" model=C:\Users\daniel\.ollama\models\blobs\sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 error="do load request: Post \"http://127.0.0.1:57481/load\": read tcp 127.0.0.1:57485->127.0.0.1:57481: wsarecv: An existing connection was forcibly closed by the remote host."
time=2025-10-30T19:19:12.811-07:00 level=DEBUG source=server.go:1699 msg="stopping llama server" pid=17712
time=2025-10-30T19:19:12.826-07:00 level=ERROR source=server.go:273 msg="llama runner terminated" error="exit status 0xc0000409"

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
