[GH-ISSUE #14759] occasional crashes when trying ocr with qwen3.5:9b #35300

Open
opened 2026-04-22 19:42:32 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @kwiechen on GitHub (Mar 10, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14759

What is the issue?

I have occasional crashes when using qwen3.5:9b (and all other Qwen vision models tested so far) for OCR purposes:

Error 500:
Error with Ollama API for testimage.jpg using model qwen3.5:9b: model runner has unexpectedly stopped, this may be due to resource limitations or an internal error, check ollama server logs for details (status code: 500)

server.log

```
Mär 10 08:36:48 kai-BeyondMax-Series ollama[3744]: HW Exception by GPU node-1 (Agent handle: 0x753cf86eff60) reason :GPU Hang
Mär 10 08:36:49 kai-BeyondMax-Series ollama[3744]: time=2026-03-10T08:36:49.152+01:00 level=ERROR source=server.go:1610 msg="post predict" error="Post \"http://127.0.0.1:44471/completion\": EOF"
Mär 10 08:36:49 kai-BeyondMax-Series ollama[3744]: [GIN] 2026/03/10 - 08:36:49 | 500 | 2.433882273s | 10.9.2.222 | POST "/api/generate"
```

AMD 395+ Max
Ubuntu 25.04

Best regards

Kai
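
For context, a minimal sketch of the kind of request that triggers the failure, using Ollama's `/api/generate` endpoint with a base64-encoded image (the image path, prompt text, and helper name here are placeholders, not taken from the reporter's actual code):

```python
import base64
import json
import urllib.request

def build_ocr_request(image_path: str, model: str = "qwen3.5:9b") -> dict:
    """Build a /api/generate payload carrying a base64-encoded image."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": model,
        "prompt": "Extract all text from this image.",
        "images": [image_b64],
        "stream": False,
    }

# Usage sketch (assumes a local Ollama server on the default port):
# payload = build_ocr_request("testimage.jpg")
# req = urllib.request.Request(
#     "http://127.0.0.1:11434/api/generate",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```

When the runner crashes mid-generation, this request is what comes back with the 500 status seen above.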

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-22 19:42:32 -05:00
Author
Owner

@rick-github commented on GitHub (Mar 10, 2026):

```
Mär 10 08:36:48 kai-BeyondMax-Series ollama[3744]: HW Exception by GPU node-1 (Agent handle: 0x753cf86eff60) reason :GPU Hang
```

This is a problem with the amdgpu kernel driver. It has been reported a few times, and choosing a different kernel seems to help somewhat, but it is not clear which kernel is best. Try switching to [Vulkan](https://docs.ollama.com/gpu#vulkan-gpu-support) and see if the problem persists.
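
A sketch of how that switch might look on a systemd install, assuming the `OLLAMA_VULKAN=1` toggle described in the linked docs (check the docs for the exact variable name and whether your Ollama version supports it):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# (open with `sudo systemctl edit ollama`, then
#  `sudo systemctl restart ollama` to apply)
[Service]
Environment="OLLAMA_VULKAN=1"
```

After restarting, the server log should report which backend was selected.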

Author
Owner

@kwiechen commented on GitHub (Mar 11, 2026):

Thank you very much, I will try Vulkan.

But we get errors only when using Qwen models; OCR with gemma3:12b, for example, works without problems.

Author
Owner

@rick-github commented on GitHub (Mar 20, 2026):

AMD [recommends](https://rocm.docs.amd.com/en/latest/how-to/system-optimization/strixhalo.html#required-kernel-version) Linux kernel 6.18.4 or newer for 8060S support.
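
A quick sketch for checking whether the running kernel meets that minimum, using `sort -V` for version ordering (the `meets_min` helper name and the version-stripping with `cut` are my own, not from the thread):

```shell
# meets_min VERSION MIN -> exit 0 if VERSION >= MIN (version-sorted)
meets_min() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required="6.18.4"
# Strip the distro suffix, e.g. "6.14.0-15-generic" -> "6.14.0"
current="$(uname -r | cut -d- -f1)"
if meets_min "$current" "$required"; then
    echo "kernel $current meets the $required minimum"
else
    echo "kernel $current is older than $required; consider upgrading"
fi
```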

Author
Owner

@kwiechen commented on GitHub (Mar 20, 2026):

Thank you very much - I will update the kernel

Reference: github-starred/ollama#35300