[GH-ISSUE #10758] serving model(llama4:scout) fails on gfx1151 #32827

Open
opened 2026-04-22 14:40:50 -05:00 by GiteaMirror · 9 comments

Originally created by @pnjacket on GitHub (May 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10758

What is the issue?

output.log (https://github.com/user-attachments/files/20266627/output.log)

I am trying to load llama4:scout with 0.7.0 and am currently getting an exception; you can see the full log in the attached file.
The system is an AMD AI MAX 395+ (Strix Halo) with enough RAM assigned to the iGPU.
Image: https://github.com/user-attachments/assets/18382c5b-4e44-43aa-be1a-723683e35cb4

I am not sure how to interpret the error message in the log. Any help would be appreciated.

Relevant log output


OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.7.0

Update: Unfortunately, I have ended up returning the unit because something is clearly broken on the hardware or BIOS side. Aside from random crashes, I was losing the display far too often, making it unusable. I am trying to source one from another brand; the choices are very limited at this point, and even if a fix becomes available I won't be able to verify it until I eventually receive a replacement.

GiteaMirror added the amdbug label 2026-04-22 14:40:50 -05:00

@rick-github commented on GitHub (May 17, 2025):

time=2025-05-17T17:18:03.638-04:00 level=DEBUG source=server.go:639 msg="model load completed, waiting for server to become available" status="llm server loading model"
Exception 0xc0000005 0x1 0x10 0x7ffc02e49176
PC=0x7ffc02e49176
signal arrived during external code execution

runtime.cgocall(0x7ff6cc2b0c40, 0xc0018c1b30)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/cgocall.go:167 +0x3e fp=0xc0018c1b08 sp=0xc0018c1aa0 pc=0x7ff6cb5f241e
github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_sched_reserve(0x1f52e37d890, 0x20536f56040)

The runner tried to write to an invalid memory location (0x10) and was killed with an access violation. The error occurred in ggml_backend_sched_reserve() while the runner was reserving the worst-case graph. Possibly some pointer was NULL and the code dereferenced it at a 16-byte offset, which triggered the fault. Does the problem repeat with ollama version 0.6.8?
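To illustrate the suspected failure mode (this is only a sketch, not the actual ollama/ggml code): a nil base pointer dereferenced through a field that sits 16 bytes into a struct faults at address 0x10, matching the exception address in the log.

```go
package main

// Sketch only, not ollama/ggml code: a nil base pointer plus a 16-byte
// field offset produces a fault at address 0x10, analogous to the
// Exception 0xc0000005 at 0x10 seen in the log above.
type sched struct {
	_       [16]byte // stand-in for whatever fields precede the one we touch
	backend uintptr  // this field lives at offset 0x10
}

var s *sched // nil at runtime

func main() {
	// Writing through the nil pointer means writing to address 0 + 0x10,
	// which the OS rejects with an access violation / nil pointer panic.
	s.backend = 1
}
```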


@pnjacket commented on GitHub (May 17, 2025):

Thank you for the response. I quickly tried 0.6.8 and ended up with the same result.
I was about to post a comment reporting that when it occurred to me to also try 0.6.7, the first version with llama4 support, and 0.6.7 loaded just fine.
However, the first query I sent on 0.6.7 failed with essentially the same error I was getting on the later versions.

output067.log (https://github.com/user-attachments/files/20269799/output067.log)

I tried a smaller model, gemma3:4b, and it works perfectly fine. I am going to download other sizeable models to see whether they work and will report back soon. If there is anything you want me to try, please let me know.

Update: I tried deepseek-r1:70b and qwen:110b, and both ran fine, meaning it is probably not the size of the model that is causing the crash. qwen2.5vl:72b failed with an out-of-memory error even though the system clearly has enough memory, at least on the GPU side; support for qwen2.5vl:72b is probably still too immature on many platforms.
Anyway, none of this explains why llama4:scout is failing, but I am adding it here as additional information.


@pnjacket commented on GitHub (May 18, 2025):

I made it run by setting an environment variable as follows:

# in powershell
$env:OLLAMA_NUM_PARALLEL=1

After carefully examining the log, I found that this setting defaults to 2. After digging through other issues on GitHub, I found that this setting seems to affect memory usage. I gave the new value a shot, and that fixed the issue for me. I am not sure what the basis was for defaulting this value to 2; if anyone can explain, it would be greatly appreciated.

At this point, I think the issue has nothing to do with the specific GPU; rather, it is just a lack of available resources. I will close this issue soon if someone can confirm that.


@rick-github commented on GitHub (May 18, 2025):

The amount of context allocated by the runner is OLLAMA_NUM_PARALLEL * OLLAMA_CONTEXT_LENGTH, so you may be able to reproduce the bug with $env:OLLAMA_NUM_PARALLEL=1 and $env:OLLAMA_CONTEXT_LENGTH=8192. It may be that the root cause of the bug is a lack of resources, but that shouldn't cause an access violation and crash the runner: the runner should report that it couldn't allocate memory, and the server should return an appropriate error message.
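(In other words, assuming the defaults implied above of 2 parallel slots and a 4096-token context: 2 × 4096 = 8192 tokens of KV cache, the same total that a single slot with an 8192-token context requests, so the single-slot 8192 run should hit the same allocation size that crashed with the defaults.)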

As a tangent: what sort of inference speed are you getting with the AMD AI MAX 395? I'm thinking of getting one for my test bench and I'm curious about the performance.


@pnjacket commented on GitHub (May 18, 2025):

@rick-github Thanks again for the insight. As you suggested, going to 8192 for the context length instantly broke the chatbot with the same error message. I am probably at the edge of some limit that is not correctly accounted for somewhere in the code. I will leave this issue open for now and wait to see if anyone can get to it. I am willing to help test things if necessary.

> As a tangent: what sort of inference speed are you getting with the AMD AI MAX 395? I'm thinking of getting one for my test bench and I'm curious about the performance.

llama4:scout with everything set to default except the parallel setting (which was 1) generated the following result:

total duration:       1m21.1093579s
load duration:        44.1131ms
prompt eval count:    366 token(s)
prompt eval duration: 8.2765981s
prompt eval rate:     44.22 tokens/s
eval count:           929 token(s)
eval duration:        1m12.7462859s
eval rate:            12.77 tokens/s

deepseek-r1:70b with everything set to default resulted in the following:

total duration:       7m1.3536922s
load duration:        32.7594ms
prompt eval count:    25 token(s)
prompt eval duration: 204.2508ms
prompt eval rate:     122.40 tokens/s
eval count:           1506 token(s)
eval duration:        7m1.1131673s
eval rate:            3.58 tokens/s

I am not sure how to interpret these numbers, but this is what I can share at this point. If you want more information, please let me know, whether about the bug itself or the general performance of the chipset.
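(For reference, the rates above are simply the token counts divided by the durations: 929 tokens / 72.7 s ≈ 12.8 tokens/s of generation for llama4:scout versus 1506 tokens / 421 s ≈ 3.6 tokens/s for deepseek-r1:70b. A gap like this is expected, since Scout is a mixture-of-experts model with far fewer active parameters per token than the dense 70B.)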


@pnjacket commented on GitHub (May 22, 2025):

I have managed to run the same model on Ubuntu without any issues. Memory usage seems to be a lot lower on the Linux side. I will add more information here once I have done more testing.


@rick-github commented on GitHub (May 22, 2025):

Good to hear. I've ordered an AMD AI MAX; when it arrives I can try to replicate the issue you are having. It might be that the problem is peculiar to the Windows ROCm drivers, in which case we may not be able to fix the root cause, but perhaps we can devise a workaround.


@hobbymachinist commented on GitHub (Jun 15, 2025):

Out of curiosity, does ollama support the auto iGPU VRAM allocation option in the AI MAX 395+ BIOS, or does one always need to allocate dedicated VRAM?


@pnjacket commented on GitHub (Jun 20, 2025):

> Out of curiosity, does ollama support the auto iGPU VRAM allocation option in the AI MAX 395+ BIOS, or does one always need to allocate dedicated VRAM?

I believe the AI Max's Auto setting is not dynamic like Apple silicon's; it just picks a predetermined, static value automatically.

Reference: github-starred/ollama#32827