[GH-ISSUE #11357] Ollama crashed in Intel shared GPU #69551

Closed
opened 2026-05-04 18:26:49 -05:00 by GiteaMirror · 4 comments

Originally created by @mendickmorningstar on GitHub (Jul 10, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11357

What is the issue?

Ollama crashed on an Intel shared GPU.

[server.log](https://github.com/user-attachments/files/21162153/server.log)

Relevant log output

[GIN] 2025/07/10 - 19:24:12 | 200 |    5.1194644s |       127.0.0.1 | POST     "/api/generate"
decode: cannot decode batches with this context (use llama_encode() instead)
Native API failed. Native API returns: 20 (UR_RESULT_ERROR_DEVICE_LOST)
Exception caught at file:D:\actions-runner\release-cpp-oneapi_2024_2\_work\llm.cpp\llm.cpp\ollama-llama-cpp\ggml\src\ggml-sycl\ggml-sycl.cpp, line:432, func:operator()
SYCL error: CHECK_TRY_ERROR((*stream).memcpy((char *) tensor->data + offset, data, size).wait()): Exception caught in this line of code.
  in function ggml_backend_sycl_buffer_set_tensor at D:\actions-runner\release-cpp-oneapi_2024_2\_work\llm.cpp\llm.cpp\ollama-llama-cpp\ggml\src\ggml-sycl\ggml-sycl.cpp:432
D:\actions-runner\release-cpp-oneapi_2024_2\_work\llm.cpp\llm.cpp\ollama-llama-cpp\ggml\src\ggml-sycl\..\ggml-sycl\common.hpp:115: SYCL error
[GIN] 2025/07/10 - 19:24:24 | 500 |      2.16119s |       127.0.0.1 | POST     "/api/embed"
time=2025-07-10T19:24:24.128+08:00 level=ERROR source=server.go:484 msg="llama runner terminated" error="exit status 0xc0000409"

OS

Windows

GPU

Intel

CPU

Intel

Ollama version

0.9.3
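
For anyone trying to reproduce: the log shows a successful `POST /api/generate` followed by a `POST /api/embed` that failed with the SYCL device-lost error and a 500 response. A minimal sketch of that request sequence against a local server is below; the model name and prompt are placeholders, since the report does not say which model was loaded.

```shell
# Hedged reproduction sketch based on the log above; model names and prompts
# are assumptions, not taken from the original report.

# 1. Generation request (returned 200 in the log)
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Hello",
  "stream": false
}'

# 2. Embedding request (returned 500 after the SYCL device-lost error)
curl http://127.0.0.1:11434/api/embed -d '{
  "model": "llama3.2",
  "input": "Hello"
}'
```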

GiteaMirror added the bug label 2026-05-04 18:26:49 -05:00

@rick-github commented on GitHub (Jul 10, 2025):

SYCL is not a supported backend for ollama. I'm guessing that you are using ipex-llm, in which case you can file an issue in their tracker [here](https://github.com/intel/ipex-llm/issues).


@mendickmorningstar commented on GitHub (Jul 10, 2025):

> SYCL is not a supported backend for ollama. I'm guessing that you are using ipex-llm, in which case you can file an issue in their tracker [here](https://github.com/intel/ipex-llm/issues).

I opened a new ticket for ipex-llm:
https://github.com/intel/ipex-llm/issues/13252


@pdevine commented on GitHub (Jul 11, 2025):

cc @dhiltgen


@dhiltgen commented on GitHub (Jul 31, 2025):

Looks like it's fixed in ipex-llm.

Reference: github-starred/ollama#69551