[GH-ISSUE #6232] Experimental SYCL offload for Intel 13g (Raptor Lake w Xe-LP) not offloading #65934

Closed
opened 2026-05-03 23:14:04 -05:00 by GiteaMirror · 12 comments
Owner

Originally created by @byjrack on GitHub (Aug 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6232

What is the issue?

Tied back to #5593

Using SYCL via llama-cpp b3038 (https://github.com/ggerganov/llama.cpp/releases/download/b3038/llama-b3038-bin-win-sycl-x64.zip) I can get a clean offload of an 8B-parameter model, all 33 layers. Performance with -ngl is still not ideal compared to CPU, but there is lots of optimization still in play.

All done on the Windows host.

Using build https://github.com/zhewang1-intc/ollama/releases/tag/experimental-oneapi-v0.0.2

set OLLAMA_FORCE_ENABLE_INTEL_IGPU=1
set OLLAMA_INTEL_GPU=1
.\ollama serve
.\ollama run --verbose llama3

time=2024-08-07T09:37:36.026-04:00 level=INFO source=sched.go:738 msg="new model will fit in available VRAM in single GPU, loading" model=...\.ollama\models\blobs\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa gpu=0 parallel=4 available=32808415232 required="5.8 GiB"
time=2024-08-07T09:37:36.027-04:00 level=INFO source=memory.go:309 msg="offload to oneapi" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[30.6 GiB]" memory.required.full="5.8 GiB" memory.required.partial="5.8 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[5.8 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-08-07T09:37:36.036-04:00 level=INFO source=server.go:375 msg="starting llama server" cmd="...\\dist\\windows-amd64\\ollama_runners\\oneapi_v2024.2.0\\ollama_llama_server.exe --model ...\\.ollama\\models\\blobs\\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --port 61490"
time=2024-08-07T09:37:37.096-04:00 level=INFO source=sched.go:474 msg="loaded runners" count=1
time=2024-08-07T09:37:37.123-04:00 level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-08-07T09:37:37.128-04:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
WARN [server_params_parse] Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support | n_gpu_layers=-1 tid="9176" timestamp=1723037857
INFO [wmain] build info | build=57 commit="a8db2a9c" tid="9176" timestamp=1723037857
INFO [wmain] system info | n_threads=10 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="9176" timestamp=1723037857 total_threads=20
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="19" port="61490" tid="9176" timestamp=1723037857
time=2024-08-07T09:37:37.396-04:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
...
time=2024-08-07T09:37:43.799-04:00 level=ERROR source=sched.go:480 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc000001d "

OS

Windows

GPU

Intel

CPU

Intel

Ollama version

0.0.0 (experimental SYCL)

GiteaMirror added the bug label 2026-05-03 23:14:04 -05:00
Author
Owner

@zhewang1-intc commented on GitHub (Aug 8, 2024):

Hi, I notice the log line "Not compiled with GPU offload support", which is odd. Did you run this in PowerShell? If so, please set the env vars with the following commands:
$env:OLLAMA_FORCE_ENABLE_INTEL_IGPU=1
$env:OLLAMA_INTEL_GPU=1
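
For reference, a minimal sketch of the full PowerShell session under that assumption (the variables must be set in the same session that launches the server; `set NAME=1` is a cmd builtin and does not create environment variables in PowerShell):

```powershell
# PowerShell: set the experimental Intel iGPU flags, then launch the server
# from the same session so the child process inherits them.
$env:OLLAMA_FORCE_ENABLE_INTEL_IGPU = "1"
$env:OLLAMA_INTEL_GPU = "1"
.\ollama serve
# then, from a second terminal:
# .\ollama run --verbose llama3
```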

Author
Owner

@byjrack commented on GitHub (Aug 8, 2024):

Yup, noticed the same. Not using PowerShell, just a regular cmd prompt, so standard `set` statements.

The iGPU is detected, so the env vars seem to be read correctly.

Author
Owner

@zhewang1-intc commented on GitHub (Aug 8, 2024):

I've noticed in the PR's attached debug.log (https://github.com/user-attachments/files/16414940/debug.log) that the llama runner process terminated with exit status 0xc0000135. I believe this is due to ollama_llama_server lacking some necessary libraries at runtime.
To verify this, I did some experiments by deleting certain essential oneAPI libraries (e.g. sycl7.dll) required by ollama_llama_server and successfully reproduced the error code 0xc0000135.
Could you please help identify which libraries are missing in your environment? On Windows, you can use Process Monitor (https://learn.microsoft.com/en-us/sysinternals/downloads/procmon), filter the processes starting with 'ollama', and capture all their activities. Before the ollama_llama_server process exits, it should log attempts to locate the missing library but fail to find it in any directory. As shown in the attached figure, the 'NAME NOT FOUND', 'NAME INVALID', and 'PATH NOT FOUND' errors were caused by the absence of the sycl7.dll library.

(Screenshot: Process Monitor output showing 'NAME NOT FOUND' results for sycl7.dll: https://github.com/user-attachments/assets/ac875be3-11cf-418e-b311-0767e6664d73)
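
As a quicker complement to Process Monitor, a rough PowerShell sketch like the one below can check whether sycl7.dll (or any other oneAPI runtime DLL) is resolvable from the directories on PATH. Note this inspects the interactive shell's PATH, which may differ from the subprocess PATH ollama passes to the runner (visible in the DEBUG log); the DLL name here is just the example from this thread.

```powershell
# Check each PATH directory (as seen by this shell) for the DLL in question.
# sycl7.dll is the example from this thread; substitute other oneAPI DLLs as needed.
$dll  = "sycl7.dll"
$hits = $env:PATH -split ';' | Where-Object { $_ -and (Test-Path (Join-Path $_ $dll)) }
if ($hits) {
    Write-Host "$dll found in:"
    $hits | ForEach-Object { Write-Host "  $_" }
} else {
    Write-Host "$dll not found in any PATH directory"
}
```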

Author
Owner

@byjrack commented on GitHub (Aug 8, 2024):

So in my case ollama_llama_server doesn't even try to load sycl7.dll from dist/windows-amd64/oneapi.

I can see it hook a mess of the VC redistributable DLLs from sys32 (and a lot of NOT FOUNDs in the runners path), but no indication in the procmon output that it is even trying to find sycl7.

ollama run is currently giving me a 0xc000001d, which I agree points at a library load, but I can't track down what it is actually missing.

- PATH seems to look good.
- In the serve env I can see OLLAMA_INTEL_GPU and OLLAMA_FORCE_ENABLE_INTEL_IGPU both set to true.
- The server loads, but gives the "not compiled" warning, which has to be related. Looking at the upstream function (https://github.com/ggerganov/llama.cpp/blob/master/src/llama.cpp#L16364) it seems pretty straightforward? Any way to know if GGML_USE_SYCL was missed (a rough check is sketched below)? This is just using the experimental release on your fork, not something I built myself.

Wondering whether, since I am forcing the iGPU, when that gets skipped it can't fall back to CPU and it locks up.
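
One crude way to sanity-check whether this particular ollama_llama_server.exe was built with the SYCL backend at all (i.e. whether GGML_USE_SYCL was defined for that runner) is to scan the binary for SYCL-related strings. This is only a heuristic sketch: the strings searched for are assumptions based on the ggml_check_sycl / SYCL0 names that show up in working logs later in this thread, and the elided prefix of the runner path is left as a placeholder.

```powershell
# Heuristic only: a SYCL-enabled runner should embed SYCL backend strings;
# a CPU-only build generally will not. Adjust $exe to the real runner path.
$exe  = "<install-dir>\dist\windows-amd64\ollama_runners\oneapi_v2024.2.0\ollama_llama_server.exe"
$text = [System.Text.Encoding]::ASCII.GetString([System.IO.File]::ReadAllBytes($exe))
foreach ($needle in @("ggml_check_sycl", "SYCL0", "sycl")) {
    $state = if ($text.Contains($needle)) { "present" } else { "absent" }
    Write-Host ("{0,-16} {1}" -f $needle, $state)
}
```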

time=2024-08-08T07:51:55.060-04:00 level=INFO source=sched.go:738 msg="new model will fit in available VRAM in single GPU, loading" model=...\.ollama\models\blobs\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa gpu=0 parallel=4 available=32805273600 required="5.8 GiB"
time=2024-08-08T07:51:55.060-04:00 level=DEBUG source=server.go:98 msg="system memory" total="63.6 GiB" free=51936940032
time=2024-08-08T07:51:55.060-04:00 level=DEBUG source=memory.go:101 msg=evaluating library=oneapi gpu_count=1 available="[30.6 GiB]"
time=2024-08-08T07:51:55.061-04:00 level=INFO source=memory.go:309 msg="offload to oneapi" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[30.6 GiB]" memory.required.full="5.8 GiB" memory.required.partial="5.8 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[5.8 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-08-08T07:51:55.063-04:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=...\scratch\o-sycl\dist\windows-amd64\ollama_runners\cpu\ollama_llama_server.exe
time=2024-08-08T07:51:55.064-04:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=...\scratch\o-sycl\dist\windows-amd64\ollama_runners\cpu_avx2\ollama_llama_server.exe
time=2024-08-08T07:51:55.064-04:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=...\scratch\o-sycl\dist\windows-amd64\ollama_runners\oneapi_v2024.2.0\ollama_llama_server.exe
time=2024-08-08T07:51:55.066-04:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=...\scratch\o-sycl\dist\windows-amd64\ollama_runners\cpu\ollama_llama_server.exe
time=2024-08-08T07:51:55.066-04:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=...\scratch\o-sycl\dist\windows-amd64\ollama_runners\cpu_avx2\ollama_llama_server.exe
time=2024-08-08T07:51:55.066-04:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=...\scratch\o-sycl\dist\windows-amd64\ollama_runners\oneapi_v2024.2.0\ollama_llama_server.exe
time=2024-08-08T07:51:55.088-04:00 level=INFO source=server.go:375 msg="starting llama server" cmd="...\\scratch\\o-sycl\\dist\\windows-amd64\\ollama_runners\\oneapi_v2024.2.0\\ollama_llama_server.exe --model ...\\.ollama\\models\\blobs\\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --verbose --parallel 4 --port 63768"
time=2024-08-08T07:51:55.088-04:00 level=DEBUG source=server.go:390 msg=subprocess environment="[PATH=...\\scratch\\o-sycl\\dist\\windows-amd64\\oneapi;...\\scratch\\o-sycl\\dist\\windows-amd64\\ollama_runners\\oneapi_v2024.2.0;...\\scratch\\o-sycl\\dist\\windows-amd64\\ollama_runners;...]"
time=2024-08-08T07:51:55.101-04:00 level=INFO source=sched.go:474 msg="loaded runners" count=1
time=2024-08-08T07:51:55.101-04:00 level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-08-08T07:51:55.102-04:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
WARN [server_params_parse] Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support | n_gpu_layers=-1 tid="7080" timestamp=1723117915
INFO [wmain] build info | build=57 commit="a8db2a9c" tid="7080" timestamp=1723117915
INFO [wmain] system info | n_threads=10 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="7080" timestamp=1723117915 total_threads=20
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="19" port="63768" tid="7080" timestamp=1723117915
time=2024-08-08T07:51:55.357-04:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
...
time=2024-08-08T07:52:01.073-04:00 level=DEBUG source=server.go:615 msg="model load progress 1.00"
llama_kv_cache_init:        CPU KV buffer size =  1024.00 MiB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     2.02 MiB
time=2024-08-08T07:52:01.340-04:00 level=ERROR source=sched.go:480 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc000001d "
time=2024-08-08T07:52:01.341-04:00 level=DEBUG source=sched.go:483 msg="triggering expiration for failed load" model=...\.ollama\models\blobs\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
Author
Owner

@zhewang1-intc commented on GitHub (Aug 9, 2024):

Nice catch. I updated the Windows binary release at https://github.com/zhewang1-intc/ollama/releases/tag/experimental-oneapi-v0.0.2, could you please give it a try?

Author
Owner

@byjrack commented on GitHub (Aug 9, 2024):

Wow, complete shot in the dark on that one!

Things are looking much better, but I still get a crash on ollama run.

Now, if I use llama-cpp directly I can load llama3, so I'm guessing it is a param being sent to ollama_llama_server? Anything I can test directly, or more logs I can pull?

[SYCL] call ggml_check_sycl
ggml_check_sycl: GGML_SYCL_DEBUG: 0
ggml_check_sycl: GGML_SYCL_F16: no
found 1 SYCL devices:
|  |                   |                                       |       |Max    |        |Max  |Global |                     |
|  |                   |                                       |       |compute|Max work|sub  |mem    |                     |
|ID|        Device Type|                                   Name|Version|units  |group   |group|size   |       Driver version|
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|
| 0| [level_zero:gpu:0]|                 Intel Iris Xe Graphics|    1.3|     96|     512|   32| 31645M|            1.3.29803|
llama_kv_cache_init:      SYCL0 KV buffer size =  1024.00 MiB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_new_context_with_model:  SYCL_Host  output buffer size =     2.02 MiB
llama_new_context_with_model:      SYCL0 compute buffer size =   560.00 MiB
llama_new_context_with_model:  SYCL_Host compute buffer size =    24.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 2
time=2024-08-08T21:27:39.482-04:00 level=DEBUG source=server.go:615 msg="model load progress 1.00"
time=2024-08-08T21:27:39.737-04:00 level=DEBUG source=server.go:618 msg="model load completed, waiting for server to become available" status="llm server loading model"
The number of work-items in each dimension of a work-group cannot exceed {512, 512, 512} for this device -54 (PI_ERROR_INVALID_WORK_GROUP_SIZE)Exception caught at file:C:/ollama/llm/llama.cpp/ggml-sycl.cpp, line:10796
time=2024-08-08T21:27:40.246-04:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
time=2024-08-08T21:27:41.011-04:00 level=ERROR source=sched.go:480 msg="error loading llama server" error="llama runner process has terminated: exit status 1 "
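
As an aside, the device table in the log above comes from ggml's SYCL initialization. If the oneAPI Base Toolkit is installed, the same device inventory can be cross-checked with its sycl-ls utility; the install path below is the toolkit's default and is an assumption about this machine.

```powershell
# Load the oneAPI environment (default install path assumed), then list SYCL devices.
# A healthy setup should show an entry like [level_zero:gpu:0] for the Iris Xe iGPU.
cmd /c '"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" && sycl-ls'
```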
Author
Owner

@zhewang1-intc commented on GitHub (Aug 9, 2024):

Looks like it's a llama.cpp-side issue. Could you please check whether llama.cpp works well with llama3 on your machine, based on this commit (https://github.com/ggerganov/llama.cpp/tree/a8db2a9ce64cd4417f6a312ab61858f17f0f8584)?
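
For concreteness, a rough sketch of what that standalone test could look like on Windows, pointing the llama.cpp SYCL build at the blob ollama has already downloaded. The default ollama models directory is assumed below, and the CLI binary name varies between llama.cpp releases (main.exe in older builds, llama-cli.exe in newer ones).

```powershell
# Sketch only: run the llama.cpp SYCL build against the cached llama3 blob with full offload.
# Adjust the binary name for the release you downloaded, and the model path if OLLAMA_MODELS is set.
$model = Join-Path $env:USERPROFILE ".ollama\models\blobs\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa"
.\main.exe -m $model -ngl 33 -p "Hello"
```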

Author
Owner

@byjrack commented on GitHub (Aug 9, 2024):

So using b3038 I can get things running as expected if I point it at the ollama-cached model.

I tried the SYCL build for b3547 and I'm getting a missing libiomp5md.dll, so I need to dig into that.

Based on that commit date it looks like https://github.com/ggerganov/llama.cpp/releases/tag/b3130 might be the closest? Tested that as well, same missing DLL. A bit late tonight, but I will track that down tomorrow and see if I can get a newer release to run.

Anything helpful from the logs of the llama-cpp runs that might help pin down what is happening in ollama_llama_server?

Author
Owner

@zhewang1-intc commented on GitHub (Aug 9, 2024):

Ollama upstream already tracks llama.cpp commit 1e6f6554aa (https://github.com/ggerganov/llama.cpp/tree/1e6f6554aa11fa10160a5fda689e736c3c34169f). I think I can catch up with it and rebuild, and hopefully that will solve this issue.

Author
Owner

@zhewang1-intc commented on GitHub (Aug 9, 2024):

Hi, could you please try the fixed release in https://github.com/zhewang1-intc/ollama/releases/tag/experimental-oneapi-v0.0.2? I validated that llama3 works well on my iGPU (Intel Core i7-1185G7 laptop). If there is no issue, I will create a new release tag.

Author
Owner

@byjrack commented on GitHub (Aug 9, 2024):

I think we have a winner!

Still using OLLAMA_FORCE_ENABLE_INTEL_IGPU=1 and OLLAMA_INTEL_GPU=1 before running ollama serve.

A simple prompt to llama3:8b on an i7-1370P, so this is far from definitive on performance, but the iGPU wasn't really intended for this use case, so there will be limits. Once an upstream release is cut I will have some folks who are closer to our standard builds test it to see if it surfaces any corner cases, but with some prereqs we can get it to work.

I will close this issue since we know we can get there.

total duration:       3m5.4063661s
load duration:        25.5932ms
prompt eval count:    17 token(s)
prompt eval duration: 882.373ms
prompt eval rate:     19.27 tokens/s
eval count:           845 token(s)
eval duration:        3m4.493588s
eval rate:            4.58 tokens/s
Author
Owner

@byjrack commented on GitHub (Aug 9, 2024):

And just as a point of reference: same prompt, same machine, same experimental build, same model, but using CPU inference (i.e. no env vars).

Using the GPU, though, avoids the contention from CPU context switching, so even though they are in the same ballpark performance-wise, going GPU seems better. In both cases the laptop's active cooling is running full bore, so to me it's a wash there. I didn't do any power testing like others on the iGPU threads, but I'm guessing the GPU will come out on top there.

total duration:       3m47.5144778s
load duration:        30.9214ms
prompt eval count:    17 token(s)
prompt eval duration: 1.778241s
prompt eval rate:     9.56 tokens/s
eval count:           1028 token(s)
eval duration:        3m45.700513s
eval rate:            4.55 tokens/s
Reference: github-starred/ollama#65934