[GH-ISSUE #13153] Vulkan #8699

Open
opened 2026-04-12 21:28:21 -05:00 by GiteaMirror · 2 comments

Originally created by @Swagatade on GitHub (Nov 19, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13153

What is the issue?

On an Intel Core Ultra processor, gpt-oss-safeguard runs very slowly when Vulkan support is enabled, but CPU-only performance is fine.
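For comparison on the same machine, the gap can be quantified from the `--verbose` timing output of `ollama run`, first with the Vulkan toggle cleared and then with it set. This is a rough sketch, not the reporter's exact steps; it assumes the model tag is whichever gpt-oss-safeguard variant was pulled, and each `ollama serve` needs its own terminal:

```
REM CPU-only baseline: clear the toggle, then start the server
set OLLAMA_VULKAN=
ollama serve

REM In a second terminal: --verbose prints prompt eval rate and eval rate (tokens/s)
ollama run gpt-oss-safeguard --verbose "Summarize what a GPU does in one sentence."

REM Repeat with Vulkan enabled and compare the "eval rate" figures
set OLLAMA_VULKAN=1
ollama serve
ollama run gpt-oss-safeguard --verbose "Summarize what a GPU does in one sentence."
```

The `eval rate` line of the verbose output is the decode speed to compare between the two runs.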

Relevant log output

C:\Users\swaga_nsppntr>set OLLAMA_VULKAN=1

C:\Users\swaga_nsppntr>ollama serve
time=2025-11-19T11:14:58.021+05:30 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\swaga_nsppntr\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES:]"
time=2025-11-19T11:14:58.272+05:30 level=INFO source=images.go:522 msg="total blobs: 74"
time=2025-11-19T11:14:58.277+05:30 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-11-19T11:14:58.283+05:30 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.12.11)"
time=2025-11-19T11:14:58.285+05:30 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-19T11:14:58.299+05:30 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\swaga_nsppntr\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 51942"
time=2025-11-19T11:14:59.184+05:30 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\swaga_nsppntr\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 51954"
time=2025-11-19T11:14:59.964+05:30 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\swaga_nsppntr\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 51961"
time=2025-11-19T11:15:00.292+05:30 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\swaga_nsppntr\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 51966"
time=2025-11-19T11:15:00.877+05:30 level=INFO source=types.go:42 msg="inference compute" id=8680a064-0400-0000-0002-000000000000 filter_id="" library=Vulkan compute=0.0 name=Vulkan0 description="Intel(R) Arc(TM) 130V GPU (16GB)" libdirs=ollama,vulkan driver=0.0 pci_id="" type=iGPU total="27.6 GiB" available="26.8 GiB"
[GIN] 2025/11/19 - 11:15:40 | 200 |      2.1217ms |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/11/19 - 11:15:41 | 200 |     371.946ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/19 - 11:15:41 | 200 |    335.6007ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/19 - 11:15:43 | 200 |     34.5837ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/19 - 11:15:44 | 200 |    155.6211ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/19 - 11:15:44 | 200 |    116.4347ms |       127.0.0.1 | POST     "/api/show"
time=2025-11-19T11:15:44.389+05:30 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\swaga_nsppntr\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 57220"
time=2025-11-19T11:15:45.033+05:30 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-11-19T11:15:45.033+05:30 level=INFO source=cpu_windows.go:164 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-11-19T11:15:45.034+05:30 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=8 efficiency=4 threads=8
time=2025-11-19T11:15:45.191+05:30 level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-11-19T11:15:45.194+05:30 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\swaga_nsppntr\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\swaga_nsppntr\\.ollama\\models\\blobs\\sha256-c4016c9e54d0a9218b5911790579e58284a9ed57c48b7e87607125c6307f9da1 --port 57231"
time=2025-11-19T11:15:45.198+05:30 level=INFO source=sched.go:443 msg="system memory" total="31.5 GiB" free="17.0 GiB" free_swap="31.9 GiB"
time=2025-11-19T11:15:45.198+05:30 level=INFO source=sched.go:450 msg="gpu memory" id=8680a064-0400-0000-0002-000000000000 library=Vulkan available="26.1 GiB" free="26.6 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-11-19T11:15:45.198+05:30 level=INFO source=server.go:702 msg="loading model" "model layers"=25 requested=-1
time=2025-11-19T11:15:45.258+05:30 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-11-19T11:15:45.260+05:30 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:57231"
time=2025-11-19T11:15:45.267+05:30 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:4 GPULayers:25[ID:8680a064-0400-0000-0002-000000000000 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-19T11:15:45.322+05:30 level=INFO source=ggml.go:136 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=459 num_key_values=32
load_backend: loaded CPU backend from C:\Users\swaga_nsppntr\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Arc(TM) 130V GPU (16GB) (Intel Corporation) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat
load_backend: loaded Vulkan backend from C:\Users\swaga_nsppntr\AppData\Local\Programs\Ollama\lib\ollama\vulkan\ggml-vulkan.dll
time=2025-11-19T11:15:45.427+05:30 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
ggml_backend_vk_get_device_memory called: uuid 8680a064-0400-0000-0002-000000000000
ggml_backend_vk_get_device_memory called: luid 0x000000000000ff6b
ggml_dxgi_pdh_init called
DXGI + PDH Initialized. Getting GPU free memory info
[DXGI] Adapter Description: Intel(R) Arc(TM) 130V GPU (16GB), LUID: 0x000000000000FF6B, Dedicated: 0.12 GB, Shared: 27.45 GB
[DXGI] Adapter Description: Microsoft Basic Render Driver, LUID: 0x000000000001038E, Dedicated: 0.00 GB, Shared: 27.45 GB
Integrated GPU (Intel(R) Arc(TM) 130V GPU (16GB)) with LUID 0x000000000000ff6b detected. Shared Total: 29470189240.00 bytes (27.45 GB), Shared Usage: 1070800896.00 bytes (1.00 GB), Dedicated Total: 134217728.00 bytes (0.12 GB), Dedicated Usage: 0.00 bytes (0.00 GB)
ggml_backend_vk_get_device_memory utilizing DXGI + PDH memory reporting free: 28533606072 total: 29604406968
time=2025-11-19T11:15:45.809+05:30 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:4 GPULayers:25[ID:8680a064-0400-0000-0002-000000000000 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_vk_get_device_memory called: uuid 8680a064-0400-0000-0002-000000000000
ggml_backend_vk_get_device_memory called: luid 0x000000000000ff6b
ggml_dxgi_pdh_init called
DXGI + PDH Initialized. Getting GPU free memory info
[DXGI] Adapter Description: Intel(R) Arc(TM) 130V GPU (16GB), LUID: 0x000000000000FF6B, Dedicated: 0.12 GB, Shared: 27.45 GB
[DXGI] Adapter Description: Microsoft Basic Render Driver, LUID: 0x000000000001038E, Dedicated: 0.00 GB, Shared: 27.45 GB
Integrated GPU (Intel(R) Arc(TM) 130V GPU (16GB)) with LUID 0x000000000000ff6b detected. Shared Total: 29470189240.00 bytes (27.45 GB), Shared Usage: 1070800896.00 bytes (1.00 GB), Dedicated Total: 134217728.00 bytes (0.12 GB), Dedicated Usage: 0.00 bytes (0.00 GB)
ggml_backend_vk_get_device_memory utilizing DXGI + PDH memory reporting free: 28533606072 total: 29604406968
time=2025-11-19T11:15:51.463+05:30 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:4 GPULayers:25[ID:8680a064-0400-0000-0002-000000000000 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-19T11:15:51.463+05:30 level=INFO source=ggml.go:482 msg="offloading 24 repeating layers to GPU"
time=2025-11-19T11:15:51.463+05:30 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-11-19T11:15:51.463+05:30 level=INFO source=ggml.go:494 msg="offloaded 25/25 layers to GPU"
time=2025-11-19T11:15:51.465+05:30 level=INFO source=device.go:240 msg="model weights" device=Vulkan0 size="11.8 GiB"
time=2025-11-19T11:15:51.465+05:30 level=INFO source=device.go:245 msg="model weights" device=CPU size="1.1 GiB"
time=2025-11-19T11:15:51.465+05:30 level=INFO source=device.go:251 msg="kv cache" device=Vulkan0 size="300.0 MiB"
time=2025-11-19T11:15:51.465+05:30 level=INFO source=device.go:262 msg="compute graph" device=Vulkan0 size="110.6 MiB"
time=2025-11-19T11:15:51.467+05:30 level=INFO source=device.go:267 msg="compute graph" device=CPU size="5.6 MiB"
time=2025-11-19T11:15:51.468+05:30 level=INFO source=device.go:272 msg="total memory" size="13.2 GiB"
time=2025-11-19T11:15:51.468+05:30 level=INFO source=sched.go:517 msg="loaded runners" count=1
time=2025-11-19T11:15:51.468+05:30 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-11-19T11:15:51.469+05:30 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"

OS

Windows

GPU

Intel

CPU

Intel

Ollama version

v0.12.11

GiteaMirror added the vulkan, performance, bug labels 2026-04-12 21:28:21 -05:00

@charlescng commented on GitHub (Nov 20, 2025):

Testing with gemma3:12b on my Intel Arc B580 with the xe driver from Linux 6.17.

~35 t/s with ipex-llm (Ollama 0.9.3; build from https://github.com/ipex-llm/ipex-llm/releases/download/v2.3.0-nightly/ollama-ipex-llm-2.3.0b20250725-ubuntu.tgz)
~15 t/s with Vulkan (Ollama 0.13.0)
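For reference, decode throughput figures like these can also be read straight from the API response, since a non-streaming `/api/generate` call reports `eval_count` and `eval_duration` (in nanoseconds). A minimal sketch, assuming the server is on the default port and `jq` is installed:

```
# decode tokens/s = eval_count / eval_duration (ns) * 1e9
curl -s http://localhost:11434/api/generate \
  -d '{"model": "gemma3:12b", "prompt": "Say hello.", "stream": false}' |
  jq '{eval_tokens_per_s: (.eval_count / .eval_duration * 1e9)}'
```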

time=2025-11-20T01:34:32.654Z level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"

time=2025-11-20T01:34:32.654Z level=INFO source=images.go:522 msg="total blobs: 14"

time=2025-11-20T01:34:32.655Z level=INFO source=images.go:529 msg="total unused blobs removed: 0"

time=2025-11-20T01:34:32.655Z level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.13.0)"

time=2025-11-20T01:34:32.655Z level=INFO source=runner.go:67 msg="discovering available GPUs..."

time=2025-11-20T01:34:32.656Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 33719"

time=2025-11-20T01:34:32.764Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36285"

time=2025-11-20T01:34:32.792Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42775"

time=2025-11-20T01:34:32.809Z level=INFO source=types.go:42 msg="inference compute" id=86800be2-0000-0000-0600-000000000000 filter_id="" library=Vulkan compute=0.0 name=Vulkan0 description="Intel(R) Graphics (BMG G21)" libdirs=ollama,vulkan driver=0.0 pci_id=0000:06:00.0 type=discrete total="11.9 GiB" available="10.7 GiB"

time=2025-11-20T01:34:32.809Z level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="11.9 GiB" threshold="20.0 GiB"

time=2025-11-20T01:34:59.897Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43677"

time=2025-11-20T01:34:59.948Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"

time=2025-11-20T01:35:00.093Z level=INFO source=server.go:209 msg="enabling flash attention"

time=2025-11-20T01:35:00.093Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de --port 38295"

time=2025-11-20T01:35:00.094Z level=INFO source=sched.go:443 msg="system memory" total="15.6 GiB" free="15.5 GiB" free_swap="2.0 GiB"

time=2025-11-20T01:35:00.094Z level=INFO source=sched.go:450 msg="gpu memory" id=86800be2-0000-0000-0600-000000000000 library=Vulkan available="10.3 GiB" free="10.7 GiB" minimum="457.0 MiB" overhead="0 B"

time=2025-11-20T01:35:00.094Z level=INFO source=server.go:702 msg="loading model" "model layers"=49 requested=-1

time=2025-11-20T01:35:00.104Z level=INFO source=runner.go:1398 msg="starting ollama engine"

time=2025-11-20T01:35:00.104Z level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:38295"

time=2025-11-20T01:35:00.105Z level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:2 GPULayers:49[ID:86800be2-0000-0000-0600-000000000000 Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"

time=2025-11-20T01:35:00.160Z level=INFO source=ggml.go:136 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1065 num_key_values=37

load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so

ggml_vulkan: Found 1 Vulkan devices:

ggml_vulkan: 0 = Intel(R) Graphics (BMG G21) (Intel open-source Mesa driver) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 131072 | int dot: 1 | matrix cores: KHR_coopmat

load_backend: loaded Vulkan backend from /usr/lib/ollama/vulkan/libggml-vulkan.so

time=2025-11-20T01:35:00.190Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)

ggml_backend_vk_get_device_memory called: uuid 86800be2-0000-0000-0600-000000000000

ggml_backend_vk_get_device_memory called: luid 0x0000000000000000

time=2025-11-20T01:35:00.428Z level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:2 GPULayers:49[ID:86800be2-0000-0000-0600-000000000000 Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"

ggml_backend_vk_get_device_memory called: uuid 86800be2-0000-0000-0600-000000000000

ggml_backend_vk_get_device_memory called: luid 0x0000000000000000

time=2025-11-20T01:35:00.788Z level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:2 GPULayers:49[ID:86800be2-0000-0000-0600-000000000000 Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"

time=2025-11-20T01:35:00.788Z level=INFO source=ggml.go:482 msg="offloading 48 repeating layers to GPU"

time=2025-11-20T01:35:00.788Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"

time=2025-11-20T01:35:00.788Z level=INFO source=ggml.go:494 msg="offloaded 49/49 layers to GPU"

time=2025-11-20T01:35:00.788Z level=INFO source=device.go:240 msg="model weights" device=Vulkan0 size="7.6 GiB"

time=2025-11-20T01:35:00.788Z level=INFO source=device.go:245 msg="model weights" device=CPU size="787.5 MiB"

time=2025-11-20T01:35:00.788Z level=INFO source=device.go:251 msg="kv cache" device=Vulkan0 size="736.0 MiB"

time=2025-11-20T01:35:00.788Z level=INFO source=device.go:262 msg="compute graph" device=Vulkan0 size="135.3 MiB"

time=2025-11-20T01:35:00.788Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="7.5 MiB"

time=2025-11-20T01:35:00.788Z level=INFO source=device.go:272 msg="total memory" size="9.2 GiB"

time=2025-11-20T01:35:00.788Z level=INFO source=sched.go:517 msg="loaded runners" count=1

time=2025-11-20T01:35:00.788Z level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"

time=2025-11-20T01:35:00.802Z level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"

time=2025-11-20T01:35:04.331Z level=INFO source=server.go:1332 msg="llama runner started in 4.24 seconds"

@grover66 commented on GitHub (Nov 20, 2025):

I can confirm Vulkan works well on my ACEMAGIC F3A AMD Ryzen AI 9 HX 370 mini PC, with LLMs using 32 GB of VRAM.

Glad to run any tests requested.
Mike :)

Here is hardware info:

ubuntu-desktop
description: Computer
width: 64 bits
capabilities: smp vsyscall32
*-core
description: Motherboard
physical id: 0
*-memory
description: System memory
physical id: 0
size: 80GiB
*-cpu
product: AMD Ryzen AI 9 HX 370 w/ Radeon 890M
vendor: Advanced Micro Devices [AMD]
physical id: 1
bus info: cpu@0
version: 26.36.0
size: 4807MHz
capacity: 5157MHz
width: 64 bits
capabilities: fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp x86-64 constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx_vnni avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid bus_lock_detect movdiri movdir64b overflow_recov succor smca fsrm avx512_vp2intersect flush_l1d amd_lbr_pmc_freeze cpufreq
configuration: microcode=186662937

Reference: github-starred/ollama#8699