[GH-ISSUE #15328] Intel GPU Arc 770 16G, gemma4:e2b gemma4:e4b, OLLAMA_VULKAN=1, Output gibberish. #35564

Open
opened 2026-04-22 20:08:46 -05:00 by GiteaMirror · 12 comments
Owner

Originally created by @wqmeng on GitHub (Apr 4, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15328

What is the issue?

On an Intel Arc A770 16 GB GPU, gemma4:e2b and gemma4:e4b produce gibberish output when OLLAMA_VULKAN=1 is set.

With OLLAMA_VULKAN=1 unset, gemma4 runs CPU-only and its output is correct.
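For reference, a minimal way to reproduce the two configurations on Windows (PowerShell syntax; the variable name matches the `OLLAMA_VULKAN` entry in the server-config log below, though exact backend-selection behavior may vary between Ollama versions):

```powershell
# Reproduce the bug: enable the Vulkan backend, then serve
$env:OLLAMA_VULKAN = "1"
ollama serve

# Workaround: remove the variable so gemma4 falls back to CPU,
# which produces correct output
Remove-Item Env:OLLAMA_VULKAN
ollama serve
```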

OS

Windows 11

GPU

Intel Arc A770 16 GB

CPU

Intel i5-13400F

Ollama version

0.20.0

Relevant log output


ollama serve
time=2026-04-04T22:15:38.820+08:00 level=INFO source=routes.go:1744 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:e:\\OllamaModels OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[chrome-extension://* moz-extension://* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES:]"
time=2026-04-04T22:15:38.825+08:00 level=INFO source=routes.go:1746 msg="Ollama cloud disabled: false"
time=2026-04-04T22:15:38.827+08:00 level=INFO source=images.go:499 msg="total blobs: 24"
time=2026-04-04T22:15:38.828+08:00 level=INFO source=images.go:506 msg="total unused blobs removed: 0"
time=2026-04-04T22:15:38.828+08:00 level=INFO source=routes.go:1802 msg="Listening on [::]:11434 (version 0.20.0)"
time=2026-04-04T22:15:38.829+08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-04T22:15:38.842+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\1\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10575"
time=2026-04-04T22:15:38.928+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\1\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10581"
time=2026-04-04T22:15:39.068+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\1\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10587"
time=2026-04-04T22:15:39.161+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\1\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10596"
time=2026-04-04T22:15:39.567+08:00 level=INFO source=types.go:42 msg="inference compute" id=8680a056-0800-0000-0300-000000000000 filter_id="" library=Vulkan compute=0.0 name=Vulkan0 description="Intel(R) Arc(TM) A770 Graphics" libdirs=ollama,vulkan driver=0.0 pci_id="" type=discrete total="15.9 GiB" available="14.2 GiB"
time=2026-04-04T22:15:39.567+08:00 level=INFO source=routes.go:1852 msg="vram-based default context" total_vram="15.9 GiB" default_num_ctx=4096
GiteaMirror added the bug label 2026-04-22 20:08:46 -05:00
Author
Owner

@wqmeng commented on GitHub (Apr 4, 2026):

ggml_backend_vk_get_device_memory utilizing DXGI + PDH memory reporting free: 15111237632 total: 17045651456
ggml_vulkan: Device memory allocation of size 5637144576 failed.
ggml_vulkan: Requested buffer size exceeds device buffer size limit: ErrorOutOfDeviceMemory
alloc_tensor_range: failed to allocate Vulkan0 buffer of size 5637144576
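A quick sanity check on these numbers shows the failure is not a lack of VRAM: the single requested buffer (~5.25 GiB) is larger than the per-allocation cap many Intel Vulkan drivers report (commonly 4 GiB via `maxMemoryAllocationSize`; the 4 GiB figure is an assumption here — check `vulkaninfo` on the affected machine), even though ~14 GiB of device memory is free.

```python
# Numbers taken verbatim from the log lines above.
failed_alloc = 5_637_144_576   # bytes ggml tried to allocate in one buffer
free_vram    = 15_111_237_632  # free device memory reported via DXGI + PDH

GIB = 1 << 30
print(f"requested buffer: {failed_alloc / GIB:.2f} GiB")  # 5.25 GiB
print(f"free VRAM:        {free_vram / GIB:.2f} GiB")     # ~14.07 GiB

# Hypothetical per-allocation cap (4 GiB is typical for Intel drivers,
# but verify with vulkaninfo's maxMemoryAllocationSize on your system):
common_cap = 4 * GIB
print(failed_alloc > common_cap)  # True: one buffer exceeds the cap
```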

ollama serve
time=2026-04-04T22:22:55.311+08:00 level=INFO source=routes.go:1744 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:DEBUG OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:e:\\OllamaModels OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[chrome-extension://* moz-extension://* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES:]"
time=2026-04-04T22:22:55.315+08:00 level=INFO source=routes.go:1746 msg="Ollama cloud disabled: false"
time=2026-04-04T22:22:55.317+08:00 level=INFO source=images.go:499 msg="total blobs: 24"
time=2026-04-04T22:22:55.317+08:00 level=INFO source=images.go:506 msg="total unused blobs removed: 0"
time=2026-04-04T22:22:55.318+08:00 level=INFO source=routes.go:1802 msg="Listening on [::]:11434 (version 0.20.0)"
time=2026-04-04T22:22:55.318+08:00 level=DEBUG source=sched.go:145 msg="starting llm scheduler"
time=2026-04-04T22:22:55.319+08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-04T22:22:55.332+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 7373"
time=2026-04-04T22:22:55.332+08:00 level=DEBUG source=server.go:433 msg=subprocess GGML_VK_DISABLE_F16=1 GGML_VK_DISABLE_FUSION=1 GGML_VK_DISABLE_INTEGER_DOT_PRODUCT=1 OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0 OLLAMA_MODELS=e:\OllamaModels OLLAMA_NUM_GPU=999 OLLAMA_ORIGINS=chrome-extension://*,moz-extension://* OLLAMA_VULKAN=1 PATH="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama;D:\\python;D:\\python\\Scripts;C:\\Program Files\\Eclipse Adoptium\\jdk-17.0.9.9-hotspot\\bin;F:\\Program Files\\Eclipse Adoptium\\jdk-21.0.5.11-hotspot\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl\\Win64;E:\\Program Files\\Eclipse Adoptium\\jre-21.0.5.11-hotspot\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;F:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;G:\\Program Files\\TortoiseGit\\bin;D:\\Program Files\\Git\\bin;D:\\python\\Scripts;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win32\\;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win64\\;F:\\nodejs\\;C:\\ProgramData\\chocolatey\\bin;e:\\FPC\\3.2.2\\bin\\i386-Win32;D:\\rad\\Boss;e:\\Program Files\\gs\\gs10.05.1\\bin;C:\\Users\\111\\AppData\\Local\\Muse 
Hub\\lib;E:\\gnu\\glo6612wb\\bin;E:\\gnu\\ctags58;D:\\rad\\formatter\\pasfmt-0.7.0-x86_64-pc-windows-msvc;C:\\Users\\111\\.opencode\\bin;D:\\Profiler;C:\\Users\\111\\.local\\bin;C:\\Users\\111\\.local\\bin;D:\\python;C:\\Users\\111\\AppData\\Local\\Programs\\Python\\Launcher\\;C:\\Users\\111\\AppData\\Local\\Microsoft\\WindowsApps;D:\\python\\Scripts;C:\\Users\\111\\AppData\\Roaming\\npm;E:\\Program Files\\Antigravity\\bin;E:\\Program Files (x86)\\cursor\\resources\\app\\bin;" OLLAMA_LIBRARY_PATH=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
time=2026-04-04T22:22:55.418+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=92.3367ms OLLAMA_LIBRARY_PATH="[C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12]" extra_envs=map[]
time=2026-04-04T22:22:55.419+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 7379"
time=2026-04-04T22:22:55.420+08:00 level=DEBUG source=server.go:433 msg=subprocess GGML_VK_DISABLE_F16=1 GGML_VK_DISABLE_FUSION=1 GGML_VK_DISABLE_INTEGER_DOT_PRODUCT=1 OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0 OLLAMA_MODELS=e:\OllamaModels OLLAMA_NUM_GPU=999 OLLAMA_ORIGINS=chrome-extension://*,moz-extension://* OLLAMA_VULKAN=1 PATH="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama;D:\\python;D:\\python\\Scripts;C:\\Program Files\\Eclipse Adoptium\\jdk-17.0.9.9-hotspot\\bin;F:\\Program Files\\Eclipse Adoptium\\jdk-21.0.5.11-hotspot\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl\\Win64;E:\\Program Files\\Eclipse Adoptium\\jre-21.0.5.11-hotspot\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;F:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;G:\\Program Files\\TortoiseGit\\bin;D:\\Program Files\\Git\\bin;D:\\python\\Scripts;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win32\\;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win64\\;F:\\nodejs\\;C:\\ProgramData\\chocolatey\\bin;e:\\FPC\\3.2.2\\bin\\i386-Win32;D:\\rad\\Boss;e:\\Program Files\\gs\\gs10.05.1\\bin;C:\\Users\\111\\AppData\\Local\\Muse 
Hub\\lib;E:\\gnu\\glo6612wb\\bin;E:\\gnu\\ctags58;D:\\rad\\formatter\\pasfmt-0.7.0-x86_64-pc-windows-msvc;C:\\Users\\111\\.opencode\\bin;D:\\Profiler;C:\\Users\\111\\.local\\bin;C:\\Users\\111\\.local\\bin;D:\\python;C:\\Users\\111\\AppData\\Local\\Programs\\Python\\Launcher\\;C:\\Users\\111\\AppData\\Local\\Microsoft\\WindowsApps;D:\\python\\Scripts;C:\\Users\\111\\AppData\\Roaming\\npm;E:\\Program Files\\Antigravity\\bin;E:\\Program Files (x86)\\cursor\\resources\\app\\bin;" OLLAMA_LIBRARY_PATH=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13
time=2026-04-04T22:22:55.561+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=142.4888ms OLLAMA_LIBRARY_PATH="[C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13]" extra_envs=map[]
time=2026-04-04T22:22:55.562+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 7385"
time=2026-04-04T22:22:55.562+08:00 level=DEBUG source=server.go:433 msg=subprocess GGML_VK_DISABLE_F16=1 GGML_VK_DISABLE_FUSION=1 GGML_VK_DISABLE_INTEGER_DOT_PRODUCT=1 OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0 OLLAMA_MODELS=e:\OllamaModels OLLAMA_NUM_GPU=999 OLLAMA_ORIGINS=chrome-extension://*,moz-extension://* OLLAMA_VULKAN=1 PATH="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\rocm;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama;D:\\python;D:\\python\\Scripts;C:\\Program Files\\Eclipse Adoptium\\jdk-17.0.9.9-hotspot\\bin;F:\\Program Files\\Eclipse Adoptium\\jdk-21.0.5.11-hotspot\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl\\Win64;E:\\Program Files\\Eclipse Adoptium\\jre-21.0.5.11-hotspot\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;F:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;G:\\Program Files\\TortoiseGit\\bin;D:\\Program Files\\Git\\bin;D:\\python\\Scripts;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win32\\;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win64\\;F:\\nodejs\\;C:\\ProgramData\\chocolatey\\bin;e:\\FPC\\3.2.2\\bin\\i386-Win32;D:\\rad\\Boss;e:\\Program Files\\gs\\gs10.05.1\\bin;C:\\Users\\111\\AppData\\Local\\Muse 
Hub\\lib;E:\\gnu\\glo6612wb\\bin;E:\\gnu\\ctags58;D:\\rad\\formatter\\pasfmt-0.7.0-x86_64-pc-windows-msvc;C:\\Users\\111\\.opencode\\bin;D:\\Profiler;C:\\Users\\111\\.local\\bin;C:\\Users\\111\\.local\\bin;D:\\python;C:\\Users\\111\\AppData\\Local\\Programs\\Python\\Launcher\\;C:\\Users\\111\\AppData\\Local\\Microsoft\\WindowsApps;D:\\python\\Scripts;C:\\Users\\111\\AppData\\Roaming\\npm;E:\\Program Files\\Antigravity\\bin;E:\\Program Files (x86)\\cursor\\resources\\app\\bin;" OLLAMA_LIBRARY_PATH=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\rocm
time=2026-04-04T22:22:55.657+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=95.731ms OLLAMA_LIBRARY_PATH="[C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\rocm]" extra_envs=map[]
time=2026-04-04T22:22:55.658+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 7391"
time=2026-04-04T22:22:55.658+08:00 level=DEBUG source=server.go:433 msg=subprocess GGML_VK_DISABLE_F16=1 GGML_VK_DISABLE_FUSION=1 GGML_VK_DISABLE_INTEGER_DOT_PRODUCT=1 OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0 OLLAMA_MODELS=e:\OllamaModels OLLAMA_NUM_GPU=999 OLLAMA_ORIGINS=chrome-extension://*,moz-extension://* OLLAMA_VULKAN=1 PATH="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\vulkan;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama;D:\\python;D:\\python\\Scripts;C:\\Program Files\\Eclipse Adoptium\\jdk-17.0.9.9-hotspot\\bin;F:\\Program Files\\Eclipse Adoptium\\jdk-21.0.5.11-hotspot\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl\\Win64;E:\\Program Files\\Eclipse Adoptium\\jre-21.0.5.11-hotspot\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;F:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;G:\\Program Files\\TortoiseGit\\bin;D:\\Program Files\\Git\\bin;D:\\python\\Scripts;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win32\\;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win64\\;F:\\nodejs\\;C:\\ProgramData\\chocolatey\\bin;e:\\FPC\\3.2.2\\bin\\i386-Win32;D:\\rad\\Boss;e:\\Program Files\\gs\\gs10.05.1\\bin;C:\\Users\\111\\AppData\\Local\\Muse 
Hub\\lib;E:\\gnu\\glo6612wb\\bin;E:\\gnu\\ctags58;D:\\rad\\formatter\\pasfmt-0.7.0-x86_64-pc-windows-msvc;C:\\Users\\111\\.opencode\\bin;D:\\Profiler;C:\\Users\\111\\.local\\bin;C:\\Users\\111\\.local\\bin;D:\\python;C:\\Users\\111\\AppData\\Local\\Programs\\Python\\Launcher\\;C:\\Users\\111\\AppData\\Local\\Microsoft\\WindowsApps;D:\\python\\Scripts;C:\\Users\\111\\AppData\\Roaming\\npm;E:\\Program Files\\Antigravity\\bin;E:\\Program Files (x86)\\cursor\\resources\\app\\bin;" OLLAMA_LIBRARY_PATH=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\vulkan
time=2026-04-04T22:22:56.063+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=405.7985ms OLLAMA_LIBRARY_PATH="[C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\vulkan]" extra_envs=map[]
time=2026-04-04T22:22:56.063+08:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=1
time=2026-04-04T22:22:56.063+08:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=745.0763ms
time=2026-04-04T22:22:56.063+08:00 level=INFO source=types.go:42 msg="inference compute" id=8680a056-0800-0000-0300-000000000000 filter_id="" library=Vulkan compute=0.0 name=Vulkan0 description="Intel(R) Arc(TM) A770 Graphics" libdirs=ollama,vulkan driver=0.0 pci_id="" type=discrete total="15.9 GiB" available="14.2 GiB"
time=2026-04-04T22:22:56.063+08:00 level=INFO source=routes.go:1852 msg="vram-based default context" total_vram="15.9 GiB" default_num_ctx=4096
[GIN] 2026/04/04 - 22:23:24 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2026-04-04T22:23:24.705+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
[GIN] 2026/04/04 - 22:23:24 | 200 |    191.4033ms |       127.0.0.1 | POST     "/api/show"
time=2026-04-04T22:23:24.922+08:00 level=DEBUG source=runner.go:264 msg="refreshing free memory"
time=2026-04-04T22:23:24.922+08:00 level=DEBUG source=runner.go:328 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery"
time=2026-04-04T22:23:24.925+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 7435"
time=2026-04-04T22:23:24.925+08:00 level=DEBUG source=server.go:433 msg=subprocess GGML_VK_DISABLE_F16=1 GGML_VK_DISABLE_FUSION=1 GGML_VK_DISABLE_INTEGER_DOT_PRODUCT=1 OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0 OLLAMA_MODELS=e:\OllamaModels OLLAMA_NUM_GPU=999 OLLAMA_ORIGINS=chrome-extension://*,moz-extension://* OLLAMA_VULKAN=1 PATH="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\vulkan;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama;D:\\python;D:\\python\\Scripts;C:\\Program Files\\Eclipse Adoptium\\jdk-17.0.9.9-hotspot\\bin;F:\\Program Files\\Eclipse Adoptium\\jdk-21.0.5.11-hotspot\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl\\Win64;E:\\Program Files\\Eclipse Adoptium\\jre-21.0.5.11-hotspot\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;F:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;G:\\Program Files\\TortoiseGit\\bin;D:\\Program Files\\Git\\bin;D:\\python\\Scripts;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win32\\;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win64\\;F:\\nodejs\\;C:\\ProgramData\\chocolatey\\bin;e:\\FPC\\3.2.2\\bin\\i386-Win32;D:\\rad\\Boss;e:\\Program Files\\gs\\gs10.05.1\\bin;C:\\Users\\111\\AppData\\Local\\Muse 
Hub\\lib;E:\\gnu\\glo6612wb\\bin;E:\\gnu\\ctags58;D:\\rad\\formatter\\pasfmt-0.7.0-x86_64-pc-windows-msvc;C:\\Users\\111\\.opencode\\bin;D:\\Profiler;C:\\Users\\111\\.local\\bin;C:\\Users\\111\\.local\\bin;D:\\python;C:\\Users\\111\\AppData\\Local\\Programs\\Python\\Launcher\\;C:\\Users\\111\\AppData\\Local\\Microsoft\\WindowsApps;D:\\python\\Scripts;C:\\Users\\111\\AppData\\Roaming\\npm;E:\\Program Files\\Antigravity\\bin;E:\\Program Files (x86)\\cursor\\resources\\app\\bin;" OLLAMA_LIBRARY_PATH=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\vulkan
time=2026-04-04T22:23:25.321+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=399.1395ms OLLAMA_LIBRARY_PATH="[C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\vulkan]" extra_envs=map[]
time=2026-04-04T22:23:25.321+08:00 level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=399.1395ms
time=2026-04-04T22:23:25.322+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2026-04-04T22:23:25.322+08:00 level=INFO source=cpu_windows.go:164 msg="efficiency cores detected" maxEfficiencyClass=1
time=2026-04-04T22:23:25.322+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=10 efficiency=4 threads=16
time=2026-04-04T22:23:25.322+08:00 level=DEBUG source=sched.go:220 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2026-04-04T22:23:25.322+08:00 level=DEBUG source=sched.go:229 msg="loading first model" model=e:\OllamaModels\blobs\sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-04T22:23:25.399+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-04T22:23:25.441+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-04T22:23:25.444+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
time=2026-04-04T22:23:25.444+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106
time=2026-04-04T22:23:25.444+08:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-04T22:23:25.444+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0
time=2026-04-04T22:23:25.444+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.head_count_kv default="&{size:0 values:[]}"
time=2026-04-04T22:23:25.444+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_count default=0
time=2026-04-04T22:23:25.444+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_used_count default=0
time=2026-04-04T22:23:25.444+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.num_mel_bins default=128
time=2026-04-04T22:23:25.446+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model e:\\OllamaModels\\blobs\\sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a --port 7441"
time=2026-04-04T22:23:25.446+08:00 level=DEBUG source=server.go:433 msg=subprocess GGML_VK_DISABLE_F16=1 GGML_VK_DISABLE_FUSION=1 GGML_VK_DISABLE_INTEGER_DOT_PRODUCT=1 OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0 OLLAMA_MODELS=e:\OllamaModels OLLAMA_NUM_GPU=999 OLLAMA_ORIGINS=chrome-extension://*,moz-extension://* OLLAMA_VULKAN=1 PATH="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\vulkan;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama;D:\\python;D:\\python\\Scripts;C:\\Program Files\\Eclipse Adoptium\\jdk-17.0.9.9-hotspot\\bin;F:\\Program Files\\Eclipse Adoptium\\jdk-21.0.5.11-hotspot\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl\\Win64;E:\\Program Files\\Eclipse Adoptium\\jre-21.0.5.11-hotspot\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;F:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;G:\\Program Files\\TortoiseGit\\bin;D:\\Program Files\\Git\\bin;D:\\python\\Scripts;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win32\\;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win64\\;F:\\nodejs\\;C:\\ProgramData\\chocolatey\\bin;e:\\FPC\\3.2.2\\bin\\i386-Win32;D:\\rad\\Boss;e:\\Program Files\\gs\\gs10.05.1\\bin;C:\\Users\\111\\AppData\\Local\\Muse 
Hub\\lib;E:\\gnu\\glo6612wb\\bin;E:\\gnu\\ctags58;D:\\rad\\formatter\\pasfmt-0.7.0-x86_64-pc-windows-msvc;C:\\Users\\111\\.opencode\\bin;D:\\Profiler;C:\\Users\\111\\.local\\bin;C:\\Users\\111\\.local\\bin;D:\\python;C:\\Users\\111\\AppData\\Local\\Programs\\Python\\Launcher\\;C:\\Users\\111\\AppData\\Local\\Microsoft\\WindowsApps;D:\\python\\Scripts;C:\\Users\\111\\AppData\\Roaming\\npm;E:\\Program Files\\Antigravity\\bin;E:\\Program Files (x86)\\cursor\\resources\\app\\bin;" OLLAMA_LIBRARY_PATH=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\vulkan
time=2026-04-04T22:23:25.449+08:00 level=INFO source=sched.go:484 msg="system memory" total="47.8 GiB" free="23.3 GiB" free_swap="23.3 GiB"
time=2026-04-04T22:23:25.449+08:00 level=INFO source=sched.go:491 msg="gpu memory" id=8680a056-0800-0000-0300-000000000000 library=Vulkan available="13.7 GiB" free="14.1 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-04-04T22:23:25.449+08:00 level=INFO source=server.go:759 msg="loading model" "model layers"=43 requested=-1
time=2026-04-04T22:23:25.483+08:00 level=INFO source=runner.go:1417 msg="starting ollama engine"
time=2026-04-04T22:23:25.488+08:00 level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:7441"
time=2026-04-04T22:23:25.493+08:00 level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:43[ID:8680a056-0800-0000-0300-000000000000 Layers:43(0..42)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-04T22:23:25.547+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-04T22:23:25.549+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.name default=""
time=2026-04-04T22:23:25.549+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.description default=""
time=2026-04-04T22:23:25.549+08:00 level=INFO source=ggml.go:136 msg="" architecture=gemma4 file_type=Q4_K_M name="" description="" num_tensors=2131 num_key_values=55
time=2026-04-04T22:23:25.549+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama
load_backend: loaded CPU backend from C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
time=2026-04-04T22:23:25.563+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\vulkan
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Arc(TM) A770 Graphics (Intel Corporation) | uma: 0 | fp16: 0 | bf16: 0 | warp size: 32 | shared memory: 49152 | int dot: 0 | matrix cores: none
load_backend: loaded Vulkan backend from C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\vulkan\ggml-vulkan.dll
time=2026-04-04T22:23:25.610+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
ggml_backend_vk_get_device_memory called: uuid 8680a056-0800-0000-0300-000000000000
ggml_backend_vk_get_device_memory called: luid 0x000000000000c010
ggml_dxgi_pdh_init called
DXGI + PDH Initialized. Getting GPU free memory info
[DXGI] Adapter Description: Intel(R) Arc(TM) A770 Graphics, LUID: 0x000000000000C010, Dedicated: 15.88 GB, Shared: 23.92 GB
[DXGI] Adapter Description: Microsoft Basic Render Driver, LUID: 0x000000000000C41B, Dedicated: 0.00 GB, Shared: 23.92 GB
Discrete GPU (Intel(R) Arc(TM) A770 Graphics) with LUID 0x000000000000c010 detected. Dedicated Total: 17045651456.00 bytes (15.88 GB), Dedicated Usage: 1915502592.00 bytes (1.78 GB)
ggml_backend_vk_get_device_memory utilizing DXGI + PDH memory reporting free: 15130148864 total: 17045651456
time=2026-04-04T22:23:25.792+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
time=2026-04-04T22:23:25.792+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106
time=2026-04-04T22:23:25.793+08:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-04T22:23:25.793+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0
time=2026-04-04T22:23:25.793+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.head_count_kv default="&{size:0 values:[]}"
time=2026-04-04T22:23:25.793+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_count default=0
time=2026-04-04T22:23:25.793+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_used_count default=0
time=2026-04-04T22:23:25.793+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.num_mel_bins default=128
time=2026-04-04T22:23:25.810+08:00 level=INFO source=model.go:138 msg="vision: decode" elapsed=1.5073ms bounds=(0,0)-(2048,2048)
time=2026-04-04T22:23:25.906+08:00 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=95.2109ms size="[768 768]"
time=2026-04-04T22:23:25.906+08:00 level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-04T22:23:25.906+08:00 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-04T22:23:25.907+08:00 level=INFO source=model.go:156 msg="vision: encoded" elapsed=97.7248ms shape="[2560 256]"
time=2026-04-04T22:23:25.908+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=764 splits=1
time=2026-04-04T22:23:25.926+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1979 splits=2
time=2026-04-04T22:23:25.928+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1977 splits=2
time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=device.go:240 msg="model weights" device=Vulkan0 size="8.9 GiB"
time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=device.go:245 msg="model weights" device=CPU size="587.0 MiB"
time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=device.go:251 msg="kv cache" device=Vulkan0 size="224.0 MiB"
time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=device.go:262 msg="compute graph" device=Vulkan0 size="309.8 MiB"
time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.0 MiB"
time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=device.go:272 msg="total memory" size="10.0 GiB"
time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=server.go:784 msg=memory success=true required.InputWeights=615514112 required.CPU.Graph=5242880 required.Vulkan0.ID=8680a056-0800-0000-0300-000000000000 required.Vulkan0.Weights="[109889600 110042304 109889600 109889600 109889600 110843968 103131200 109889600 102793280 102793280 109889600 110168128 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 59159616 59497536 59159616 59159616 59497536 66534464 6521492480]" required.Vulkan0.Cache="[8388608 8388608 8388608 8388608 8388608 16777216 8388608 8388608 8388608 8388608 8388608 16777216 8388608 8388608 8388608 8388608 8388608 16777216 8388608 8388608 8388608 8388608 8388608 16777216 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" required.Vulkan0.Graph=324833280
time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=8680a056-0800-0000-0300-000000000000 library=Vulkan "available layer vram"="13.4 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="309.8 MiB"
time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="43[ID:8680a056-0800-0000-0300-000000000000 Layers:43(0..42)]"
time=2026-04-04T22:23:25.931+08:00 level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:43[ID:8680a056-0800-0000-0300-000000000000 Layers:43(0..42)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-04T22:23:25.972+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
ggml_backend_vk_get_device_memory called: uuid 8680a056-0800-0000-0300-000000000000
ggml_backend_vk_get_device_memory called: luid 0x000000000000c010
ggml_dxgi_pdh_init called
DXGI + PDH Initialized. Getting GPU free memory info
[DXGI] Adapter Description: Intel(R) Arc(TM) A770 Graphics, LUID: 0x000000000000C010, Dedicated: 15.88 GB, Shared: 23.92 GB
[DXGI] Adapter Description: Microsoft Basic Render Driver, LUID: 0x000000000000C41B, Dedicated: 0.00 GB, Shared: 23.92 GB
Discrete GPU (Intel(R) Arc(TM) A770 Graphics) with LUID 0x000000000000c010 detected. Dedicated Total: 17045651456.00 bytes (15.88 GB), Dedicated Usage: 1934413824.00 bytes (1.80 GB)
ggml_backend_vk_get_device_memory utilizing DXGI + PDH memory reporting free: 15111237632 total: 17045651456
ggml_vulkan: Device memory allocation of size 5637144576 failed.
ggml_vulkan: Requested buffer size exceeds device buffer size limit: ErrorOutOfDeviceMemory
alloc_tensor_range: failed to allocate Vulkan0 buffer of size 5637144576
time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=device.go:240 msg="model weights" device=Vulkan0 size="8.9 GiB"
time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=device.go:245 msg="model weights" device=CPU size="587.0 MiB"
time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=device.go:272 msg="total memory" size="9.4 GiB"
time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=server.go:784 msg=memory success=false required.InputWeights=615514112 required.Vulkan0.ID=8680a056-0800-0000-0300-000000000000 required.Vulkan0.Weights="[109889600 110042304 109889600 109889600 109889600 110843968 103131200 109889600 102793280 102793280 109889600 110168128 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 59159616 59497536 59159616 59159616 59497536 66534464 6521492480]"
time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=8680a056-0800-0000-0300-000000000000 library=Vulkan "available layer vram"="13.7 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="0 B"
time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="43[ID:8680a056-0800-0000-0300-000000000000 Layers:43(0..42)]"
time=2026-04-04T22:23:26.239+08:00 level=INFO source=server.go:881 msg="model layout did not fit, applying backoff" backoff=0.10
time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=8680a056-0800-0000-0300-000000000000 library=Vulkan "available layer vram"="12.2 GiB" backoff=0.10 minimum="457.0 MiB" overhead="0 B" graph="0 B"
time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="43[ID:8680a056-0800-0000-0300-000000000000 Layers:43(0..42)]"
time=2026-04-04T22:23:26.239+08:00 level=INFO source=server.go:881 msg="model layout did not fit, applying backoff" backoff=0.20
time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=8680a056-0800-0000-0300-000000000000 library=Vulkan "available layer vram"="10.8 GiB" backoff=0.20 minimum="457.0 MiB" overhead="0 B" graph="0 B"
time=2026-04-04T22:23:26.240+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="43[ID:8680a056-0800-0000-0300-000000000000 Layers:43(0..42)]"
time=2026-04-04T22:23:26.240+08:00 level=INFO source=server.go:881 msg="model layout did not fit, applying backoff" backoff=0.30
time=2026-04-04T22:23:26.240+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=8680a056-0800-0000-0300-000000000000 library=Vulkan "available layer vram"="9.4 GiB" backoff=0.30 minimum="457.0 MiB" overhead="0 B" graph="0 B"
time=2026-04-04T22:23:26.240+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="43[ID:8680a056-0800-0000-0300-000000000000 Layers:43(0..42)]"
time=2026-04-04T22:23:26.240+08:00 level=INFO source=server.go:881 msg="model layout did not fit, applying backoff" backoff=0.40
time=2026-04-04T22:23:26.240+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=8680a056-0800-0000-0300-000000000000 library=Vulkan "available layer vram"="8.0 GiB" backoff=0.40 minimum="457.0 MiB" overhead="0 B" graph="0 B"
time=2026-04-04T22:23:26.240+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="42[ID:8680a056-0800-0000-0300-000000000000 Layers:42(0..41)]"
time=2026-04-04T22:23:26.240+08:00 level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:42[ID:8680a056-0800-0000-0300-000000000000 Layers:42(0..41)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-04T22:23:26.281+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
ggml_backend_vk_get_device_memory called: uuid 8680a056-0800-0000-0300-000000000000
ggml_backend_vk_get_device_memory called: luid 0x000000000000c010
ggml_dxgi_pdh_init called
DXGI + PDH Initialized. Getting GPU free memory info
[DXGI] Adapter Description: Intel(R) Arc(TM) A770 Graphics, LUID: 0x000000000000C010, Dedicated: 15.88 GB, Shared: 23.92 GB
[DXGI] Adapter Description: Microsoft Basic Render Driver, LUID: 0x000000000000C41B, Dedicated: 0.00 GB, Shared: 23.92 GB
Discrete GPU (Intel(R) Arc(TM) A770 Graphics) with LUID 0x000000000000c010 detected. Dedicated Total: 17045651456.00 bytes (15.88 GB), Dedicated Usage: 1948073984.00 bytes (1.81 GB)
ggml_backend_vk_get_device_memory utilizing DXGI + PDH memory reporting free: 15097577472 total: 17045651456
time=2026-04-04T22:23:26.515+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
time=2026-04-04T22:23:26.515+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106
time=2026-04-04T22:23:26.515+08:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-04T22:23:26.515+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0
time=2026-04-04T22:23:26.515+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.head_count_kv default="&{size:0 values:[]}"
time=2026-04-04T22:23:26.515+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_count default=0
time=2026-04-04T22:23:26.515+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_used_count default=0
time=2026-04-04T22:23:26.515+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.num_mel_bins default=128
time=2026-04-04T22:23:26.529+08:00 level=INFO source=model.go:138 msg="vision: decode" elapsed=1.0912ms bounds=(0,0)-(2048,2048)
time=2026-04-04T22:23:26.618+08:00 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=88.4649ms size="[768 768]"
time=2026-04-04T22:23:26.619+08:00 level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-04T22:23:26.619+08:00 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-04T22:23:26.619+08:00 level=INFO source=model.go:156 msg="vision: encoded" elapsed=91.2747ms shape="[2560 256]"
time=2026-04-04T22:23:26.638+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=764 splits=242
time=2026-04-04T22:23:26.697+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1979 splits=7
time=2026-04-04T22:23:26.702+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1977 splits=5
time=2026-04-04T22:23:26.702+08:00 level=DEBUG source=device.go:240 msg="model weights" device=Vulkan0 size="2.8 GiB"
time=2026-04-04T22:23:26.702+08:00 level=DEBUG source=device.go:245 msg="model weights" device=CPU size="6.6 GiB"
time=2026-04-04T22:23:26.702+08:00 level=DEBUG source=device.go:251 msg="kv cache" device=Vulkan0 size="224.0 MiB"
time=2026-04-04T22:23:26.702+08:00 level=DEBUG source=device.go:262 msg="compute graph" device=Vulkan0 size="289.1 MiB"
time=2026-04-04T22:23:26.702+08:00 level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="21.0 MiB"
time=2026-04-04T22:23:26.703+08:00 level=DEBUG source=device.go:272 msg="total memory" size="10.0 GiB"
time=2026-04-04T22:23:26.703+08:00 level=DEBUG source=server.go:784 msg=memory success=true required.InputWeights=615514112 required.CPU.Weights="[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6521478144]" required.CPU.Graph=22020096 required.Vulkan0.ID=8680a056-0800-0000-0300-000000000000 required.Vulkan0.Weights="[109889600 110042304 109889600 109889600 109889600 110843968 103131200 109889600 102793280 102793280 109889600 110168128 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 59159616 59497536 59159616 59159616 59497536 66534464 0]" required.Vulkan0.Cache="[8388608 8388608 8388608 8388608 8388608 16777216 8388608 8388608 8388608 8388608 8388608 16777216 8388608 8388608 8388608 8388608 8388608 16777216 8388608 8388608 8388608 8388608 8388608 16777216 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" required.Vulkan0.Graph=303169408
time=2026-04-04T22:23:26.703+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=8680a056-0800-0000-0300-000000000000 library=Vulkan "available layer vram"="7.7 GiB" backoff=0.40 minimum="457.0 MiB" overhead="0 B" graph="289.1 MiB"
time=2026-04-04T22:23:26.703+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="42[ID:8680a056-0800-0000-0300-000000000000 Layers:42(0..41)]"
time=2026-04-04T22:23:26.703+08:00 level=INFO source=runner.go:1290 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:42[ID:8680a056-0800-0000-0300-000000000000 Layers:42(0..41)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-04T22:23:26.703+08:00 level=INFO source=ggml.go:482 msg="offloading 42 repeating layers to GPU"
time=2026-04-04T22:23:26.703+08:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-04-04T22:23:26.703+08:00 level=INFO source=ggml.go:494 msg="offloaded 42/43 layers to GPU"
time=2026-04-04T22:23:26.703+08:00 level=INFO source=device.go:240 msg="model weights" device=Vulkan0 size="2.8 GiB"
time=2026-04-04T22:23:26.703+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="6.6 GiB"
time=2026-04-04T22:23:26.703+08:00 level=INFO source=device.go:251 msg="kv cache" device=Vulkan0 size="224.0 MiB"
time=2026-04-04T22:23:26.703+08:00 level=INFO source=device.go:262 msg="compute graph" device=Vulkan0 size="289.1 MiB"
time=2026-04-04T22:23:26.704+08:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="21.0 MiB"
time=2026-04-04T22:23:26.704+08:00 level=INFO source=device.go:272 msg="total memory" size="10.0 GiB"
time=2026-04-04T22:23:26.704+08:00 level=INFO source=sched.go:561 msg="loaded runners" count=1
time=2026-04-04T22:23:26.704+08:00 level=INFO source=server.go:1352 msg="waiting for llama runner to start responding"
time=2026-04-04T22:23:26.705+08:00 level=INFO source=server.go:1386 msg="waiting for server to become available" status="llm server loading model"
time=2026-04-04T22:23:26.706+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.00"
time=2026-04-04T22:23:26.957+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.20"
time=2026-04-04T22:23:27.208+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.37"
time=2026-04-04T22:23:27.458+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.47"
time=2026-04-04T22:23:27.709+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.55"
time=2026-04-04T22:23:27.960+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.62"
time=2026-04-04T22:23:28.211+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.69"
time=2026-04-04T22:23:28.461+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.77"
time=2026-04-04T22:23:28.711+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.84"
time=2026-04-04T22:23:28.963+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.91"
time=2026-04-04T22:23:29.213+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.98"
time=2026-04-04T22:23:29.287+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
time=2026-04-04T22:23:29.464+08:00 level=INFO source=server.go:1390 msg="llama runner started in 4.01 seconds"
time=2026-04-04T22:23:29.464+08:00 level=DEBUG source=sched.go:573 msg="finished setting up" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:8680a056-0800-0000-0300-000000000000 Library:Vulkan}]" runner.size="10.0 GiB" runner.vram="3.3 GiB" runner.parallel=1 runner.pid=24040 runner.model=e:\OllamaModels\blobs\sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=4096
time=2026-04-04T22:23:29.523+08:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=74 format=""
time=2026-04-04T22:23:29.584+08:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=0 prompt=16 used=0 remaining=16
[GIN] 2026/04/04 - 22:24:34 | 200 |          1m9s |       127.0.0.1 | POST     "/api/generate"
time=2026-04-04T22:24:34.391+08:00 level=DEBUG source=sched.go:581 msg="context for request finished"
time=2026-04-04T22:24:34.391+08:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:8680a056-0800-0000-0300-000000000000 Library:Vulkan}]" runner.size="10.0 GiB" runner.vram="3.3 GiB" runner.parallel=1 runner.pid=24040 runner.model=e:\OllamaModels\blobs\sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=4096 duration=5m0s
time=2026-04-04T22:24:34.391+08:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:8680a056-0800-0000-0300-000000000000 Library:Vulkan}]" runner.size="10.0 GiB" runner.vram="3.3 GiB" runner.parallel=1 runner.pid=24040 runner.model=e:\OllamaModels\blobs\sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=4096 refCount=0
<!-- gh-comment-id:4187199366 -->
@wqmeng commented on GitHub (Apr 4, 2026):

ggml_backend_vk_get_device_memory utilizing DXGI + PDH memory reporting free: 15111237632 total: 17045651456
ggml_vulkan: Device memory allocation of size 5637144576 failed.
ggml_vulkan: Requested buffer size exceeds device buffer size limit: **ErrorOutOfDeviceMemory**
alloc_tensor_range: failed to allocate Vulkan0 buffer of size 5637144576

```
ollama serve
time=2026-04-04T22:22:55.311+08:00 level=INFO source=routes.go:1744 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:DEBUG OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:e:\\OllamaModels OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[chrome-extension://* moz-extension://* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES:]"
time=2026-04-04T22:22:55.315+08:00 level=INFO source=routes.go:1746 msg="Ollama cloud disabled: false"
time=2026-04-04T22:22:55.317+08:00 level=INFO source=images.go:499 msg="total blobs: 24"
time=2026-04-04T22:22:55.317+08:00 level=INFO source=images.go:506 msg="total unused blobs removed: 0"
time=2026-04-04T22:22:55.318+08:00 level=INFO source=routes.go:1802 msg="Listening on [::]:11434 (version 0.20.0)"
time=2026-04-04T22:22:55.318+08:00 level=DEBUG source=sched.go:145 msg="starting llm scheduler"
time=2026-04-04T22:22:55.319+08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-04T22:22:55.332+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 7373"
time=2026-04-04T22:22:55.332+08:00 level=DEBUG source=server.go:433 msg=subprocess GGML_VK_DISABLE_F16=1 GGML_VK_DISABLE_FUSION=1 GGML_VK_DISABLE_INTEGER_DOT_PRODUCT=1 OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0 OLLAMA_MODELS=e:\OllamaModels OLLAMA_NUM_GPU=999 OLLAMA_ORIGINS=chrome-extension://*,moz-extension://* OLLAMA_VULKAN=1 PATH="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama;D:\\python;D:\\python\\Scripts;C:\\Program Files\\Eclipse Adoptium\\jdk-17.0.9.9-hotspot\\bin;F:\\Program Files\\Eclipse Adoptium\\jdk-21.0.5.11-hotspot\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl\\Win64;E:\\Program Files\\Eclipse Adoptium\\jre-21.0.5.11-hotspot\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;F:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;G:\\Program Files\\TortoiseGit\\bin;D:\\Program Files\\Git\\bin;D:\\python\\Scripts;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win32\\;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win64\\;F:\\nodejs\\;C:\\ProgramData\\chocolatey\\bin;e:\\FPC\\3.2.2\\bin\\i386-Win32;D:\\rad\\Boss;e:\\Program Files\\gs\\gs10.05.1\\bin;C:\\Users\\111\\AppData\\Local\\Muse Hub\\lib;E:\\gnu\\glo6612wb\\bin;E:\\gnu\\ctags58;D:\\rad\\formatter\\pasfmt-0.7.0-x86_64-pc-windows-msvc;C:\\Users\\111\\.opencode\\bin;D:\\Profiler;C:\\Users\\111\\.local\\bin;C:\\Users\\111\\.local\\bin;D:\\python;C:\\Users\\111\\AppData\\Local\\Programs\\Python\\Launcher\\;C:\\Users\\111\\AppData\\Local\\Microsoft\\WindowsApps;D:\\python\\Scripts;C:\\Users\\111\\AppData\\Roaming\\npm;E:\\Program Files\\Antigravity\\bin;E:\\Program Files (x86)\\cursor\\resources\\app\\bin;" OLLAMA_LIBRARY_PATH=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
time=2026-04-04T22:22:55.418+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=92.3367ms OLLAMA_LIBRARY_PATH="[C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12]" extra_envs=map[]
time=2026-04-04T22:22:55.419+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 7379"
time=2026-04-04T22:22:55.420+08:00 level=DEBUG source=server.go:433 msg=subprocess GGML_VK_DISABLE_F16=1 GGML_VK_DISABLE_FUSION=1 GGML_VK_DISABLE_INTEGER_DOT_PRODUCT=1 OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0 OLLAMA_MODELS=e:\OllamaModels OLLAMA_NUM_GPU=999 OLLAMA_ORIGINS=chrome-extension://*,moz-extension://* OLLAMA_VULKAN=1 PATH="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama;D:\\python;D:\\python\\Scripts;C:\\Program Files\\Eclipse Adoptium\\jdk-17.0.9.9-hotspot\\bin;F:\\Program Files\\Eclipse Adoptium\\jdk-21.0.5.11-hotspot\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl\\Win64;E:\\Program Files\\Eclipse Adoptium\\jre-21.0.5.11-hotspot\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;F:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;G:\\Program Files\\TortoiseGit\\bin;D:\\Program Files\\Git\\bin;D:\\python\\Scripts;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win32\\;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win64\\;F:\\nodejs\\;C:\\ProgramData\\chocolatey\\bin;e:\\FPC\\3.2.2\\bin\\i386-Win32;D:\\rad\\Boss;e:\\Program Files\\gs\\gs10.05.1\\bin;C:\\Users\\111\\AppData\\Local\\Muse Hub\\lib;E:\\gnu\\glo6612wb\\bin;E:\\gnu\\ctags58;D:\\rad\\formatter\\pasfmt-0.7.0-x86_64-pc-windows-msvc;C:\\Users\\111\\.opencode\\bin;D:\\Profiler;C:\\Users\\111\\.local\\bin;C:\\Users\\111\\.local\\bin;D:\\python;C:\\Users\\111\\AppData\\Local\\Programs\\Python\\Launcher\\;C:\\Users\\111\\AppData\\Local\\Microsoft\\WindowsApps;D:\\python\\Scripts;C:\\Users\\111\\AppData\\Roaming\\npm;E:\\Program Files\\Antigravity\\bin;E:\\Program Files (x86)\\cursor\\resources\\app\\bin;" OLLAMA_LIBRARY_PATH=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13
time=2026-04-04T22:22:55.561+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=142.4888ms OLLAMA_LIBRARY_PATH="[C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13]" extra_envs=map[]
time=2026-04-04T22:22:55.562+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 7385"
time=2026-04-04T22:22:55.562+08:00 level=DEBUG source=server.go:433 msg=subprocess GGML_VK_DISABLE_F16=1 GGML_VK_DISABLE_FUSION=1 GGML_VK_DISABLE_INTEGER_DOT_PRODUCT=1 OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0 OLLAMA_MODELS=e:\OllamaModels OLLAMA_NUM_GPU=999 OLLAMA_ORIGINS=chrome-extension://*,moz-extension://* OLLAMA_VULKAN=1 PATH="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\rocm;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama;D:\\python;D:\\python\\Scripts;C:\\Program Files\\Eclipse Adoptium\\jdk-17.0.9.9-hotspot\\bin;F:\\Program Files\\Eclipse Adoptium\\jdk-21.0.5.11-hotspot\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl\\Win64;E:\\Program Files\\Eclipse Adoptium\\jre-21.0.5.11-hotspot\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;F:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;G:\\Program Files\\TortoiseGit\\bin;D:\\Program Files\\Git\\bin;D:\\python\\Scripts;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win32\\;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win64\\;F:\\nodejs\\;C:\\ProgramData\\chocolatey\\bin;e:\\FPC\\3.2.2\\bin\\i386-Win32;D:\\rad\\Boss;e:\\Program Files\\gs\\gs10.05.1\\bin;C:\\Users\\111\\AppData\\Local\\Muse Hub\\lib;E:\\gnu\\glo6612wb\\bin;E:\\gnu\\ctags58;D:\\rad\\formatter\\pasfmt-0.7.0-x86_64-pc-windows-msvc;C:\\Users\\111\\.opencode\\bin;D:\\Profiler;C:\\Users\\111\\.local\\bin;C:\\Users\\111\\.local\\bin;D:\\python;C:\\Users\\111\\AppData\\Local\\Programs\\Python\\Launcher\\;C:\\Users\\111\\AppData\\Local\\Microsoft\\WindowsApps;D:\\python\\Scripts;C:\\Users\\111\\AppData\\Roaming\\npm;E:\\Program Files\\Antigravity\\bin;E:\\Program Files (x86)\\cursor\\resources\\app\\bin;" OLLAMA_LIBRARY_PATH=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\rocm
time=2026-04-04T22:22:55.657+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=95.731ms OLLAMA_LIBRARY_PATH="[C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\rocm]" extra_envs=map[]
time=2026-04-04T22:22:55.658+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 7391"
time=2026-04-04T22:22:55.658+08:00 level=DEBUG source=server.go:433 msg=subprocess GGML_VK_DISABLE_F16=1 GGML_VK_DISABLE_FUSION=1 GGML_VK_DISABLE_INTEGER_DOT_PRODUCT=1 OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0 OLLAMA_MODELS=e:\OllamaModels OLLAMA_NUM_GPU=999 OLLAMA_ORIGINS=chrome-extension://*,moz-extension://* OLLAMA_VULKAN=1 PATH="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\vulkan;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama;D:\\python;D:\\python\\Scripts;C:\\Program Files\\Eclipse Adoptium\\jdk-17.0.9.9-hotspot\\bin;F:\\Program Files\\Eclipse Adoptium\\jdk-21.0.5.11-hotspot\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl\\Win64;E:\\Program Files\\Eclipse Adoptium\\jre-21.0.5.11-hotspot\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;F:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;G:\\Program Files\\TortoiseGit\\bin;D:\\Program Files\\Git\\bin;D:\\python\\Scripts;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win32\\;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win64\\;F:\\nodejs\\;C:\\ProgramData\\chocolatey\\bin;e:\\FPC\\3.2.2\\bin\\i386-Win32;D:\\rad\\Boss;e:\\Program Files\\gs\\gs10.05.1\\bin;C:\\Users\\111\\AppData\\Local\\Muse Hub\\lib;E:\\gnu\\glo6612wb\\bin;E:\\gnu\\ctags58;D:\\rad\\formatter\\pasfmt-0.7.0-x86_64-pc-windows-msvc;C:\\Users\\111\\.opencode\\bin;D:\\Profiler;C:\\Users\\111\\.local\\bin;C:\\Users\\111\\.local\\bin;D:\\python;C:\\Users\\111\\AppData\\Local\\Programs\\Python\\Launcher\\;C:\\Users\\111\\AppData\\Local\\Microsoft\\WindowsApps;D:\\python\\Scripts;C:\\Users\\111\\AppData\\Roaming\\npm;E:\\Program Files\\Antigravity\\bin;E:\\Program Files (x86)\\cursor\\resources\\app\\bin;" OLLAMA_LIBRARY_PATH=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\vulkan
time=2026-04-04T22:22:56.063+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=405.7985ms OLLAMA_LIBRARY_PATH="[C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\vulkan]" extra_envs=map[]
time=2026-04-04T22:22:56.063+08:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=1
time=2026-04-04T22:22:56.063+08:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=745.0763ms
time=2026-04-04T22:22:56.063+08:00 level=INFO source=types.go:42 msg="inference compute" id=8680a056-0800-0000-0300-000000000000 filter_id="" library=Vulkan compute=0.0 name=Vulkan0 description="Intel(R) Arc(TM) A770 Graphics" libdirs=ollama,vulkan driver=0.0 pci_id="" type=discrete total="15.9 GiB" available="14.2 GiB"
time=2026-04-04T22:22:56.063+08:00 level=INFO source=routes.go:1852 msg="vram-based default context" total_vram="15.9 GiB" default_num_ctx=4096
[GIN] 2026/04/04 - 22:23:24 | 200 | 0s | 127.0.0.1 | HEAD "/"
time=2026-04-04T22:23:24.705+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
[GIN] 2026/04/04 - 22:23:24 | 200 | 191.4033ms | 127.0.0.1 | POST "/api/show"
time=2026-04-04T22:23:24.922+08:00 level=DEBUG source=runner.go:264 msg="refreshing free memory"
time=2026-04-04T22:23:24.922+08:00 level=DEBUG source=runner.go:328 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery"
time=2026-04-04T22:23:24.925+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 7435"
time=2026-04-04T22:23:24.925+08:00 level=DEBUG source=server.go:433 msg=subprocess GGML_VK_DISABLE_F16=1 GGML_VK_DISABLE_FUSION=1 GGML_VK_DISABLE_INTEGER_DOT_PRODUCT=1 OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0 OLLAMA_MODELS=e:\OllamaModels OLLAMA_NUM_GPU=999 OLLAMA_ORIGINS=chrome-extension://*,moz-extension://* OLLAMA_VULKAN=1
PATH="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\vulkan;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama;D:\\python;D:\\python\\Scripts;C:\\Program Files\\Eclipse Adoptium\\jdk-17.0.9.9-hotspot\\bin;F:\\Program Files\\Eclipse Adoptium\\jdk-21.0.5.11-hotspot\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl\\Win64;E:\\Program Files\\Eclipse Adoptium\\jre-21.0.5.11-hotspot\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;F:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;G:\\Program Files\\TortoiseGit\\bin;D:\\Program Files\\Git\\bin;D:\\python\\Scripts;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win32\\;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win64\\;F:\\nodejs\\;C:\\ProgramData\\chocolatey\\bin;e:\\FPC\\3.2.2\\bin\\i386-Win32;D:\\rad\\Boss;e:\\Program Files\\gs\\gs10.05.1\\bin;C:\\Users\\111\\AppData\\Local\\Muse Hub\\lib;E:\\gnu\\glo6612wb\\bin;E:\\gnu\\ctags58;D:\\rad\\formatter\\pasfmt-0.7.0-x86_64-pc-windows-msvc;C:\\Users\\111\\.opencode\\bin;D:\\Profiler;C:\\Users\\111\\.local\\bin;C:\\Users\\111\\.local\\bin;D:\\python;C:\\Users\\111\\AppData\\Local\\Programs\\Python\\Launcher\\;C:\\Users\\111\\AppData\\Local\\Microsoft\\WindowsApps;D:\\python\\Scripts;C:\\Users\\111\\AppData\\Roaming\\npm;E:\\Program Files\\Antigravity\\bin;E:\\Program Files (x86)\\cursor\\resources\\app\\bin;" 
OLLAMA_LIBRARY_PATH=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\vulkan time=2026-04-04T22:23:25.321+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=399.1395ms OLLAMA_LIBRARY_PATH="[C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\vulkan]" extra_envs=map[] time=2026-04-04T22:23:25.321+08:00 level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=399.1395ms time=2026-04-04T22:23:25.322+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1 time=2026-04-04T22:23:25.322+08:00 level=INFO source=cpu_windows.go:164 msg="efficiency cores detected" maxEfficiencyClass=1 time=2026-04-04T22:23:25.322+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=10 efficiency=4 threads=16 time=2026-04-04T22:23:25.322+08:00 level=DEBUG source=sched.go:220 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1 time=2026-04-04T22:23:25.322+08:00 level=DEBUG source=sched.go:229 msg="loading first model" model=e:\OllamaModels\blobs\sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a time=2026-04-04T22:23:25.399+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 time=2026-04-04T22:23:25.441+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 time=2026-04-04T22:23:25.444+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0 time=2026-04-04T22:23:25.444+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106 time=2026-04-04T22:23:25.444+08:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883 time=2026-04-04T22:23:25.444+08:00 level=DEBUG source=ggml.go:325 msg="key with type not 
found" key=gemma4.attention.global_head_count_kv default=0 time=2026-04-04T22:23:25.444+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.head_count_kv default="&{size:0 values:[]}" time=2026-04-04T22:23:25.444+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_count default=0 time=2026-04-04T22:23:25.444+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_used_count default=0 time=2026-04-04T22:23:25.444+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.num_mel_bins default=128 time=2026-04-04T22:23:25.446+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model e:\\OllamaModels\\blobs\\sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a --port 7441" time=2026-04-04T22:23:25.446+08:00 level=DEBUG source=server.go:433 msg=subprocess GGML_VK_DISABLE_F16=1 GGML_VK_DISABLE_FUSION=1 GGML_VK_DISABLE_INTEGER_DOT_PRODUCT=1 OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0 OLLAMA_MODELS=e:\OllamaModels OLLAMA_NUM_GPU=999 OLLAMA_ORIGINS=chrome-extension://*,moz-extension://* OLLAMA_VULKAN=1 PATH="C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\vulkan;C:\\Users\\111\\AppData\\Local\\Programs\\Ollama;D:\\python;D:\\python\\Scripts;C:\\Program Files\\Eclipse Adoptium\\jdk-17.0.9.9-hotspot\\bin;F:\\Program Files\\Eclipse Adoptium\\jdk-21.0.5.11-hotspot\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;f:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl;C:\\Program Files (x86)\\Embarcadero\\Studio\\23.0\\bin64;C:\\Users\\Public\\Documents\\Embarcadero\\Studio\\23.0\\Bpl\\Win64;E:\\Program Files\\Eclipse 
Adoptium\\jre-21.0.5.11-hotspot\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;F:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;G:\\Program Files\\TortoiseGit\\bin;D:\\Program Files\\Git\\bin;D:\\python\\Scripts;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win32\\;C:\\Users\\Public\\Documents\\MitovSoftware\\Libraries\\Delphi\\12.0\\LabPacks\\OpenWire Studio\\Win64\\;F:\\nodejs\\;C:\\ProgramData\\chocolatey\\bin;e:\\FPC\\3.2.2\\bin\\i386-Win32;D:\\rad\\Boss;e:\\Program Files\\gs\\gs10.05.1\\bin;C:\\Users\\111\\AppData\\Local\\Muse Hub\\lib;E:\\gnu\\glo6612wb\\bin;E:\\gnu\\ctags58;D:\\rad\\formatter\\pasfmt-0.7.0-x86_64-pc-windows-msvc;C:\\Users\\111\\.opencode\\bin;D:\\Profiler;C:\\Users\\111\\.local\\bin;C:\\Users\\111\\.local\\bin;D:\\python;C:\\Users\\111\\AppData\\Local\\Programs\\Python\\Launcher\\;C:\\Users\\111\\AppData\\Local\\Microsoft\\WindowsApps;D:\\python\\Scripts;C:\\Users\\111\\AppData\\Roaming\\npm;E:\\Program Files\\Antigravity\\bin;E:\\Program Files (x86)\\cursor\\resources\\app\\bin;" OLLAMA_LIBRARY_PATH=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\vulkan time=2026-04-04T22:23:25.449+08:00 level=INFO source=sched.go:484 msg="system memory" total="47.8 GiB" free="23.3 GiB" free_swap="23.3 GiB" time=2026-04-04T22:23:25.449+08:00 level=INFO source=sched.go:491 msg="gpu memory" id=8680a056-0800-0000-0300-000000000000 library=Vulkan available="13.7 GiB" free="14.1 GiB" minimum="457.0 MiB" overhead="0 B" time=2026-04-04T22:23:25.449+08:00 level=INFO source=server.go:759 msg="loading model" "model layers"=43 requested=-1 time=2026-04-04T22:23:25.483+08:00 level=INFO source=runner.go:1417 msg="starting ollama engine" time=2026-04-04T22:23:25.488+08:00 level=INFO source=runner.go:1452 
msg="Server listening on 127.0.0.1:7441" time=2026-04-04T22:23:25.493+08:00 level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:43[ID:8680a056-0800-0000-0300-000000000000 Layers:43(0..42)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-04T22:23:25.547+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 time=2026-04-04T22:23:25.549+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.name default="" time=2026-04-04T22:23:25.549+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.description default="" time=2026-04-04T22:23:25.549+08:00 level=INFO source=ggml.go:136 msg="" architecture=gemma4 file_type=Q4_K_M name="" description="" num_tensors=2131 num_key_values=55 time=2026-04-04T22:23:25.549+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama load_backend: loaded CPU backend from C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll time=2026-04-04T22:23:25.563+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\vulkan ggml_vulkan: Found 1 Vulkan devices: ggml_vulkan: 0 = Intel(R) Arc(TM) A770 Graphics (Intel Corporation) | uma: 0 | fp16: 0 | bf16: 0 | warp size: 32 | shared memory: 49152 | int dot: 0 | matrix cores: none load_backend: loaded Vulkan backend from C:\Users\111\AppData\Local\Programs\Ollama\lib\ollama\vulkan\ggml-vulkan.dll time=2026-04-04T22:23:25.610+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang) ggml_backend_vk_get_device_memory called: uuid 
8680a056-0800-0000-0300-000000000000 ggml_backend_vk_get_device_memory called: luid 0x000000000000c010 ggml_dxgi_pdh_init called DXGI + PDH Initialized. Getting GPU free memory info [DXGI] Adapter Description: Intel(R) Arc(TM) A770 Graphics, LUID: 0x000000000000C010, Dedicated: 15.88 GB, Shared: 23.92 GB [DXGI] Adapter Description: Microsoft Basic Render Driver, LUID: 0x000000000000C41B, Dedicated: 0.00 GB, Shared: 23.92 GB Discrete GPU (Intel(R) Arc(TM) A770 Graphics) with LUID 0x000000000000c010 detected. Dedicated Total: 17045651456.00 bytes (15.88 GB), Dedicated Usage: 1915502592.00 bytes (1.78 GB) ggml_backend_vk_get_device_memory utilizing DXGI + PDH memory reporting free: 15130148864 total: 17045651456 time=2026-04-04T22:23:25.792+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0 time=2026-04-04T22:23:25.792+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106 time=2026-04-04T22:23:25.793+08:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883 time=2026-04-04T22:23:25.793+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0 time=2026-04-04T22:23:25.793+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.head_count_kv default="&{size:0 values:[]}" time=2026-04-04T22:23:25.793+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_count default=0 time=2026-04-04T22:23:25.793+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_used_count default=0 time=2026-04-04T22:23:25.793+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.num_mel_bins default=128 time=2026-04-04T22:23:25.810+08:00 level=INFO source=model.go:138 msg="vision: decode" elapsed=1.5073ms bounds=(0,0)-(2048,2048) 
time=2026-04-04T22:23:25.906+08:00 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=95.2109ms size="[768 768]" time=2026-04-04T22:23:25.906+08:00 level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3 time=2026-04-04T22:23:25.906+08:00 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16 time=2026-04-04T22:23:25.907+08:00 level=INFO source=model.go:156 msg="vision: encoded" elapsed=97.7248ms shape="[2560 256]" time=2026-04-04T22:23:25.908+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=764 splits=1 time=2026-04-04T22:23:25.926+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1979 splits=2 time=2026-04-04T22:23:25.928+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1977 splits=2 time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=device.go:240 msg="model weights" device=Vulkan0 size="8.9 GiB" time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=device.go:245 msg="model weights" device=CPU size="587.0 MiB" time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=device.go:251 msg="kv cache" device=Vulkan0 size="224.0 MiB" time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=device.go:262 msg="compute graph" device=Vulkan0 size="309.8 MiB" time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.0 MiB" time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=device.go:272 msg="total memory" size="10.0 GiB" time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=server.go:784 msg=memory success=true required.InputWeights=615514112 required.CPU.Graph=5242880 required.Vulkan0.ID=8680a056-0800-0000-0300-000000000000 required.Vulkan0.Weights="[109889600 110042304 109889600 109889600 109889600 110843968 103131200 109889600 102793280 102793280 109889600 110168128 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 52401216 
59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 59159616 59497536 59159616 59159616 59497536 66534464 6521492480]" required.Vulkan0.Cache="[8388608 8388608 8388608 8388608 8388608 16777216 8388608 8388608 8388608 8388608 8388608 16777216 8388608 8388608 8388608 8388608 8388608 16777216 8388608 8388608 8388608 8388608 8388608 16777216 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" required.Vulkan0.Graph=324833280 time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=8680a056-0800-0000-0300-000000000000 library=Vulkan "available layer vram"="13.4 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="309.8 MiB" time=2026-04-04T22:23:25.930+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="43[ID:8680a056-0800-0000-0300-000000000000 Layers:43(0..42)]" time=2026-04-04T22:23:25.931+08:00 level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:43[ID:8680a056-0800-0000-0300-000000000000 Layers:43(0..42)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-04T22:23:25.972+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 ggml_backend_vk_get_device_memory called: uuid 8680a056-0800-0000-0300-000000000000 ggml_backend_vk_get_device_memory called: luid 0x000000000000c010 ggml_dxgi_pdh_init called DXGI + PDH Initialized. Getting GPU free memory info [DXGI] Adapter Description: Intel(R) Arc(TM) A770 Graphics, LUID: 0x000000000000C010, Dedicated: 15.88 GB, Shared: 23.92 GB [DXGI] Adapter Description: Microsoft Basic Render Driver, LUID: 0x000000000000C41B, Dedicated: 0.00 GB, Shared: 23.92 GB Discrete GPU (Intel(R) Arc(TM) A770 Graphics) with LUID 0x000000000000c010 detected. 
Dedicated Total: 17045651456.00 bytes (15.88 GB), Dedicated Usage: 1934413824.00 bytes (1.80 GB) ggml_backend_vk_get_device_memory utilizing DXGI + PDH memory reporting free: 15111237632 total: 17045651456 ggml_vulkan: Device memory allocation of size 5637144576 failed. ggml_vulkan: Requested buffer size exceeds device buffer size limit: ErrorOutOfDeviceMemory alloc_tensor_range: failed to allocate Vulkan0 buffer of size 5637144576 time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=device.go:240 msg="model weights" device=Vulkan0 size="8.9 GiB" time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=device.go:245 msg="model weights" device=CPU size="587.0 MiB" time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=device.go:272 msg="total memory" size="9.4 GiB" time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=server.go:784 msg=memory success=false required.InputWeights=615514112 required.Vulkan0.ID=8680a056-0800-0000-0300-000000000000 required.Vulkan0.Weights="[109889600 110042304 109889600 109889600 109889600 110843968 103131200 109889600 102793280 102793280 109889600 110168128 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 59159616 59497536 59159616 59159616 59497536 66534464 6521492480]" time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=8680a056-0800-0000-0300-000000000000 library=Vulkan "available layer vram"="13.7 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="0 B" time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="43[ID:8680a056-0800-0000-0300-000000000000 Layers:43(0..42)]" time=2026-04-04T22:23:26.239+08:00 level=INFO source=server.go:881 msg="model layout did not fit, applying backoff" backoff=0.10 time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=server.go:978 msg="available 
gpu" id=8680a056-0800-0000-0300-000000000000 library=Vulkan "available layer vram"="12.2 GiB" backoff=0.10 minimum="457.0 MiB" overhead="0 B" graph="0 B" time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="43[ID:8680a056-0800-0000-0300-000000000000 Layers:43(0..42)]" time=2026-04-04T22:23:26.239+08:00 level=INFO source=server.go:881 msg="model layout did not fit, applying backoff" backoff=0.20 time=2026-04-04T22:23:26.239+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=8680a056-0800-0000-0300-000000000000 library=Vulkan "available layer vram"="10.8 GiB" backoff=0.20 minimum="457.0 MiB" overhead="0 B" graph="0 B" time=2026-04-04T22:23:26.240+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="43[ID:8680a056-0800-0000-0300-000000000000 Layers:43(0..42)]" time=2026-04-04T22:23:26.240+08:00 level=INFO source=server.go:881 msg="model layout did not fit, applying backoff" backoff=0.30 time=2026-04-04T22:23:26.240+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=8680a056-0800-0000-0300-000000000000 library=Vulkan "available layer vram"="9.4 GiB" backoff=0.30 minimum="457.0 MiB" overhead="0 B" graph="0 B" time=2026-04-04T22:23:26.240+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="43[ID:8680a056-0800-0000-0300-000000000000 Layers:43(0..42)]" time=2026-04-04T22:23:26.240+08:00 level=INFO source=server.go:881 msg="model layout did not fit, applying backoff" backoff=0.40 time=2026-04-04T22:23:26.240+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=8680a056-0800-0000-0300-000000000000 library=Vulkan "available layer vram"="8.0 GiB" backoff=0.40 minimum="457.0 MiB" overhead="0 B" graph="0 B" time=2026-04-04T22:23:26.240+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="42[ID:8680a056-0800-0000-0300-000000000000 Layers:42(0..41)]" time=2026-04-04T22:23:26.240+08:00 level=INFO source=runner.go:1290 msg=load 
request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:42[ID:8680a056-0800-0000-0300-000000000000 Layers:42(0..41)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-04T22:23:26.281+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 ggml_backend_vk_get_device_memory called: uuid 8680a056-0800-0000-0300-000000000000 ggml_backend_vk_get_device_memory called: luid 0x000000000000c010 ggml_dxgi_pdh_init called DXGI + PDH Initialized. Getting GPU free memory info [DXGI] Adapter Description: Intel(R) Arc(TM) A770 Graphics, LUID: 0x000000000000C010, Dedicated: 15.88 GB, Shared: 23.92 GB [DXGI] Adapter Description: Microsoft Basic Render Driver, LUID: 0x000000000000C41B, Dedicated: 0.00 GB, Shared: 23.92 GB Discrete GPU (Intel(R) Arc(TM) A770 Graphics) with LUID 0x000000000000c010 detected. Dedicated Total: 17045651456.00 bytes (15.88 GB), Dedicated Usage: 1948073984.00 bytes (1.81 GB) ggml_backend_vk_get_device_memory utilizing DXGI + PDH memory reporting free: 15097577472 total: 17045651456 time=2026-04-04T22:23:26.515+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0 time=2026-04-04T22:23:26.515+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106 time=2026-04-04T22:23:26.515+08:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883 time=2026-04-04T22:23:26.515+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0 time=2026-04-04T22:23:26.515+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.head_count_kv default="&{size:0 values:[]}" time=2026-04-04T22:23:26.515+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" 
key=gemma4.expert_count default=0 time=2026-04-04T22:23:26.515+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_used_count default=0 time=2026-04-04T22:23:26.515+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.num_mel_bins default=128 time=2026-04-04T22:23:26.529+08:00 level=INFO source=model.go:138 msg="vision: decode" elapsed=1.0912ms bounds=(0,0)-(2048,2048) time=2026-04-04T22:23:26.618+08:00 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=88.4649ms size="[768 768]" time=2026-04-04T22:23:26.619+08:00 level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3 time=2026-04-04T22:23:26.619+08:00 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16 time=2026-04-04T22:23:26.619+08:00 level=INFO source=model.go:156 msg="vision: encoded" elapsed=91.2747ms shape="[2560 256]" time=2026-04-04T22:23:26.638+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=764 splits=242 time=2026-04-04T22:23:26.697+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1979 splits=7 time=2026-04-04T22:23:26.702+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1977 splits=5 time=2026-04-04T22:23:26.702+08:00 level=DEBUG source=device.go:240 msg="model weights" device=Vulkan0 size="2.8 GiB" time=2026-04-04T22:23:26.702+08:00 level=DEBUG source=device.go:245 msg="model weights" device=CPU size="6.6 GiB" time=2026-04-04T22:23:26.702+08:00 level=DEBUG source=device.go:251 msg="kv cache" device=Vulkan0 size="224.0 MiB" time=2026-04-04T22:23:26.702+08:00 level=DEBUG source=device.go:262 msg="compute graph" device=Vulkan0 size="289.1 MiB" time=2026-04-04T22:23:26.702+08:00 level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="21.0 MiB" time=2026-04-04T22:23:26.703+08:00 level=DEBUG source=device.go:272 msg="total memory" size="10.0 GiB" time=2026-04-04T22:23:26.703+08:00 
level=DEBUG source=server.go:784 msg=memory success=true required.InputWeights=615514112 required.CPU.Weights="[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6521478144]" required.CPU.Graph=22020096 required.Vulkan0.ID=8680a056-0800-0000-0300-000000000000 required.Vulkan0.Weights="[109889600 110042304 109889600 109889600 109889600 110843968 103131200 109889600 102793280 102793280 109889600 110168128 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 52401216 59497536 52401216 52401216 59497536 59776064 59159616 59497536 59159616 59159616 59497536 66534464 0]" required.Vulkan0.Cache="[8388608 8388608 8388608 8388608 8388608 16777216 8388608 8388608 8388608 8388608 8388608 16777216 8388608 8388608 8388608 8388608 8388608 16777216 8388608 8388608 8388608 8388608 8388608 16777216 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" required.Vulkan0.Graph=303169408 time=2026-04-04T22:23:26.703+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=8680a056-0800-0000-0300-000000000000 library=Vulkan "available layer vram"="7.7 GiB" backoff=0.40 minimum="457.0 MiB" overhead="0 B" graph="289.1 MiB" time=2026-04-04T22:23:26.703+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="42[ID:8680a056-0800-0000-0300-000000000000 Layers:42(0..41)]" time=2026-04-04T22:23:26.703+08:00 level=INFO source=runner.go:1290 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:42[ID:8680a056-0800-0000-0300-000000000000 Layers:42(0..41)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-04T22:23:26.703+08:00 level=INFO source=ggml.go:482 msg="offloading 42 repeating layers to GPU" time=2026-04-04T22:23:26.703+08:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU" time=2026-04-04T22:23:26.703+08:00 
level=INFO source=ggml.go:494 msg="offloaded 42/43 layers to GPU"
time=2026-04-04T22:23:26.703+08:00 level=INFO source=device.go:240 msg="model weights" device=Vulkan0 size="2.8 GiB"
time=2026-04-04T22:23:26.703+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="6.6 GiB"
time=2026-04-04T22:23:26.703+08:00 level=INFO source=device.go:251 msg="kv cache" device=Vulkan0 size="224.0 MiB"
time=2026-04-04T22:23:26.703+08:00 level=INFO source=device.go:262 msg="compute graph" device=Vulkan0 size="289.1 MiB"
time=2026-04-04T22:23:26.704+08:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="21.0 MiB"
time=2026-04-04T22:23:26.704+08:00 level=INFO source=device.go:272 msg="total memory" size="10.0 GiB"
time=2026-04-04T22:23:26.704+08:00 level=INFO source=sched.go:561 msg="loaded runners" count=1
time=2026-04-04T22:23:26.704+08:00 level=INFO source=server.go:1352 msg="waiting for llama runner to start responding"
time=2026-04-04T22:23:26.705+08:00 level=INFO source=server.go:1386 msg="waiting for server to become available" status="llm server loading model"
time=2026-04-04T22:23:26.706+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.00"
time=2026-04-04T22:23:26.957+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.20"
time=2026-04-04T22:23:27.208+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.37"
time=2026-04-04T22:23:27.458+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.47"
time=2026-04-04T22:23:27.709+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.55"
time=2026-04-04T22:23:27.960+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.62"
time=2026-04-04T22:23:28.211+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.69"
time=2026-04-04T22:23:28.461+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.77"
time=2026-04-04T22:23:28.711+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.84"
time=2026-04-04T22:23:28.963+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.91"
time=2026-04-04T22:23:29.213+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.98"
time=2026-04-04T22:23:29.287+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
time=2026-04-04T22:23:29.464+08:00 level=INFO source=server.go:1390 msg="llama runner started in 4.01 seconds"
time=2026-04-04T22:23:29.464+08:00 level=DEBUG source=sched.go:573 msg="finished setting up" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:8680a056-0800-0000-0300-000000000000 Library:Vulkan}]" runner.size="10.0 GiB" runner.vram="3.3 GiB" runner.parallel=1 runner.pid=24040 runner.model=e:\OllamaModels\blobs\sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=4096
time=2026-04-04T22:23:29.523+08:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=74 format=""
time=2026-04-04T22:23:29.584+08:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=0 prompt=16 used=0 remaining=16
[GIN] 2026/04/04 - 22:24:34 | 200 | 1m9s | 127.0.0.1 | POST "/api/generate"
time=2026-04-04T22:24:34.391+08:00 level=DEBUG source=sched.go:581 msg="context for request finished"
time=2026-04-04T22:24:34.391+08:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:8680a056-0800-0000-0300-000000000000 Library:Vulkan}]" runner.size="10.0 GiB" runner.vram="3.3 GiB" runner.parallel=1 runner.pid=24040 runner.model=e:\OllamaModels\blobs\sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=4096 duration=5m0s
time=2026-04-04T22:24:34.391+08:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:8680a056-0800-0000-0300-000000000000 Library:Vulkan}]" runner.size="10.0 GiB" runner.vram="3.3 GiB" runner.parallel=1 runner.pid=24040 runner.model=e:\OllamaModels\blobs\sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=4096 refCount=0
```

@hidp123 commented on GitHub (Apr 4, 2026):

Same gibberish issue on an AMD Radeon 860M as well. It works when Vulkan is turned off, or when Gemma4 is forced to run CPU-only via a Modelfile.


@asmer commented on GitHub (Apr 4, 2026):

Same issue with an AMD 780M as well, on Linux. Works fine when Vulkan is turned off, or with gemma3/llama3/qwen3.


@N3RDIUM commented on GitHub (Apr 5, 2026):

I'm facing the same problem. CPU works fine, but gemma4:e4b outputs gibberish on vulkan. Running on an RX580 2048SP 8GB. NixOS, ollama 0.20.2.


@yaizawa2e commented on GitHub (Apr 5, 2026):

I am also experiencing the same phenomenon, on Ollama 0.20.2, in the following two environments:

| CPU | GPU | Memory | OS |
| :--- | :--- | :--- | :--- |
| AMD Ryzen 5 3500U | Radeon Vega 8 Graphics | 16GB | Ubuntu 24.04.4 LTS |
| AMD Ryzen 5 PRO 8640HS | Radeon 760M | 16GB | Windows 11 Home 25H2 |

@rjmalagon commented on GitHub (Apr 5, 2026):

I can confirm this on my AMD Ryzen 2200G/Radeon Vega 8, 64GB shared RAM, Fedora 43, containerized Ollama 0.20.2 with Vulkan runtime.

CPU runtime (same CPU) and ROCm runtime (on an AMD Ryzen 7 7735HS/Radeon 680m) work fine.


@TheEarthCMS commented on GitHub (Apr 5, 2026):

I have the same issue with gemma4:e2b and gemma4:e4b on Ollama 0.20.0 and 0.20.2 (Windows 11, AMD Ryzen, Radeon Vega GPU, OLLAMA_VULKAN=1).
All other models work as expected on my GPU.


@hidp123 commented on GitHub (Apr 6, 2026):

Workaround:

1. Keep Vulkan enabled via environment variables, so the other models still benefit from the GPU: OLLAMA_VULKAN=1
2. Disable GPU offloading only for Gemma4:
   a) Open a text file.
   b) Paste the following (edit the model name accordingly):

```
FROM gemma4:e4b
PARAMETER num_gpu 0
```

   c) Save the file as a .Modelfile, e.g. Gemma4_CPU.Modelfile.
   d) Open a terminal and run the following command (edit the path to your Modelfile):

```
ollama create gemma4:gemma4_CPU -f C:\Users\...\Desktop\Gemma4_CPU.Modelfile
```

The CPU-only variant should now show up as a new model to select in Ollama.
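On Linux/macOS, the steps above can be sketched as a short shell session (the file name is an example, and the final `ollama create` step naturally requires Ollama to be installed):

```shell
# Write a Modelfile that disables GPU layer offloading for gemma4
# (num_gpu 0 forces CPU-only inference for this model variant).
cat > Gemma4_CPU.Modelfile <<'EOF'
FROM gemma4:e4b
PARAMETER num_gpu 0
EOF

# Register the CPU-only variant with Ollama
# (commented out so the snippet runs without Ollama installed):
# ollama create gemma4:gemma4_CPU -f Gemma4_CPU.Modelfile
```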


@yaizawa2e commented on GitHub (Apr 6, 2026):

I have confirmed that the same issue occurs in the following environment.

When I created a Modelfile and configured it for CPU processing, it worked as expected.

| CPU | GPU | Memory | OS |
|-----|-----|--------|-----|
| 13th Gen Intel Core i5-1340P | Intel Iris Xe Graphics | 16GB | Windows 11 Enterprise 23H2 |

@yaizawa2e commented on GitHub (Apr 9, 2026):

#15261


@wqmeng commented on GitHub (Apr 9, 2026):

As seen, both logs show the same problem:

```
ggml_backend_vk_get_device_memory utilizing DXGI + PDH memory reporting free: 15111237632 total: 17045651456
ggml_vulkan: Device memory allocation of size 5637144576 failed.
ggml_vulkan: Requested buffer size exceeds device buffer size limit: ErrorOutOfDeviceMemory
alloc_tensor_range: failed to allocate Vulkan0 buffer of size 5637144576
```
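For scale, converting the byte counts in that log to GiB (a quick arithmetic aside, not from the thread) shows the failed allocation is well under the reported free memory, which matches the log's own message that a per-buffer size limit was hit rather than total memory being exhausted:

```python
# Byte counts taken from the log above, converted to GiB
GIB = 1024 ** 3

requested = 5_637_144_576   # Vulkan buffer allocation that failed
free_mem = 15_111_237_632   # free memory reported via DXGI + PDH
total_mem = 17_045_651_456  # total memory reported

print(f"requested buffer: {requested / GIB:.2f} GiB")  # 5.25 GiB
print(f"reported free:    {free_mem / GIB:.2f} GiB")   # 14.07 GiB
print(f"reported total:   {total_mem / GIB:.2f} GiB")  # 15.88 GiB
```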


@PureBlissAK commented on GitHub (Apr 18, 2026):

🤖 Automated Triage & Analysis Report

Issue: #15328
Analyzed: 2026-04-18T18:22:35.193265

Analysis

  • Type: unknown
  • Severity: medium
  • Components: unknown

Implementation Plan

  • Effort: medium
  • Steps:

This issue has been triaged and marked for implementation.

Reference: github-starred/ollama#35564