[GH-ISSUE #12672] All ggml libraries fail to load on Windows Enterprise with "The specified procedure could not be found." #34164

Closed
opened 2026-04-22 17:29:56 -05:00 by GiteaMirror · 15 comments

Originally created by @avesed on GitHub (Oct 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12672

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Ollama detects the Vulkan SDK, and vulkaninfo shows my GPU (an MI50 running the AMD Radeon Pro VII driver), but Ollama is not using the GPU.

Relevant log output

Ollama:
time=2025-10-17T00:37:59.793-04:00 level=INFO source=routes.go:1511 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY:localhost,127.0.0.1 OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\WinServer\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-10-17T00:37:59.794-04:00 level=INFO source=images.go:522 msg="total blobs: 0"
time=2025-10-17T00:37:59.794-04:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func3 (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func4 (5 handlers)
[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/me                   --> github.com/ollama/ollama/server.(*Server).WhoamiHandler-fm (5 handlers)
[GIN-debug] POST   /api/signout              --> github.com/ollama/ollama/server.(*Server).SignoutHandler-fm (5 handlers)
[GIN-debug] DELETE /api/user/keys/:encodedKey --> github.com/ollama/ollama/server.(*Server).SignoutHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
time=2025-10-17T00:37:59.796-04:00 level=INFO source=routes.go:1564 msg="Listening on [::]:11434 (version 0.0.0)"
time=2025-10-17T00:37:59.796-04:00 level=DEBUG source=sched.go:123 msg="starting llm scheduler"
time=2025-10-17T00:37:59.797-04:00 level=INFO source=runner.go:80 msg="discovering available GPUs..."
time=2025-10-17T00:37:59.797-04:00 level=DEBUG source=runner.go:448 msg="spawning runner with" OLLAMA_LIBRARY_PATH=[C:\ai\ollama-0.12.6\build\lib\ollama] extra_envs=[]
time=2025-10-17T00:37:59.804-04:00 level=TRACE source=runner.go:529 msg="starting runner for device discovery" env="[=::=::\\ =C:=C:\\ai\\ollama-0.12.6 ALLUSERSPROFILE=C:\\ProgramData APPDATA=C:\\Users\\WinServer\\AppData\\Roaming CLIENTNAME=Trevors-MacBook CommonProgramFiles=C:\\Program Files\\Common Files CommonProgramFiles(x86)=C:\\Program Files (x86)\\Common Files CommonProgramW6432=C:\\Program Files\\Common Files COMPUTERNAME=HOMESERVER ComSpec=C:\\Windows\\system32\\cmd.exe DriverData=C:\\Windows\\System32\\Drivers\\DriverData EFC_8836_1592913036=1 GOPATH=C:\\Users\\WinServer\\go HOMEDRIVE=C: HOMEPATH=\\Users\\WinServer LEVEL_ZERO_V1_SDK_PATH=C:\\Program Files\\LevelZeroSDK\\1.24.1\\ LOCALAPPDATA=C:\\Users\\WinServer\\AppData\\Local LOGONSERVER=\\\\HOMESERVER no_proxy=localhost,127.0.0.1 NUMBER_OF_PROCESSORS=16 OLLAMA_DEBUG=2 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=300m OLLAMA_NUM_GPU=999 OLLAMA_NUM_PARALLEL=2 OS=Windows_NT PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC PROCESSOR_ARCHITECTURE=AMD64 PROCESSOR_IDENTIFIER=AMD64 Family 25 Model 33 Stepping 0, AuthenticAMD PROCESSOR_LEVEL=25 PROCESSOR_REVISION=2100 ProgramData=C:\\ProgramData ProgramFiles=C:\\Program Files ProgramFiles(x86)=C:\\Program Files (x86) ProgramW6432=C:\\Program Files PROMPT=$P$G PSModulePath=C:\\Program Files\\WindowsPowerShell\\Modules;C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\Modules PUBLIC=C:\\Users\\Public SESSIONNAME=RDP-Tcp#0 SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1 SystemDrive=C: SystemRoot=C:\\Windows TEMP=C:\\Users\\WINSER~1\\AppData\\Local\\Temp TMP=C:\\Users\\WINSER~1\\AppData\\Local\\Temp USERDOMAIN=HOMESERVER USERDOMAIN_ROAMINGPROFILE=HOMESERVER USERNAME=WinServer USERPROFILE=C:\\Users\\WinServer VK_SDK_PATH=C:\\VulkanSDK\\1.4.328.1 VULKAN_SDK=C:\\VulkanSDK\\1.4.328.1 windir=C:\\Windows ZES_ENABLE_SYSMAN=1 PATH=C:\\ai\\ollama-0.12.6\\build\\lib\\ollama;C:\\ai\\ollama-0.12.6\\build\\lib\\ollama;C:\\Program Files\\Go\\bin;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\WinServer\\AppData\\Local\\Microsoft\\WindowsApps;;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Ollama;C:\\Users\\WinServer\\.lmstudio\\bin;C:\\Users\\WinServer\\go\\bin;C:\\msys64\\mingw64\\bin OLLAMA_LIBRARY_PATH=C:\\ai\\ollama-0.12.6\\build\\lib\\ollama]" cmd="C:\\Users\\WinServer\\AppData\\Local\\go-build\\07\\07e13647a0d532051c841bfc3eda1f2f4f918e15e50120e5b1e39044193214a0-d\\main.exe runner --ollama-engine --port 54420"
time=2025-10-17T00:37:59.831-04:00 level=INFO source=runner.go:1332 msg="starting ollama engine"
time=2025-10-17T00:37:59.832-04:00 level=INFO source=runner.go:1367 msg="Server listening on 127.0.0.1:54420"
time=2025-10-17T00:37:59.838-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-17T00:37:59.838-04:00 level=DEBUG source=gguf.go:578 msg=general.architecture type=string
time=2025-10-17T00:37:59.838-04:00 level=DEBUG source=gguf.go:578 msg=tokenizer.ggml.model type=string
time=2025-10-17T00:37:59.839-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-17T00:37:59.839-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-10-17T00:37:59.839-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
time=2025-10-17T00:37:59.839-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-10-17T00:37:59.839-04:00 level=INFO source=ggml.go:134 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-10-17T00:37:59.839-04:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\ai\ollama-0.12.6\build\lib\ollama
time=2025-10-17T00:37:59.851-04:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-10-17T00:37:59.854-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-10-17T00:37:59.854-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0
time=2025-10-17T00:37:59.854-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0
time=2025-10-17T00:37:59.854-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2025-10-17T00:37:59.854-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2025-10-17T00:37:59.854-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2025-10-17T00:37:59.854-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2025-10-17T00:37:59.854-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-10-17T00:37:59.854-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-10-17T00:37:59.854-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-10-17T00:37:59.854-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2025-10-17T00:37:59.854-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-10-17T00:37:59.854-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2025-10-17T00:37:59.854-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-10-17T00:37:59.855-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0
time=2025-10-17T00:37:59.856-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0
time=2025-10-17T00:37:59.856-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2025-10-17T00:37:59.856-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0
time=2025-10-17T00:37:59.856-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2025-10-17T00:37:59.856-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2025-10-17T00:37:59.856-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2025-10-17T00:37:59.856-04:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2025-10-17T00:37:59.856-04:00 level=DEBUG source=runner.go:1307 msg="dummy model load took" duration=18.3234ms
time=2025-10-17T00:37:59.856-04:00 level=DEBUG source=runner.go:1312 msg="gathering device infos took" duration=0s
time=2025-10-17T00:37:59.856-04:00 level=TRACE source=runner.go:548 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH=[C:\ai\ollama-0.12.6\build\lib\ollama] devices=[]
time=2025-10-17T00:37:59.857-04:00 level=DEBUG source=runner.go:451 msg="bootstrap discovery took" duration=60.6343ms OLLAMA_LIBRARY_PATH=[C:\ai\ollama-0.12.6\build\lib\ollama] extra_envs=[]
time=2025-10-17T00:37:59.857-04:00 level=DEBUG source=runner.go:118 msg="filtering out unsupported or overlapping GPU library combinations" count=0
time=2025-10-17T00:37:59.857-04:00 level=TRACE source=runner.go:171 msg="supported GPU library combinations" supported=map[]
time=2025-10-17T00:37:59.857-04:00 level=DEBUG source=runner.go:45 msg="GPU bootstrap discovery took" duration=61.1488ms
time=2025-10-17T00:37:59.857-04:00 level=INFO source=types.go:129 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="63.9 GiB" available="40.1 GiB"
time=2025-10-17T00:37:59.857-04:00 level=INFO source=routes.go:1605 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
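The telling lines in the log above are `msg="runner enumerated devices" ... devices=[]` followed by `msg="inference compute" id=cpu library=cpu`: the discovery runner started, but no ggml GPU backend (Vulkan included) registered a device, so Ollama fell back to CPU and entered low-VRAM mode. As a rough triage sketch (the field names are taken from the trace output above; the helper function itself is hypothetical, not part of Ollama), this condition can be detected mechanically from a log file:

```python
import re

def gpu_discovery_summary(log_text: str) -> dict:
    """Summarize GPU discovery from an ollama server log.

    Looks for the TRACE 'runner enumerated devices' line and the INFO
    'inference compute' lines, as they appear in the log excerpt above.
    """
    enumerated = re.search(
        r'msg="runner enumerated devices".*?devices=\[(.*?)\]', log_text)
    compute_libs = re.findall(
        r'msg="inference compute".*?library=(\S+)', log_text)
    devices = enumerated.group(1).strip() if enumerated else None
    return {
        "enumerated_devices": devices,      # "" means the runner found nothing
        "compute_libraries": compute_libs,  # e.g. ["cpu"] when no GPU loaded
        "gpu_used": any(lib != "cpu" for lib in compute_libs),
    }

sample = (
    'time=... level=TRACE source=runner.go:548 msg="runner enumerated devices" devices=[]\n'
    'time=... level=INFO source=types.go:129 msg="inference compute" id=cpu library=cpu\n'
)
print(gpu_discovery_summary(sample))
```

On the log posted here this reports an empty device list and a CPU-only compute library, which is consistent with the ggml backend DLLs never loading.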

vulkaninfo (excerpt):
==========
VULKANINFO
==========

Vulkan Instance Version: 1.4.328


Instance Extensions: count = 13
===============================
        VK_EXT_debug_report                    : extension revision 10
        VK_EXT_debug_utils                     : extension revision 2
        VK_EXT_swapchain_colorspace            : extension revision 4
        VK_KHR_device_group_creation           : extension revision 1
        VK_KHR_external_fence_capabilities     : extension revision 1
        VK_KHR_external_memory_capabilities    : extension revision 1
        VK_KHR_external_semaphore_capabilities : extension revision 1
        VK_KHR_get_physical_device_properties2 : extension revision 2
        VK_KHR_get_surface_capabilities2       : extension revision 1
        VK_KHR_portability_enumeration         : extension revision 1
        VK_KHR_surface                         : extension revision 25
        VK_KHR_win32_surface                   : extension revision 6
        VK_LUNARG_direct_driver_loading        : extension revision 1

Layers: count = 10
==================
VK_LAYER_AMD_switchable_graphics (AMD switchable graphics layer) Vulkan version 1.3.260, layer version 1:
        Layer Extensions: count = 0
        Devices: count = 1
                GPU id = 0 (AMD Radeon Pro VII)
                Layer-Device Extensions: count = 0

VK_LAYER_KHRONOS_profiles (Khronos Profiles layer) Vulkan version 1.4.328, layer version 1:
        Layer Extensions: count = 1
                VK_EXT_layer_settings : extension revision 2
        Devices: count = 1
                GPU id = 0 (AMD Radeon Pro VII)
                Layer-Device Extensions: count = 1
                        VK_EXT_tooling_info : extension revision 1

VK_LAYER_KHRONOS_shader_object (Khronos Shader object layer) Vulkan version 1.4.328, layer version 1:
        Layer Extensions: count = 1
                VK_EXT_layer_settings : extension revision 2
        Devices: count = 1
                GPU id = 0 (AMD Radeon Pro VII)
                Layer-Device Extensions: count = 1
                        VK_EXT_shader_object : extension revision 1

VK_LAYER_KHRONOS_synchronization2 (Khronos Synchronization2 layer) Vulkan version 1.4.328, layer version 1:
        Layer Extensions: count = 1
                VK_EXT_layer_settings : extension revision 2
        Devices: count = 1
                GPU id = 0 (AMD Radeon Pro VII)
                Layer-Device Extensions: count = 1
                        VK_KHR_synchronization2 : extension revision 1

VK_LAYER_KHRONOS_validation (Khronos Validation Layer) Vulkan version 1.4.328, layer version 1:
        Layer Extensions: count = 4
                VK_EXT_debug_report        : extension revision 9
                VK_EXT_debug_utils         : extension revision 1
                VK_EXT_layer_settings      : extension revision 2
                VK_EXT_validation_features : extension revision 2
        Devices: count = 1
                GPU id = 0 (AMD Radeon Pro VII)
                Layer-Device Extensions: count = 3
                        VK_EXT_debug_marker     : extension revision 4
                        VK_EXT_tooling_info     : extension revision 1
                        VK_EXT_validation_cache : extension revision 1

VK_LAYER_LUNARG_api_dump (LunarG API dump layer) Vulkan version 1.4.328, layer version 2:
        Layer Extensions: count = 1
                VK_EXT_layer_settings : extension revision 2
        Devices: count = 1
                GPU id = 0 (AMD Radeon Pro VII)
                Layer-Device Extensions: count = 1
                        VK_EXT_tooling_info : extension revision 1

VK_LAYER_LUNARG_crash_diagnostic (Crash Diagnostic Layer is a crash/hang debugging tool that helps determines GPU progress in a Vulkan application.) Vulkan version 1.4.328, layer version 1:
        Layer Extensions: count = 3
                VK_EXT_debug_report   : extension revision 10
                VK_EXT_debug_utils    : extension revision 1
                VK_EXT_layer_settings : extension revision 2
        Devices: count = 1
                GPU id = 0 (AMD Radeon Pro VII)
                Layer-Device Extensions: count = 2
                        VK_EXT_debug_report : extension revision 10
                        VK_EXT_tooling_info : extension revision 1

VK_LAYER_LUNARG_gfxreconstruct (GFXReconstruct Capture Layer Version 1.0.5) Vulkan version 1.4.328, layer version 4194309:
        Layer Extensions: count = 0
        Devices: count = 1
                GPU id = 0 (AMD Radeon Pro VII)
                Layer-Device Extensions: count = 1
                        VK_EXT_tooling_info : extension revision 1

VK_LAYER_LUNARG_monitor (Execution Monitoring Layer) Vulkan version 1.4.328, layer version 1:
        Layer Extensions: count = 1
                VK_EXT_layer_settings : extension revision 2
        Devices: count = 1
                GPU id = 0 (AMD Radeon Pro VII)
                Layer-Device Extensions: count = 1
                        VK_EXT_tooling_info : extension revision 1

VK_LAYER_LUNARG_screenshot (LunarG image capture layer) Vulkan version 1.4.328, layer version 1:
        Layer Extensions: count = 1
                VK_EXT_layer_settings : extension revision 2
        Devices: count = 1
                GPU id = 0 (AMD Radeon Pro VII)
                Layer-Device Extensions: count = 1
                        VK_EXT_tooling_info : extension revision 1

Presentable Surfaces:
=====================
GPU id : 0 (AMD Radeon Pro VII) [VK_KHR_win32_surface]:
        Surface type = VK_KHR_win32_surface
        Formats: count = 20
                SurfaceFormat[0]:
                        format = FORMAT_B8G8R8A8_UNORM
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[1]:
                        format = FORMAT_B8G8R8A8_SRGB
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[2]:
                        format = FORMAT_R4G4B4A4_UNORM_PACK16
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[3]:
                        format = FORMAT_B4G4R4A4_UNORM_PACK16
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[4]:
                        format = FORMAT_R5G6B5_UNORM_PACK16
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[5]:
                        format = FORMAT_B5G6R5_UNORM_PACK16
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[6]:
                        format = FORMAT_A1R5G5B5_UNORM_PACK16
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[7]:
                        format = FORMAT_R8G8B8A8_UNORM
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[8]:
                        format = FORMAT_R8G8B8A8_SNORM
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[9]:
                        format = FORMAT_R8G8B8A8_SRGB
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[10]:
                        format = FORMAT_B8G8R8A8_SNORM
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[11]:
                        format = FORMAT_A8B8G8R8_UNORM_PACK32
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[12]:
                        format = FORMAT_A8B8G8R8_SNORM_PACK32
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[13]:
                        format = FORMAT_A8B8G8R8_SRGB_PACK32
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[14]:
                        format = FORMAT_A2R10G10B10_UNORM_PACK32
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[15]:
                        format = FORMAT_A2B10G10R10_UNORM_PACK32
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[16]:
                        format = FORMAT_R16G16B16A16_UNORM
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[17]:
                        format = FORMAT_R16G16B16A16_SNORM
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[18]:
                        format = FORMAT_R16G16B16A16_SFLOAT
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
                SurfaceFormat[19]:
                        format = FORMAT_B10G11R11_UFLOAT_PACK32
                        colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        Present Modes: count = 3
                PRESENT_MODE_IMMEDIATE_KHR
                PRESENT_MODE_FIFO_KHR
                PRESENT_MODE_FIFO_RELAXED_KHR
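Since vulkaninfo enumerates the card (reported as "AMD Radeon Pro VII") in every layer and at the surface level, the Vulkan loader and driver side look healthy; the failure is between the loader and Ollama's ggml Vulkan backend. As a small sketch for comparing what vulkaninfo advertises against what Ollama discovers (the line format is taken from the dump above; the parser is an illustration, not a vulkaninfo feature), the GPU names can be scraped from the text output:

```python
import re

def vulkaninfo_gpus(output: str) -> list[str]:
    """Extract unique GPU names from `vulkaninfo` text output.

    Matches lines like 'GPU id = 0 (AMD Radeon Pro VII)' and
    'GPU id : 0 (AMD Radeon Pro VII) [VK_KHR_win32_surface]:'
    as they appear in the dump above.
    """
    names = re.findall(r'GPU id\s*[=:]\s*\d+\s*\((.*?)\)', output)
    # preserve first-seen order while deduplicating
    seen: dict[str, None] = {}
    for name in names:
        seen.setdefault(name)
    return list(seen)

sample = """\
        Devices: count = 1
                GPU id = 0 (AMD Radeon Pro VII)
GPU id : 0 (AMD Radeon Pro VII) [VK_KHR_win32_surface]:
"""
print(vulkaninfo_gpus(sample))  # prints ['AMD Radeon Pro VII']
```

A non-empty list here combined with `devices=[]` in the Ollama log points at the ggml-vulkan library failing to load rather than at device enumeration.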

OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.12.6

time=2025-10-17T00:37:59.856-04:00 level=DEBUG source=runner.go:1307 msg="dummy model load took" duration=18.3234ms
time=2025-10-17T00:37:59.856-04:00 level=DEBUG source=runner.go:1312 msg="gathering device infos took" duration=0s
time=2025-10-17T00:37:59.856-04:00 level=TRACE source=runner.go:548 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH=[C:\ai\ollama-0.12.6\build\lib\ollama] devices=[]
time=2025-10-17T00:37:59.857-04:00 level=DEBUG source=runner.go:451 msg="bootstrap discovery took" duration=60.6343ms OLLAMA_LIBRARY_PATH=[C:\ai\ollama-0.12.6\build\lib\ollama] extra_envs=[]
time=2025-10-17T00:37:59.857-04:00 level=DEBUG source=runner.go:118 msg="filtering out unsupported or overlapping GPU library combinations" count=0
time=2025-10-17T00:37:59.857-04:00 level=TRACE source=runner.go:171 msg="supported GPU library combinations" supported=map[]
time=2025-10-17T00:37:59.857-04:00 level=DEBUG source=runner.go:45 msg="GPU bootstrap discovery took" duration=61.1488ms
time=2025-10-17T00:37:59.857-04:00 level=INFO source=types.go:129 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="63.9 GiB" available="40.1 GiB"
time=2025-10-17T00:37:59.857-04:00 level=INFO source=routes.go:1605 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"

vulkaninfo (top few lines):

==========
VULKANINFO
==========

Vulkan Instance Version: 1.4.328

Instance Extensions: count = 13
===============================
VK_EXT_debug_report : extension revision 10
VK_EXT_debug_utils : extension revision 2
VK_EXT_swapchain_colorspace : extension revision 4
VK_KHR_device_group_creation : extension revision 1
VK_KHR_external_fence_capabilities : extension revision 1
VK_KHR_external_memory_capabilities : extension revision 1
VK_KHR_external_semaphore_capabilities : extension revision 1
VK_KHR_get_physical_device_properties2 : extension revision 2
VK_KHR_get_surface_capabilities2 : extension revision 1
VK_KHR_portability_enumeration : extension revision 1
VK_KHR_surface : extension revision 25
VK_KHR_win32_surface : extension revision 6
VK_LUNARG_direct_driver_loading : extension revision 1

Layers: count = 10
==================
VK_LAYER_AMD_switchable_graphics (AMD switchable graphics layer) Vulkan version 1.3.260, layer version 1:
    Layer Extensions: count = 0
    Devices: count = 1
        GPU id = 0 (AMD Radeon Pro VII)
        Layer-Device Extensions: count = 0

VK_LAYER_KHRONOS_profiles (Khronos Profiles layer) Vulkan version 1.4.328, layer version 1:
    Layer Extensions: count = 1
        VK_EXT_layer_settings : extension revision 2
    Devices: count = 1
        GPU id = 0 (AMD Radeon Pro VII)
        Layer-Device Extensions: count = 1
            VK_EXT_tooling_info : extension revision 1

VK_LAYER_KHRONOS_shader_object (Khronos Shader object layer) Vulkan version 1.4.328, layer version 1:
    Layer Extensions: count = 1
        VK_EXT_layer_settings : extension revision 2
    Devices: count = 1
        GPU id = 0 (AMD Radeon Pro VII)
        Layer-Device Extensions: count = 1
            VK_EXT_shader_object : extension revision 1

VK_LAYER_KHRONOS_synchronization2 (Khronos Synchronization2 layer) Vulkan version 1.4.328, layer version 1:
    Layer Extensions: count = 1
        VK_EXT_layer_settings : extension revision 2
    Devices: count = 1
        GPU id = 0 (AMD Radeon Pro VII)
        Layer-Device Extensions: count = 1
            VK_KHR_synchronization2 : extension revision 1

VK_LAYER_KHRONOS_validation (Khronos Validation Layer) Vulkan version 1.4.328, layer version 1:
    Layer Extensions: count = 4
        VK_EXT_debug_report : extension revision 9
        VK_EXT_debug_utils : extension revision 1
        VK_EXT_layer_settings : extension revision 2
        VK_EXT_validation_features : extension revision 2
    Devices: count = 1
        GPU id = 0 (AMD Radeon Pro VII)
        Layer-Device Extensions: count = 3
            VK_EXT_debug_marker : extension revision 4
            VK_EXT_tooling_info : extension revision 1
            VK_EXT_validation_cache : extension revision 1

VK_LAYER_LUNARG_api_dump (LunarG API dump layer) Vulkan version 1.4.328, layer version 2:
    Layer Extensions: count = 1
        VK_EXT_layer_settings : extension revision 2
    Devices: count = 1
        GPU id = 0 (AMD Radeon Pro VII)
        Layer-Device Extensions: count = 1
            VK_EXT_tooling_info : extension revision 1

VK_LAYER_LUNARG_crash_diagnostic (Crash Diagnostic Layer is a crash/hang debugging tool that helps determines GPU progress in a Vulkan application.) Vulkan version 1.4.328, layer version 1:
    Layer Extensions: count = 3
        VK_EXT_debug_report : extension revision 10
        VK_EXT_debug_utils : extension revision 1
        VK_EXT_layer_settings : extension revision 2
    Devices: count = 1
        GPU id = 0 (AMD Radeon Pro VII)
        Layer-Device Extensions: count = 2
            VK_EXT_debug_report : extension revision 10
            VK_EXT_tooling_info : extension revision 1

VK_LAYER_LUNARG_gfxreconstruct (GFXReconstruct Capture Layer Version 1.0.5) Vulkan version 1.4.328, layer version 4194309:
    Layer Extensions: count = 0
    Devices: count = 1
        GPU id = 0 (AMD Radeon Pro VII)
        Layer-Device Extensions: count = 1
            VK_EXT_tooling_info : extension revision 1

VK_LAYER_LUNARG_monitor (Execution Monitoring Layer) Vulkan version 1.4.328, layer version 1:
    Layer Extensions: count = 1
        VK_EXT_layer_settings : extension revision 2
    Devices: count = 1
        GPU id = 0 (AMD Radeon Pro VII)
        Layer-Device Extensions: count = 1
            VK_EXT_tooling_info : extension revision 1

VK_LAYER_LUNARG_screenshot (LunarG image capture layer) Vulkan version 1.4.328, layer version 1:
    Layer Extensions: count = 1
        VK_EXT_layer_settings : extension revision 2
    Devices: count = 1
        GPU id = 0 (AMD Radeon Pro VII)
        Layer-Device Extensions: count = 1
            VK_EXT_tooling_info : extension revision 1

Presentable Surfaces:
=====================
GPU id : 0 (AMD Radeon Pro VII)
    [VK_KHR_win32_surface]:
    Surface type = VK_KHR_win32_surface
    Formats: count = 20
        SurfaceFormat[0]: format = FORMAT_B8G8R8A8_UNORM colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[1]: format = FORMAT_B8G8R8A8_SRGB colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[2]: format = FORMAT_R4G4B4A4_UNORM_PACK16 colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[3]: format = FORMAT_B4G4R4A4_UNORM_PACK16 colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[4]: format = FORMAT_R5G6B5_UNORM_PACK16 colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[5]: format = FORMAT_B5G6R5_UNORM_PACK16 colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[6]: format = FORMAT_A1R5G5B5_UNORM_PACK16 colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[7]: format = FORMAT_R8G8B8A8_UNORM colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[8]: format = FORMAT_R8G8B8A8_SNORM colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[9]: format = FORMAT_R8G8B8A8_SRGB colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[10]: format = FORMAT_B8G8R8A8_SNORM colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[11]: format = FORMAT_A8B8G8R8_UNORM_PACK32 colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[12]: format = FORMAT_A8B8G8R8_SNORM_PACK32 colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[13]: format = FORMAT_A8B8G8R8_SRGB_PACK32 colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[14]: format = FORMAT_A2R10G10B10_UNORM_PACK32 colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[15]: format = FORMAT_A2B10G10R10_UNORM_PACK32 colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[16]: format = FORMAT_R16G16B16A16_UNORM colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[17]: format = FORMAT_R16G16B16A16_SNORM colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[18]: format = FORMAT_R16G16B16A16_SFLOAT colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
        SurfaceFormat[19]: format = FORMAT_B10G11R11_UFLOAT_PACK32 colorSpace = COLOR_SPACE_SRGB_NONLINEAR_KHR
    Present Modes: count = 3
        PRESENT_MODE_IMMEDIATE_KHR
        PRESENT_MODE_FIFO_KHR
        PRESENT_MODE_FIFO_RELAXED_KHR

### OS

Windows

### GPU

AMD

### CPU

AMD

### Ollama version

0.12.6
GiteaMirror added the build, bug, windows labels 2026-04-22 17:29:56 -05:00
Author
Owner

@dhiltgen commented on GitHub (Nov 4, 2025):

The build wiring still could use a little work. What version of the Vulkan SDK do you have installed? Can you share the output of your `cmake -B ...` command? I've heard that older versions of the SDK may not work properly, so make sure you've installed the latest one.

<!-- gh-comment-id:3487679075 -->

@avesed commented on GitHub (Nov 7, 2025):

> The build wiring still could use a little work. What version of the Vulkan SDK do you have installed? Can you share the output of your `cmake -B ...` command? I've heard using an older version of the SDK may not work properly, so make sure you've installed the latest version.

Hi, sorry for the late reply.

Vulkan SDK version is 1.4.328.1

`cmake -B ...` command output:

-- Building for: Visual Studio 17 2022
-- The C compiler identification is MSVC 19.44.35217.0
-- The CXX compiler identification is MSVC 19.44.35217.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- CMAKE_GENERATOR_PLATFORM:
-- GGML_SYSTEM_ARCH: x86
-- Including CPU backend
-- x86 detected
-- Adding CPU backend variant ggml-cpu-x64:
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sse42: /arch:SSE4.2 GGML_SSE42
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sandybridge: /arch:AVX GGML_AVX
-- x86 detected
-- Adding CPU backend variant ggml-cpu-haswell: /arch:AVX2 GGML_AVX2;GGML_FMA;GGML_F16C;__BMI2__;GGML_BMI2
-- x86 detected
-- Adding CPU backend variant ggml-cpu-skylakex: /arch:AVX512 GGML_AVX512;__BMI2__;GGML_BMI2
-- x86 detected
-- Adding CPU backend variant ggml-cpu-icelake: /arch:AVX512 GGML_AVX512;__AVX512VBMI__;__AVX512VNNI__;GGML_AVX512_VNNI;__BMI2__;GGML_BMI2
-- x86 detected
-- Adding CPU backend variant ggml-cpu-alderlake: /arch:AVX2 GGML_AVX2;GGML_FMA;GGML_F16C;__AVXVNNI__;GGML_AVX_VNNI;__BMI2__;GGML_BMI2
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - NOTFOUND
-- Looking for a HIP compiler
-- Looking for a HIP compiler - NOTFOUND
-- Found Vulkan: C:/VulkanSDK/1.4.328.1/Lib/vulkan-1.lib (found version "1.4.328") found components: glslc glslangValidator
-- Vulkan found
-- GL_KHR_cooperative_matrix supported by glslc
-- GL_NV_cooperative_matrix2 supported by glslc
-- GL_EXT_integer_dot_product supported by glslc
-- GL_EXT_bfloat16 supported by glslc
-- Configuring done (7.3s)
-- Generating done (0.2s)
-- Build files have been written to: C:/ai/ollama/ollama
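When comparing configure logs like the one above, it can be handy to pull out just the backend-related results. The following is a small diagnostic sketch in plain Python (not part of ollama; the marker strings are copied from the log output itself):

```python
import re

def cmake_backends(log_text: str):
    """Summarize a CMake configure log: which CPU backend variants were
    added, and which GPU toolchains were actually found. The matched
    strings come from the log output above, not from any ollama API."""
    cpu = re.findall(r"Adding CPU backend variant (\S+?):", log_text)
    gpu = []
    if "Found Vulkan" in log_text:
        gpu.append("vulkan")
    # A "- NOTFOUND" result means the compiler check failed.
    if re.search(r"Looking for a CUDA compiler - (?!NOTFOUND)", log_text):
        gpu.append("cuda")
    if re.search(r"Looking for a HIP compiler - (?!NOTFOUND)", log_text):
        gpu.append("hip")
    return cpu, gpu
```

Run against the saved configure output; for the log above it would report the seven CPU variants and only Vulkan on the GPU side, matching the NOTFOUND results for CUDA and HIP.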
<!-- gh-comment-id:3503737172 -->

@dhiltgen commented on GitHub (Nov 12, 2025):

Your Vulkan version looks good. What DLLs are present in `C:\ai\ollama-0.12.6\build\lib\ollama` after you build?

<!-- gh-comment-id:3522724489 -->

@avesed commented on GitHub (Nov 12, 2025):

> Your Vulkan version looks good. What DLLs are present in `C:\ai\ollama-0.12.6\build\lib\ollama` after you build?

I rebuilt it using 0.12.10.

The folder contained:

ggml-vulkan.dll
ggml-cpu-x64.dll
ggml-cpu-sse42.dll
ggml-cpu-skylakex.dll
ggml-cpu-sandybridge.dll
ggml-cpu-icelake.dll
ggml-cpu-haswell.dll
ggml-cpu-alderlake.dll
ggml-base.dll
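Given the "specified procedure could not be found" error in the issue title, it can help to find out which specific DLL fails to load instead of relying on the silent `devices=[]` result. Here's a minimal diagnostic sketch (a hypothetical helper, not part of ollama), assuming a build output directory like the one above:

```python
import ctypes
import os
from pathlib import Path

def probe_ggml_dlls(lib_dir):
    """Attempt to load every ggml*.dll in lib_dir and collect per-DLL load
    errors, e.g. "The specified procedure could not be found"."""
    if hasattr(os, "add_dll_directory"):
        # Windows, Python 3.8+: let dependent DLLs (e.g. ggml-base.dll) resolve.
        os.add_dll_directory(str(lib_dir))
    failures = []
    for dll in sorted(Path(lib_dir).glob("ggml*.dll")):
        try:
            ctypes.CDLL(str(dll))  # LoadLibrary under the hood on Windows
        except OSError as err:
            failures.append((dll.name, str(err)))
    return failures

# Usage (adjust to your build dir):
#   for name, err in probe_ggml_dlls(r"C:\ai\ollama-0.12.6\build\lib\ollama"):
#       print(name, "->", err)
```

An empty result means every ggml DLL loaded; otherwise the failing library and the exact Windows loader error are printed per file.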

I did notice some warnings during the build; not sure if they're helpful, but I'll paste them below:

C:\Users\WinServer\Downloads\ollama-0.12.10\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.cpp(3580,9): warning C4003: not enough arguments for function-like macro invocation 'IM2COL' [C:\Users\WinServer\Downloads\ollama-0.12.10\build\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.vcxproj]
C:\Users\WinServer\Downloads\ollama-0.12.10\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.cpp(3692,13): warning C4003: not enough arguments for function-like macro invocation 'CREATE_CONVS' [C:\Users\WinServer\Downloads\ollama-0.12.10\build\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.vcxproj]
C:\Users\WinServer\Downloads\ollama-0.12.10\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.cpp(6128,44): warning C4319: '~': zero extending '_Ty' to 'uint64_t' of greater size [C:\Users\WinServer\Downloads\ollama-0.12.10\build\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.vcxproj]
C:\Users\WinServer\Downloads\ollama-0.12.10\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.cpp(6128,44): warning C4319:         with [C:\Users\WinServer\Downloads\ollama-0.12.10\build\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.vcxproj]
C:\Users\WinServer\Downloads\ollama-0.12.10\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.cpp(6128,44): warning C4319:         [ [C:\Users\WinServer\Downloads\ollama-0.12.10\build\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.vcxproj]
C:\Users\WinServer\Downloads\ollama-0.12.10\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.cpp(6128,44): warning C4319:             _Ty=uint32_t [C:\Users\WinServer\Downloads\ollama-0.12.10\build\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.vcxproj]
C:\Users\WinServer\Downloads\ollama-0.12.10\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.cpp(6128,44): warning C4319:         ] [C:\Users\WinServer\Downloads\ollama-0.12.10\build\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.vcxproj]
C:\Users\WinServer\Downloads\ollama-0.12.10\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.cpp(6916,44): warning C4319: '~': zero extending '_Ty' to 'uint64_t' of greater size [C:\Users\WinServer\Downloads\ollama-0.12.10\build\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.vcxproj]
C:\Users\WinServer\Downloads\ollama-0.12.10\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.cpp(6916,44): warning C4319:         with [C:\Users\WinServer\Downloads\ollama-0.12.10\build\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.vcxproj]
C:\Users\WinServer\Downloads\ollama-0.12.10\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.cpp(6916,44): warning C4319:         [ [C:\Users\WinServer\Downloads\ollama-0.12.10\build\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.vcxproj]
C:\Users\WinServer\Downloads\ollama-0.12.10\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.cpp(6916,44): warning C4319:             _Ty=uint32_t [C:\Users\WinServer\Downloads\ollama-0.12.10\build\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.vcxproj]
C:\Users\WinServer\Downloads\ollama-0.12.10\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.cpp(6916,44): warning C4319:         ] [C:\Users\WinServer\Downloads\ollama-0.12.10\build\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.vcxproj]
C:\Users\WinServer\Downloads\ollama-0.12.10\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.cpp(9311,32): warning C4319: '~': zero extending 'uint32_t' to 'size_t' of greater size [C:\Users\WinServer\Downloads\ollama-0.12.10\build\ml\backend\ggml\ggml\src\ggml-vulkan\ggml-vulkan.vcxproj]
<!-- gh-comment-id:3523198611 -->

@dhiltgen commented on GitHub (Nov 13, 2025):

Vulkan support is now built in as of 0.12.11, and requires setting an environment variable to enable it: `OLLAMA_VULKAN=1`.
Please give it a try.
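The opt-in above amounts to `set OLLAMA_VULKAN=1` in cmd (or `$env:OLLAMA_VULKAN="1"` in PowerShell) before starting the server. For a scripted launch, a minimal sketch in plain Python (only the `OLLAMA_VULKAN` name comes from the comment above; the helpers are illustrative):

```python
import os
import subprocess

def vulkan_env(base=None):
    """Return a copy of the given environment (default: the current one)
    with the Vulkan backend opted in via OLLAMA_VULKAN=1."""
    env = dict(os.environ if base is None else base)
    env["OLLAMA_VULKAN"] = "1"
    return env

def serve_with_vulkan(ollama_exe="ollama"):
    """Start `ollama serve` as a child process with Vulkan enabled;
    all other environment variables are inherited unchanged."""
    return subprocess.Popen([ollama_exe, "serve"], env=vulkan_env())
```

Setting the variable only in the shell that runs `ollama serve` is enough; it does not need to be machine-wide.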

<!-- gh-comment-id:3528973488 -->

@avesed commented on GitHub (Nov 13, 2025):

> Vulkan is now built in for 0.12.11 and requires setting a variable to enable. `OLLAMA_VULKAN=1` Please give it a try.

No luck; the behavior is the same. Ollama detects the Vulkan SDK, but it still doesn't use the GPU. In the log I can see it trying to find the GPU, but nothing is detected.

time=2025-11-13T17:13:30.849-05:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY:localhost,127.0.0.1 OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:10m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\WinServer\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES:]"
time=2025-11-13T17:13:30.850-05:00 level=INFO source=images.go:522 msg="total blobs: 0"
time=2025-11-13T17:13:30.850-05:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-11-13T17:13:30.850-05:00 level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.12.11-rc1)"
time=2025-11-13T17:13:30.850-05:00 level=DEBUG source=sched.go:120 msg="starting llm scheduler"
time=2025-11-13T17:13:30.851-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-13T17:13:30.858-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\ai\\ollama-windows-amd64\\ollama.exe runner --ollama-engine --port 54222"
time=2025-11-13T17:13:30.858-05:00 level=DEBUG source=server.go:393 msg=subprocess OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=10m OLLAMA_NUM_GPU=999 OLLAMA_NUM_PARALLEL=2 OLLAMA_VULKAN=1 PATH="C:\\ai\\ollama-windows-amd64\\lib\\ollama;C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v12;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\WinServer\\AppData\\Local\\Microsoft\\WindowsApps;;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Ollama;C:\\Users\\WinServer\\.lmstudio\\bin;C:\\Users\\WinServer\\go\\bin;C:\\msys64\\mingw64\\bin" OLLAMA_LIBRARY_PATH=C:\ai\ollama-windows-amd64\lib\ollama;C:\ai\ollama-windows-amd64\lib\ollama\cuda_v12
time=2025-11-13T17:13:30.918-05:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=67.0214ms OLLAMA_LIBRARY_PATH="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v12]" extra_envs=map[]
time=2025-11-13T17:13:30.919-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\ai\\ollama-windows-amd64\\ollama.exe runner --ollama-engine --port 54226"
time=2025-11-13T17:13:30.920-05:00 level=DEBUG source=server.go:393 msg=subprocess OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=10m OLLAMA_NUM_GPU=999 OLLAMA_NUM_PARALLEL=2 OLLAMA_VULKAN=1 PATH="C:\\ai\\ollama-windows-amd64\\lib\\ollama;C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v13;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\WinServer\\AppData\\Local\\Microsoft\\WindowsApps;;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Ollama;C:\\Users\\WinServer\\.lmstudio\\bin;C:\\Users\\WinServer\\go\\bin;C:\\msys64\\mingw64\\bin" OLLAMA_LIBRARY_PATH=C:\ai\ollama-windows-amd64\lib\ollama;C:\ai\ollama-windows-amd64\lib\ollama\cuda_v13
time=2025-11-13T17:13:30.971-05:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=52.4556ms OLLAMA_LIBRARY_PATH="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v13]" extra_envs=map[]
time=2025-11-13T17:13:30.972-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\ai\\ollama-windows-amd64\\ollama.exe runner --ollama-engine --port 54230"
time=2025-11-13T17:13:30.972-05:00 level=DEBUG source=server.go:393 msg=subprocess OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=10m OLLAMA_NUM_GPU=999 OLLAMA_NUM_PARALLEL=2 OLLAMA_VULKAN=1 PATH="C:\\ai\\ollama-windows-amd64\\lib\\ollama;C:\\ai\\ollama-windows-amd64\\lib\\ollama\\vulkan;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\WinServer\\AppData\\Local\\Microsoft\\WindowsApps;;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Ollama;C:\\Users\\WinServer\\.lmstudio\\bin;C:\\Users\\WinServer\\go\\bin;C:\\msys64\\mingw64\\bin" OLLAMA_LIBRARY_PATH=C:\ai\ollama-windows-amd64\lib\ollama;C:\ai\ollama-windows-amd64\lib\ollama\vulkan
time=2025-11-13T17:13:31.020-05:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=48.9139ms OLLAMA_LIBRARY_PATH="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\vulkan]" extra_envs=map[]
time=2025-11-13T17:13:31.020-05:00 level=DEBUG source=runner.go:116 msg="evluating which if any devices to filter out" initial_count=0
time=2025-11-13T17:13:31.021-05:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=171.0091ms
time=2025-11-13T17:13:31.021-05:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="63.9 GiB" available="28.2 GiB"
time=2025-11-13T17:13:31.021-05:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"

@dhiltgen commented on GitHub (Nov 14, 2025):

@avesed can you try with `$env:OLLAMA_DEBUG="2"` set so we get more detailed logs from the GPU discovery process?


@avesed commented on GitHub (Nov 14, 2025):

So I did a bit of debugging myself, and it seems the *.dll files are not loading. I can confirm that VC++ and the Vulkan SDK are installed properly; dumpbin shows that ggml-vulkan.dll depends on ggml-base.dll, vulkan-1.dll, and KERNEL32.dll, and all of its imported symbols exist in ggml-base.dll.
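The same loader failure can be probed outside ollama. A minimal ctypes sketch (a hypothetical helper, not part of ollama — on Windows, `ctypes.CDLL` goes through the same LoadLibrary path and surfaces the identical "specified procedure could not be found" text that `dl_load_library` prints in the log):

```python
import ctypes

def try_load(path: str) -> str:
    """Attempt to load a shared library; return "ok" or the OS loader's error text."""
    try:
        ctypes.CDLL(path)
        return "ok"
    except OSError as e:
        return str(e)
```

Calling `try_load(r"C:\ai\ollama-windows-amd64\lib\ollama\vulkan\ggml-vulkan.dll")` from the same directory layout should reproduce the exact error message, which helps separate an ollama-specific problem from a system-wide loader problem.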

0.12.11 (non-RC) log with OLLAMA_DEBUG=2:

time=2025-11-14T03:42:21.167-05:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY:localhost,127.0.0.1 OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:10m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\WinServer\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES:]"
time=2025-11-14T03:42:21.169-05:00 level=INFO source=images.go:522 msg="total blobs: 6"
time=2025-11-14T03:42:21.169-05:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-11-14T03:42:21.170-05:00 level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.12.11)"
time=2025-11-14T03:42:21.170-05:00 level=DEBUG source=sched.go:120 msg="starting llm scheduler"
time=2025-11-14T03:42:21.171-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-14T03:42:21.171-05:00 level=TRACE source=runner.go:421 msg="starting runner for device discovery" libDirs="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v13]" extraEnvs=map[]
time=2025-11-14T03:42:21.178-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\ai\\ollama-windows-amd64\\ollama.exe runner --ollama-engine --port 59410"
time=2025-11-14T03:42:21.178-05:00 level=DEBUG source=server.go:393 msg=subprocess GGML_VULKAN_DEBUG=1 OLLAMA_DEBUG=2 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=10m OLLAMA_NUM_GPU=999 OLLAMA_NUM_PARALLEL=2 OLLAMA_VULKAN=1 PATH="C:\\ai\\ollama-windows-amd64\\lib\\ollama;C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v13;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\WinServer\\AppData\\Local\\Microsoft\\WindowsApps;;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Ollama;C:\\Users\\WinServer\\.lmstudio\\bin;C:\\Users\\WinServer\\go\\bin;C:\\msys64\\mingw64\\bin" OLLAMA_LIBRARY_PATH=C:\ai\ollama-windows-amd64\lib\ollama;C:\ai\ollama-windows-amd64\lib\ollama\cuda_v13
time=2025-11-14T03:42:21.211-05:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-11-14T03:42:21.212-05:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:59410"
time=2025-11-14T03:42:21.213-05:00 level=DEBUG source=gguf.go:590 msg=general.architecture type=string
time=2025-11-14T03:42:21.213-05:00 level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string
time=2025-11-14T03:42:21.213-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-14T03:42:21.215-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-14T03:42:21.215-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-11-14T03:42:21.215-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
time=2025-11-14T03:42:21.215-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-11-14T03:42:21.215-05:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-11-14T03:42:21.215-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\ai\ollama-windows-amd64\lib\ollama
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-alderlake.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-haswell.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-icelake.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-sandybridge.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-skylakex.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-sse42.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-x64.dll: The specified procedure could not be found.

time=2025-11-14T03:42:21.223-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\ai\ollama-windows-amd64\lib\ollama\cuda_v13
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\cuda_v13\ggml-cuda.dll: The specified module could not be found.

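Note that two distinct Win32 loader errors appear here: ggml-cuda.dll fails with "The specified module could not be found" (a dependent DLL, e.g. the CUDA runtime, is absent from the search path), while the CPU backend DLLs above fail with "The specified procedure could not be found" (the DLL and its dependencies loaded, but a named import could not be resolved). A small sketch of that mapping, assuming standard Win32 error numbering:

```python
# Win32 LoadLibrary error codes behind the two messages in the log above
# (assumption: standard Win32 error numbering).
ERROR_MOD_NOT_FOUND = 126   # a dependent DLL is missing from the search path
ERROR_PROC_NOT_FOUND = 127  # the DLL loaded, but an imported symbol is unresolved

MESSAGES = {
    ERROR_MOD_NOT_FOUND: "The specified module could not be found.",
    ERROR_PROC_NOT_FOUND: "The specified procedure could not be found.",
}

def decode(code: int) -> str:
    """Map a Win32 loader error code to its message text."""
    return MESSAGES.get(code, f"unknown error {code}")
```

The distinction matters for debugging: a "module" failure points at PATH/dependency problems, while a "procedure" failure points at a symbol mismatch between the DLL and the libraries it imports from.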
time=2025-11-14T03:42:21.229-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(clang)
time=2025-11-14T03:42:21.229-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=17.5244ms
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=0s
time=2025-11-14T03:42:21.231-05:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v13]" devices=[]
time=2025-11-14T03:42:21.231-05:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=60.1794ms OLLAMA_LIBRARY_PATH="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v13]" extra_envs=map[]
time=2025-11-14T03:42:21.231-05:00 level=TRACE source=runner.go:421 msg="starting runner for device discovery" libDirs="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\vulkan]" extraEnvs=map[]
time=2025-11-14T03:42:21.232-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\ai\\ollama-windows-amd64\\ollama.exe runner --ollama-engine --port 59414"
time=2025-11-14T03:42:21.232-05:00 level=DEBUG source=server.go:393 msg=subprocess GGML_VULKAN_DEBUG=1 OLLAMA_DEBUG=2 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=10m OLLAMA_NUM_GPU=999 OLLAMA_NUM_PARALLEL=2 OLLAMA_VULKAN=1 PATH="C:\\ai\\ollama-windows-amd64\\lib\\ollama;C:\\ai\\ollama-windows-amd64\\lib\\ollama\\vulkan;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\WinServer\\AppData\\Local\\Microsoft\\WindowsApps;;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Ollama;C:\\Users\\WinServer\\.lmstudio\\bin;C:\\Users\\WinServer\\go\\bin;C:\\msys64\\mingw64\\bin" OLLAMA_LIBRARY_PATH=C:\ai\ollama-windows-amd64\lib\ollama;C:\ai\ollama-windows-amd64\lib\ollama\vulkan
time=2025-11-14T03:42:21.265-05:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-11-14T03:42:21.266-05:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:59414"
time=2025-11-14T03:42:21.276-05:00 level=DEBUG source=gguf.go:590 msg=general.architecture type=string
time=2025-11-14T03:42:21.277-05:00 level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string
time=2025-11-14T03:42:21.277-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-14T03:42:21.277-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-14T03:42:21.277-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-11-14T03:42:21.277-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
time=2025-11-14T03:42:21.277-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-11-14T03:42:21.277-05:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-11-14T03:42:21.278-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\ai\ollama-windows-amd64\lib\ollama
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-alderlake.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-haswell.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-icelake.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-sandybridge.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-skylakex.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-sse42.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-x64.dll: The specified procedure could not be found.

time=2025-11-14T03:42:21.285-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\ai\ollama-windows-amd64\lib\ollama\vulkan
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\vulkan\ggml-vulkan.dll: The specified procedure could not be found.

time=2025-11-14T03:42:21.289-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(clang)
time=2025-11-14T03:42:21.289-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=13.8387ms
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=0s
time=2025-11-14T03:42:21.290-05:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\vulkan]" devices=[]
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=59.4609ms OLLAMA_LIBRARY_PATH="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\vulkan]" extra_envs=map[]
time=2025-11-14T03:42:21.290-05:00 level=TRACE source=runner.go:421 msg="starting runner for device discovery" libDirs="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v12]" extraEnvs=map[]
time=2025-11-14T03:42:21.292-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\ai\\ollama-windows-amd64\\ollama.exe runner --ollama-engine --port 59419"
time=2025-11-14T03:42:21.292-05:00 level=DEBUG source=server.go:393 msg=subprocess GGML_VULKAN_DEBUG=1 OLLAMA_DEBUG=2 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=10m OLLAMA_NUM_GPU=999 OLLAMA_NUM_PARALLEL=2 OLLAMA_VULKAN=1 PATH="C:\\ai\\ollama-windows-amd64\\lib\\ollama;C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v12;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\WinServer\\AppData\\Local\\Microsoft\\WindowsApps;;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Ollama;C:\\Users\\WinServer\\.lmstudio\\bin;C:\\Users\\WinServer\\go\\bin;C:\\msys64\\mingw64\\bin" OLLAMA_LIBRARY_PATH=C:\ai\ollama-windows-amd64\lib\ollama;C:\ai\ollama-windows-amd64\lib\ollama\cuda_v12
time=2025-11-14T03:42:21.322-05:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-11-14T03:42:21.323-05:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:59419"
time=2025-11-14T03:42:21.325-05:00 level=DEBUG source=gguf.go:590 msg=general.architecture type=string
time=2025-11-14T03:42:21.325-05:00 level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string
time=2025-11-14T03:42:21.326-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-14T03:42:21.326-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-14T03:42:21.326-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-11-14T03:42:21.326-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
time=2025-11-14T03:42:21.326-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-11-14T03:42:21.326-05:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-11-14T03:42:21.326-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\ai\ollama-windows-amd64\lib\ollama
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-alderlake.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-haswell.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-icelake.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-sandybridge.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-skylakex.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-sse42.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-x64.dll: The specified procedure could not be found.

time=2025-11-14T03:42:21.334-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\ai\ollama-windows-amd64\lib\ollama\cuda_v12
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\cuda_v12\ggml-cuda.dll: The specified module could not be found.

time=2025-11-14T03:42:21.350-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(clang)
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=24.7924ms
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=0s
time=2025-11-14T03:42:21.351-05:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v12]" devices=[]
time=2025-11-14T03:42:21.351-05:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=60.774ms OLLAMA_LIBRARY_PATH="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v12]" extra_envs=map[]
time=2025-11-14T03:42:21.351-05:00 level=DEBUG source=runner.go:116 msg="evluating which if any devices to filter out" initial_count=0
time=2025-11-14T03:42:21.351-05:00 level=TRACE source=runner.go:156 msg="supported GPU library combinations before filtering" supported=map[]
time=2025-11-14T03:42:21.351-05:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=180.9357ms
time=2025-11-14T03:42:21.351-05:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="63.9 GiB" available="28.1 GiB"
time=2025-11-14T03:42:21.351-05:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
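When a log like the one above is this noisy, it helps to pull out just the `dl_load_library` failures and group them by Windows error string, since "The specified procedure could not be found" (a missing export) and "The specified module could not be found" (a missing dependent DLL) point at different root causes. A minimal Python sketch; the `parse_load_failures` helper and the sample lines are illustrative, not part of Ollama:

```python
import re

# Matches log lines such as:
# dl_load_library unable to load library C:\...\ggml-cpu-x64.dll: The specified procedure could not be found.
FAILURE_RE = re.compile(
    r"dl_load_library unable to load library (?P<dll>\S+\.dll): (?P<error>.+?)\.?\s*$"
)

def parse_load_failures(log_text: str) -> dict[str, list[str]]:
    """Group failed DLL paths by the Windows loader error message."""
    failures: dict[str, list[str]] = {}
    for line in log_text.splitlines():
        m = FAILURE_RE.search(line)
        if m:
            failures.setdefault(m.group("error"), []).append(m.group("dll"))
    return failures

sample = r"""
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-x64.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\cuda_v12\ggml-cuda.dll: The specified module could not be found.
time=2025-11-14T03:42:21.350-05:00 level=INFO source=ggml.go:104 msg=system
"""

for error, dlls in parse_load_failures(sample).items():
    print(f"{error}: {len(dlls)} DLL(s)")
```

On the log in this thread, every CPU backend fails with the "procedure" variant, which suggests a symbol-resolution problem rather than a missing file.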

vulkaninfo --summary:

PS C:\Windows\system32> vulkaninfo.exe --summary
WARNING: [Loader Message] Code 0 : Layer VK_LAYER_AMD_switchable_graphics uses API version 1.3 which is older than the application specified API version of 1.4. May cause issues.
==========
VULKANINFO
==========

Vulkan Instance Version: 1.4.328


Instance Extensions: count = 13
-------------------------------
VK_EXT_debug_report                    : extension revision 10
VK_EXT_debug_utils                     : extension revision 2
VK_EXT_swapchain_colorspace            : extension revision 4
VK_KHR_device_group_creation           : extension revision 1
VK_KHR_external_fence_capabilities     : extension revision 1
VK_KHR_external_memory_capabilities    : extension revision 1
VK_KHR_external_semaphore_capabilities : extension revision 1
VK_KHR_get_physical_device_properties2 : extension revision 2
VK_KHR_get_surface_capabilities2       : extension revision 1
VK_KHR_portability_enumeration         : extension revision 1
VK_KHR_surface                         : extension revision 25
VK_KHR_win32_surface                   : extension revision 6
VK_LUNARG_direct_driver_loading        : extension revision 1

Instance Layers: count = 10
---------------------------
VK_LAYER_AMD_switchable_graphics  AMD switchable graphics layer                                                                                     1.3.260  version 1
VK_LAYER_KHRONOS_profiles         Khronos Profiles layer                                                                                            1.4.328  version 1
VK_LAYER_KHRONOS_shader_object    Khronos Shader object layer                                                                                       1.4.328  version 1
VK_LAYER_KHRONOS_synchronization2 Khronos Synchronization2 layer                                                                                    1.4.328  version 1
VK_LAYER_KHRONOS_validation       Khronos Validation Layer                                                                                          1.4.328  version 1
VK_LAYER_LUNARG_api_dump          LunarG API dump layer                                                                                             1.4.328  version 2
VK_LAYER_LUNARG_crash_diagnostic  Crash Diagnostic Layer is a crash/hang debugging tool that helps determines GPU progress in a Vulkan application. 1.4.328  version 1
VK_LAYER_LUNARG_gfxreconstruct    GFXReconstruct Capture Layer Version 1.0.5                                                                        1.4.328  version 4194309
VK_LAYER_LUNARG_monitor           Execution Monitoring Layer                                                                                        1.4.328  version 1
VK_LAYER_LUNARG_screenshot        LunarG image capture layer                                                                                        1.4.328  version 1

Devices:
========
GPU0:
        apiVersion         = 1.3.260
        driverVersion      = 2.0.279
        vendorID           = 0x1002
        deviceID           = 0x66a0
        deviceType         = PHYSICAL_DEVICE_TYPE_DISCRETE_GPU
        deviceName         = AMD Radeon Pro VII
        driverID           = DRIVER_ID_AMD_PROPRIETARY
        driverName         = AMD proprietary driver
        driverInfo         = (AMD proprietary shader compiler)
        conformanceVersion = 1.3.3.1
        deviceUUID         = 00000000-0c00-0000-0000-000000000000
        driverUUID         = 414d442d-5749-4e2d-4452-560000000000
PS C:\Windows\system32> vulkaninfo | Select-String "storageBuffer16BitAccess"
WARNING: [Loader Message] Code 0 : Layer VK_LAYER_AMD_switchable_graphics uses API version 1.3 which is older than the application specified API version of 1.4. May cause issues.

        storageBuffer16BitAccess           = true
        uniformAndStorageBuffer16BitAccess = true
<!-- gh-comment-id:3531636196 --> @avesed commented on GitHub (Nov 14, 2025): So I did a bit of debugging myself; it seems the *.dll files are not loading. I can confirm that VC++ and the Vulkan SDK are installed properly, and dumpbin shows that ggml-vulkan.dll depends on ggml-base.dll, vulkan-1.dll, and KERNEL32.dll, and that all imported symbols exist in ggml-base.dll. 0.12.11 (non-rc) Debug 2 log:

```
time=2025-11-14T03:42:21.167-05:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY:localhost,127.0.0.1 OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:10m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\WinServer\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES:]"
time=2025-11-14T03:42:21.169-05:00 level=INFO source=images.go:522 msg="total blobs: 6"
time=2025-11-14T03:42:21.169-05:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-11-14T03:42:21.170-05:00 level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.12.11)"
time=2025-11-14T03:42:21.170-05:00 level=DEBUG source=sched.go:120 msg="starting llm scheduler"
time=2025-11-14T03:42:21.171-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-14T03:42:21.171-05:00 level=TRACE source=runner.go:421 msg="starting runner for device discovery" libDirs="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v13]" extraEnvs=map[]
time=2025-11-14T03:42:21.178-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\ai\\ollama-windows-amd64\\ollama.exe runner --ollama-engine --port 59410"
time=2025-11-14T03:42:21.178-05:00 level=DEBUG source=server.go:393 msg=subprocess GGML_VULKAN_DEBUG=1 OLLAMA_DEBUG=2 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=10m OLLAMA_NUM_GPU=999 OLLAMA_NUM_PARALLEL=2 OLLAMA_VULKAN=1 PATH="C:\\ai\\ollama-windows-amd64\\lib\\ollama;C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v13;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\WinServer\\AppData\\Local\\Microsoft\\WindowsApps;;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Ollama;C:\\Users\\WinServer\\.lmstudio\\bin;C:\\Users\\WinServer\\go\\bin;C:\\msys64\\mingw64\\bin" OLLAMA_LIBRARY_PATH=C:\ai\ollama-windows-amd64\lib\ollama;C:\ai\ollama-windows-amd64\lib\ollama\cuda_v13
time=2025-11-14T03:42:21.211-05:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-11-14T03:42:21.212-05:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:59410"
time=2025-11-14T03:42:21.213-05:00 level=DEBUG source=gguf.go:590 msg=general.architecture type=string
time=2025-11-14T03:42:21.213-05:00 level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string
time=2025-11-14T03:42:21.213-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-14T03:42:21.215-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-14T03:42:21.215-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-11-14T03:42:21.215-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
time=2025-11-14T03:42:21.215-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-11-14T03:42:21.215-05:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-11-14T03:42:21.215-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\ai\ollama-windows-amd64\lib\ollama
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-alderlake.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-haswell.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-icelake.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-sandybridge.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-skylakex.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-sse42.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-x64.dll: The specified procedure could not be found.
time=2025-11-14T03:42:21.223-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\ai\ollama-windows-amd64\lib\ollama\cuda_v13
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\cuda_v13\ggml-cuda.dll: The specified module could not be found.
time=2025-11-14T03:42:21.229-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(clang)
time=2025-11-14T03:42:21.229-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=17.5244ms
time=2025-11-14T03:42:21.230-05:00 level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=0s
time=2025-11-14T03:42:21.231-05:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v13]" devices=[]
time=2025-11-14T03:42:21.231-05:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=60.1794ms OLLAMA_LIBRARY_PATH="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v13]" extra_envs=map[]
time=2025-11-14T03:42:21.231-05:00 level=TRACE source=runner.go:421 msg="starting runner for device discovery" libDirs="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\vulkan]" extraEnvs=map[]
time=2025-11-14T03:42:21.232-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\ai\\ollama-windows-amd64\\ollama.exe runner --ollama-engine --port 59414"
time=2025-11-14T03:42:21.232-05:00 level=DEBUG source=server.go:393 msg=subprocess GGML_VULKAN_DEBUG=1 OLLAMA_DEBUG=2 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=10m OLLAMA_NUM_GPU=999 OLLAMA_NUM_PARALLEL=2 OLLAMA_VULKAN=1 PATH="C:\\ai\\ollama-windows-amd64\\lib\\ollama;C:\\ai\\ollama-windows-amd64\\lib\\ollama\\vulkan;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\WinServer\\AppData\\Local\\Microsoft\\WindowsApps;;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Ollama;C:\\Users\\WinServer\\.lmstudio\\bin;C:\\Users\\WinServer\\go\\bin;C:\\msys64\\mingw64\\bin" OLLAMA_LIBRARY_PATH=C:\ai\ollama-windows-amd64\lib\ollama;C:\ai\ollama-windows-amd64\lib\ollama\vulkan
time=2025-11-14T03:42:21.265-05:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-11-14T03:42:21.266-05:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:59414"
time=2025-11-14T03:42:21.276-05:00 level=DEBUG source=gguf.go:590 msg=general.architecture type=string
time=2025-11-14T03:42:21.277-05:00 level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string
time=2025-11-14T03:42:21.277-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-14T03:42:21.277-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-14T03:42:21.277-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-11-14T03:42:21.277-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
time=2025-11-14T03:42:21.277-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-11-14T03:42:21.277-05:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-11-14T03:42:21.278-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\ai\ollama-windows-amd64\lib\ollama
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-alderlake.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-haswell.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-icelake.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-sandybridge.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-skylakex.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-sse42.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-x64.dll: The specified procedure could not be found.
time=2025-11-14T03:42:21.285-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\ai\ollama-windows-amd64\lib\ollama\vulkan
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\vulkan\ggml-vulkan.dll: The specified procedure could not be found.
time=2025-11-14T03:42:21.289-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(clang)
time=2025-11-14T03:42:21.289-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=13.8387ms
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=0s
time=2025-11-14T03:42:21.290-05:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\vulkan]" devices=[]
time=2025-11-14T03:42:21.290-05:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=59.4609ms OLLAMA_LIBRARY_PATH="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\vulkan]" extra_envs=map[]
time=2025-11-14T03:42:21.290-05:00 level=TRACE source=runner.go:421 msg="starting runner for device discovery" libDirs="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v12]" extraEnvs=map[]
time=2025-11-14T03:42:21.292-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\ai\\ollama-windows-amd64\\ollama.exe runner --ollama-engine --port 59419"
time=2025-11-14T03:42:21.292-05:00 level=DEBUG source=server.go:393 msg=subprocess GGML_VULKAN_DEBUG=1 OLLAMA_DEBUG=2 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=10m OLLAMA_NUM_GPU=999 OLLAMA_NUM_PARALLEL=2 OLLAMA_VULKAN=1 PATH="C:\\ai\\ollama-windows-amd64\\lib\\ollama;C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v12;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\WinServer\\AppData\\Local\\Microsoft\\WindowsApps;;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Ollama;C:\\Users\\WinServer\\.lmstudio\\bin;C:\\Users\\WinServer\\go\\bin;C:\\msys64\\mingw64\\bin" OLLAMA_LIBRARY_PATH=C:\ai\ollama-windows-amd64\lib\ollama;C:\ai\ollama-windows-amd64\lib\ollama\cuda_v12
time=2025-11-14T03:42:21.322-05:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-11-14T03:42:21.323-05:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:59419"
time=2025-11-14T03:42:21.325-05:00 level=DEBUG source=gguf.go:590 msg=general.architecture type=string
time=2025-11-14T03:42:21.325-05:00 level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string
time=2025-11-14T03:42:21.326-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-14T03:42:21.326-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-14T03:42:21.326-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-11-14T03:42:21.326-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
time=2025-11-14T03:42:21.326-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-11-14T03:42:21.326-05:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-11-14T03:42:21.326-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\ai\ollama-windows-amd64\lib\ollama
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-alderlake.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-haswell.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-icelake.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-sandybridge.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-skylakex.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-sse42.dll: The specified procedure could not be found.
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\ggml-cpu-x64.dll: The specified procedure could not be found.
time=2025-11-14T03:42:21.334-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\ai\ollama-windows-amd64\lib\ollama\cuda_v12
dl_load_library unable to load library C:\ai\ollama-windows-amd64\lib\ollama\cuda_v12\ggml-cuda.dll: The specified module could not be found.
time=2025-11-14T03:42:21.350-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(clang)
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0 time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}" time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default="" time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0 time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0 time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0 time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0 time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0 time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0 time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0 time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000 time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1 time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=24.7924ms time=2025-11-14T03:42:21.350-05:00 level=DEBUG source=runner.go:1378 
msg="gathering device infos took" duration=0s time=2025-11-14T03:42:21.351-05:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v12]" devices=[] time=2025-11-14T03:42:21.351-05:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=60.774ms OLLAMA_LIBRARY_PATH="[C:\\ai\\ollama-windows-amd64\\lib\\ollama C:\\ai\\ollama-windows-amd64\\lib\\ollama\\cuda_v12]" extra_envs=map[] time=2025-11-14T03:42:21.351-05:00 level=DEBUG source=runner.go:116 msg="evluating which if any devices to filter out" initial_count=0 time=2025-11-14T03:42:21.351-05:00 level=TRACE source=runner.go:156 msg="supported GPU library combinations before filtering" supported=map[] time=2025-11-14T03:42:21.351-05:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=180.9357ms time=2025-11-14T03:42:21.351-05:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="63.9 GiB" available="28.1 GiB" time=2025-11-14T03:42:21.351-05:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB" ``` vulkaninfo --summary: ``` PS C:\Windows\system32> vulkaninfo.exe --summary WARNING: [Loader Message] Code 0 : Layer VK_LAYER_AMD_switchable_graphics uses API version 1.3 which is older than the application specified API version of 1.4. May cause issues. 
========== VULKANINFO ========== Vulkan Instance Version: 1.4.328 Instance Extensions: count = 13 ------------------------------- VK_EXT_debug_report : extension revision 10 VK_EXT_debug_utils : extension revision 2 VK_EXT_swapchain_colorspace : extension revision 4 VK_KHR_device_group_creation : extension revision 1 VK_KHR_external_fence_capabilities : extension revision 1 VK_KHR_external_memory_capabilities : extension revision 1 VK_KHR_external_semaphore_capabilities : extension revision 1 VK_KHR_get_physical_device_properties2 : extension revision 2 VK_KHR_get_surface_capabilities2 : extension revision 1 VK_KHR_portability_enumeration : extension revision 1 VK_KHR_surface : extension revision 25 VK_KHR_win32_surface : extension revision 6 VK_LUNARG_direct_driver_loading : extension revision 1 Instance Layers: count = 10 --------------------------- VK_LAYER_AMD_switchable_graphics AMD switchable graphics layer 1.3.260 version 1 VK_LAYER_KHRONOS_profiles Khronos Profiles layer 1.4.328 version 1 VK_LAYER_KHRONOS_shader_object Khronos Shader object layer 1.4.328 version 1 VK_LAYER_KHRONOS_synchronization2 Khronos Synchronization2 layer 1.4.328 version 1 VK_LAYER_KHRONOS_validation Khronos Validation Layer 1.4.328 version 1 VK_LAYER_LUNARG_api_dump LunarG API dump layer 1.4.328 version 2 VK_LAYER_LUNARG_crash_diagnostic Crash Diagnostic Layer is a crash/hang debugging tool that helps determines GPU progress in a Vulkan application. 
1.4.328 version 1 VK_LAYER_LUNARG_gfxreconstruct GFXReconstruct Capture Layer Version 1.0.5 1.4.328 version 4194309 VK_LAYER_LUNARG_monitor Execution Monitoring Layer 1.4.328 version 1 VK_LAYER_LUNARG_screenshot LunarG image capture layer 1.4.328 version 1 Devices: ======== GPU0: apiVersion = 1.3.260 driverVersion = 2.0.279 vendorID = 0x1002 deviceID = 0x66a0 deviceType = PHYSICAL_DEVICE_TYPE_DISCRETE_GPU deviceName = AMD Radeon Pro VII driverID = DRIVER_ID_AMD_PROPRIETARY driverName = AMD proprietary driver driverInfo = (AMD proprietary shader compiler) conformanceVersion = 1.3.3.1 deviceUUID = 00000000-0c00-0000-0000-000000000000 driverUUID = 414d442d-5749-4e2d-4452-560000000000 ``` ``` PS C:\Windows\system32> vulkaninfo | Select-String "storageBuffer16BitAccess" WARNING: [Loader Message] Code 0 : Layer VK_LAYER_AMD_switchable_graphics uses API version 1.3 which is older than the application specified API version of 1.4. May cause issues. storageBuffer16BitAccess = true uniformAndStorageBuffer16BitAccess = true ```
@dhiltgen commented on GitHub (Nov 14, 2025):

@avesed are you by any chance running Windows Enterprise 25H2? If you build from source from main and set OLLAMA_DEBUG="2", do you see the same `The specified procedure could not be found` problem loading the libraries?

<!-- gh-comment-id:3533547416 -->
@avesed commented on GitHub (Nov 14, 2025):

> @avesed are you by any chance running Windows Enterprise 25H2? If you build from source from main and set OLLAMA_DEBUG="2", do you see the same `The specified procedure could not be found` problem loading the libraries?

It's running Windows 11 Enterprise LTSC 24H2, and yes, building from source produces the same error; full log below:

PS C:\ai\ollama-0.12.11> $env:OLLAMA_VULKAN="1"
PS C:\ai\ollama-0.12.11> $env:OLLAMA_DEBUG="2"
PS C:\ai\ollama-0.12.11> go run main.go serve
# github.com/ollama/ollama/llama/llama.cpp/common
common.cpp: In function 'bool fs_create_directory_with_parents(const std::string&)':
common.cpp:784:10: warning: 'template<class _Codecvt, class _Elem, class _Wide_alloc, class _Byte_alloc> class std::__cxx11::wstring_convert' is deprecated [-Wdeprecated-declarations]
  784 |     std::wstring_convert<std::codecvt_utf8<wchar_t>> converter;
      |          ^~~~~~~~~~~~~~~
In file included from C:/msys64/mingw64/include/c++/15.2.0/locale:47,
                 from C:/msys64/mingw64/include/c++/15.2.0/bits/fs_path.h:36,
                 from C:/msys64/mingw64/include/c++/15.2.0/filesystem:54,
                 from common.cpp:21:
C:/msys64/mingw64/include/c++/15.2.0/bits/locale_conv.h:262:33: note: declared here
  262 |     class _GLIBCXX17_DEPRECATED wstring_convert
      |                                 ^~~~~~~~~~~~~~~
time=2025-11-14T15:02:54.090-05:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\WinServer\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES:]"
time=2025-11-14T15:02:54.092-05:00 level=INFO source=images.go:522 msg="total blobs: 6"
time=2025-11-14T15:02:54.092-05:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func3 (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func4 (5 handlers)
[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/me                   --> github.com/ollama/ollama/server.(*Server).WhoamiHandler-fm (5 handlers)
[GIN-debug] POST   /api/signout              --> github.com/ollama/ollama/server.(*Server).SignoutHandler-fm (5 handlers)
[GIN-debug] DELETE /api/user/keys/:encodedKey --> github.com/ollama/ollama/server.(*Server).SignoutHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
time=2025-11-14T15:02:54.094-05:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2025-11-14T15:02:54.094-05:00 level=DEBUG source=sched.go:120 msg="starting llm scheduler"
time=2025-11-14T15:02:54.095-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-14T15:02:54.095-05:00 level=TRACE source=runner.go:421 msg="starting runner for device discovery" libDirs=[C:\ai\ollama-0.12.11\build\lib\ollama] extraEnvs=map[]
time=2025-11-14T15:02:54.103-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\WinServer\\AppData\\Local\\go-build\\d1\\d1833ae32b50f87724f674748e3400df7e7e770570d22d6e88587168c6b786e5-d\\main.exe runner --ollama-engine --port 62908"
time=2025-11-14T15:02:54.103-05:00 level=DEBUG source=server.go:393 msg=subprocess OLLAMA_DEBUG=2 OLLAMA_VULKAN=1 PATH="C:\\ai\\ollama-0.12.11\\build\\lib\\ollama;C:\\Program Files\\Go\\bin;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\VulkanSDK\\1.4.328.1\\Bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\CMake\\bin;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files\\Go\\bin;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\WinServer\\AppData\\Local\\Microsoft\\WindowsApps;;C:\\Users\\WinServer\\AppData\\Local\\Programs\\Ollama;C:\\Users\\WinServer\\.lmstudio\\bin;C:\\Users\\WinServer\\go\\bin;C:\\msys64\\mingw64\\bin" OLLAMA_LIBRARY_PATH=C:\ai\ollama-0.12.11\build\lib\ollama
time=2025-11-14T15:02:54.136-05:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-11-14T15:02:54.137-05:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:62908"
time=2025-11-14T15:02:54.138-05:00 level=DEBUG source=gguf.go:590 msg=general.architecture type=string
time=2025-11-14T15:02:54.138-05:00 level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string
time=2025-11-14T15:02:54.138-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-14T15:02:54.139-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-14T15:02:54.139-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-11-14T15:02:54.139-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
time=2025-11-14T15:02:54.139-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-11-14T15:02:54.139-05:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-11-14T15:02:54.139-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\ai\ollama-0.12.11\build\lib\ollama
dl_load_library unable to load library C:\ai\ollama-0.12.11\build\lib\ollama\ggml-vulkan.dll: The specified module could not be found.

dl_load_library unable to load library C:\ai\ollama-0.12.11\build\lib\ollama\ggml-cpu-alderlake.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-0.12.11\build\lib\ollama\ggml-cpu-haswell.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-0.12.11\build\lib\ollama\ggml-cpu-icelake.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-0.12.11\build\lib\ollama\ggml-cpu-sandybridge.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-0.12.11\build\lib\ollama\ggml-cpu-skylakex.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-0.12.11\build\lib\ollama\ggml-cpu-sse42.dll: The specified procedure could not be found.

dl_load_library unable to load library C:\ai\ollama-0.12.11\build\lib\ollama\ggml-cpu-x64.dll: The specified procedure could not be found.

time=2025-11-14T15:02:54.153-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-11-14T15:02:54.153-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=16.8527ms
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=0s
time=2025-11-14T15:02:54.154-05:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH=[C:\ai\ollama-0.12.11\build\lib\ollama] devices=[]
time=2025-11-14T15:02:54.155-05:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=60.1692ms OLLAMA_LIBRARY_PATH=[C:\ai\ollama-0.12.11\build\lib\ollama] extra_envs=map[]
time=2025-11-14T15:02:54.155-05:00 level=DEBUG source=runner.go:116 msg="evluating which if any devices to filter out" initial_count=0
time=2025-11-14T15:02:54.155-05:00 level=TRACE source=runner.go:156 msg="supported GPU library combinations before filtering" supported=map[]
time=2025-11-14T15:02:54.155-05:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=60.695ms
time=2025-11-14T15:02:54.155-05:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="63.9 GiB" available="45.2 GiB"
time=2025-11-14T15:02:54.155-05:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
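Worth noting: the log actually shows two distinct Win32 loader failures. `The specified module could not be found` is `ERROR_MOD_NOT_FOUND` (126), meaning the DLL itself or one of its dependency DLLs was not found on the search path (the `ggml-vulkan.dll` / `ggml-cuda.dll` case). `The specified procedure could not be found` is `ERROR_PROC_NOT_FOUND` (127), meaning the DLL was located but a required exported symbol could not be resolved, which typically points at a mismatched dependency DLL shadowing the intended one somewhere on `PATH` (all the `ggml-cpu-*.dll` cases). A minimal sketch of that mapping (the `classify_loader_error` helper is hypothetical, for illustration only):

```python
# Win32 loader error codes behind the two messages in the log above.
# The numeric codes come from the Windows SDK (winerror.h).
WIN32_LOADER_ERRORS = {
    126: ("ERROR_MOD_NOT_FOUND",
          "the DLL or one of its dependencies was not found on the search path"),
    127: ("ERROR_PROC_NOT_FOUND",
          "the DLL was found, but a required exported symbol is missing"),
}

def classify_loader_error(winerror: int) -> str:
    """Map a Win32 loader error code to a short diagnosis (hypothetical helper)."""
    name, meaning = WIN32_LOADER_ERRORS.get(
        winerror, ("UNKNOWN", "not a loader error this sketch covers"))
    return f"{name}: {meaning}"

# On Windows, ctypes surfaces these codes when a DLL fails to load:
#   try:
#       ctypes.WinDLL(r"C:\ai\ollama-0.12.11\build\lib\ollama\ggml-cpu-x64.dll")
#   except OSError as e:
#       print(classify_loader_error(e.winerror))

print(classify_loader_error(127))
```

For the 127 case, inspecting the DLL's import table (e.g. `dumpbin /imports` from an MSVC prompt) can show which dependency and symbol fail to resolve, which would help confirm whether something on `PATH` (such as a different toolchain's runtime DLLs) is shadowing the ones the build expects.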
<!-- gh-comment-id:3534390499 -->
time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0 time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}" time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default="" time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0 time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0 time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0 time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0 time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0 time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0 time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0 time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000 time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1 time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=16.8527ms time=2025-11-14T15:02:54.154-05:00 level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=0s time=2025-11-14T15:02:54.154-05:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" 
OLLAMA_LIBRARY_PATH=[C:\ai\ollama-0.12.11\build\lib\ollama] devices=[] time=2025-11-14T15:02:54.155-05:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=60.1692ms OLLAMA_LIBRARY_PATH=[C:\ai\ollama-0.12.11\build\lib\ollama] extra_envs=map[] time=2025-11-14T15:02:54.155-05:00 level=DEBUG source=runner.go:116 msg="evluating which if any devices to filter out" initial_count=0 time=2025-11-14T15:02:54.155-05:00 level=TRACE source=runner.go:156 msg="supported GPU library combinations before filtering" supported=map[] time=2025-11-14T15:02:54.155-05:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=60.695ms time=2025-11-14T15:02:54.155-05:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="63.9 GiB" available="45.2 GiB" time=2025-11-14T15:02:54.155-05:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB" ```

@dhiltgen commented on GitHub (Nov 14, 2025):

Thanks for confirming. We'll try to repro and figure out what's causing this.


@avesed commented on GitHub (Nov 14, 2025):

> Thanks for confirming. We'll try to repro and figure out what's causing this.

Ok, thank you.


@dhiltgen commented on GitHub (Nov 14, 2025):

Hmm... I just spun up a Windows 11 Enterprise 25H2 VM on Azure, and it is able to load the libraries without error. @avesed could there be some other piece of software intervening, like AV software on your system? If you have 3rd party AV software, could you try disabling that temporarily and see if the libraries load correctly? On my test VM, I see MsMpEng.exe chewing up a ton of CPU cycles as we're doing our initial bootstrapping, but it appears to be letting the libraries load. All the DLLs look correctly signed by our signing key.

If you are in a corporate environment and can't disable the AV software, can you check with your IT to see if there are logs showing it is blocking the libraries from loading?


@avesed commented on GitHub (Nov 16, 2025):

@dhiltgen Thank you for all the help. I fixed it. This system has an Intel Arc B580 and was previously running an Intel build of Ollama, which had left ggml_base.dll directly in the System32 folder; that stray copy is why the current Ollama version couldn't load the other libraries.

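For anyone hitting the same symptom: the failure mode above is classic DLL shadowing. When a dependent library like ggml_base.dll is resolved by name, Windows can bind to a stale copy found earlier in the search order (e.g. in System32) instead of the one shipped next to Ollama, and the resulting export mismatch surfaces as "The specified procedure could not be found." A minimal, hypothetical diagnostic sketch (the function name and directory lists are illustrative, not part of Ollama):

```python
import os


def find_shadow_copies(dll_name, expected_dir, search_dirs):
    """Return paths where `dll_name` also exists outside `expected_dir`.

    `search_dirs` stands in for the Windows DLL search order (application
    directory, System32, then each PATH entry); any hit outside the
    directory that ships the intended library can shadow it.
    """
    expected = os.path.normcase(os.path.abspath(expected_dir))
    hits = []
    for d in search_dirs:
        candidate = os.path.join(d, dll_name)
        if (os.path.isfile(candidate)
                and os.path.normcase(os.path.abspath(d)) != expected):
            hits.append(candidate)
    return hits
```

On a setup like the reporter's, passing `C:\Windows\System32` among `search_dirs` would have flagged the stray ggml_base.dll left behind by the Intel build.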

@dhiltgen commented on GitHub (Nov 17, 2025):

That's great to hear! We try to control the PATH to ensure the expected version is found first, but perhaps Windows Enterprise has some subtle difference in how LoadLibrary searches for dependencies.

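A note on why controlling PATH alone may not be enough: in the default (safe) search order, LoadLibrary consults the application directory and then System32 before it ever reads PATH, and by-name resolution is first-match-wins. The toy helper below is illustrative only (not Ollama code); the directory lists in the test stand in for the real search order:

```python
import os


def first_match(dll_name, search_order):
    """By-name DLL resolution, first-match-wins: return the first path in
    `search_order` containing `dll_name`, or None. This mirrors, in
    simplified form, how LoadLibrary binds dependent DLLs by name."""
    for d in search_order:
        candidate = os.path.join(d, dll_name)
        if os.path.isfile(candidate):
            return candidate
    return None
```

With an order roughly like `[app_dir, System32, ..., *PATH]`, a stray ggml_base.dll in System32 beats one that is reachable only through PATH; only a copy in the application directory itself, or one loaded via an explicit absolute path, reliably takes precedence.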
Reference: github-starred/ollama#34164