[GH-ISSUE #13739] v0.14.1 fails to start on Windows #34765

Closed
opened 2026-04-22 18:35:48 -05:00 by GiteaMirror · 2 comments

Originally created by @kyleweishaupt on GitHub (Jan 16, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13739

What is the issue?

I'm unable to get Ollama for Windows to start. It appears to hang, even though some endpoints like /api/tags work.

When I try to run ollama list I get: Error: Head "http://0.0.0.0:11434/": read tcp 127.0.0.1:63869->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.
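
The Head URL in that error comes from OLLAMA_HOST (set here to http://0.0.0.0:11434), and the reset can be reproduced outside the CLI with a bare HEAD request against the same address. A minimal Go sketch (not the ollama client itself; the 127.0.0.1 fallback is an assumption for when the variable is unset):

```go
// probe.go: a minimal sketch that mirrors the HEAD request `ollama list`
// issues, so the connection reset can be reproduced in isolation.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	host := os.Getenv("OLLAMA_HOST") // e.g. http://0.0.0.0:11434 in this report
	if host == "" {
		host = "http://127.0.0.1:11434" // assumed default for this sketch
	}
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Head(host + "/")
	if err != nil {
		// On Windows a reset surfaces as: wsarecv: An existing connection
		// was forcibly closed by the remote host.
		fmt.Println("HEAD failed:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("server answered:", resp.Status)
}
```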

I have all the latest updates installed on Windows 11. Nvidia RTX 3060 Ti GPU with the latest 591.74 drivers, and an Intel i9-13900K CPU.

Relevant log output

time=2026-01-15T19:28:53.031-05:00 level=INFO source=app_windows.go:270 msg="starting Ollama" app=C:\Users\kylew\AppData\Local\Programs\Ollama version=0.14.1 OS=Windows/10.0.26200
time=2026-01-15T19:28:53.032-05:00 level=INFO source=app.go:237 msg="initialized tools registry" tool_count=0
time=2026-01-15T19:28:53.043-05:00 level=INFO source=app.go:252 msg="starting ollama server"
time=2026-01-15T19:28:53.043-05:00 level=INFO source=app.go:277 msg="starting ui server" port=62999
time=2026-01-15T19:28:53.044-05:00 level=DEBUG source=app.go:279 msg="starting ui server on port" port=62999
time=2026-01-15T19:28:53.050-05:00 level=DEBUG source=app.go:316 msg="no URL scheme request to handle"
time=2026-01-15T19:28:53.050-05:00 level=DEBUG source=app.go:320 msg="waiting for ollama server to be ready"
time=2026-01-15T19:28:53.216-05:00 level=DEBUG source=webview.go:446 msg="starting webview event loop"
time=2026-01-15T19:28:53.223-05:00 level=DEBUG source=eventloop.go:22 msg="starting event handling loop"
time=2026-01-15T19:28:53.335-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1768523333335148900 version=0.14.1
time=2026-01-15T19:28:53.336-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=0s request_id=1768523333336336800 version=0.14.1
time=2026-01-15T19:28:53.838-05:00 level=ERROR source=ui.go:1468 msg="failed to get inference compute" error="timeout scanning server log for inference compute details"
time=2026-01-15T19:28:53.838-05:00 level=ERROR source=ui.go:236 msg=site.serveHTTP error="failed to get inference compute: timeout scanning server log for inference compute details" http.method=GET http.path=/api/v1/inference-compute http.pattern="GET /api/v1/inference-compute" http.status=500 http.d=502.9255ms request_id=1768523333335148900 version=0.14.1
time=2026-01-15T19:28:53.839-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=0s request_id=1768523333839226400 version=0.14.1
time=2026-01-15T19:28:54.869-05:00 level=INFO source=server.go:344 msg=Matched "inference compute"="{Library:CUDA Variant: Compute:8.6 Driver:13.1 Name:CUDA0 VRAM:8.0 GiB}"
time=2026-01-15T19:28:54.869-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/inference-compute http.pattern="GET /api/v1/inference-compute" http.status=200 http.d=12.84ms request_id=1768523334856327700 version=0.14.1
time=2026-01-15T19:28:56.044-05:00 level=INFO source=updater.go:254 msg="beginning update checker" interval=1h0m0s
time=2026-01-15T19:28:56.045-05:00 level=DEBUG source=updater.go:100 msg="checking for available update" requestURL="https://ollama.com/api/update?arch=amd64&nonce=vAVM3rq24CUUNKdBtrRFaw&os=windows&ts=1768523336&version=0.14.1" User-Agent="ollama/0.14.1 amd64 Go/go1.24.1 Windows/10.0.26200"
time=2026-01-15T19:28:56.266-05:00 level=DEBUG source=updater.go:109 msg="check update response 204 (current version is up to date)"
time=2026-01-15T19:29:14.112-05:00 level=WARN source=app.go:322 msg="ollama server not ready, continuing anyway" error="timeout waiting for Ollama server to be ready"
time=2026-01-15T19:29:14.348-05:00 level=WARN source=ui.go:136 msg="ollama server not ready, retrying" attempt=2
time=2026-01-15T19:29:20.550-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1768523360550840600 version=0.14.1
time=2026-01-15T19:29:22.317-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=0s request_id=1768523362317712800 version=0.14.1
time=2026-01-15T19:29:22.318-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1768523362318383600 version=0.14.1
time=2026-01-15T19:29:36.392-05:00 level=ERROR source=ui.go:148 msg="ollama server not ready after retries" error="timeout waiting for Ollama server to be ready"
time=2026-01-15T19:29:38.876-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1768523378876417000 version=0.14.1
time=2026-01-15T19:29:57.449-05:00 level=WARN source=ui.go:136 msg="ollama server not ready, retrying" attempt=2
time=2026-01-15T19:30:19.488-05:00 level=ERROR source=ui.go:148 msg="ollama server not ready after retries" error="timeout waiting for Ollama server to be ready"
time=2026-01-15T19:30:40.541-05:00 level=WARN source=ui.go:136 msg="ollama server not ready, retrying" attempt=2
time=2026-01-15T19:31:02.580-05:00 level=ERROR source=ui.go:148 msg="ollama server not ready after retries" error="timeout waiting for Ollama server to be ready"
time=2026-01-15T19:31:23.612-05:00 level=WARN source=ui.go:136 msg="ollama server not ready, retrying" attempt=2
time=2026-01-15T19:31:45.662-05:00 level=ERROR source=ui.go:148 msg="ollama server not ready after retries" error="timeout waiting for Ollama server to be ready"
time=2026-01-15T19:32:06.722-05:00 level=WARN source=ui.go:136 msg="ollama server not ready, retrying" attempt=2
time=2026-01-15T19:32:28.769-05:00 level=ERROR source=ui.go:148 msg="ollama server not ready after retries" error="timeout waiting for Ollama server to be ready"
time=2026-01-15T19:32:43.067-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=0s request_id=1768523563067849700 version=0.14.1
time=2026-01-15T19:32:43.081-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=13.3685ms request_id=1768523563067849700 version=0.14.1
time=2026-01-15T19:32:49.820-05:00 level=WARN source=ui.go:136 msg="ollama server not ready, retrying" attempt=2
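
For context on the failure pattern: the alternating "ollama server not ready, retrying" and "ollama server not ready after retries" entries point to a poll-until-ready loop with a fixed deadline. A rough sketch of that pattern (illustrative only; the function name, endpoint, and timings are assumptions, not Ollama's actual code):

```go
package main

import (
	"errors"
	"log/slog"
	"net/http"
	"time"
)

// waitReady polls the server until it answers or the deadline passes.
// Illustrative only: Ollama's real readiness check is internal.
func waitReady(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		resp, err := http.Get(baseURL + "/")
		if err == nil {
			resp.Body.Close()
			return nil // server is up
		}
		slog.Warn("ollama server not ready, retrying", "attempt", attempt)
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timeout waiting for Ollama server to be ready")
}

func main() {
	if err := waitReady("http://127.0.0.1:11434", 20*time.Second); err != nil {
		slog.Error("ollama server not ready after retries", "error", err)
	}
}
```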

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.14.1

GiteaMirror added the bug and windows labels 2026-04-22 18:35:48 -05:00

@kyleweishaupt commented on GitHub (Jan 16, 2026):

Here is my app.log with OLLAMA_DEBUG=2:

time=2026-01-15T19:59:35.031-05:00 level=INFO source=app_windows.go:270 msg="starting Ollama" app=C:\Users\kylew\AppData\Local\Programs\Ollama version=0.14.1 OS=Windows/10.0.26200
time=2026-01-15T19:59:35.032-05:00 level=INFO source=app.go:237 msg="initialized tools registry" tool_count=0
time=2026-01-15T19:59:35.043-05:00 level=INFO source=app.go:252 msg="starting ollama server"
time=2026-01-15T19:59:35.043-05:00 level=INFO source=app.go:277 msg="starting ui server" port=64727
time=2026-01-15T19:59:35.392-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1768525175392174900 version=0.14.1
time=2026-01-15T19:59:35.393-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=0s request_id=1768525175393167600 version=0.14.1
time=2026-01-15T19:59:35.503-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=0s request_id=1768525175503469400 version=0.14.1
time=2026-01-15T19:59:35.893-05:00 level=ERROR source=ui.go:1468 msg="failed to get inference compute" error="timeout scanning server log for inference compute details"
time=2026-01-15T19:59:35.893-05:00 level=ERROR source=ui.go:236 msg=site.serveHTTP error="failed to get inference compute: timeout scanning server log for inference compute details" http.method=GET http.path=/api/v1/inference-compute http.pattern="GET /api/v1/inference-compute" http.status=500 http.d=501.3099ms request_id=1768525175392174900 version=0.14.1
time=2026-01-15T19:59:37.410-05:00 level=ERROR source=ui.go:1468 msg="failed to get inference compute" error="timeout scanning server log for inference compute details"
time=2026-01-15T19:59:37.410-05:00 level=ERROR source=ui.go:236 msg=site.serveHTTP error="failed to get inference compute: timeout scanning server log for inference compute details" http.method=GET http.path=/api/v1/inference-compute http.pattern="GET /api/v1/inference-compute" http.status=500 http.d=509.8674ms request_id=1768525176900767000 version=0.14.1
time=2026-01-15T19:59:38.044-05:00 level=INFO source=updater.go:254 msg="beginning update checker" interval=1h0m0s
time=2026-01-15T19:59:39.958-05:00 level=ERROR source=ui.go:1468 msg="failed to get inference compute" error="timeout scanning server log for inference compute details"
time=2026-01-15T19:59:39.958-05:00 level=ERROR source=ui.go:236 msg=site.serveHTTP error="failed to get inference compute: timeout scanning server log for inference compute details" http.method=GET http.path=/api/v1/inference-compute http.pattern="GET /api/v1/inference-compute" http.status=500 http.d=542.7972ms request_id=1768525179416002300 version=0.14.1
time=2026-01-15T19:59:43.963-05:00 level=INFO source=server.go:344 msg=Matched "inference compute"="{Library:CUDA Variant: Compute:8.6 Driver:13.1 Name:CUDA0 VRAM:8.0 GiB}"
time=2026-01-15T19:59:43.963-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/inference-compute http.pattern="GET /api/v1/inference-compute" http.status=200 http.d=1.0209ms request_id=1768525183962881500 version=0.14.1
time=2026-01-15T19:59:48.949-05:00 level=INFO source=app_windows.go:270 msg="starting Ollama" app=C:\Users\kylew\AppData\Local\Programs\Ollama version=0.14.1 OS=Windows/10.0.26200
time=2026-01-15T19:59:48.951-05:00 level=INFO source=eventloop.go:328 msg="sent focus request to existing instance"
time=2026-01-15T19:59:48.951-05:00 level=INFO source=app_windows.go:79 msg="existing instance found, exiting"
time=2026-01-15T19:59:56.100-05:00 level=WARN source=app.go:322 msg="ollama server not ready, continuing anyway" error="timeout waiting for Ollama server to be ready"
time=2026-01-15T19:59:56.413-05:00 level=WARN source=ui.go:136 msg="ollama server not ready, retrying" attempt=2
time=2026-01-15T20:00:05.802-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1768525205802786600 version=0.14.1
time=2026-01-15T20:00:05.803-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=0s request_id=1768525205803289100 version=0.14.1
time=2026-01-15T20:00:06.977-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=373.3µs request_id=1768525206977038600 version=0.14.1
time=2026-01-15T20:00:09.352-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=539.6µs request_id=1768525209351680400 version=0.14.1
time=2026-01-15T20:00:09.352-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=515.5µs request_id=1768525209352220000 version=0.14.1
time=2026-01-15T20:00:18.457-05:00 level=ERROR source=ui.go:148 msg="ollama server not ready after retries" error="timeout waiting for Ollama server to be ready"
time=2026-01-15T20:00:32.255-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1768525232255477100 version=0.14.1
time=2026-01-15T20:00:39.503-05:00 level=WARN source=ui.go:136 msg="ollama server not ready, retrying" attempt=2
time=2026-01-15T20:00:53.868-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1768525253868462700 version=0.14.1
time=2026-01-15T20:00:55.572-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=0s request_id=1768525255572808700 version=0.14.1
time=2026-01-15T20:00:55.583-05:00 level=INFO source=ui.go:236 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=11.1907ms request_id=1768525255572808700 version=0.14.1
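
The recurring "timeout scanning server log for inference compute details" errors in the app.log above suggest the UI tails server.log for the "inference compute" line under a short deadline (it gives up after ~500ms and succeeds once the line appears). A minimal sketch of that tail-with-deadline pattern, under the assumption that the real scanner works similarly (names and paths here are hypothetical):

```go
package main

import (
	"bufio"
	"context"
	"errors"
	"fmt"
	"io"
	"os"
	"strings"
	"time"
)

// scanForCompute tails logPath until a line containing marker appears or
// ctx expires. A sketch: Ollama's actual scanner and paths are internal.
func scanForCompute(ctx context.Context, logPath, marker string) (string, error) {
	f, err := os.Open(logPath)
	if err != nil {
		return "", err
	}
	defer f.Close()
	r := bufio.NewReader(f)
	for {
		line, err := r.ReadString('\n')
		if strings.Contains(line, marker) {
			return strings.TrimSpace(line), nil
		}
		if errors.Is(err, io.EOF) {
			select {
			case <-ctx.Done():
				return "", errors.New("timeout scanning server log for inference compute details")
			case <-time.After(100 * time.Millisecond):
				// wait for the server to write more output, then retry
			}
			continue
		}
		if err != nil {
			return "", err
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
	defer cancel()
	line, err := scanForCompute(ctx, "server.log", "inference compute")
	if err != nil {
		fmt.Println("failed to get inference compute:", err)
		return
	}
	fmt.Println("matched:", line)
}
```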

Here are the contents of my server.log:

time=2026-01-15T19:59:36.131-05:00 level=INFO source=routes.go:1614 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\kylew\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2026-01-15T19:59:36.132-05:00 level=INFO source=images.go:499 msg="total blobs: 0"
time=2026-01-15T19:59:36.132-05:00 level=INFO source=images.go:506 msg="total unused blobs removed: 0"
time=2026-01-15T19:59:36.133-05:00 level=INFO source=routes.go:1667 msg="Listening on [::]:11434 (version 0.14.1)"
time=2026-01-15T19:59:36.133-05:00 level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2026-01-15T19:59:36.134-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-01-15T19:59:36.147-05:00 level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12]" extraEnvs=map[]
time=2026-01-15T19:59:36.152-05:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 65131"
time=2026-01-15T19:59:36.152-05:00 level=DEBUG source=server.go:430 msg=subprocess OLLAMA_HOST=0.0.0.0 PATH="C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Python314\\Scripts\\;C:\\Python314\\;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files\\PuTTY\\;C:\\Users\\kylew\\AppData\\Roaming\\npm;C:\\Users\\kylew\\AppData\\Local\\PowerToys\\DSCModules\\;C:\\Users\\kylew\\.local\\bin;;C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama" OLLAMA_DEBUG=2 OLLAMA_CONTEXT_LENGTH=4096 OLLAMA_MODELS=C:\Users\kylew\.ollama\models OLLAMA_LIBRARY_PATH=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
time=2026-01-15T19:59:36.182-05:00 level=INFO source=runner.go:1405 msg="starting ollama engine"
time=2026-01-15T19:59:36.184-05:00 level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:65131"
time=2026-01-15T19:59:36.190-05:00 level=DEBUG source=gguf.go:589 msg=general.architecture type=string
time=2026-01-15T19:59:36.191-05:00 level=DEBUG source=gguf.go:589 msg=tokenizer.ggml.model type=string
time=2026-01-15T19:59:36.191-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.alignment default=32
time=2026-01-15T19:59:36.196-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.alignment default=32
time=2026-01-15T19:59:36.196-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.file_type default=0
time=2026-01-15T19:59:36.197-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.name default=""
time=2026-01-15T19:59:36.197-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.description default=""
time=2026-01-15T19:59:36.197-05:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-01-15T19:59:36.197-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama
load_backend: loaded CPU backend from C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
time=2026-01-15T19:59:36.398-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060 Ti, compute capability 8.6, VMM: yes, ID: GPU-2e029dff-70bc-bc67-d675-b378a7fa494a
load_backend: loaded CUDA backend from C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2026-01-15T19:59:38.761-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-01-15T19:59:38.762-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.block_count default=0
time=2026-01-15T19:59:38.762-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.pooling_type default=0
time=2026-01-15T19:59:38.763-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.expert_count default=0
time=2026-01-15T19:59:38.763-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-01-15T19:59:38.763-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-01-15T19:59:38.763-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-01-15T19:59:38.763-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-01-15T19:59:38.763-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-01-15T19:59:38.763-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-01-15T19:59:38.763-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-01-15T19:59:38.763-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-01-15T19:59:38.763-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-01-15T19:59:38.763-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-01-15T19:59:38.764-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.block_count default=0
time=2026-01-15T19:59:38.764-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.embedding_length default=0
time=2026-01-15T19:59:38.764-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-01-15T19:59:38.764-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-01-15T19:59:38.764-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-01-15T19:59:38.764-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-01-15T19:59:38.764-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-01-15T19:59:38.764-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-01-15T19:59:38.764-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-01-15T19:59:38.764-05:00 level=DEBUG source=runner.go:1380 msg="dummy model load took" duration=2.5753385s
ggml_backend_cuda_device_get_memory device GPU-2e029dff-70bc-bc67-d675-b378a7fa494a utilizing NVML memory reporting free: 7727259648 total: 8589934592
time=2026-01-15T19:59:38.802-05:00 level=DEBUG source=runner.go:1385 msg="gathering device infos took" duration=37.6757ms
time=2026-01-15T19:59:38.803-05:00 level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12]" devices="[{DeviceID:{ID:GPU-2e029dff-70bc-bc67-d675-b378a7fa494a Library:CUDA} Name:CUDA0 Description:NVIDIA GeForce RTX 3060 Ti FilterID: Integrated:false PCIID:0000:01:00.0 TotalMemory:8589934592 FreeMemory:7727259648 ComputeMajor:8 ComputeMinor:6 DriverMajor:13 DriverMinor:1 LibraryPath:[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12]}]"
time=2026-01-15T19:59:38.803-05:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=2.6564672s OLLAMA_LIBRARY_PATH="[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12]" extra_envs=map[]
time=2026-01-15T19:59:38.803-05:00 level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13]" extraEnvs=map[]
time=2026-01-15T19:59:38.804-05:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 65137"
time=2026-01-15T19:59:38.804-05:00 level=DEBUG source=server.go:430 msg=subprocess OLLAMA_HOST=0.0.0.0 PATH="C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13;C:\\Python314\\Scripts\\;C:\\Python314\\;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files\\PuTTY\\;C:\\Users\\kylew\\AppData\\Roaming\\npm;C:\\Users\\kylew\\AppData\\Local\\PowerToys\\DSCModules\\;C:\\Users\\kylew\\.local\\bin;;C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama" OLLAMA_DEBUG=2 OLLAMA_CONTEXT_LENGTH=4096 OLLAMA_MODELS=C:\Users\kylew\.ollama\models OLLAMA_LIBRARY_PATH=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13
time=2026-01-15T19:59:38.829-05:00 level=INFO source=runner.go:1405 msg="starting ollama engine"
time=2026-01-15T19:59:38.830-05:00 level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:65137"
time=2026-01-15T19:59:38.837-05:00 level=DEBUG source=gguf.go:589 msg=general.architecture type=string
time=2026-01-15T19:59:38.837-05:00 level=DEBUG source=gguf.go:589 msg=tokenizer.ggml.model type=string
time=2026-01-15T19:59:38.837-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.alignment default=32
time=2026-01-15T19:59:38.838-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.alignment default=32
time=2026-01-15T19:59:38.838-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.file_type default=0
time=2026-01-15T19:59:38.838-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.name default=""
time=2026-01-15T19:59:38.838-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.description default=""
time=2026-01-15T19:59:38.838-05:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-01-15T19:59:38.838-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama
load_backend: loaded CPU backend from C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
time=2026-01-15T19:59:38.847-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060 Ti, compute capability 8.6, VMM: yes, ID: GPU-2e029dff-70bc-bc67-d675-b378a7fa494a
load_backend: loaded CUDA backend from C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2026-01-15T19:59:39.484-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-01-15T19:59:39.484-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.block_count default=0
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.pooling_type default=0
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.expert_count default=0
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.block_count default=0
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.embedding_length default=0
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-01-15T19:59:39.485-05:00 level=DEBUG source=runner.go:1380 msg="dummy model load took" duration=648.4647ms
ggml_backend_cuda_device_get_memory device GPU-2e029dff-70bc-bc67-d675-b378a7fa494a utilizing NVML memory reporting free: 7716462592 total: 8589934592
time=2026-01-15T19:59:39.512-05:00 level=DEBUG source=runner.go:1385 msg="gathering device infos took" duration=26.7054ms
time=2026-01-15T19:59:39.512-05:00 level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13]" devices="[{DeviceID:{ID:GPU-2e029dff-70bc-bc67-d675-b378a7fa494a Library:CUDA} Name:CUDA0 Description:NVIDIA GeForce RTX 3060 Ti FilterID: Integrated:false PCIID:0000:01:00.0 TotalMemory:8589934592 FreeMemory:7716462592 ComputeMajor:8 ComputeMinor:6 DriverMajor:13 DriverMinor:1 LibraryPath:[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13]}]"
time=2026-01-15T19:59:39.512-05:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=709.0754ms OLLAMA_LIBRARY_PATH="[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13]" extra_envs=map[]
time=2026-01-15T19:59:39.513-05:00 level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\rocm]" extraEnvs=map[]
time=2026-01-15T19:59:39.513-05:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 65146"
time=2026-01-15T19:59:39.513-05:00 level=DEBUG source=server.go:430 msg=subprocess OLLAMA_HOST=0.0.0.0 PATH="C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\rocm;C:\\Python314\\Scripts\\;C:\\Python314\\;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files\\PuTTY\\;C:\\Users\\kylew\\AppData\\Roaming\\npm;C:\\Users\\kylew\\AppData\\Local\\PowerToys\\DSCModules\\;C:\\Users\\kylew\\.local\\bin;;C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama" OLLAMA_DEBUG=2 OLLAMA_CONTEXT_LENGTH=4096 OLLAMA_MODELS=C:\Users\kylew\.ollama\models OLLAMA_LIBRARY_PATH=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\rocm
time=2026-01-15T19:59:39.540-05:00 level=INFO source=runner.go:1405 msg="starting ollama engine"
time=2026-01-15T19:59:39.541-05:00 level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:65146"
time=2026-01-15T19:59:39.547-05:00 level=DEBUG source=gguf.go:589 msg=general.architecture type=string
time=2026-01-15T19:59:39.547-05:00 level=DEBUG source=gguf.go:589 msg=tokenizer.ggml.model type=string
time=2026-01-15T19:59:39.548-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.alignment default=32
time=2026-01-15T19:59:39.548-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.alignment default=32
time=2026-01-15T19:59:39.548-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.file_type default=0
time=2026-01-15T19:59:39.548-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.name default=""
time=2026-01-15T19:59:39.548-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.description default=""
time=2026-01-15T19:59:39.548-05:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-01-15T19:59:39.548-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama
load_backend: loaded CPU backend from C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
time=2026-01-15T19:59:39.558-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\rocm
ggml_cuda_init: failed to initialize ROCm: no ROCm-capable device is detected
load_backend: loaded ROCm backend from C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\rocm\ggml-hip.dll
time=2026-01-15T19:59:39.753-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2026-01-15T19:59:39.753-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.block_count default=0
time=2026-01-15T19:59:39.753-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.pooling_type default=0
time=2026-01-15T19:59:39.753-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.expert_count default=0
time=2026-01-15T19:59:39.753-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.753-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.753-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.753-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.753-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-01-15T19:59:39.753-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-01-15T19:59:39.753-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-01-15T19:59:39.753-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-01-15T19:59:39.753-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.753-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-01-15T19:59:39.754-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.block_count default=0
time=2026-01-15T19:59:39.754-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.embedding_length default=0
time=2026-01-15T19:59:39.754-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-01-15T19:59:39.754-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-01-15T19:59:39.754-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-01-15T19:59:39.754-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-01-15T19:59:39.754-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-01-15T19:59:39.754-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-01-15T19:59:39.754-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-01-15T19:59:39.754-05:00 level=DEBUG source=runner.go:1380 msg="dummy model load took" duration=206.2151ms
time=2026-01-15T19:59:39.754-05:00 level=DEBUG source=runner.go:1385 msg="gathering device infos took" duration=0s
time=2026-01-15T19:59:39.754-05:00 level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\rocm]" devices=[]
time=2026-01-15T19:59:39.754-05:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=241.4085ms OLLAMA_LIBRARY_PATH="[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\rocm]" extra_envs=map[]
time=2026-01-15T19:59:39.754-05:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-01-15T19:59:39.754-05:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=2
time=2026-01-15T19:59:39.754-05:00 level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12 description="NVIDIA GeForce RTX 3060 Ti" compute=8.6 id=GPU-2e029dff-70bc-bc67-d675-b378a7fa494a pci_id=0000:01:00.0
time=2026-01-15T19:59:39.754-05:00 level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13 description="NVIDIA GeForce RTX 3060 Ti" compute=8.6 id=GPU-2e029dff-70bc-bc67-d675-b378a7fa494a pci_id=0000:01:00.0
time=2026-01-15T19:59:39.754-05:00 level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13]" extraEnvs="map[CUDA_VISIBLE_DEVICES:GPU-2e029dff-70bc-bc67-d675-b378a7fa494a GGML_CUDA_INIT:1]"
time=2026-01-15T19:59:39.754-05:00 level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12]" extraEnvs="map[CUDA_VISIBLE_DEVICES:GPU-2e029dff-70bc-bc67-d675-b378a7fa494a GGML_CUDA_INIT:1]"
time=2026-01-15T19:59:39.755-05:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 65152"
time=2026-01-15T19:59:39.755-05:00 level=DEBUG source=server.go:430 msg=subprocess OLLAMA_HOST=0.0.0.0 PATH="C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13;C:\\Python314\\Scripts\\;C:\\Python314\\;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files\\PuTTY\\;C:\\Users\\kylew\\AppData\\Roaming\\npm;C:\\Users\\kylew\\AppData\\Local\\PowerToys\\DSCModules\\;C:\\Users\\kylew\\.local\\bin;;C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama" OLLAMA_DEBUG=2 OLLAMA_CONTEXT_LENGTH=4096 OLLAMA_MODELS=C:\Users\kylew\.ollama\models OLLAMA_LIBRARY_PATH=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13 CUDA_VISIBLE_DEVICES=GPU-2e029dff-70bc-bc67-d675-b378a7fa494a GGML_CUDA_INIT=1
time=2026-01-15T19:59:39.755-05:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 65153"
time=2026-01-15T19:59:39.755-05:00 level=DEBUG source=server.go:430 msg=subprocess OLLAMA_HOST=0.0.0.0 PATH="C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Python314\\Scripts\\;C:\\Python314\\;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files\\PuTTY\\;C:\\Users\\kylew\\AppData\\Roaming\\npm;C:\\Users\\kylew\\AppData\\Local\\PowerToys\\DSCModules\\;C:\\Users\\kylew\\.local\\bin;;C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama" OLLAMA_DEBUG=2 OLLAMA_CONTEXT_LENGTH=4096 OLLAMA_MODELS=C:\Users\kylew\.ollama\models OLLAMA_LIBRARY_PATH=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama;C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12 GGML_CUDA_INIT=1 CUDA_VISIBLE_DEVICES=GPU-2e029dff-70bc-bc67-d675-b378a7fa494a
time=2026-01-15T19:59:39.785-05:00 level=INFO source=runner.go:1405 msg="starting ollama engine"
time=2026-01-15T19:59:39.785-05:00 level=INFO source=runner.go:1405 msg="starting ollama engine"
time=2026-01-15T19:59:39.786-05:00 level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:65153"
time=2026-01-15T19:59:39.786-05:00 level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:65152"
time=2026-01-15T19:59:39.789-05:00 level=DEBUG source=gguf.go:589 msg=general.architecture type=string
time=2026-01-15T19:59:39.789-05:00 level=DEBUG source=gguf.go:589 msg=general.architecture type=string
time=2026-01-15T19:59:39.789-05:00 level=DEBUG source=gguf.go:589 msg=tokenizer.ggml.model type=string
time=2026-01-15T19:59:39.789-05:00 level=DEBUG source=gguf.go:589 msg=tokenizer.ggml.model type=string
time=2026-01-15T19:59:39.789-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.alignment default=32
time=2026-01-15T19:59:39.789-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.alignment default=32
time=2026-01-15T19:59:39.790-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.alignment default=32
time=2026-01-15T19:59:39.790-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.file_type default=0
time=2026-01-15T19:59:39.790-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.name default=""
time=2026-01-15T19:59:39.790-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.description default=""
time=2026-01-15T19:59:39.790-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.alignment default=32
time=2026-01-15T19:59:39.790-05:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-01-15T19:59:39.790-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.file_type default=0
time=2026-01-15T19:59:39.790-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.name default=""
time=2026-01-15T19:59:39.790-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=general.description default=""
time=2026-01-15T19:59:39.790-05:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-01-15T19:59:39.790-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama
time=2026-01-15T19:59:39.790-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama
load_backend: loaded CPU backend from C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
time=2026-01-15T19:59:39.800-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
load_backend: loaded CPU backend from C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
time=2026-01-15T19:59:39.800-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060 Ti, compute capability 8.6, VMM: yes, ID: GPU-2e029dff-70bc-bc67-d675-b378a7fa494a
load_backend: loaded CUDA backend from C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2026-01-15T19:59:39.869-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-01-15T19:59:39.869-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.block_count default=0
time=2026-01-15T19:59:39.869-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.pooling_type default=0
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.expert_count default=0
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.block_count default=0
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.embedding_length default=0
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-01-15T19:59:39.870-05:00 level=DEBUG source=runner.go:1380 msg="dummy model load took" duration=80.7727ms
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060 Ti, compute capability 8.6, VMM: yes, ID: GPU-2e029dff-70bc-bc67-d675-b378a7fa494a
load_backend: loaded CUDA backend from C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2026-01-15T19:59:39.880-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.block_count default=0
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.pooling_type default=0
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.expert_count default=0
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.block_count default=0
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.embedding_length default=0
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=ggml.go:296 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-01-15T19:59:39.880-05:00 level=DEBUG source=runner.go:1380 msg="dummy model load took" duration=91.613ms
ggml_backend_cuda_device_get_memory device GPU-2e029dff-70bc-bc67-d675-b378a7fa494a utilizing NVML memory reporting free: 7703863296 total: 8589934592
time=2026-01-15T19:59:39.888-05:00 level=DEBUG source=runner.go:1385 msg="gathering device infos took" duration=18.0196ms
time=2026-01-15T19:59:39.888-05:00 level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13]" devices="[{DeviceID:{ID:GPU-2e029dff-70bc-bc67-d675-b378a7fa494a Library:CUDA} Name:CUDA0 Description:NVIDIA GeForce RTX 3060 Ti FilterID: Integrated:false PCIID:0000:01:00.0 TotalMemory:8589934592 FreeMemory:7703863296 ComputeMajor:8 ComputeMinor:6 DriverMajor:13 DriverMinor:1 LibraryPath:[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13]}]"
time=2026-01-15T19:59:39.888-05:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=133.9558ms OLLAMA_LIBRARY_PATH="[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13]" extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-2e029dff-70bc-bc67-d675-b378a7fa494a GGML_CUDA_INIT:1]"
ggml_backend_cuda_device_get_memory device GPU-2e029dff-70bc-bc67-d675-b378a7fa494a utilizing NVML memory reporting free: 7703863296 total: 8589934592
time=2026-01-15T19:59:39.903-05:00 level=DEBUG source=runner.go:1385 msg="gathering device infos took" duration=22.6888ms
time=2026-01-15T19:59:39.904-05:00 level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12]" devices="[{DeviceID:{ID:GPU-2e029dff-70bc-bc67-d675-b378a7fa494a Library:CUDA} Name:CUDA0 Description:NVIDIA GeForce RTX 3060 Ti FilterID: Integrated:false PCIID:0000:01:00.0 TotalMemory:8589934592 FreeMemory:7703863296 ComputeMajor:8 ComputeMinor:6 DriverMajor:13 DriverMinor:1 LibraryPath:[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12]}]"
time=2026-01-15T19:59:39.904-05:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=149.4678ms OLLAMA_LIBRARY_PATH="[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12]" extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-2e029dff-70bc-bc67-d675-b378a7fa494a GGML_CUDA_INIT:1]"
time=2026-01-15T19:59:39.904-05:00 level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported="map[CUDA:map[C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12:map[GPU-2e029dff-70bc-bc67-d675-b378a7fa494a:0] C:\\Users\\kylew\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13:map[GPU-2e029dff-70bc-bc67-d675-b378a7fa494a:1]]]"
time=2026-01-15T19:59:39.904-05:00 level=DEBUG source=runner.go:401 msg="filtering device with overlapping libraries" id=GPU-2e029dff-70bc-bc67-d675-b378a7fa494a library=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12 delete_index=0 kept_library=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13
time=2026-01-15T19:59:39.904-05:00 level=TRACE source=runner.go:183 msg="removing unsupported or overlapping GPU combination" libDir=C:\Users\kylew\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12 description="NVIDIA GeForce RTX 3060 Ti" compute=8.6 pci_id=0000:01:00.0
time=2026-01-15T19:59:39.904-05:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=3.7713077s
time=2026-01-15T19:59:39.905-05:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-2e029dff-70bc-bc67-d675-b378a7fa494a filter_id="" library=CUDA compute=8.6 name=CUDA0 description="NVIDIA GeForce RTX 3060 Ti" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:01:00.0 type=discrete total="8.0 GiB" available="7.2 GiB"
time=2026-01-15T19:59:39.905-05:00 level=INFO source=routes.go:1708 msg="entering low vram mode" "total vram"="8.0 GiB" threshold="20.0 GiB"

@kyleweishaupt commented on GitHub (Jan 16, 2026):

After debugging this, it turns out I had also tried installing Ollama in WSL at one point. Port 11434 was still reserved by WSL, and that reservation was not removed when WSL was uninstalled. I had to run a command to get the IP Helper service to release it.

After that, and after restarting the Host Network Service, Ollama started up.
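The comment above doesn't name the exact command, so as a reference sketch: the standard Windows tools for finding and clearing a stale port reservation like this are `netstat` and `netsh`'s excluded-port-range commands (run from an elevated prompt; 11434 is Ollama's default API port):

```
:: See whether any process is actually listening on 11434
netstat -ano | findstr :11434

:: List the TCP port ranges Windows has excluded/reserved
:: (WSL / Hyper-V reservations show up here)
netsh interface ipv4 show excludedportrange protocol=tcp

:: If a range covering 11434 is listed, try deleting it; dynamic
:: reservations may refuse, in which case the service restarts
:: below are the usual workaround
netsh interface ipv4 delete excludedportrange protocol=tcp startport=11434 numberofports=1

:: Restart the IP Helper service and the Host Network Service,
:: matching the fix described above
net stop iphlpsvc && net start iphlpsvc
net stop hns && net start hns
```

Once the reservation is released, Ollama should be able to bind 11434 again on the next start.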

Reference: github-starred/ollama#34765