[GH-ISSUE #13301] Using website's install command for Linux installs version 0.13.0, ministral-3 pull still fails with 412 error #34547

Closed
opened 2026-04-22 18:13:14 -05:00 by GiteaMirror · 2 comments

Originally created by @applebiter on GitHub (Dec 2, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13301

What is the issue?

Running Linux Mint 22.2. Attempted to download the new model ministral-3:8b and received this error:

Error: pull model manifest: 412: 
The model you are attempting to pull requires a newer version of Ollama."

Used the website's one-liner install command; same result. Killed all processes, reloaded the daemon, and restarted the service multiple times, then confirmed 0.13.0 as the currently installed version. Attempting to pull ministral-3:8b still results in the same error after all of that. Google tells me the latest version is 0.13.1. Huh. That's not what Ollama's install instructions gave me. I got 0.13.0. And I'm still getting the 412 error.
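
One way to confirm which version the daemon is actually serving, assuming the default port 11434, is to compare the CLI report against the server's /api/version endpoint (the same endpoint visible in the logs below):

ollama --version
curl -s http://localhost:11434/api/version

If the CLI reports a newer version than the server, the systemd service is still running the old binary.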

Relevant log output

Dec 02 00:51:50 indigo systemd[1]: ollama.service: Deactivated successfully.
Dec 02 00:51:50 indigo systemd[1]: Stopped ollama.service - Ollama Service.
Dec 02 00:51:50 indigo systemd[1]: ollama.service: Consumed 2min 25.683s CPU time, 299.0M memory peak, 0B memory swap peak.
-- Boot ea3f8f08870244c4b888fc978de03ccc --
Dec 02 12:23:46 indigo systemd[1]: Started ollama.service - Ollama Service.
Dec 02 12:23:49 indigo ollama[1193]: time=2025-12-02T12:23:49.891-05:00 level=INFO source=routes.go:1525 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 02 12:23:50 indigo ollama[1193]: time=2025-12-02T12:23:50.711-05:00 level=INFO source=images.go:522 msg="total blobs: 58"
Dec 02 12:23:50 indigo ollama[1193]: time=2025-12-02T12:23:50.713-05:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Dec 02 12:23:50 indigo ollama[1193]: time=2025-12-02T12:23:50.715-05:00 level=INFO source=routes.go:1578 msg="Listening on [::]:11434 (version 0.12.10)"
Dec 02 12:23:50 indigo ollama[1193]: time=2025-12-02T12:23:50.744-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Dec 02 12:23:50 indigo ollama[1193]: time=2025-12-02T12:23:50.746-05:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 34635"
Dec 02 12:24:07 indigo ollama[1193]: time=2025-12-02T12:24:07.016-05:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 35631"
Dec 02 12:24:09 indigo ollama[1193]: time=2025-12-02T12:24:09.943-05:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-757bb34f-8ed7-6ea3-9ee3-d6995f453d42 filter_id="" library=CUDA compute=6.1 name=CUDA0 description="NVIDIA GeForce GTX 1050 Ti" libdirs=ollama,cuda_v12 driver=12.2 pci_id=0000:01:00.0 type=discrete total="4.0 GiB" available="3.9 GiB"
Dec 02 12:24:09 indigo ollama[1193]: time=2025-12-02T12:24:09.943-05:00 level=INFO source=routes.go:1619 msg="entering low vram mode" "total vram"="4.0 GiB" threshold="20.0 GiB"
Dec 02 12:43:42 indigo ollama[1193]: [GIN] 2025/12/02 - 12:43:42 | 200 |   17.836808ms |       127.0.0.1 | HEAD     "/"
Dec 02 12:43:42 indigo ollama[1193]: [GIN] 2025/12/02 - 12:43:42 | 200 |   434.52749ms |       127.0.0.1 | POST     "/api/pull"
Dec 02 12:46:19 indigo systemd[1]: Stopping ollama.service - Ollama Service...
Dec 02 12:46:19 indigo systemd[1]: ollama.service: Deactivated successfully.
Dec 02 12:46:19 indigo systemd[1]: Stopped ollama.service - Ollama Service.
Dec 02 12:46:19 indigo systemd[1]: ollama.service: Consumed 1.136s CPU time, 299.5M memory peak, 0B memory swap peak.
Dec 02 12:46:19 indigo systemd[1]: Started ollama.service - Ollama Service.
Dec 02 12:46:19 indigo ollama[5152]: time=2025-12-02T12:46:19.906-05:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 02 12:46:19 indigo ollama[5152]: time=2025-12-02T12:46:19.912-05:00 level=INFO source=images.go:522 msg="total blobs: 58"
Dec 02 12:46:19 indigo ollama[5152]: time=2025-12-02T12:46:19.913-05:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Dec 02 12:46:19 indigo ollama[5152]: time=2025-12-02T12:46:19.914-05:00 level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.13.0)"
Dec 02 12:46:19 indigo ollama[5152]: time=2025-12-02T12:46:19.914-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Dec 02 12:46:19 indigo ollama[5152]: time=2025-12-02T12:46:19.917-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 42861"
Dec 02 12:46:20 indigo ollama[5152]: time=2025-12-02T12:46:20.124-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 36135"
Dec 02 12:46:20 indigo ollama[5152]: time=2025-12-02T12:46:20.301-05:00 level=INFO source=runner.go:102 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
Dec 02 12:46:20 indigo ollama[5152]: time=2025-12-02T12:46:20.301-05:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-757bb34f-8ed7-6ea3-9ee3-d6995f453d42 filter_id="" library=CUDA compute=6.1 name=CUDA0 description="NVIDIA GeForce GTX 1050 Ti" libdirs=ollama,cuda_v12 driver=12.2 pci_id=0000:01:00.0 type=discrete total="4.0 GiB" available="3.6 GiB"
Dec 02 12:46:20 indigo ollama[5152]: time=2025-12-02T12:46:20.301-05:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="4.0 GiB" threshold="20.0 GiB"
Dec 02 12:46:44 indigo ollama[5152]: [GIN] 2025/12/02 - 12:46:44 | 200 |      50.616µs |       127.0.0.1 | HEAD     "/"
Dec 02 12:46:44 indigo ollama[5152]: [GIN] 2025/12/02 - 12:46:44 | 200 |  287.720002ms |       127.0.0.1 | POST     "/api/pull"
Dec 02 12:48:37 indigo systemd[1]: Stopping ollama.service - Ollama Service...
Dec 02 12:48:37 indigo systemd[1]: ollama.service: Deactivated successfully.
Dec 02 12:48:37 indigo systemd[1]: Stopped ollama.service - Ollama Service.
Dec 02 12:48:37 indigo systemd[1]: Started ollama.service - Ollama Service.
Dec 02 12:48:37 indigo ollama[6045]: time=2025-12-02T12:48:37.999-05:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 02 12:48:38 indigo ollama[6045]: time=2025-12-02T12:48:38.006-05:00 level=INFO source=images.go:522 msg="total blobs: 58"
Dec 02 12:48:38 indigo ollama[6045]: time=2025-12-02T12:48:38.007-05:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Dec 02 12:48:38 indigo ollama[6045]: time=2025-12-02T12:48:38.008-05:00 level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.13.0)"
Dec 02 12:48:38 indigo ollama[6045]: time=2025-12-02T12:48:38.008-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Dec 02 12:48:38 indigo ollama[6045]: time=2025-12-02T12:48:38.009-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 43109"
Dec 02 12:48:38 indigo ollama[6045]: time=2025-12-02T12:48:38.204-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 33603"
Dec 02 12:48:38 indigo ollama[6045]: time=2025-12-02T12:48:38.359-05:00 level=INFO source=runner.go:102 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
Dec 02 12:48:38 indigo ollama[6045]: time=2025-12-02T12:48:38.359-05:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-757bb34f-8ed7-6ea3-9ee3-d6995f453d42 filter_id="" library=CUDA compute=6.1 name=CUDA0 description="NVIDIA GeForce GTX 1050 Ti" libdirs=ollama,cuda_v12 driver=12.2 pci_id=0000:01:00.0 type=discrete total="4.0 GiB" available="3.6 GiB"
Dec 02 12:48:38 indigo ollama[6045]: time=2025-12-02T12:48:38.359-05:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="4.0 GiB" threshold="20.0 GiB"
Dec 02 12:48:50 indigo ollama[6045]: [GIN] 2025/12/02 - 12:48:50 | 200 |      50.512µs |       127.0.0.1 | HEAD     "/"
Dec 02 12:48:50 indigo ollama[6045]: [GIN] 2025/12/02 - 12:48:50 | 200 |  268.317883ms |       127.0.0.1 | POST     "/api/pull"
Dec 02 12:49:07 indigo ollama[6045]: [GIN] 2025/12/02 - 12:49:07 | 200 |      45.056µs |       127.0.0.1 | GET      "/api/version"
Dec 02 12:50:41 indigo systemd[1]: ollama.service: Deactivated successfully.
Dec 02 12:50:44 indigo systemd[1]: ollama.service: Scheduled restart job, restart counter is at 1.
Dec 02 12:50:44 indigo systemd[1]: Started ollama.service - Ollama Service.
Dec 02 12:50:44 indigo ollama[6193]: time=2025-12-02T12:50:44.852-05:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 02 12:50:44 indigo ollama[6193]: time=2025-12-02T12:50:44.857-05:00 level=INFO source=images.go:522 msg="total blobs: 58"
Dec 02 12:50:44 indigo ollama[6193]: time=2025-12-02T12:50:44.859-05:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Dec 02 12:50:44 indigo ollama[6193]: time=2025-12-02T12:50:44.859-05:00 level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.13.0)"
Dec 02 12:50:44 indigo ollama[6193]: time=2025-12-02T12:50:44.860-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Dec 02 12:50:44 indigo ollama[6193]: time=2025-12-02T12:50:44.860-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 41299"
Dec 02 12:50:45 indigo ollama[6193]: time=2025-12-02T12:50:45.014-05:00 level=INFO source=runner.go:102 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
Dec 02 12:50:45 indigo ollama[6193]: time=2025-12-02T12:50:45.014-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 45303"
Dec 02 12:50:45 indigo ollama[6193]: time=2025-12-02T12:50:45.188-05:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-757bb34f-8ed7-6ea3-9ee3-d6995f453d42 filter_id="" library=CUDA compute=6.1 name=CUDA0 description="NVIDIA GeForce GTX 1050 Ti" libdirs=ollama,cuda_v12 driver=12.2 pci_id=0000:01:00.0 type=discrete total="4.0 GiB" available="3.7 GiB"
Dec 02 12:50:45 indigo ollama[6193]: time=2025-12-02T12:50:45.188-05:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="4.0 GiB" threshold="20.0 GiB"
Dec 02 12:51:25 indigo systemd[1]: Stopping ollama.service - Ollama Service...
Dec 02 12:51:26 indigo systemd[1]: ollama.service: Deactivated successfully.
Dec 02 12:51:26 indigo systemd[1]: Stopped ollama.service - Ollama Service.
Dec 02 12:51:26 indigo systemd[1]: Started ollama.service - Ollama Service.
Dec 02 12:51:26 indigo ollama[6326]: time=2025-12-02T12:51:26.036-05:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 02 12:51:26 indigo ollama[6326]: time=2025-12-02T12:51:26.042-05:00 level=INFO source=images.go:522 msg="total blobs: 58"
Dec 02 12:51:26 indigo ollama[6326]: time=2025-12-02T12:51:26.043-05:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Dec 02 12:51:26 indigo ollama[6326]: time=2025-12-02T12:51:26.043-05:00 level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.13.0)"
Dec 02 12:51:26 indigo ollama[6326]: time=2025-12-02T12:51:26.044-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Dec 02 12:51:26 indigo ollama[6326]: time=2025-12-02T12:51:26.045-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 32935"
Dec 02 12:51:26 indigo ollama[6326]: time=2025-12-02T12:51:26.180-05:00 level=INFO source=runner.go:102 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
Dec 02 12:51:26 indigo ollama[6326]: time=2025-12-02T12:51:26.181-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 32895"
Dec 02 12:51:26 indigo ollama[6326]: time=2025-12-02T12:51:26.357-05:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-757bb34f-8ed7-6ea3-9ee3-d6995f453d42 filter_id="" library=CUDA compute=6.1 name=CUDA0 description="NVIDIA GeForce GTX 1050 Ti" libdirs=ollama,cuda_v12 driver=12.2 pci_id=0000:01:00.0 type=discrete total="4.0 GiB" available="3.6 GiB"
Dec 02 12:51:26 indigo ollama[6326]: time=2025-12-02T12:51:26.357-05:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="4.0 GiB" threshold="20.0 GiB"
Dec 02 12:52:33 indigo ollama[6326]: [GIN] 2025/12/02 - 12:52:33 | 200 |      81.663µs |       127.0.0.1 | GET      "/api/version"
Dec 02 12:53:56 indigo ollama[6326]: [GIN] 2025/12/02 - 12:53:56 | 200 |      29.011µs |       127.0.0.1 | HEAD     "/"
Dec 02 12:53:56 indigo ollama[6326]: [GIN] 2025/12/02 - 12:53:56 | 200 |  262.105326ms |       127.0.0.1 | POST     "/api/pull"
Dec 02 12:55:13 indigo ollama[6326]: [GIN] 2025/12/02 - 12:55:13 | 200 |      44.722µs |    192.168.32.7 | GET      "/api/version"
Dec 02 12:55:22 indigo ollama[6326]: [GIN] 2025/12/02 - 12:55:22 | 200 |      33.363µs |    192.168.32.7 | HEAD     "/"
Dec 02 12:55:22 indigo ollama[6326]: [GIN] 2025/12/02 - 12:55:22 | 200 |  146.333955ms |    192.168.32.7 | POST     "/api/pull"
Dec 02 12:55:42 indigo ollama[6326]: [GIN] 2025/12/02 - 12:55:42 | 200 |      46.782µs |    192.168.32.7 | GET      "/api/version"
Dec 02 13:07:21 indigo systemd[1]: Stopping ollama.service - Ollama Service...
Dec 02 13:07:21 indigo systemd[1]: ollama.service: Deactivated successfully.
Dec 02 13:07:21 indigo systemd[1]: Stopped ollama.service - Ollama Service.
Dec 02 13:11:39 indigo systemd[1]: Started ollama.service - Ollama Service.
Dec 02 13:11:39 indigo ollama[7201]: time=2025-12-02T13:11:39.354-05:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 02 13:11:39 indigo ollama[7201]: time=2025-12-02T13:11:39.359-05:00 level=INFO source=images.go:522 msg="total blobs: 58"
Dec 02 13:11:39 indigo ollama[7201]: time=2025-12-02T13:11:39.361-05:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Dec 02 13:11:39 indigo ollama[7201]: time=2025-12-02T13:11:39.361-05:00 level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.13.0)"
Dec 02 13:11:39 indigo ollama[7201]: time=2025-12-02T13:11:39.362-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Dec 02 13:11:39 indigo ollama[7201]: time=2025-12-02T13:11:39.362-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 34055"
Dec 02 13:11:39 indigo ollama[7201]: time=2025-12-02T13:11:39.529-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 42305"
Dec 02 13:11:39 indigo ollama[7201]: time=2025-12-02T13:11:39.700-05:00 level=INFO source=runner.go:102 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
Dec 02 13:11:39 indigo ollama[7201]: time=2025-12-02T13:11:39.700-05:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-757bb34f-8ed7-6ea3-9ee3-d6995f453d42 filter_id="" library=CUDA compute=6.1 name=CUDA0 description="NVIDIA GeForce GTX 1050 Ti" libdirs=ollama,cuda_v12 driver=12.2 pci_id=0000:01:00.0 type=discrete total="4.0 GiB" available="3.7 GiB"
Dec 02 13:11:39 indigo ollama[7201]: time=2025-12-02T13:11:39.700-05:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="4.0 GiB" threshold="20.0 GiB"
Dec 02 13:11:46 indigo ollama[7201]: [GIN] 2025/12/02 - 13:11:46 | 200 |      59.955µs |    192.168.32.7 | HEAD     "/"
Dec 02 13:11:47 indigo ollama[7201]: time=2025-12-02T13:11:47.249-05:00 level=INFO source=download.go:177 msg="downloading f5074b1221da in 16 273 MB part(s)"
Dec 02 13:11:51 indigo ollama[7201]: [GIN] 2025/12/02 - 13:11:51 | 200 |  4.763593318s |    192.168.32.7 | POST     "/api/pull"

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.13.0

GiteaMirror added the bug label 2026-04-22 18:13:14 -05:00

@tmikaeld commented on GitHub (Dec 2, 2025):

v0.13.1-rc2 is only a pre-release; install it manually or wait until it's officially released.

To install it via the Ollama installer:
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.13.1-rc2 sh
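
After installing, restarting the service and re-checking the version should confirm the upgrade took effect (assuming the installer's default systemd setup):

sudo systemctl restart ollama
ollama --version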

@dhiltgen commented on GitHub (Dec 2, 2025):

v0.13.1 is now the latest release and will be installed by default.
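
That is, re-running the website's standard one-liner, with no OLLAMA_VERSION pin needed, should now install 0.13.1:

curl -fsSL https://ollama.com/install.sh | sh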

Reference: github-starred/ollama#34547