[GH-ISSUE #13336] mistral-3 seems to run only on my CPU. #70867

Closed
opened 2026-05-04 23:16:48 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @441041 on GitHub (Dec 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13336

What is the issue?

On Ollama v0.13.1, mistral-3 seems to run only on my CPU.

The model begins loading on the GPU (VRAM is utilized), but then it offloads everything to the CPU and stays there. I’ve tested the 3B, 8B, and 14B variants. Meanwhile, qwen3 and all the other models I have run perfectly on the same GPU setup with no issues.

Is this a known Ollama bug or is there some config I’m missing?
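For anyone trying to reproduce this, a few standard checks can confirm whether a model is actually running on the CPU (a suggested diagnostic sketch, not part of the original report; assumes a default install and an NVIDIA GPU for the `nvidia-smi` line):

```shell
# Confirm where the loaded model is running: the PROCESSOR column
# reports the CPU/GPU split (e.g. "100% CPU" or "48%/52% CPU/GPU").
ollama ps

# Watch live VRAM usage while the model answers a prompt (NVIDIA GPUs).
nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 1

# Restart the server with debug logging to see why layers were
# offloaded from the GPU.
OLLAMA_DEBUG=1 ollama serve
```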

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-05-04 23:16:48 -05:00
Author
Owner

@maternion commented on GitHub (Dec 5, 2025):

> On Ollama v0.13.1, mistral-3 seems to run only on my CPU.

A minor correction: I suppose you mean [ministral-3](https://ollama.com/library/ministral-3) and not mistral-3?

Author
Owner

@rladinger commented on GitHub (Dec 5, 2025):

I have also noticed this behavior with Ministral-3:8B.

OS: Ubuntu Server 25.10
CPU+GPU: AMD Ryzen AI 9 HF 370
LIB: Vulkan
Ollama Version: 0.13.1

Author
Owner

@Arman-Espiar commented on GitHub (Dec 5, 2025):

I have the same problem, but all models run on the CPU, while in older versions of Ollama they ran correctly on the GPU.
OS: Windows 11
GPU: RTX 3050 8G
Ollama version: 0.13.0

Author
Owner

@rladinger commented on GitHub (Dec 5, 2025):

@Arman-Espiar

Yes, this has been an issue with AMD CPUs/GPUs since 0.13.0. That is why I switched from ROCm to Vulkan, which generally works, except for Ministral-3, at least under Ubuntu.
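For reference, switching a systemd-managed install to the Vulkan backend is done with the `OLLAMA_VULKAN=1` variable that Ollama's own startup log mentions ("experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1"). A sketch for a systemd-based distro such as Ubuntu Server:

```shell
# Create a systemd drop-in for the ollama service.
sudo systemctl edit ollama
# In the editor, add:
#   [Service]
#   Environment="OLLAMA_VULKAN=1"
sudo systemctl restart ollama

# Verify: the startup log should no longer say Vulkan support is disabled.
journalctl -u ollama -n 50 | grep -i vulkan
```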

Author
Owner

@azazar commented on GitHub (Dec 5, 2025):

The last Ollama update has broken GPU support for me too.

Log

Nov 30 17:34:31 string systemd[1]: Started ollama.service - Ollama Service.
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.372+02:00 level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.385+02:00 level=INFO source=images.go:518 msg="total blobs: 26"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.385+02:00 level=INFO source=images.go:525 msg="total unused blobs removed: 0"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.385+02:00 level=INFO source=routes.go:1528 msg="Listening on 127.0.0.1:11434 (version 0.12.3)"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.386+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.465+02:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/download/linux-drivers.html" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.466+02:00 level=WARN source=amd_linux.go:447 msg="amdgpu detected, but no compatible rocm library found.  Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.466+02:00 level=WARN source=amd_linux.go:352 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.466+02:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1050 Ti" total="3.9 GiB" available="3.9 GiB"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.466+02:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="3.9 GiB" threshold="20.0 GiB"
Nov 30 20:36:05 string ollama[2783]: [GIN] 2025/11/30 - 20:36:05 | 200 |   11.522113ms |       127.0.0.1 | GET      "/api/tags"
Nov 30 20:36:05 string ollama[2783]: [GIN] 2025/11/30 - 20:36:05 | 200 |       98.58µs |       127.0.0.1 | GET      "/api/ps"
Nov 30 21:21:40 string ollama[2783]: [GIN] 2025/11/30 - 21:21:40 | 200 |     352.602µs |       127.0.0.1 | GET      "/api/tags"
Nov 30 21:21:40 string ollama[2783]: [GIN] 2025/11/30 - 21:21:40 | 200 |       22.72µs |       127.0.0.1 | GET      "/api/ps"
Nov 30 22:11:53 string systemd[1]: Stopping ollama.service - Ollama Service...
Nov 30 22:11:53 string systemd[1]: ollama.service: Deactivated successfully.
Nov 30 22:11:53 string systemd[1]: Stopped ollama.service - Ollama Service.
-- Boot d64caa43e0014455a1a42e2d3ccfd2d0 --
Dec 01 08:49:18 string systemd[1]: Started ollama.service - Ollama Service.
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.324+02:00 level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.338+02:00 level=INFO source=images.go:518 msg="total blobs: 26"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.339+02:00 level=INFO source=images.go:525 msg="total unused blobs removed: 0"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.339+02:00 level=INFO source=routes.go:1528 msg="Listening on 127.0.0.1:11434 (version 0.12.3)"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.339+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.418+02:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/download/linux-drivers.html" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.418+02:00 level=WARN source=amd_linux.go:447 msg="amdgpu detected, but no compatible rocm library found.  Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.418+02:00 level=WARN source=amd_linux.go:352 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.418+02:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1050 Ti" total="3.9 GiB" available="3.9 GiB"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.418+02:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="3.9 GiB" threshold="20.0 GiB"
Dec 01 22:22:02 string systemd[1]: Stopping ollama.service - Ollama Service...
Dec 01 22:22:02 string systemd[1]: ollama.service: Deactivated successfully.
Dec 01 22:22:02 string systemd[1]: Stopped ollama.service - Ollama Service.
-- Boot 5e665a1b1ca744abbe2f547993b29020 --
Dec 02 08:03:00 string systemd[1]: Started ollama.service - Ollama Service.
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.289+02:00 level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.301+02:00 level=INFO source=images.go:518 msg="total blobs: 26"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.301+02:00 level=INFO source=images.go:525 msg="total unused blobs removed: 0"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.302+02:00 level=INFO source=routes.go:1528 msg="Listening on 127.0.0.1:11434 (version 0.12.3)"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.302+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.384+02:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/download/linux-drivers.html" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.384+02:00 level=WARN source=amd_linux.go:447 msg="amdgpu detected, but no compatible rocm library found.  Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.384+02:00 level=WARN source=amd_linux.go:352 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.384+02:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1050 Ti" total="3.9 GiB" available="3.9 GiB"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.384+02:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="3.9 GiB" threshold="20.0 GiB"
Dec 02 08:38:11 string ollama[2773]: [GIN] 2025/12/02 - 08:38:11 | 200 |   10.871519ms |       127.0.0.1 | GET      "/api/tags"
Dec 02 08:38:11 string ollama[2773]: [GIN] 2025/12/02 - 08:38:11 | 200 |      94.791µs |       127.0.0.1 | GET      "/api/ps"
Dec 02 23:11:18 string systemd[1]: Stopping ollama.service - Ollama Service...
Dec 02 23:11:18 string systemd[1]: ollama.service: Deactivated successfully.
Dec 02 23:11:18 string systemd[1]: Stopped ollama.service - Ollama Service.
Dec 02 23:11:18 string systemd[1]: ollama.service: Consumed 665ms CPU time, 64M memory peak.
-- Boot 2828bece086f469faacb079ec26b647c --
Dec 03 09:14:30 string systemd[1]: Started ollama.service - Ollama Service.
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.380+02:00 level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.395+02:00 level=INFO source=images.go:518 msg="total blobs: 26"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.396+02:00 level=INFO source=images.go:525 msg="total unused blobs removed: 0"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.396+02:00 level=INFO source=routes.go:1528 msg="Listening on 127.0.0.1:11434 (version 0.12.3)"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.397+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.469+02:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/download/linux-drivers.html" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.469+02:00 level=WARN source=amd_linux.go:447 msg="amdgpu detected, but no compatible rocm library found.  Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.469+02:00 level=WARN source=amd_linux.go:352 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.469+02:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1050 Ti" total="3.9 GiB" available="3.9 GiB"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.469+02:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="3.9 GiB" threshold="20.0 GiB"
Dec 03 22:43:48 string systemd[1]: Stopping ollama.service - Ollama Service...
Dec 03 22:43:48 string systemd[1]: ollama.service: Deactivated successfully.
Dec 03 22:43:48 string systemd[1]: Stopped ollama.service - Ollama Service.
Dec 03 22:43:48 string systemd[1]: ollama.service: Consumed 608ms CPU time, 64.4M memory peak.
-- Boot 9fc10e2c0a00418fb419241f4e79f174 --
Dec 04 09:25:30 string systemd[1]: Started ollama.service - Ollama Service.
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.248+02:00 level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.260+02:00 level=INFO source=images.go:518 msg="total blobs: 26"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.261+02:00 level=INFO source=images.go:525 msg="total unused blobs removed: 0"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.262+02:00 level=INFO source=routes.go:1528 msg="Listening on 127.0.0.1:11434 (version 0.12.3)"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.263+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.344+02:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/download/linux-drivers.html" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.345+02:00 level=WARN source=amd_linux.go:447 msg="amdgpu detected, but no compatible rocm library found.  Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.345+02:00 level=WARN source=amd_linux.go:352 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.345+02:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1050 Ti" total="3.9 GiB" available="3.9 GiB"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.345+02:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="3.9 GiB" threshold="20.0 GiB"
Dec 04 23:10:37 string systemd[1]: Stopping ollama.service - Ollama Service...
Dec 04 23:10:38 string systemd[1]: ollama.service: Deactivated successfully.
Dec 04 23:10:38 string systemd[1]: Stopped ollama.service - Ollama Service.
-- Boot ecd13b02253b450fa6d33abbe8a6e108 --
Dec 05 08:59:30 string systemd[1]: Started ollama.service - Ollama Service.
Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.337+02:00 level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.351+02:00 level=INFO source=images.go:518 msg="total blobs: 26"
Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.351+02:00 level=INFO source=images.go:525 msg="total unused blobs removed: 0"
Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.351+02:00 level=INFO source=routes.go:1528 msg="Listening on 127.0.0.1:11434 (version 0.12.3)"
Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.352+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.426+02:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/download/linux-drivers.html" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.427+02:00 level=WARN source=amd_linux.go:447 msg="amdgpu detected, but no compatible rocm library found.  Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.427+02:00 level=WARN source=amd_linux.go:352 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.427+02:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1050 Ti" total="3.9 GiB" available="3.9 GiB"
Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.427+02:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="3.9 GiB" threshold="20.0 GiB"
Dec 05 11:20:51 string ollama[2859]: [GIN] 2025/12/05 - 11:20:51 | 200 |   11.534891ms |       127.0.0.1 | GET      "/api/tags"
Dec 05 11:20:51 string ollama[2859]: [GIN] 2025/12/05 - 11:20:51 | 200 |       56.97µs |       127.0.0.1 | GET      "/api/ps"
Dec 05 11:32:11 string ollama[2859]: [GIN] 2025/12/05 - 11:32:11 | 200 |     366.692µs |       127.0.0.1 | GET      "/api/tags"
Dec 05 11:32:12 string ollama[2859]: [GIN] 2025/12/05 - 11:32:12 | 200 |       21.77µs |       127.0.0.1 | GET      "/api/ps"
Dec 05 11:32:16 string ollama[2859]: [GIN] 2025/12/05 - 11:32:16 | 200 |     362.052µs |       127.0.0.1 | GET      "/api/tags"
Dec 05 11:32:17 string ollama[2859]: [GIN] 2025/12/05 - 11:32:17 | 200 |       19.82µs |       127.0.0.1 | GET      "/api/ps"
Dec 05 11:32:17 string ollama[2859]: [GIN] 2025/12/05 - 11:32:17 | 200 |       37.02µs |       127.0.0.1 | GET      "/api/version"
Dec 05 15:56:58 string ollama[2859]: [GIN] 2025/12/05 - 15:56:58 | 200 |     565.572µs |       127.0.0.1 | GET      "/api/tags"
Dec 05 15:56:58 string ollama[2859]: [GIN] 2025/12/05 - 15:56:58 | 200 |       21.95µs |       127.0.0.1 | GET      "/api/ps"
Dec 05 19:11:21 string ollama[2859]: [GIN] 2025/12/05 - 19:11:21 | 200 |     517.672µs |       127.0.0.1 | GET      "/api/tags"
Dec 05 19:11:22 string ollama[2859]: [GIN] 2025/12/05 - 19:11:22 | 200 |       23.75µs |       127.0.0.1 | GET      "/api/ps"
Dec 05 22:38:43 string ollama[2859]: [GIN] 2025/12/05 - 22:38:43 | 200 |       17.77µs |       127.0.0.1 | HEAD     "/"
Dec 05 22:38:43 string ollama[2859]: [GIN] 2025/12/05 - 22:38:43 | 404 |     242.671µs |       127.0.0.1 | POST     "/api/show"
Dec 05 22:38:44 string ollama[2859]: [GIN] 2025/12/05 - 22:38:44 | 200 |  550.546483ms |       127.0.0.1 | POST     "/api/pull"
Dec 05 22:39:37 string systemd[1]: Stopping ollama.service - Ollama Service...
Dec 05 22:39:37 string systemd[1]: ollama.service: Deactivated successfully.
Dec 05 22:39:37 string systemd[1]: Stopped ollama.service - Ollama Service.
Dec 05 22:39:37 string systemd[1]: ollama.service: Consumed 624ms CPU time, 64.1M memory peak.
Dec 05 22:39:37 string systemd[1]: Started ollama.service - Ollama Service.
Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.968+02:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.968+02:00 level=INFO source=images.go:522 msg="total blobs: 26"
Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.968+02:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.969+02:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.13.1)"
Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.969+02:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.969+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 43695"
Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.997+02:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.997+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 45877"
Dec 05 22:39:38 string ollama[421142]: time=2025-12-05T22:39:38.045+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 44487"
Dec 05 22:39:38 string ollama[421142]: time=2025-12-05T22:39:38.162+02:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 filter_id="" library=CUDA compute=6.1 name=CUDA0 description="NVIDIA GeForce GTX 1050 Ti" libdirs=ollama,cuda_v12 driver=12.2 pci_id=0000:01:00.0 type=discrete total="4.0 GiB" available="3.4 GiB"
Dec 05 22:39:38 string ollama[421142]: time=2025-12-05T22:39:38.162+02:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="4.0 GiB" threshold="20.0 GiB"
Dec 05 22:39:44 string ollama[421142]: [GIN] 2025/12/05 - 22:39:44 | 200 |        31.9µs |       127.0.0.1 | HEAD     "/"
Dec 05 22:39:44 string ollama[421142]: [GIN] 2025/12/05 - 22:39:44 | 404 |     238.081µs |       127.0.0.1 | POST     "/api/show"
Dec 05 22:39:45 string ollama[421142]: time=2025-12-05T22:39:45.017+02:00 level=INFO source=download.go:177 msg="downloading 094eb0a75095 in 16 184 MB part(s)"
Dec 05 22:40:36 string ollama[421142]: time=2025-12-05T22:40:36.448+02:00 level=INFO source=download.go:177 msg="downloading 6db27cd4e277 in 1 695 B part(s)"
Dec 05 22:40:37 string ollama[421142]: time=2025-12-05T22:40:37.813+02:00 level=INFO source=download.go:177 msg="downloading 3d8ba0a186b5 in 1 2.4 KB part(s)"
Dec 05 22:40:39 string ollama[421142]: time=2025-12-05T22:40:39.211+02:00 level=INFO source=download.go:177 msg="downloading e0daf17ff83e in 1 21 B part(s)"
Dec 05 22:40:40 string ollama[421142]: time=2025-12-05T22:40:40.645+02:00 level=INFO source=download.go:177 msg="downloading 97002903a239 in 1 514 B part(s)"
Dec 05 22:40:43 string ollama[421142]: [GIN] 2025/12/05 - 22:40:43 | 200 | 58.862797551s |       127.0.0.1 | POST     "/api/pull"
Dec 05 22:40:43 string ollama[421142]: [GIN] 2025/12/05 - 22:40:43 | 200 |   31.745633ms |       127.0.0.1 | POST     "/api/show"
Dec 05 22:40:43 string ollama[421142]: [GIN] 2025/12/05 - 22:40:43 | 200 |   28.823229ms |       127.0.0.1 | POST     "/api/show"
Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.155+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 35107"
Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.243+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-094eb0a75095db5a9f83e51323879750023d7050d008a9b2899bf9f47c4926e5 --port 35131"
Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.243+02:00 level=INFO source=sched.go:443 msg="system memory" total="92.0 GiB" free="77.8 GiB" free_swap="54.0 GiB"
Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.243+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=CUDA available="3.0 GiB" free="3.5 GiB" minimum="457.0 MiB" overhead="0 B"
Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.243+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=27 requested=-1
Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.248+02:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.248+02:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:35131"
Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.254+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:27[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:27(0..26)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.272+02:00 level=INFO source=ggml.go:136 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=458 num_key_values=45
Dec 05 22:40:43 string ollama[421142]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Dec 05 22:40:43 string ollama[421142]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Dec 05 22:40:43 string ollama[421142]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Dec 05 22:40:43 string ollama[421142]: ggml_cuda_init: found 1 CUDA devices:
Dec 05 22:40:43 string ollama[421142]:   Device 0: NVIDIA GeForce GTX 1050 Ti, compute capability 6.1, VMM: yes, ID: GPU-b9f0866d-45b0-35b9-575d-2443d746ce48
Dec 05 22:40:43 string ollama[421142]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.343+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.568+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.741+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:8[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:8(18..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.923+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:7[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:7(19..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:44 string ollama[421142]: time=2025-12-05T22:40:44.103+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:6[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:6(20..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:44 string ollama[421142]: time=2025-12-05T22:40:44.283+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:5[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:5(21..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:44 string ollama[421142]: time=2025-12-05T22:40:44.462+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:4[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:4(22..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:44 string ollama[421142]: time=2025-12-05T22:40:44.643+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:3[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:3(23..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:44 string ollama[421142]: time=2025-12-05T22:40:44.823+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:2[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:2(24..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:45 string ollama[421142]: time=2025-12-05T22:40:45.004+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:1[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:1(25..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:45 string ollama[421142]: time=2025-12-05T22:40:45.186+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:45 string ollama[421142]: time=2025-12-05T22:40:45.363+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:45 string ollama[421142]: time=2025-12-05T22:40:45.873+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:8[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:8(18..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:46 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:40:46 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:40:46 string ollama[421142]: time=2025-12-05T22:40:46.026+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:7[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:7(19..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:46 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:40:46 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:40:46 string ollama[421142]: time=2025-12-05T22:40:46.179+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:6[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:6(20..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:46 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:40:46 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:40:46 string ollama[421142]: time=2025-12-05T22:40:46.337+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:5[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:5(21..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:46 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:40:46 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:40:46 string ollama[421142]: time=2025-12-05T22:40:46.495+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:4[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:4(22..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:46 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:40:46 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:40:46 string ollama[421142]: time=2025-12-05T22:40:46.646+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:3[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:3(23..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:46 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:40:46 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:40:46 string ollama[421142]: time=2025-12-05T22:40:46.799+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:2[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:2(24..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:46 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:40:46 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:40:46 string ollama[421142]: time=2025-12-05T22:40:46.956+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:1[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:1(25..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:47 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:40:47 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.110+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=ggml.go:482 msg="offloading 0 repeating layers to GPU"
Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="3.1 GiB"
Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=ggml.go:494 msg="offloaded 0/27 layers to GPU"
Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="6.5 GiB"
Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="9.0 GiB"
Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=device.go:272 msg="total memory" size="18.6 GiB"
Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.882+02:00 level=INFO source=server.go:1332 msg="llama runner started in 4.64 seconds"
Dec 05 22:40:47 string ollama[421142]: [GIN] 2025/12/05 - 22:40:47 | 200 |  4.786097154s |       127.0.0.1 | POST     "/api/generate"
Dec 05 22:41:09 string ollama[421142]: [GIN] 2025/12/05 - 22:41:09 | 200 |  6.317019068s |       127.0.0.1 | POST     "/api/chat"
Dec 05 22:41:27 string ollama[421142]: [GIN] 2025/12/05 - 22:41:27 | 200 |     452.761µs |       127.0.0.1 | GET      "/api/tags"
Dec 05 22:41:27 string ollama[421142]: [GIN] 2025/12/05 - 22:41:27 | 200 |       75.78µs |       127.0.0.1 | GET      "/api/ps"
Dec 05 22:41:29 string ollama[421142]: [GIN] 2025/12/05 - 22:41:29 | 200 |     391.212µs |       127.0.0.1 | GET      "/api/tags"
Dec 05 22:41:29 string ollama[421142]: [GIN] 2025/12/05 - 22:41:29 | 200 |       33.42µs |       127.0.0.1 | GET      "/api/ps"
Dec 05 22:41:30 string ollama[421142]: [GIN] 2025/12/05 - 22:41:30 | 200 |       31.28µs |       127.0.0.1 | GET      "/api/version"
Dec 05 22:42:12 string ollama[421142]: [GIN] 2025/12/05 - 22:42:12 | 200 | 12.091144569s |       127.0.0.1 | POST     "/api/chat"
Dec 05 22:42:13 string ollama[421142]: [GIN] 2025/12/05 - 22:42:13 | 200 |       24.92µs |       127.0.0.1 | HEAD     "/"
Dec 05 22:42:13 string ollama[421142]: [GIN] 2025/12/05 - 22:42:13 | 200 |       27.21µs |       127.0.0.1 | GET      "/api/ps"
Dec 05 22:42:28 string ollama[421142]: [GIN] 2025/12/05 - 22:42:28 | 200 | 16.197507127s |       127.0.0.1 | POST     "/api/chat"
Dec 05 22:42:36 string ollama[421142]: [GIN] 2025/12/05 - 22:42:36 | 200 |  7.296386643s |       127.0.0.1 | POST     "/api/chat"
Dec 05 22:42:43 string ollama[421142]: [GIN] 2025/12/05 - 22:42:43 | 200 |  7.713140857s |       127.0.0.1 | POST     "/api/chat"
Dec 05 22:42:46 string ollama[421142]: [GIN] 2025/12/05 - 22:42:46 | 200 |       20.89µs |       127.0.0.1 | HEAD     "/"
Dec 05 22:42:46 string ollama[421142]: [GIN] 2025/12/05 - 22:42:46 | 200 |    28.63348ms |       127.0.0.1 | POST     "/api/show"
Dec 05 22:42:46 string ollama[421142]: [GIN] 2025/12/05 - 22:42:46 | 200 |   27.102232ms |       127.0.0.1 | POST     "/api/show"
Dec 05 22:42:46 string ollama[421142]: [GIN] 2025/12/05 - 22:42:46 | 200 |   57.195278ms |       127.0.0.1 | POST     "/api/generate"
Dec 05 22:42:48 string ollama[421142]: [GIN] 2025/12/05 - 22:42:48 | 200 |  585.884041ms |       127.0.0.1 | POST     "/api/chat"
Dec 05 22:43:48 string ollama[421142]: [GIN] 2025/12/05 - 22:43:48 | 200 |        22.5µs |       127.0.0.1 | HEAD     "/"
Dec 05 22:43:48 string ollama[421142]: [GIN] 2025/12/05 - 22:43:48 | 404 |     235.781µs |       127.0.0.1 | POST     "/api/show"
Dec 05 22:43:49 string ollama[421142]: time=2025-12-05T22:43:49.615+02:00 level=INFO source=download.go:177 msg="downloading 9c60bdd691c1 in 16 205 MB part(s)"
Dec 05 22:44:03 string ollama[421142]: [GIN] 2025/12/05 - 22:44:03 | 200 |     385.132µs |       127.0.0.1 | GET      "/api/tags"
Dec 05 22:44:03 string ollama[421142]: [GIN] 2025/12/05 - 22:44:03 | 200 |       31.56µs |       127.0.0.1 | GET      "/api/ps"
Dec 05 22:44:03 string ollama[421142]: [GIN] 2025/12/05 - 22:44:03 | 200 |        30.2µs |       127.0.0.1 | GET      "/api/version"
Dec 05 22:44:09 string ollama[421142]: [GIN] 2025/12/05 - 22:44:09 | 200 |       26.33µs |       127.0.0.1 | GET      "/api/version"
Dec 05 22:44:47 string ollama[421142]: time=2025-12-05T22:44:47.998+02:00 level=INFO source=download.go:177 msg="downloading 7339fa418c9a in 1 11 KB part(s)"
Dec 05 22:44:49 string ollama[421142]: time=2025-12-05T22:44:49.373+02:00 level=INFO source=download.go:177 msg="downloading f6417cb1e269 in 1 42 B part(s)"
Dec 05 22:44:50 string ollama[421142]: time=2025-12-05T22:44:50.818+02:00 level=INFO source=download.go:177 msg="downloading 3353ed4a819b in 1 551 B part(s)"
Dec 05 22:44:53 string ollama[421142]: [GIN] 2025/12/05 - 22:44:53 | 200 |          1m4s |       127.0.0.1 | POST     "/api/pull"
Dec 05 22:44:53 string ollama[421142]: [GIN] 2025/12/05 - 22:44:53 | 200 |   24.966395ms |       127.0.0.1 | POST     "/api/show"
Dec 05 22:44:53 string ollama[421142]: [GIN] 2025/12/05 - 22:44:53 | 200 |   25.422106ms |       127.0.0.1 | POST     "/api/show"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.422+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 36733"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.469+02:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=CUDA total="4.0 GiB" available="3.5 GiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.485+02:00 level=WARN source=sched.go:404 msg="model architecture does not currently support parallel requests" architecture=qwen3vl
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.498+02:00 level=INFO source=server.go:209 msg="enabling flash attention"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.498+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-9c60bdd691c1897bbfe5ddbc67336848e18c346b7ee2ab8541b135f208e5bb38 --port 42317"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.499+02:00 level=INFO source=sched.go:443 msg="system memory" total="92.0 GiB" free="67.3 GiB" free_swap="54.0 GiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.499+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=CUDA available="3.0 GiB" free="3.5 GiB" minimum="457.0 MiB" overhead="0 B"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.499+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=37 requested=-1
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.503+02:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.504+02:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:42317"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.510+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:37[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.524+02:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3vl file_type=Q4_K_M name="" description="" num_tensors=809 num_key_values=40
Dec 05 22:44:53 string ollama[421142]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Dec 05 22:44:53 string ollama[421142]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Dec 05 22:44:53 string ollama[421142]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Dec 05 22:44:53 string ollama[421142]: ggml_cuda_init: found 1 CUDA devices:
Dec 05 22:44:53 string ollama[421142]:   Device 0: NVIDIA GeForce GTX 1050 Ti, compute capability 6.1, VMM: yes, ID: GPU-b9f0866d-45b0-35b9-575d-2443d746ce48
Dec 05 22:44:53 string ollama[421142]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.544+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=server.go:974 msg="model requires more gpu memory than is currently available, evicting a model to make space" "loaded layers"=0
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="3.1 GiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="304.3 MiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="576.0 MiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="4.2 GiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="39.6 MiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=device.go:272 msg="total memory" size="8.2 GiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.766+02:00 level=ERROR source=server.go:265 msg="llama runner terminated" error="signal: killed"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.766+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 41617"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.831+02:00 level=WARN source=sched.go:404 msg="model architecture does not currently support parallel requests" architecture=qwen3vl
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.831+02:00 level=INFO source=sched.go:443 msg="system memory" total="92.0 GiB" free="77.0 GiB" free_swap="54.0 GiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.831+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=CUDA available="2.9 GiB" free="3.4 GiB" minimum="457.0 MiB" overhead="0 B"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.831+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=37 requested=-1
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.832+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.993+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:35[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:35(1..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:54 string ollama[421142]: time=2025-12-05T22:44:54.156+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:34[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:34(2..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:54 string ollama[421142]: time=2025-12-05T22:44:54.319+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:33[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:33(3..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:54 string ollama[421142]: time=2025-12-05T22:44:54.486+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:32[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:32(4..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:54 string ollama[421142]: time=2025-12-05T22:44:54.647+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:31[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:31(5..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:54 string ollama[421142]: time=2025-12-05T22:44:54.809+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:30[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:30(6..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:54 string ollama[421142]: time=2025-12-05T22:44:54.972+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:29[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:29(7..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:55 string ollama[421142]: time=2025-12-05T22:44:55.135+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:28[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:28(8..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:55 string ollama[421142]: time=2025-12-05T22:44:55.298+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:27[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:27(9..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:55 string ollama[421142]: time=2025-12-05T22:44:55.464+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:26[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:26(10..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:55 string ollama[421142]: time=2025-12-05T22:44:55.626+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:25[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:25(11..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:55 string ollama[421142]: time=2025-12-05T22:44:55.791+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:24[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:24(12..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:55 string ollama[421142]: time=2025-12-05T22:44:55.955+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:23[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:23(13..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:56 string ollama[421142]: time=2025-12-05T22:44:56.121+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:22[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:22(14..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:56 string ollama[421142]: time=2025-12-05T22:44:56.298+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:21[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:21(15..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:56 string ollama[421142]: time=2025-12-05T22:44:56.465+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:20[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:20(16..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:56 string ollama[421142]: time=2025-12-05T22:44:56.632+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:19[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:19(17..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:56 string ollama[421142]: time=2025-12-05T22:44:56.794+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:18[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:18(18..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:56 string ollama[421142]: time=2025-12-05T22:44:56.959+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:17[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:17(19..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:57 string ollama[421142]: time=2025-12-05T22:44:57.127+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:16[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:16(20..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
[... identical "Operation:fit" load requests repeated, GPULayers decreasing from 15 down to 1 ...]
Dec 05 22:44:59 string ollama[421142]: time=2025-12-05T22:44:59.793+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:59 string ollama[421142]: time=2025-12-05T22:44:59.961+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:45:00 string ollama[421142]: time=2025-12-05T22:45:00.174+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:35[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:35(1..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:45:00 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 4295.24 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:45:00 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 4503882752
[... "Operation:alloc" retried with GPULayers decreasing from 34 down to 2; every attempt tried to allocate the same 4295.24 MiB (buffer of size 4503882752) on device 0 and failed with the identical "cudaMalloc failed: out of memory" / "ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer" pair ...]
Dec 05 22:45:05 string ollama[421142]: time=2025-12-05T22:45:05.997+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:1[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:1(35..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:45:06 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 4295.24 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:45:06 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 4503882752
Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.157+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=ggml.go:482 msg="offloading 0 repeating layers to GPU"
Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="3.4 GiB"
Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=ggml.go:494 msg="offloaded 0/37 layers to GPU"
Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="576.0 MiB"
Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="4.2 GiB"
Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=device.go:272 msg="total memory" size="8.1 GiB"
Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.375+02:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.625+02:00 level=INFO source=server.go:1332 msg="llama runner started in 13.13 seconds"
Dec 05 22:45:06 string ollama[421142]: [GIN] 2025/12/05 - 22:45:06 | 200 |  13.25248179s |       127.0.0.1 | POST     "/api/generate"
Dec 05 22:45:20 string ollama[421142]: [GIN] 2025/12/05 - 22:45:20 | 200 |  9.977555375s |       127.0.0.1 | POST     "/api/chat"
Dec 05 22:45:24 string ollama[421142]: [GIN] 2025/12/05 - 22:45:24 | 200 |       19.31µs |       127.0.0.1 | HEAD     "/"
Dec 05 22:45:24 string ollama[421142]: [GIN] 2025/12/05 - 22:45:24 | 200 |       17.61µs |       127.0.0.1 | GET      "/api/ps"
Dec 05 22:46:21 string systemd[1]: Stopping ollama.service - Ollama Service...
Dec 05 22:46:21 string ollama[421142]: time=2025-12-05T22:46:21.483+02:00 level=ERROR source=server.go:265 msg="llama runner terminated" error="signal: terminated"
Dec 05 22:46:21 string systemd[1]: ollama.service: Deactivated successfully.
Dec 05 22:46:21 string systemd[1]: Stopped ollama.service - Ollama Service.
Dec 05 22:46:21 string systemd[1]: ollama.service: Consumed 6min 31.359s CPU time, 16.7G memory peak.
Dec 05 22:50:51 string systemd[1]: Started ollama.service - Ollama Service.
Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.112+02:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.112+02:00 level=INFO source=images.go:522 msg="total blobs: 35"
Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.113+02:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.113+02:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.13.1)"
Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.113+02:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.113+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 42411"
Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.155+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 42529"
Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.183+02:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.183+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 39263"
Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.262+02:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 filter_id="" library=CUDA compute=6.1 name=CUDA0 description="NVIDIA GeForce GTX 1050 Ti" libdirs=ollama,cuda_v12 driver=12.2 pci_id=0000:01:00.0 type=discrete total="4.0 GiB" available="3.3 GiB"
Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.262+02:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="4.0 GiB" threshold="20.0 GiB"
Dec 05 22:52:45 string ollama[431503]: [GIN] 2025/12/05 - 22:52:45 | 200 |      32.681µs |       127.0.0.1 | HEAD     "/"
Dec 05 22:52:45 string ollama[431503]: [GIN] 2025/12/05 - 22:52:45 | 200 |   29.159632ms |       127.0.0.1 | POST     "/api/show"
Dec 05 22:52:45 string ollama[431503]: [GIN] 2025/12/05 - 22:52:45 | 200 |   28.719099ms |       127.0.0.1 | POST     "/api/show"
Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.380+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 40591"
Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.466+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-094eb0a75095db5a9f83e51323879750023d7050d008a9b2899bf9f47c4926e5 --port 44011"
Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.466+02:00 level=INFO source=sched.go:443 msg="system memory" total="92.0 GiB" free="77.0 GiB" free_swap="54.0 GiB"
Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.466+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=CUDA available="2.9 GiB" free="3.3 GiB" minimum="457.0 MiB" overhead="0 B"
Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.466+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=27 requested=-1
Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.471+02:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.471+02:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:44011"
Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.478+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:27[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:27(0..26)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.495+02:00 level=INFO source=ggml.go:136 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=458 num_key_values=45
Dec 05 22:52:45 string ollama[431503]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Dec 05 22:52:45 string ollama[431503]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Dec 05 22:52:45 string ollama[431503]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Dec 05 22:52:45 string ollama[431503]: ggml_cuda_init: found 1 CUDA devices:
Dec 05 22:52:45 string ollama[431503]:   Device 0: NVIDIA GeForce GTX 1050 Ti, compute capability 6.1, VMM: yes, ID: GPU-b9f0866d-45b0-35b9-575d-2443d746ce48
Dec 05 22:52:45 string ollama[431503]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.519+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.742+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.921+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:8[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:8(18..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:46 string ollama[431503]: time=2025-12-05T22:52:46.101+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:7[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:7(19..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:46 string ollama[431503]: time=2025-12-05T22:52:46.282+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:6[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:6(20..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:46 string ollama[431503]: time=2025-12-05T22:52:46.462+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:5[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:5(21..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:46 string ollama[431503]: time=2025-12-05T22:52:46.644+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:4[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:4(22..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:46 string ollama[431503]: time=2025-12-05T22:52:46.829+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:3[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:3(23..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:47 string ollama[431503]: time=2025-12-05T22:52:47.007+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:2[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:2(24..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:47 string ollama[431503]: time=2025-12-05T22:52:47.187+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:1[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:1(25..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:47 string ollama[431503]: time=2025-12-05T22:52:47.369+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:47 string ollama[431503]: time=2025-12-05T22:52:47.548+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:48 string ollama[431503]: time=2025-12-05T22:52:48.057+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:8[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:8(18..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:48 string ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:52:48 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:52:48 string ollama[431503]: time=2025-12-05T22:52:48.212+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:7[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:7(19..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:48 string ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:52:48 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:52:48 string ollama[431503]: time=2025-12-05T22:52:48.374+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:6[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:6(20..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:48 string ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:52:48 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:52:48 string ollama[431503]: time=2025-12-05T22:52:48.526+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:5[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:5(21..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:48 string ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:52:48 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:52:48 string ollama[431503]: time=2025-12-05T22:52:48.679+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:4[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:4(22..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:48 string ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:52:48 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:52:48 string ollama[431503]: time=2025-12-05T22:52:48.834+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:3[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:3(23..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:48 string ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:52:48 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:52:48 string ollama[431503]: time=2025-12-05T22:52:48.987+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:2[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:2(24..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:49 string ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:52:49 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.141+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:1[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:1(25..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:49 string ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:52:49 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760
Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.298+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=ggml.go:482 msg="offloading 0 repeating layers to GPU"
Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=ggml.go:494 msg="offloaded 0/27 layers to GPU"
Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="3.1 GiB"
Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="6.5 GiB"
Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="9.0 GiB"
Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=device.go:272 msg="total memory" size="18.6 GiB"
Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.829+02:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
Dec 05 22:52:50 string ollama[431503]: time=2025-12-05T22:52:50.080+02:00 level=INFO source=server.go:1332 msg="llama runner started in 4.61 seconds"
Dec 05 22:52:50 string ollama[431503]: [GIN] 2025/12/05 - 22:52:50 | 200 |  4.759222739s |       127.0.0.1 | POST     "/api/generate"
<!-- gh-comment-id:3618573225 -->
@azazar commented on GitHub (Dec 5, 2025):

The last Ollama update has broken GPU support for me too.

# Log

```
Nov 30 17:34:31 string systemd[1]: Started ollama.service - Ollama Service.
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.372+02:00 level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.385+02:00 level=INFO source=images.go:518 msg="total blobs: 26"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.385+02:00 level=INFO source=images.go:525 msg="total unused blobs removed: 0"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.385+02:00 level=INFO source=routes.go:1528 msg="Listening on 127.0.0.1:11434 (version 0.12.3)"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.386+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.465+02:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/download/linux-drivers.html" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.466+02:00 level=WARN source=amd_linux.go:447 msg="amdgpu detected, but no compatible rocm library found. Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.466+02:00 level=WARN source=amd_linux.go:352 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.466+02:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1050 Ti" total="3.9 GiB" available="3.9 GiB"
Nov 30 17:34:31 string ollama[2783]: time=2025-11-30T17:34:31.466+02:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="3.9 GiB" threshold="20.0 GiB"
Nov 30 20:36:05 string ollama[2783]: [GIN] 2025/11/30 - 20:36:05 | 200 | 11.522113ms | 127.0.0.1 | GET "/api/tags"
Nov 30 20:36:05 string ollama[2783]: [GIN] 2025/11/30 - 20:36:05 | 200 | 98.58µs | 127.0.0.1 | GET "/api/ps"
Nov 30 21:21:40 string ollama[2783]: [GIN] 2025/11/30 - 21:21:40 | 200 | 352.602µs | 127.0.0.1 | GET "/api/tags"
Nov 30 21:21:40 string ollama[2783]: [GIN] 2025/11/30 - 21:21:40 | 200 | 22.72µs | 127.0.0.1 | GET "/api/ps"
Nov 30 22:11:53 string systemd[1]: Stopping ollama.service - Ollama Service...
Nov 30 22:11:53 string systemd[1]: ollama.service: Deactivated successfully.
Nov 30 22:11:53 string systemd[1]: Stopped ollama.service - Ollama Service.
-- Boot d64caa43e0014455a1a42e2d3ccfd2d0 --
Dec 01 08:49:18 string systemd[1]: Started ollama.service - Ollama Service.
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.324+02:00 level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.338+02:00 level=INFO source=images.go:518 msg="total blobs: 26"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.339+02:00 level=INFO source=images.go:525 msg="total unused blobs removed: 0"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.339+02:00 level=INFO source=routes.go:1528 msg="Listening on 127.0.0.1:11434 (version 0.12.3)"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.339+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.418+02:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/download/linux-drivers.html" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.418+02:00 level=WARN source=amd_linux.go:447 msg="amdgpu detected, but no compatible rocm library found. Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.418+02:00 level=WARN source=amd_linux.go:352 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.418+02:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1050 Ti" total="3.9 GiB" available="3.9 GiB"
Dec 01 08:49:18 string ollama[2871]: time=2025-12-01T08:49:18.418+02:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="3.9 GiB" threshold="20.0 GiB"
Dec 01 22:22:02 string systemd[1]: Stopping ollama.service - Ollama Service...
Dec 01 22:22:02 string systemd[1]: ollama.service: Deactivated successfully.
Dec 01 22:22:02 string systemd[1]: Stopped ollama.service - Ollama Service.
-- Boot 5e665a1b1ca744abbe2f547993b29020 --
Dec 02 08:03:00 string systemd[1]: Started ollama.service - Ollama Service.
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.289+02:00 level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.301+02:00 level=INFO source=images.go:518 msg="total blobs: 26"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.301+02:00 level=INFO source=images.go:525 msg="total unused blobs removed: 0"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.302+02:00 level=INFO source=routes.go:1528 msg="Listening on 127.0.0.1:11434 (version 0.12.3)"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.302+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.384+02:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/download/linux-drivers.html" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.384+02:00 level=WARN source=amd_linux.go:447 msg="amdgpu detected, but no compatible rocm library found. Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.384+02:00 level=WARN source=amd_linux.go:352 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.384+02:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1050 Ti" total="3.9 GiB" available="3.9 GiB"
Dec 02 08:03:00 string ollama[2773]: time=2025-12-02T08:03:00.384+02:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="3.9 GiB" threshold="20.0 GiB"
Dec 02 08:38:11 string ollama[2773]: [GIN] 2025/12/02 - 08:38:11 | 200 | 10.871519ms | 127.0.0.1 | GET "/api/tags"
Dec 02 08:38:11 string ollama[2773]: [GIN] 2025/12/02 - 08:38:11 | 200 | 94.791µs | 127.0.0.1 | GET "/api/ps"
Dec 02 23:11:18 string systemd[1]: Stopping ollama.service - Ollama Service...
Dec 02 23:11:18 string systemd[1]: ollama.service: Deactivated successfully.
Dec 02 23:11:18 string systemd[1]: Stopped ollama.service - Ollama Service.
Dec 02 23:11:18 string systemd[1]: ollama.service: Consumed 665ms CPU time, 64M memory peak.
-- Boot 2828bece086f469faacb079ec26b647c --
Dec 03 09:14:30 string systemd[1]: Started ollama.service - Ollama Service.
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.380+02:00 level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.395+02:00 level=INFO source=images.go:518 msg="total blobs: 26"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.396+02:00 level=INFO source=images.go:525 msg="total unused blobs removed: 0"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.396+02:00 level=INFO source=routes.go:1528 msg="Listening on 127.0.0.1:11434 (version 0.12.3)"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.397+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.469+02:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/download/linux-drivers.html" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.469+02:00 level=WARN source=amd_linux.go:447 msg="amdgpu detected, but no compatible rocm library found. Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.469+02:00 level=WARN source=amd_linux.go:352 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.469+02:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1050 Ti" total="3.9 GiB" available="3.9 GiB"
Dec 03 09:14:30 string ollama[2787]: time=2025-12-03T09:14:30.469+02:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="3.9 GiB" threshold="20.0 GiB"
Dec 03 22:43:48 string systemd[1]: Stopping ollama.service - Ollama Service...
Dec 03 22:43:48 string systemd[1]: ollama.service: Deactivated successfully.
Dec 03 22:43:48 string systemd[1]: Stopped ollama.service - Ollama Service.
Dec 03 22:43:48 string systemd[1]: ollama.service: Consumed 608ms CPU time, 64.4M memory peak.
-- Boot 9fc10e2c0a00418fb419241f4e79f174 --
Dec 04 09:25:30 string systemd[1]: Started ollama.service - Ollama Service.
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.248+02:00 level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.260+02:00 level=INFO source=images.go:518 msg="total blobs: 26"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.261+02:00 level=INFO source=images.go:525 msg="total unused blobs removed: 0"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.262+02:00 level=INFO source=routes.go:1528 msg="Listening on 127.0.0.1:11434 (version 0.12.3)"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.263+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.344+02:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/download/linux-drivers.html" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.345+02:00 level=WARN source=amd_linux.go:447 msg="amdgpu detected, but no compatible rocm library found. Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.345+02:00 level=WARN source=amd_linux.go:352 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.345+02:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1050 Ti" total="3.9 GiB" available="3.9 GiB"
Dec 04 09:25:30 string ollama[2795]: time=2025-12-04T09:25:30.345+02:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="3.9 GiB" threshold="20.0 GiB"
Dec 04 23:10:37 string systemd[1]: Stopping ollama.service - Ollama Service...
Dec 04 23:10:38 string systemd[1]: ollama.service: Deactivated successfully.
Dec 04 23:10:38 string systemd[1]: Stopped ollama.service - Ollama Service.
-- Boot ecd13b02253b450fa6d33abbe8a6e108 --
Dec 05 08:59:30 string systemd[1]: Started ollama.service - Ollama Service.
Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.337+02:00 level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.351+02:00 level=INFO source=images.go:518 msg="total blobs: 26" Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.351+02:00 level=INFO source=images.go:525 msg="total unused blobs removed: 0" Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.351+02:00 level=INFO source=routes.go:1528 msg="Listening on 127.0.0.1:11434 (version 0.12.3)" Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.352+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs" Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.426+02:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/download/linux-drivers.html" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or 
directory" Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.427+02:00 level=WARN source=amd_linux.go:447 msg="amdgpu detected, but no compatible rocm library found. Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install" Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.427+02:00 level=WARN source=amd_linux.go:352 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU" Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.427+02:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1050 Ti" total="3.9 GiB" available="3.9 GiB" Dec 05 08:59:30 string ollama[2859]: time=2025-12-05T08:59:30.427+02:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="3.9 GiB" threshold="20.0 GiB" Dec 05 11:20:51 string ollama[2859]: [GIN] 2025/12/05 - 11:20:51 | 200 | 11.534891ms | 127.0.0.1 | GET "/api/tags" Dec 05 11:20:51 string ollama[2859]: [GIN] 2025/12/05 - 11:20:51 | 200 | 56.97µs | 127.0.0.1 | GET "/api/ps" Dec 05 11:32:11 string ollama[2859]: [GIN] 2025/12/05 - 11:32:11 | 200 | 366.692µs | 127.0.0.1 | GET "/api/tags" Dec 05 11:32:12 string ollama[2859]: [GIN] 2025/12/05 - 11:32:12 | 200 | 21.77µs | 127.0.0.1 | GET "/api/ps" Dec 05 11:32:16 string ollama[2859]: [GIN] 2025/12/05 - 11:32:16 | 200 | 362.052µs | 127.0.0.1 | GET "/api/tags" Dec 05 11:32:17 string ollama[2859]: [GIN] 2025/12/05 - 11:32:17 | 200 | 19.82µs | 127.0.0.1 | GET "/api/ps" Dec 05 11:32:17 string ollama[2859]: [GIN] 2025/12/05 - 11:32:17 | 200 | 37.02µs | 127.0.0.1 | GET "/api/version" Dec 05 15:56:58 string ollama[2859]: [GIN] 2025/12/05 - 15:56:58 | 200 | 565.572µs | 127.0.0.1 | GET "/api/tags" Dec 05 15:56:58 string ollama[2859]: [GIN] 2025/12/05 - 15:56:58 | 200 | 21.95µs | 127.0.0.1 | GET "/api/ps" Dec 05 19:11:21 string 
ollama[2859]: [GIN] 2025/12/05 - 19:11:21 | 200 | 517.672µs | 127.0.0.1 | GET "/api/tags" Dec 05 19:11:22 string ollama[2859]: [GIN] 2025/12/05 - 19:11:22 | 200 | 23.75µs | 127.0.0.1 | GET "/api/ps" Dec 05 22:38:43 string ollama[2859]: [GIN] 2025/12/05 - 22:38:43 | 200 | 17.77µs | 127.0.0.1 | HEAD "/" Dec 05 22:38:43 string ollama[2859]: [GIN] 2025/12/05 - 22:38:43 | 404 | 242.671µs | 127.0.0.1 | POST "/api/show" Dec 05 22:38:44 string ollama[2859]: [GIN] 2025/12/05 - 22:38:44 | 200 | 550.546483ms | 127.0.0.1 | POST "/api/pull" Dec 05 22:39:37 string systemd[1]: Stopping ollama.service - Ollama Service... Dec 05 22:39:37 string systemd[1]: ollama.service: Deactivated successfully. Dec 05 22:39:37 string systemd[1]: Stopped ollama.service - Ollama Service. Dec 05 22:39:37 string systemd[1]: ollama.service: Consumed 624ms CPU time, 64.1M memory peak. Dec 05 22:39:37 string systemd[1]: Started ollama.service - Ollama Service. Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.968+02:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] 
OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.968+02:00 level=INFO source=images.go:522 msg="total blobs: 26" Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.968+02:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0" Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.969+02:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.13.1)" Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.969+02:00 level=INFO source=runner.go:67 msg="discovering available GPUs..." Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.969+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 43695" Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.997+02:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. 
To enable, set OLLAMA_VULKAN=1" Dec 05 22:39:37 string ollama[421142]: time=2025-12-05T22:39:37.997+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 45877" Dec 05 22:39:38 string ollama[421142]: time=2025-12-05T22:39:38.045+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 44487" Dec 05 22:39:38 string ollama[421142]: time=2025-12-05T22:39:38.162+02:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 filter_id="" library=CUDA compute=6.1 name=CUDA0 description="NVIDIA GeForce GTX 1050 Ti" libdirs=ollama,cuda_v12 driver=12.2 pci_id=0000:01:00.0 type=discrete total="4.0 GiB" available="3.4 GiB" Dec 05 22:39:38 string ollama[421142]: time=2025-12-05T22:39:38.162+02:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="4.0 GiB" threshold="20.0 GiB" Dec 05 22:39:44 string ollama[421142]: [GIN] 2025/12/05 - 22:39:44 | 200 | 31.9µs | 127.0.0.1 | HEAD "/" Dec 05 22:39:44 string ollama[421142]: [GIN] 2025/12/05 - 22:39:44 | 404 | 238.081µs | 127.0.0.1 | POST "/api/show" Dec 05 22:39:45 string ollama[421142]: time=2025-12-05T22:39:45.017+02:00 level=INFO source=download.go:177 msg="downloading 094eb0a75095 in 16 184 MB part(s)" Dec 05 22:40:36 string ollama[421142]: time=2025-12-05T22:40:36.448+02:00 level=INFO source=download.go:177 msg="downloading 6db27cd4e277 in 1 695 B part(s)" Dec 05 22:40:37 string ollama[421142]: time=2025-12-05T22:40:37.813+02:00 level=INFO source=download.go:177 msg="downloading 3d8ba0a186b5 in 1 2.4 KB part(s)" Dec 05 22:40:39 string ollama[421142]: time=2025-12-05T22:40:39.211+02:00 level=INFO source=download.go:177 msg="downloading e0daf17ff83e in 1 21 B part(s)" Dec 05 22:40:40 string ollama[421142]: time=2025-12-05T22:40:40.645+02:00 level=INFO source=download.go:177 msg="downloading 97002903a239 in 1 514 B part(s)" Dec 05 22:40:43 string 
ollama[421142]: [GIN] 2025/12/05 - 22:40:43 | 200 | 58.862797551s | 127.0.0.1 | POST "/api/pull" Dec 05 22:40:43 string ollama[421142]: [GIN] 2025/12/05 - 22:40:43 | 200 | 31.745633ms | 127.0.0.1 | POST "/api/show" Dec 05 22:40:43 string ollama[421142]: [GIN] 2025/12/05 - 22:40:43 | 200 | 28.823229ms | 127.0.0.1 | POST "/api/show" Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.155+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 35107" Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.243+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-094eb0a75095db5a9f83e51323879750023d7050d008a9b2899bf9f47c4926e5 --port 35131" Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.243+02:00 level=INFO source=sched.go:443 msg="system memory" total="92.0 GiB" free="77.8 GiB" free_swap="54.0 GiB" Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.243+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=CUDA available="3.0 GiB" free="3.5 GiB" minimum="457.0 MiB" overhead="0 B" Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.243+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=27 requested=-1 Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.248+02:00 level=INFO source=runner.go:1398 msg="starting ollama engine" Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.248+02:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:35131" Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.254+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 
GPULayers:27[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:27(0..26)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.272+02:00 level=INFO source=ggml.go:136 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=458 num_key_values=45 Dec 05 22:40:43 string ollama[421142]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so Dec 05 22:40:43 string ollama[421142]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no Dec 05 22:40:43 string ollama[421142]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no Dec 05 22:40:43 string ollama[421142]: ggml_cuda_init: found 1 CUDA devices: Dec 05 22:40:43 string ollama[421142]: Device 0: NVIDIA GeForce GTX 1050 Ti, compute capability 6.1, VMM: yes, ID: GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Dec 05 22:40:43 string ollama[421142]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.343+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.568+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.741+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 
GPULayers:8[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:8(18..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:43 string ollama[421142]: time=2025-12-05T22:40:43.923+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:7[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:7(19..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:44 string ollama[421142]: time=2025-12-05T22:40:44.103+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:6[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:6(20..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:44 string ollama[421142]: time=2025-12-05T22:40:44.283+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:5[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:5(21..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:44 string ollama[421142]: time=2025-12-05T22:40:44.462+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:4[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:4(22..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:44 string ollama[421142]: time=2025-12-05T22:40:44.643+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:3[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:3(23..25)] MultiUserCache:false ProjectorPath: MainGPU:0 
UseMmap:false}" Dec 05 22:40:44 string ollama[421142]: time=2025-12-05T22:40:44.823+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:2[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:2(24..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:45 string ollama[421142]: time=2025-12-05T22:40:45.004+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:1[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:1(25..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:45 string ollama[421142]: time=2025-12-05T22:40:45.186+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:45 string ollama[421142]: time=2025-12-05T22:40:45.363+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:45 string ollama[421142]: time=2025-12-05T22:40:45.873+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:8[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:8(18..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:46 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:40:46 string ollama[421142]: ggml_gallocr_reserve_n: failed to 
allocate CUDA0 buffer of size 9668469760 Dec 05 22:40:46 string ollama[421142]: time=2025-12-05T22:40:46.026+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:7[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:7(19..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:46 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:40:46 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Dec 05 22:40:46 string ollama[421142]: time=2025-12-05T22:40:46.179+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:6[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:6(20..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:46 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:40:46 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Dec 05 22:40:46 string ollama[421142]: time=2025-12-05T22:40:46.337+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:5[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:5(21..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:46 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:40:46 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Dec 05 22:40:46 string 
ollama[421142]: time=2025-12-05T22:40:46.495+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:4[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:4(22..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:46 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:40:46 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Dec 05 22:40:46 string ollama[421142]: time=2025-12-05T22:40:46.646+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:3[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:3(23..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:46 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:40:46 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Dec 05 22:40:46 string ollama[421142]: time=2025-12-05T22:40:46.799+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:2[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:2(24..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:46 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:40:46 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Dec 05 22:40:46 string ollama[421142]: time=2025-12-05T22:40:46.956+02:00 level=INFO 
source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:1[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:1(25..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:47 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:40:47 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.110+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=ggml.go:482 msg="offloading 0 repeating layers to GPU" Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="3.1 GiB" Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU" Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=ggml.go:494 msg="offloaded 0/27 layers to GPU" Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="6.5 GiB" Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO 
source=device.go:267 msg="compute graph" device=CPU size="9.0 GiB" Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=device.go:272 msg="total memory" size="18.6 GiB" Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=sched.go:517 msg="loaded runners" count=1 Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding" Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.631+02:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model" Dec 05 22:40:47 string ollama[421142]: time=2025-12-05T22:40:47.882+02:00 level=INFO source=server.go:1332 msg="llama runner started in 4.64 seconds" Dec 05 22:40:47 string ollama[421142]: [GIN] 2025/12/05 - 22:40:47 | 200 | 4.786097154s | 127.0.0.1 | POST "/api/generate" Dec 05 22:41:09 string ollama[421142]: [GIN] 2025/12/05 - 22:41:09 | 200 | 6.317019068s | 127.0.0.1 | POST "/api/chat" Dec 05 22:41:27 string ollama[421142]: [GIN] 2025/12/05 - 22:41:27 | 200 | 452.761µs | 127.0.0.1 | GET "/api/tags" Dec 05 22:41:27 string ollama[421142]: [GIN] 2025/12/05 - 22:41:27 | 200 | 75.78µs | 127.0.0.1 | GET "/api/ps" Dec 05 22:41:29 string ollama[421142]: [GIN] 2025/12/05 - 22:41:29 | 200 | 391.212µs | 127.0.0.1 | GET "/api/tags" Dec 05 22:41:29 string ollama[421142]: [GIN] 2025/12/05 - 22:41:29 | 200 | 33.42µs | 127.0.0.1 | GET "/api/ps" Dec 05 22:41:30 string ollama[421142]: [GIN] 2025/12/05 - 22:41:30 | 200 | 31.28µs | 127.0.0.1 | GET "/api/version" Dec 05 22:42:12 string ollama[421142]: [GIN] 2025/12/05 - 22:42:12 | 200 | 12.091144569s | 127.0.0.1 | POST "/api/chat" Dec 05 22:42:13 string ollama[421142]: [GIN] 2025/12/05 - 22:42:13 | 200 | 24.92µs | 127.0.0.1 | HEAD "/" Dec 05 22:42:13 string ollama[421142]: [GIN] 2025/12/05 - 22:42:13 | 200 | 27.21µs | 127.0.0.1 | GET "/api/ps" Dec 05 22:42:28 
string ollama[421142]: [GIN] 2025/12/05 - 22:42:28 | 200 | 16.197507127s | 127.0.0.1 | POST "/api/chat" Dec 05 22:42:36 string ollama[421142]: [GIN] 2025/12/05 - 22:42:36 | 200 | 7.296386643s | 127.0.0.1 | POST "/api/chat" Dec 05 22:42:43 string ollama[421142]: [GIN] 2025/12/05 - 22:42:43 | 200 | 7.713140857s | 127.0.0.1 | POST "/api/chat" Dec 05 22:42:46 string ollama[421142]: [GIN] 2025/12/05 - 22:42:46 | 200 | 20.89µs | 127.0.0.1 | HEAD "/" Dec 05 22:42:46 string ollama[421142]: [GIN] 2025/12/05 - 22:42:46 | 200 | 28.63348ms | 127.0.0.1 | POST "/api/show" Dec 05 22:42:46 string ollama[421142]: [GIN] 2025/12/05 - 22:42:46 | 200 | 27.102232ms | 127.0.0.1 | POST "/api/show" Dec 05 22:42:46 string ollama[421142]: [GIN] 2025/12/05 - 22:42:46 | 200 | 57.195278ms | 127.0.0.1 | POST "/api/generate" Dec 05 22:42:48 string ollama[421142]: [GIN] 2025/12/05 - 22:42:48 | 200 | 585.884041ms | 127.0.0.1 | POST "/api/chat" Dec 05 22:43:48 string ollama[421142]: [GIN] 2025/12/05 - 22:43:48 | 200 | 22.5µs | 127.0.0.1 | HEAD "/" Dec 05 22:43:48 string ollama[421142]: [GIN] 2025/12/05 - 22:43:48 | 404 | 235.781µs | 127.0.0.1 | POST "/api/show" Dec 05 22:43:49 string ollama[421142]: time=2025-12-05T22:43:49.615+02:00 level=INFO source=download.go:177 msg="downloading 9c60bdd691c1 in 16 205 MB part(s)" Dec 05 22:44:03 string ollama[421142]: [GIN] 2025/12/05 - 22:44:03 | 200 | 385.132µs | 127.0.0.1 | GET "/api/tags" Dec 05 22:44:03 string ollama[421142]: [GIN] 2025/12/05 - 22:44:03 | 200 | 31.56µs | 127.0.0.1 | GET "/api/ps" Dec 05 22:44:03 string ollama[421142]: [GIN] 2025/12/05 - 22:44:03 | 200 | 30.2µs | 127.0.0.1 | GET "/api/version" Dec 05 22:44:09 string ollama[421142]: [GIN] 2025/12/05 - 22:44:09 | 200 | 26.33µs | 127.0.0.1 | GET "/api/version" Dec 05 22:44:47 string ollama[421142]: time=2025-12-05T22:44:47.998+02:00 level=INFO source=download.go:177 msg="downloading 7339fa418c9a in 1 11 KB part(s)" Dec 05 22:44:49 string ollama[421142]: time=2025-12-05T22:44:49.373+02:00 
level=INFO source=download.go:177 msg="downloading f6417cb1e269 in 1 42 B part(s)"
Dec 05 22:44:50 string ollama[421142]: time=2025-12-05T22:44:50.818+02:00 level=INFO source=download.go:177 msg="downloading 3353ed4a819b in 1 551 B part(s)"
Dec 05 22:44:53 string ollama[421142]: [GIN] 2025/12/05 - 22:44:53 | 200 | 1m4s | 127.0.0.1 | POST "/api/pull"
Dec 05 22:44:53 string ollama[421142]: [GIN] 2025/12/05 - 22:44:53 | 200 | 24.966395ms | 127.0.0.1 | POST "/api/show"
Dec 05 22:44:53 string ollama[421142]: [GIN] 2025/12/05 - 22:44:53 | 200 | 25.422106ms | 127.0.0.1 | POST "/api/show"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.422+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 36733"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.469+02:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=CUDA total="4.0 GiB" available="3.5 GiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.485+02:00 level=WARN source=sched.go:404 msg="model architecture does not currently support parallel requests" architecture=qwen3vl
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.498+02:00 level=INFO source=server.go:209 msg="enabling flash attention"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.498+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-9c60bdd691c1897bbfe5ddbc67336848e18c346b7ee2ab8541b135f208e5bb38 --port 42317"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.499+02:00 level=INFO source=sched.go:443 msg="system memory" total="92.0 GiB" free="67.3 GiB" free_swap="54.0 GiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.499+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=CUDA available="3.0 GiB" free="3.5 GiB" minimum="457.0 MiB" overhead="0 B"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.499+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=37 requested=-1
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.503+02:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.504+02:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:42317"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.510+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:37[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.524+02:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3vl file_type=Q4_K_M name="" description="" num_tensors=809 num_key_values=40
Dec 05 22:44:53 string ollama[421142]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Dec 05 22:44:53 string ollama[421142]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Dec 05 22:44:53 string ollama[421142]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Dec 05 22:44:53 string ollama[421142]: ggml_cuda_init: found 1 CUDA devices:
Dec 05 22:44:53 string ollama[421142]: Device 0: NVIDIA GeForce GTX 1050 Ti, compute capability 6.1, VMM: yes, ID: GPU-b9f0866d-45b0-35b9-575d-2443d746ce48
Dec 05 22:44:53 string ollama[421142]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.544+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=server.go:974 msg="model requires more gpu memory than is currently available, evicting a model to make space" "loaded layers"=0
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="3.1 GiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="304.3 MiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="576.0 MiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="4.2 GiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="39.6 MiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.746+02:00 level=INFO source=device.go:272 msg="total memory" size="8.2 GiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.766+02:00 level=ERROR source=server.go:265 msg="llama runner terminated" error="signal: killed"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.766+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 41617"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.831+02:00 level=WARN source=sched.go:404 msg="model architecture does not currently support parallel requests" architecture=qwen3vl
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.831+02:00 level=INFO source=sched.go:443 msg="system memory" total="92.0 GiB" free="77.0 GiB" free_swap="54.0 GiB"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.831+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=CUDA available="2.9 GiB" free="3.4 GiB" minimum="457.0 MiB" overhead="0 B"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.831+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=37 requested=-1
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.832+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:53 string ollama[421142]: time=2025-12-05T22:44:53.993+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:35[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:35(1..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
[... identical "Operation:fit" load requests repeated every ~160 ms, stepping GPULayers down one at a time from 34 to 2 ...]
Dec 05 22:44:59 string ollama[421142]: time=2025-12-05T22:44:59.627+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:1[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:1(35..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:59 string ollama[421142]: time=2025-12-05T22:44:59.793+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:44:59 string ollama[421142]: time=2025-12-05T22:44:59.961+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:45:00 string ollama[421142]: time=2025-12-05T22:45:00.174+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:35[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:35(1..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:45:00 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 4295.24 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:45:00 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 4503882752
Dec 05 22:45:00 string ollama[421142]: time=2025-12-05T22:45:00.336+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:34[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:34(2..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:45:00 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 4295.24 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:45:00 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 4503882752
[... identical "Operation:alloc" attempts repeated, stepping GPULayers down one at a time from 33 to 4; every attempt fails with the same "allocating 4295.24 MiB on device 0: cudaMalloc failed: out of memory" / "failed to allocate CUDA0 buffer of size 4503882752" pair ...]
Dec 05 22:45:05 string ollama[421142]: time=2025-12-05T22:45:05.675+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:3[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:3(33..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Dec 05 22:45:05 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 4295.24 MiB on device 0: cudaMalloc failed: out of memory
Dec 05 22:45:05 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0
buffer of size 4503882752 Dec 05 22:45:05 string ollama[421142]: time=2025-12-05T22:45:05.839+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:2[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:2(34..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:45:05 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 4295.24 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:45:05 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 4503882752 Dec 05 22:45:05 string ollama[421142]: time=2025-12-05T22:45:05.997+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:1[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:1(35..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:45:06 string ollama[421142]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 4295.24 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:45:06 string ollama[421142]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 4503882752 Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.157+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:45:06 string ollama[421142]: 
time=2025-12-05T22:45:06.370+02:00 level=INFO source=ggml.go:482 msg="offloading 0 repeating layers to GPU" Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU" Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="3.4 GiB" Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=ggml.go:494 msg="offloaded 0/37 layers to GPU" Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="576.0 MiB" Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="4.2 GiB" Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=device.go:272 msg="total memory" size="8.1 GiB" Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=sched.go:517 msg="loaded runners" count=1 Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.370+02:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding" Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.375+02:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model" Dec 05 22:45:06 string ollama[421142]: time=2025-12-05T22:45:06.625+02:00 level=INFO source=server.go:1332 msg="llama runner started in 13.13 seconds" Dec 05 22:45:06 string ollama[421142]: [GIN] 2025/12/05 - 22:45:06 | 200 | 13.25248179s | 127.0.0.1 | POST "/api/generate" Dec 05 22:45:20 string ollama[421142]: [GIN] 2025/12/05 - 22:45:20 | 200 | 9.977555375s | 127.0.0.1 | POST "/api/chat" Dec 05 22:45:24 string ollama[421142]: [GIN] 2025/12/05 - 22:45:24 | 200 | 19.31µs | 127.0.0.1 | HEAD "/" Dec 05 22:45:24 string ollama[421142]: 
[GIN] 2025/12/05 - 22:45:24 | 200 | 17.61µs | 127.0.0.1 | GET "/api/ps" Dec 05 22:46:21 string systemd[1]: Stopping ollama.service - Ollama Service... Dec 05 22:46:21 string ollama[421142]: time=2025-12-05T22:46:21.483+02:00 level=ERROR source=server.go:265 msg="llama runner terminated" error="signal: terminated" Dec 05 22:46:21 string systemd[1]: ollama.service: Deactivated successfully. Dec 05 22:46:21 string systemd[1]: Stopped ollama.service - Ollama Service. Dec 05 22:46:21 string systemd[1]: ollama.service: Consumed 6min 31.359s CPU time, 16.7G memory peak. Dec 05 22:50:51 string systemd[1]: Started ollama.service - Ollama Service. Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.112+02:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:6 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:16 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.112+02:00 level=INFO source=images.go:522 msg="total blobs: 35" Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.113+02:00 
level=INFO source=images.go:529 msg="total unused blobs removed: 0" Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.113+02:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.13.1)" Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.113+02:00 level=INFO source=runner.go:67 msg="discovering available GPUs..." Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.113+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 42411" Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.155+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 42529" Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.183+02:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1" Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.183+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 39263" Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.262+02:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 filter_id="" library=CUDA compute=6.1 name=CUDA0 description="NVIDIA GeForce GTX 1050 Ti" libdirs=ollama,cuda_v12 driver=12.2 pci_id=0000:01:00.0 type=discrete total="4.0 GiB" available="3.3 GiB" Dec 05 22:50:51 string ollama[431503]: time=2025-12-05T22:50:51.262+02:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="4.0 GiB" threshold="20.0 GiB" Dec 05 22:52:45 string ollama[431503]: [GIN] 2025/12/05 - 22:52:45 | 200 | 32.681µs | 127.0.0.1 | HEAD "/" Dec 05 22:52:45 string ollama[431503]: [GIN] 2025/12/05 - 22:52:45 | 200 | 29.159632ms | 127.0.0.1 | POST "/api/show" Dec 05 22:52:45 string ollama[431503]: [GIN] 2025/12/05 - 22:52:45 | 200 | 28.719099ms | 127.0.0.1 | 
POST "/api/show" Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.380+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 40591" Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.466+02:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-094eb0a75095db5a9f83e51323879750023d7050d008a9b2899bf9f47c4926e5 --port 44011" Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.466+02:00 level=INFO source=sched.go:443 msg="system memory" total="92.0 GiB" free="77.0 GiB" free_swap="54.0 GiB" Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.466+02:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 library=CUDA available="2.9 GiB" free="3.3 GiB" minimum="457.0 MiB" overhead="0 B" Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.466+02:00 level=INFO source=server.go:702 msg="loading model" "model layers"=27 requested=-1 Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.471+02:00 level=INFO source=runner.go:1398 msg="starting ollama engine" Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.471+02:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:44011" Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.478+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:27[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:27(0..26)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.495+02:00 level=INFO source=ggml.go:136 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=458 num_key_values=45 Dec 05 22:52:45 string 
ollama[431503]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so Dec 05 22:52:45 string ollama[431503]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no Dec 05 22:52:45 string ollama[431503]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no Dec 05 22:52:45 string ollama[431503]: ggml_cuda_init: found 1 CUDA devices: Dec 05 22:52:45 string ollama[431503]: Device 0: NVIDIA GeForce GTX 1050 Ti, compute capability 6.1, VMM: yes, ID: GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Dec 05 22:52:45 string ollama[431503]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.519+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.742+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:45 string ollama[431503]: time=2025-12-05T22:52:45.921+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:8[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:8(18..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:46 string ollama[431503]: time=2025-12-05T22:52:46.101+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 
GPULayers:7[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:7(19..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:46 string ollama[431503]: time=2025-12-05T22:52:46.282+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:6[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:6(20..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:46 string ollama[431503]: time=2025-12-05T22:52:46.462+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:5[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:5(21..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:46 string ollama[431503]: time=2025-12-05T22:52:46.644+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:4[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:4(22..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:46 string ollama[431503]: time=2025-12-05T22:52:46.829+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:3[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:3(23..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:47 string ollama[431503]: time=2025-12-05T22:52:47.007+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:2[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:2(24..25)] MultiUserCache:false ProjectorPath: MainGPU:0 
UseMmap:false}" Dec 05 22:52:47 string ollama[431503]: time=2025-12-05T22:52:47.187+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:1[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:1(25..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:47 string ollama[431503]: time=2025-12-05T22:52:47.369+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:47 string ollama[431503]: time=2025-12-05T22:52:47.548+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:48 string ollama[431503]: time=2025-12-05T22:52:48.057+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:8[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:8(18..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:48 string ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:52:48 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Dec 05 22:52:48 string ollama[431503]: time=2025-12-05T22:52:48.212+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:7[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:7(19..25)] MultiUserCache:false 
ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:48 string ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:52:48 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Dec 05 22:52:48 string ollama[431503]: time=2025-12-05T22:52:48.374+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:6[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:6(20..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:48 string ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:52:48 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Dec 05 22:52:48 string ollama[431503]: time=2025-12-05T22:52:48.526+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:5[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:5(21..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:48 string ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:52:48 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Dec 05 22:52:48 string ollama[431503]: time=2025-12-05T22:52:48.679+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:4[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:4(22..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:48 string 
ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:52:48 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Dec 05 22:52:48 string ollama[431503]: time=2025-12-05T22:52:48.834+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:3[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:3(23..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:48 string ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:52:48 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Dec 05 22:52:48 string ollama[431503]: time=2025-12-05T22:52:48.987+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:2[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:2(24..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:49 string ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:52:49 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.141+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:1[ID:GPU-b9f0866d-45b0-35b9-575d-2443d746ce48 Layers:1(25..25)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:49 string ollama[431503]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 
9220.57 MiB on device 0: cudaMalloc failed: out of memory Dec 05 22:52:49 string ollama[431503]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 9668469760 Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.298+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:16 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=ggml.go:482 msg="offloading 0 repeating layers to GPU" Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU" Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=ggml.go:494 msg="offloaded 0/27 layers to GPU" Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="3.1 GiB" Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="6.5 GiB" Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="9.0 GiB" Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=device.go:272 msg="total memory" size="18.6 GiB" Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.811+02:00 level=INFO source=sched.go:517 msg="loaded runners" count=1 Dec 05 22:52:49 string ollama[431503]: 
time=2025-12-05T22:52:49.811+02:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding" Dec 05 22:52:49 string ollama[431503]: time=2025-12-05T22:52:49.829+02:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model" Dec 05 22:52:50 string ollama[431503]: time=2025-12-05T22:52:50.080+02:00 level=INFO source=server.go:1332 msg="llama runner started in 4.61 seconds" Dec 05 22:52:50 string ollama[431503]: [GIN] 2025/12/05 - 22:52:50 | 200 | 4.759222739s | 127.0.0.1 | POST "/api/generate" ```
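For anyone triaging a similar log: the entries above show the scheduler retrying ever-smaller `GPULayers` splits and hitting `cudaMalloc failed: out of memory` each time, because `Parallel:16` with `KvSize:65536` yields a 6.5 GiB KV cache plus a 9.0 GiB compute graph that can never fit a 4 GiB card, so everything ends up on the CPU (`offloaded 0/27 layers to GPU`). A hedged workaround sketch while waiting for a fix, using the `OLLAMA_NUM_PARALLEL` and `OLLAMA_CONTEXT_LENGTH` variables visible in the server-config line (the drop-in path and values are illustrative, not from the original report):

```shell
# Hypothetical systemd drop-in for the ollama service; values are examples.
# Fewer parallel slots and a smaller context shrink the KV-cache and
# compute-graph allocations that failed above.
sudo mkdir -p /etc/systemd/system/ollama.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment=OLLAMA_NUM_PARALLEL=1
Environment=OLLAMA_CONTEXT_LENGTH=4096
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

After restarting, the journal should show the smaller KV-cache sizing in the next `load request` lines.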
Author
Owner

@SirNate0 commented on GitHub (Dec 16, 2025):

If #13315 is actually a duplicate of this, then I think this is fixed. Particularly, https://github.com/ollama/ollama/issues/13315#issuecomment-3617823620 reports that 0.13.2-rc1 fixed it, and for me, upgrading from 0.13.1 to 0.13.4 fixed the issue (13GB of RAM/VRAM used before for ministral-3:3b, and now only 5GB of VRAM).
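For others wanting to try the same fix, upgrading Ollama on Linux is typically done by re-running the official install script, which replaces the binary with the latest release (shown as a sketch; review any script before piping it to `sh`):

```shell
# Re-run the official installer to upgrade in place, then confirm
# the reported version is at least 0.13.2.
curl -fsSL https://ollama.com/install.sh | sh
ollama --version
```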

Author
Owner

@rick-github commented on GitHub (Jan 1, 2026):

Does upgrading resolve this issue for others?

Reference: github-starred/ollama#70867