[GH-ISSUE #10726] Latest v.0.7 uses incorrect model size #7044

Closed
opened 2026-04-12 18:57:17 -05:00 by GiteaMirror · 5 comments

Originally created by @trinhkvo on GitHub (May 16, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10726

What is the issue?

Using the same model, Gemma 3 27B QAT Q4_0, `ollama ps` shows its size as 26GB, while v.0.6.8 showed 24GB for the same model. The larger size means fewer layers are offloaded to the GPU: only 55/63 layers, compared to 62/63 layers in v.0.6.8.
Same context length of 32768. Using Docker Desktop on Windows 11.
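
A minimal repro sketch under both versions — the container name `ollama` is an assumption; it loads the model once, then compares the reported size and the offload line from the server logs:

```shell
# Load the model, then compare what "ollama ps" and the offload log report.
# Assumes Ollama runs in a Docker container named "ollama".
docker exec -it ollama ollama run gemma3:27b-it-qat "hello"
docker exec -it ollama ollama ps
docker logs ollama 2>&1 | grep layers.offload
```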

Ollama ps:
v.0.6.8:

![Image](https://github.com/user-attachments/assets/25a7d450-3a58-4070-8ab8-d50bfcf821e8)

v.0.7:

![Image](https://github.com/user-attachments/assets/ca786558-99aa-4fb1-815f-0b16f4bdda61)

Relevant log output

v.0.7

2025-05-16 00:10:34.244 | time=2025-05-16T04:10:34.243Z level=INFO source=routes.go:1205 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:15m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2025-05-16 00:10:34.419 | time=2025-05-16T04:10:34.418Z level=INFO source=images.go:463 msg="total blobs: 36"
2025-05-16 00:10:34.522 | time=2025-05-16T04:10:34.522Z level=INFO source=images.go:470 msg="total unused blobs removed: 0"
2025-05-16 00:10:34.634 | time=2025-05-16T04:10:34.634Z level=INFO source=routes.go:1258 msg="Listening on [::]:11434 (version 0.7.0)"
2025-05-16 00:10:34.634 | time=2025-05-16T04:10:34.634Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
2025-05-16 00:10:34.932 | time=2025-05-16T04:10:34.932Z level=INFO source=types.go:130 msg="inference compute" id=GPU-fb65d1cd-e129-e457-f1f2-c11081edf878 library=cuda variant=v12 compute=8.6 driver=12.9 name="NVIDIA GeForce RTX 3060" total="12.0 GiB" available="11.0 GiB"
2025-05-16 00:10:34.932 | time=2025-05-16T04:10:34.932Z level=INFO source=types.go:130 msg="inference compute" id=GPU-9d96376a-8917-7f4e-e3c4-03408ac757ee library=cuda variant=v12 compute=8.6 driver=12.9 name="NVIDIA GeForce RTX 3060" total="12.0 GiB" available="11.0 GiB"
2025-05-16 00:12:52.925 | [GIN] 2025/05/16 - 04:12:52 | 200 |  130.686418ms |      172.18.0.5 | GET      "/api/tags"
2025-05-16 00:12:54.407 | [GIN] 2025/05/16 - 04:12:54 | 200 |      57.281µs |      172.18.0.5 | GET      "/api/version"
2025-05-16 00:12:56.738 | [GIN] 2025/05/16 - 04:12:56 | 200 |      40.311µs |      172.18.0.5 | GET      "/api/version"
2025-05-16 00:18:09.629 | [GIN] 2025/05/16 - 04:18:09 | 200 |      48.841µs |      172.18.0.5 | GET      "/api/version"
2025-05-16 00:19:39.808 | time=2025-05-16T04:19:39.808Z level=INFO source=server.go:135 msg="system memory" total="47.0 GiB" free="38.4 GiB" free_swap="12.0 GiB"
2025-05-16 00:19:40.060 | time=2025-05-16T04:19:40.060Z level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=63 layers.offload=55 layers.split=28,27 memory.available="[11.0 GiB 11.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="24.4 GiB" memory.required.partial="21.6 GiB" memory.required.kv="1.6 GiB" memory.required.allocations="[10.8 GiB 10.8 GiB]" memory.weights.total="14.5 GiB" memory.weights.repeating="13.5 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="2.2 GiB" memory.graph.partial="2.2 GiB" projector.weights="818.0 MiB" projector.graph="0 B"
2025-05-16 00:19:40.060 | time=2025-05-16T04:19:40.060Z level=INFO source=server.go:211 msg="enabling flash attention"
2025-05-16 00:19:40.133 | time=2025-05-16T04:19:40.133Z level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-4f1e32db877a9339df2d6529c1635570425cbe81f0aa3f7dd5d1452f2e632b42 --ctx-size 32768 --batch-size 512 --n-gpu-layers 55 --threads 8 --flash-attn --kv-cache-type q8_0 --no-mmap --parallel 1 --tensor-split 28,27 --port 41079"
2025-05-16 00:19:40.133 | time=2025-05-16T04:19:40.133Z level=INFO source=sched.go:472 msg="loaded runners" count=1
2025-05-16 00:19:40.133 | time=2025-05-16T04:19:40.133Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
2025-05-16 00:19:40.135 | time=2025-05-16T04:19:40.135Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
2025-05-16 00:19:40.149 | time=2025-05-16T04:19:40.148Z level=INFO source=runner.go:836 msg="starting ollama engine"
2025-05-16 00:19:40.160 | time=2025-05-16T04:19:40.160Z level=INFO source=runner.go:899 msg="Server listening on 127.0.0.1:41079"
2025-05-16 00:19:40.223 | time=2025-05-16T04:19:40.223Z level=INFO source=ggml.go:73 msg="" architecture=gemma3 file_type=Q4_0 name="Gemma 3 27b It Qat" description="" num_tensors=808 num_key_values=45
2025-05-16 00:19:40.246 | load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
2025-05-16 00:19:40.387 | time=2025-05-16T04:19:40.387Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
2025-05-16 00:19:40.488 | ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
2025-05-16 00:19:40.488 | ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2025-05-16 00:19:40.488 | ggml_cuda_init: found 2 CUDA devices:
2025-05-16 00:19:40.488 |   Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
2025-05-16 00:19:40.488 |   Device 1: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
2025-05-16 00:19:40.586 | load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
2025-05-16 00:19:40.587 | time=2025-05-16T04:19:40.586Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
2025-05-16 00:19:40.765 | time=2025-05-16T04:19:40.765Z level=INFO source=ggml.go:299 msg="model weights" buffer=CPU size="3.7 GiB"
2025-05-16 00:19:40.765 | time=2025-05-16T04:19:40.765Z level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA0 size="6.1 GiB"
2025-05-16 00:19:40.765 | time=2025-05-16T04:19:40.765Z level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA1 size="5.8 GiB"
2025-05-16 00:20:19.827 | time=2025-05-16T04:20:19.826Z level=INFO source=ggml.go:556 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="138.5 MiB"
2025-05-16 00:20:19.827 | time=2025-05-16T04:20:19.826Z level=INFO source=ggml.go:556 msg="compute graph" backend=CUDA1 buffer_type=CUDA1 size="138.5 MiB"
2025-05-16 00:20:19.827 | time=2025-05-16T04:20:19.826Z level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="121.0 MiB"
2025-05-16 00:20:20.046 | time=2025-05-16T04:20:20.045Z level=INFO source=server.go:630 msg="llama runner started in 39.91 seconds"
2025-05-16 00:20:44.128 | [GIN] 2025/05/16 - 04:20:44 | 200 |       22.93µs |       127.0.0.1 | HEAD     "/"
2025-05-16 00:20:44.128 | [GIN] 2025/05/16 - 04:20:44 | 200 |      65.192µs |       127.0.0.1 | GET      "/api/ps"
2025-05-16 00:23:12.335 | [GIN] 2025/05/16 - 04:23:12 | 200 |      40.881µs |      172.18.0.5 | GET      "/api/version"
2025-05-16 00:25:28.933 | [GIN] 2025/05/16 - 04:25:28 | 200 |         5m51s |      172.18.0.5 | POST     "/api/chat"

OS

Docker

GPU

Nvidia

CPU

AMD

Ollama version

0.7

GiteaMirror added the bug label 2026-04-12 18:57:17 -05:00

@trinhkvo commented on GitHub (May 16, 2025):

v.0.6.8:

2025/05/11 04:40:13 routes.go:1233: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:15m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-05-11T04:40:13.324Z level=INFO source=images.go:463 msg="total blobs: 26"
time=2025-05-11T04:40:13.402Z level=INFO source=images.go:470 msg="total unused blobs removed: 0"
time=2025-05-11T04:40:13.487Z level=INFO source=routes.go:1300 msg="Listening on [::]:11434 (version 0.6.8)"
time=2025-05-11T04:40:13.489Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-05-11T04:40:13.812Z level=INFO source=types.go:130 msg="inference compute" id=GPU-fb65d1cd-e129-e457-f1f2-c11081edf878 library=cuda variant=v12 compute=8.6 driver=12.9 name="NVIDIA GeForce RTX 3060" total="12.0 GiB" available="11.0 GiB"
time=2025-05-11T04:40:13.812Z level=INFO source=types.go:130 msg="inference compute" id=GPU-9d96376a-8917-7f4e-e3c4-03408ac757ee library=cuda variant=v12 compute=8.6 driver=12.9 name="NVIDIA GeForce RTX 3060" total="12.0 GiB" available="11.0 GiB"
[GIN] 2025/05/11 - 04:40:32 | 200 | 1.524776ms | 172.18.0.5 | GET "/api/version"
time=2025-05-11T04:40:47.469Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-11T04:40:47.821Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-11T04:40:47.880Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-11T04:40:47.883Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-11T04:40:48.135Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-11T04:40:48.380Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-11T04:40:48.620Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-11T04:40:48.872Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-11T04:40:49.122Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-11T04:40:49.606Z level=INFO source=server.go:106 msg="system memory" total="47.0 GiB" free="39.8 GiB" free_swap="12.0 GiB"
time=2025-05-11T04:40:49.612Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-11T04:40:49.850Z level=INFO source=server.go:139 msg=offload library=cuda layers.requested=-1 layers.model=63 layers.offload=62 layers.split=32,30 memory.available="[11.0 GiB 11.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="22.7 GiB" memory.required.partial="21.6 GiB" memory.required.kv="1.6 GiB" memory.required.allocations="[10.8 GiB 10.8 GiB]" memory.weights.total="14.5 GiB" memory.weights.repeating="13.5 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="2.2 GiB" memory.graph.partial="2.2 GiB" projector.weights="818.0 MiB" projector.graph="0 B"
time=2025-05-11T04:40:49.850Z level=INFO source=server.go:186 msg="enabling flash attention"
time=2025-05-11T04:40:49.918Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-11T04:40:49.919Z level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-11T04:40:49.923Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-05-11T04:40:49.923Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-05-11T04:40:49.923Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.num_channels default=0
time=2025-05-11T04:40:49.923Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-05-11T04:40:49.923Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.embedding_length default=0
time=2025-05-11T04:40:49.923Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.attention.head_count default=0
time=2025-05-11T04:40:49.923Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-05-11T04:40:49.923Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-05-11T04:40:49.923Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0
time=2025-05-11T04:40:49.927Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-11T04:40:49.927Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-11T04:40:49.927Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-11T04:40:49.927Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-11T04:40:49.929Z level=INFO source=server.go:410 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-4f1e32db877a9339df2d6529c1635570425cbe81f0aa3f7dd5d1452f2e632b42 --ctx-size 32768 --batch-size 512 --n-gpu-layers 62 --threads 8 --flash-attn --kv-cache-type q8_0 --no-mmap --parallel 1 --tensor-split 32,30 --port 46409"
time=2025-05-11T04:40:49.930Z level=INFO source=sched.go:452 msg="loaded runners" count=1
time=2025-05-11T04:40:49.930Z level=INFO source=server.go:589 msg="waiting for llama runner to start responding"
time=2025-05-11T04:40:49.931Z level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server not responding"
time=2025-05-11T04:40:49.944Z level=INFO source=runner.go:851 msg="starting ollama engine"
time=2025-05-11T04:40:49.956Z level=INFO source=runner.go:914 msg="Server listening on 127.0.0.1:46409"
time=2025-05-11T04:40:50.025Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-11T04:40:50.026Z level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
time=2025-05-11T04:40:50.026Z level=INFO source=ggml.go:72 msg="" architecture=gemma3 file_type=Q4_0 name="Gemma 3 27b It Qat" description="" num_tensors=808 num_key_values=45
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
time=2025-05-11T04:40:50.183Z level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
  Device 1: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2025-05-11T04:40:50.425Z level=INFO source=ggml.go:103 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-05-11T04:40:50.597Z level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="2.2 GiB"
time=2025-05-11T04:40:50.597Z level=INFO source=ggml.go:298 msg="model weights" buffer=CUDA0 size="7.0 GiB"
time=2025-05-11T04:40:50.597Z level=INFO source=ggml.go:298 msg="model weights" buffer=CUDA1 size="6.5 GiB"
time=2025-05-11T04:41:31.318Z level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-11T04:41:31.322Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-05-11T04:41:31.322Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-05-11T04:41:31.322Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.num_channels default=0
time=2025-05-11T04:41:31.322Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-05-11T04:41:31.322Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.embedding_length default=0
time=2025-05-11T04:41:31.322Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.attention.head_count default=0
time=2025-05-11T04:41:31.322Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-05-11T04:41:31.322Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-05-11T04:41:31.322Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0
time=2025-05-11T04:41:31.326Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-11T04:41:31.326Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-11T04:41:31.326Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-11T04:41:31.326Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-11T04:41:32.262Z level=INFO source=ggml.go:553 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="154.5 MiB"
time=2025-05-11T04:41:32.262Z level=INFO source=ggml.go:553 msg="compute graph" backend=CUDA1 buffer_type=CUDA1 size="138.5 MiB"
time=2025-05-11T04:41:32.262Z level=INFO source=ggml.go:553 msg="compute graph" backend=CPU buffer_type=CPU size="10.5 MiB"
time=2025-05-11T04:41:32.353Z level=INFO source=server.go:628 msg="llama runner started in 42.42 seconds"
[GIN] 2025/05/11 - 04:43:19 | 200 | 2m32s | 172.18.0.5 | POST "/api/chat"
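
Side by side, the two offload lines report identical weights, KV cache, and graph sizes; only the totals move (v.0.6.8: memory.required.full="22.7 GiB" with 62/63 layers; v.0.7: "24.4 GiB" with 55/63 layers). A quick way to pull those estimates out of the server logs for comparison — the container name `ollama` is an assumption:

```shell
# Extract the scheduler's memory estimate and offload count from the server
# logs (assumes Ollama runs in a Docker container named "ollama").
docker logs ollama 2>&1 | grep -oE 'memory\.required\.full="[^"]+"|layers\.offload=[0-9]+'
# v.0.6.8: layers.offload=62  memory.required.full="22.7 GiB"
# v.0.7:   layers.offload=55  memory.required.full="24.4 GiB"
```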


@chunyao commented on GitHub (May 16, 2025):

Same problem here; I rolled back to 0.6.8.


@jessegross commented on GitHub (May 16, 2025):

Most likely due to this commit:
https://github.com/ollama/ollama/commit/0478d440f0ba62202bc4b98043ae4a7d0b85e4ba

It increased the size of the reserved buffer, since some models were crashing with OOM.
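
A back-of-the-envelope sketch of why a bigger reserve shrinks the offload count — the reserve sizes below are hypothetical, chosen only to reproduce the observed 62 → 55 drop, not the actual scheduler math; the per-layer figure comes from memory.weights.repeating in the logs:

```shell
awk 'BEGIN {
  per    = 13.5 / 62   # GiB per repeating layer (13.5 GiB over 62 layers, from the logs)
  usable = 15.8        # hypothetical VRAM left after KV cache, non-repeating weights, projector
  printf "small graph reserve: %d layers\n", (usable - 2.2) / per   # -> 62
  printf "large graph reserve: %d layers\n", (usable - 3.7) / per   # -> 55
}'
```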


@Kingbadger3d commented on GitHub (May 19, 2025):

Yeah, I just ran `ollama create blahblah -f modelfile` for Qwen3 8B (only 8.1GB in size), and `ollama list` shows this:

qwen3-8b-CrewAI:latest 93271d9857d9 102 GB 48 minutes ago
gemma3:27b-it-qat 29eb0b9aeda3 18 GB 3 hours ago
gemma3:12b-it-q4_K_M f4031aab637d 8.1 GB 4 hours ago
qwen3:8b e4b5fd7f8af0 5.2 GB 26 hours ago

It goes from the 5.2GB pulled from the Ollama models site to 102 GB after creating a new version with a custom modelfile!?

I'm on Windows, BTW.
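
For reference, a create that only layers a Modelfile on top of a pulled model should reuse the existing weight blobs rather than rewrite them — a repro sketch, with illustrative Modelfile contents and the reporter's tag name:

```shell
# Derive a new tag from the pulled model without touching the weights,
# then compare the reported sizes (Modelfile contents are illustrative).
cat > Modelfile <<'EOF'
FROM qwen3:8b
PARAMETER num_ctx 8192
EOF
ollama create qwen3-8b-CrewAI -f Modelfile
ollama list | grep qwen3
```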


@jessegross commented on GitHub (May 20, 2025):

Fixed in https://github.com/ollama/ollama/pull/10773

Reference: github-starred/ollama#7044