[GH-ISSUE #10417] Ollama 0.6.6 - Mistral-Small3.1 - Memory Leak ? #53357

Closed
opened 2026-04-29 02:42:29 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @Burnarz on GitHub (Apr 26, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10417

Memory Leak ?

Hi,

Config :
Ryzen 5 3600X @ 5.2GHz
16 GB DDR4, 16 GB swap
RTX 3090
Ubuntu 24.04
Ollama 0.6.6
Context 8192
Flash Attention Active

Running ollama 0.6.6 with mistral-small3.1 (re-downloaded today to be sure) still results in:

```
ollama ps
NAME                       ID              SIZE     PROCESSOR         UNTIL   
mistral-small3.1:latest    b9aaf0c2586a    26 GB    7%/93% CPU/GPU    Forever 
```

While nvtop shows:

![Image](https://github.com/user-attachments/assets/9042885e-68c3-472b-a709-2263545e6c6d)

Let me know if you need more detail.
Thanks in advance.

Relevant log output


OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.6.6

GiteaMirror added the bug label 2026-04-29 02:42:29 -05:00
Author
Owner

@Syirrus commented on GitHub (Apr 26, 2025):

I have the same issue. I have a 24GB card and 64GB of system RAM. After running the model through iterations for about 30 minutes, my system RAM usage goes to 100% and my computer crashes. Super frustrating. I've also tried limiting the context to 6000, and it has no effect on the memory usage on the GPU. I'm running an AMD CPU with an NVIDIA GPU and Ollama 0.6.6, using mistral-small3.1:24b-instruct-2503-q4_K_M for its vision capabilities.

Author
Owner

@Burnarz commented on GitHub (Apr 27, 2025):

Here is the log; I hope it helps.

avril 27 11:22:03 jarvis-server ollama[1145305]: [GIN] 2025/04/27 - 11:22:03 | 200 |    1.187772ms |       127.0.0.1 | HEAD     "/"
avril 27 11:22:03 jarvis-server ollama[1145305]: [GIN] 2025/04/27 - 11:22:03 | 200 |     493.981µs |       127.0.0.1 | GET      "/api/ps"
avril 27 11:22:26 jarvis-server ollama[1145305]: [GIN] 2025/04/27 - 11:22:26 | 200 |       90.74µs |       127.0.0.1 | HEAD     "/"
avril 27 11:22:26 jarvis-server ollama[1145305]: time=2025-04-27T11:22:26.394Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
avril 27 11:22:26 jarvis-server ollama[1145305]: time=2025-04-27T11:22:26.417Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
avril 27 11:22:26 jarvis-server ollama[1145305]: [GIN] 2025/04/27 - 11:22:26 | 200 |   68.731064ms |       127.0.0.1 | POST     "/api/show"
avril 27 11:22:26 jarvis-server ollama[1145305]: time=2025-04-27T11:22:26.443Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
avril 27 11:22:26 jarvis-server ollama[1145305]: time=2025-04-27T11:22:26.706Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
avril 27 11:22:26 jarvis-server ollama[1145305]: time=2025-04-27T11:22:26.727Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.538Z level=INFO source=server.go:105 msg="system memory" total="15.5 GiB" free="14.3 GiB" free_swap="14.1 GiB"
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.684Z level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=41 layers.offload=36 layers.split="" memory.available="[23.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="24.4 GiB" memory.required.partial="22.7 GiB" memory.required.kv="640.0 MiB" memory.required.allocations="[22.7 GiB]" memory.weights.total="13.1 GiB" memory.weights.repeating="12.7 GiB" memory.weights.nonrepeating="360.0 MiB" memory.graph.full="426.7 MiB" memory.graph.partial="426.7 MiB" projector.weights="769.3 MiB" projector.graph="8.8 GiB"
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.684Z level=INFO source=server.go:185 msg="enabling flash attention"
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.684Z level=WARN source=server.go:193 msg="kv cache type not supported by model" type=""
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.740Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.741Z level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.pretokenizer default="[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.750Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.rope.freq_scale default=1
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.751Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.vision.attention.layer_norm_epsilon default=9.999999747378752e-06
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.751Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.vision.longest_edge default=1540
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.751Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.text_config.rms_norm_eps default=9.999999747378752e-06
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.752Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-1fa8532d986d729117d6b5ac2c884824d0717c9468094554fd1d36412c740cfc --ctx-size 4096 --batch-size 512 --n-gpu-layers 36 --threads 6 --flash-attn --no-mmap --parallel 1 --port 36079"
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.752Z level=INFO source=sched.go:451 msg="loaded runners" count=1
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.753Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.754Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.762Z level=INFO source=runner.go:866 msg="starting ollama engine"
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.763Z level=INFO source=runner.go:929 msg="Server listening on 127.0.0.1:36079"
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.820Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.820Z level=WARN source=ggml.go:152 msg="key not found" key=general.name default=""
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.820Z level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.820Z level=INFO source=ggml.go:72 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=585 num_key_values=43
avril 27 11:22:28 jarvis-server ollama[1145305]: time=2025-04-27T11:22:28.005Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
avril 27 11:22:28 jarvis-server ollama[1145305]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
avril 27 11:22:28 jarvis-server ollama[1145305]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
avril 27 11:22:28 jarvis-server ollama[1145305]: ggml_cuda_init: found 1 CUDA devices:
avril 27 11:22:28 jarvis-server ollama[1145305]:   Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
avril 27 11:22:28 jarvis-server ollama[1145305]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
avril 27 11:22:28 jarvis-server ollama[1145305]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
avril 27 11:22:28 jarvis-server ollama[1145305]: time=2025-04-27T11:22:28.024Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
avril 27 11:22:28 jarvis-server ollama[1145305]: time=2025-04-27T11:22:28.120Z level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="3.0 GiB"
avril 27 11:22:28 jarvis-server ollama[1145305]: time=2025-04-27T11:22:28.120Z level=INFO source=ggml.go:298 msg="model weights" buffer=CUDA0 size="11.4 GiB"
avril 27 11:22:32 jarvis-server ollama[1145305]: time=2025-04-27T11:22:32.372Z level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.pretokenizer default="[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
avril 27 11:22:32 jarvis-server ollama[1145305]: time=2025-04-27T11:22:32.373Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.rope.freq_scale default=1
avril 27 11:22:32 jarvis-server ollama[1145305]: time=2025-04-27T11:22:32.373Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.vision.attention.layer_norm_epsilon default=9.999999747378752e-06
avril 27 11:22:32 jarvis-server ollama[1145305]: time=2025-04-27T11:22:32.373Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.vision.longest_edge default=1540
avril 27 11:22:32 jarvis-server ollama[1145305]: time=2025-04-27T11:22:32.373Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.text_config.rms_norm_eps default=9.999999747378752e-06
avril 27 11:22:32 jarvis-server ollama[1145305]: time=2025-04-27T11:22:32.422Z level=INFO source=ggml.go:556 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="272.1 MiB"
avril 27 11:22:32 jarvis-server ollama[1145305]: time=2025-04-27T11:22:32.422Z level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="10.0 MiB"
avril 27 11:22:32 jarvis-server ollama[1145305]: time=2025-04-27T11:22:32.517Z level=INFO source=server.go:619 msg="llama runner started in 4.76 seconds"
avril 27 11:22:32 jarvis-server ollama[1145305]: [GIN] 2025/04/27 - 11:22:32 | 200 |  6.095798259s |       127.0.0.1 | POST     "/api/generate"
avril 27 11:22:40 jarvis-server ollama[1145305]: [GIN] 2025/04/27 - 11:22:40 | 200 |       15.66µs |       127.0.0.1 | HEAD     "/"
avril 27 11:22:40 jarvis-server ollama[1145305]: [GIN] 2025/04/27 - 11:22:40 | 200 |    6.326491ms |       127.0.0.1 | GET      "/api/ps"
Author
Owner

@Burnarz commented on GitHub (Apr 29, 2025):

Hi @rick-github,

May I ask you to take a look, please?
I already tried a Modelfile forcing GPU offload to 41 layers; CPU usage is worse, jumping to 10%.

Author
Owner

@rick-github commented on GitHub (Apr 29, 2025):

```
avril 27 11:22:27 jarvis-server ollama[1145305]: time=2025-04-27T11:22:27.684Z level=INFO source=server.go:138 msg=offload
 library=cuda layers.requested=-1 layers.model=41 layers.offload=36 layers.split="" memory.available="[23.0 GiB]"
 memory.gpu_overhead="0 B" memory.required.full="24.4 GiB" memory.required.partial="22.7 GiB"
 memory.required.kv="640.0 MiB" memory.required.allocations="[22.7 GiB]" memory.weights.total="13.1 GiB"
 memory.weights.repeating="12.7 GiB" memory.weights.nonrepeating="360.0 MiB" memory.graph.full="426.7 MiB"
 memory.graph.partial="426.7 MiB" projector.weights="769.3 MiB" projector.graph="8.8 GiB"

avril 27 11:22:28 jarvis-server ollama[1145305]: time=2025-04-27T11:22:28.120Z level=INFO source=ggml.go:298
 msg="model weights" buffer=CPU size="3.0 GiB"
avril 27 11:22:28 jarvis-server ollama[1145305]: time=2025-04-27T11:22:28.120Z level=INFO source=ggml.go:298
 msg="model weights" buffer=CUDA0 size="11.4 GiB"
```

So the ollama server estimated that it needed 22.7 of 23 GiB to offload 36 of 41 layers, but the runner only used 11.4 GiB. This is because the runner uses flash attention, which makes more efficient use of VRAM, so not all of the memory ollama estimated is actually consumed. This is a known issue, #6160. The usual workaround is to force full GPU offload by setting `num_gpu` to 41, but if you've tried that and it didn't work, something else is wrong. Please post your Modelfile and the logs from the failed override.
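For reference, a minimal sketch of the workaround described above, assuming the local tag is `mistral-small3.1:latest` (the custom model name below is illustrative; `num_gpu` and `num_ctx` are standard Ollama Modelfile parameters):

```
# Hypothetical Modelfile: force all 41 layers onto the GPU,
# bypassing the scheduler's conservative VRAM estimate
FROM mistral-small3.1:latest
PARAMETER num_gpu 41
PARAMETER num_ctx 4096
```

It would then be built and loaded with something like `ollama create mistral-small3.1-gpu -f Modelfile` followed by `ollama run mistral-small3.1-gpu`, after which `ollama ps` should report the split.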

Author
Owner

@Burnarz commented on GitHub (Apr 30, 2025):

Thanks for your answer.

This example is a bit better (I fully cleared VRAM first to be sure), but it still shows:

```
ollama ps
NAME                    ID              SIZE     PROCESSOR         UNTIL   
mistral-small:latest    414ba322960e    26 GB    6%/94% CPU/GPU    Forever  
```

So here's the Modelfile used:

# Modelfile generated by "ollama show"
# To build a new Modelfile based on this, replace FROM with:
FROM mistral-small3.1:latest

# FROM /usr/share/ollama/.ollama/models/blobs/sha256-1fa8532d986d729117d6b5ac2c884824d0717c9468094554fd1d36412c740cfc
TEMPLATE """{{- range $index, $_ := .Messages }}
{{- if eq .Role "system" }}[SYSTEM_PROMPT]{{ .Content }}[/SYSTEM_PROMPT]
{{- else if eq .Role "user" }}
{{- if and (le (len (slice $.Messages $index)) 2) $.Tools }}[AVAILABLE_TOOLS]{{ $.Tools }}[/AVAILABLE_TOOLS]
{{- end }}[INST]{{ .Content }}[/INST]
{{- else if eq .Role "assistant" }}
{{- if .Content }}{{ .Content }}
{{- if not (eq (len (slice $.Messages $index)) 1) }}</s>
{{- end }}
{{- else if .ToolCalls }}[TOOL_CALLS][
{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{- end }}]</s>
{{- end }}
{{- else if eq .Role "tool" }}[TOOL_RESULTS]{"content": {{ .Content }}}[/TOOL_RESULTS]
{{- end }}
{{- end }}"""
SYSTEM """You are Mistral Small 3.1, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
You power an AI assistant called Le Chat.
Your knowledge base was last updated on 2023-10-01.

When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or "When is the next flight to Tokyo" => "Where do you travel from?").
You are always very attentive to dates, in particular you try to resolve dates (e.g. "yesterday" is {yesterday}) and when asked about information at specific dates, you discard information that is at another date.
You follow these instructions in all languages, and always respond to the user in the language they use or request.
Next sections describe the capabilities that you have.

# WEB BROWSING INSTRUCTIONS

You cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat.

# MULTI-MODAL INSTRUCTIONS

You have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos.
You cannot read nor transcribe audio files or videos."""
PARAMETER num_ctx 4096
PARAMETER num_gpu 41

And the log from this load (load only, no usage):

avril 30 00:04:07 jarvis-server ollama[1175]: [GIN] 2025/04/30 - 00:04:07 | 200 |       15.66µs |       127.0.0.1 | HEAD     "/"
avril 30 00:04:07 jarvis-server ollama[1175]: time=2025-04-30T00:04:07.536Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
avril 30 00:04:07 jarvis-server ollama[1175]: time=2025-04-30T00:04:07.556Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
avril 30 00:04:07 jarvis-server ollama[1175]: [GIN] 2025/04/30 - 00:04:07 | 200 |   48.056688ms |       127.0.0.1 | POST     "/api/show"
avril 30 00:04:07 jarvis-server ollama[1175]: time=2025-04-30T00:04:07.580Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
avril 30 00:04:07 jarvis-server ollama[1175]: time=2025-04-30T00:04:07.760Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
avril 30 00:04:07 jarvis-server ollama[1175]: time=2025-04-30T00:04:07.780Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.543Z level=INFO source=server.go:105 msg="system memory" total="15.5 GiB" free="14.2 GiB" free_swap="14.9 GiB"
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.690Z level=INFO source=server.go:138 msg=offload library=cuda layers.requested=41 layers.model=41 layers.offload=37 layers.split="" memory.available="[23.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="24.4 GiB" memory.required.partial="23.0 GiB" memory.required.kv="640.0 MiB" memory.required.allocations="[23.0 GiB]" memory.weights.total="13.1 GiB" memory.weights.repeating="12.7 GiB" memory.weights.nonrepeating="360.0 MiB" memory.graph.full="426.7 MiB" memory.graph.partial="426.7 MiB" projector.weights="769.3 MiB" projector.graph="8.8 GiB"
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.690Z level=INFO source=server.go:185 msg="enabling flash attention"
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.690Z level=WARN source=server.go:193 msg="kv cache type not supported by model" type=""
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.741Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.742Z level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.pretokenizer default="[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.750Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.rope.freq_scale default=1
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.750Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.vision.attention.layer_norm_epsilon default=9.999999747378752e-06
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.750Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.vision.longest_edge default=1540
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.750Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.text_config.rms_norm_eps default=9.999999747378752e-06
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.750Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-1fa8532d986d729117d6b5ac2c884824d0717c9468094554fd1d36412c740cfc --ctx-size 4096 --batch-size 512 --n-gpu-layers 41 --threads 6 --flash-attn --no-mmap --parallel 1 --port 42271"
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.751Z level=INFO source=sched.go:451 msg="loaded runners" count=1
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.751Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.751Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.760Z level=INFO source=runner.go:866 msg="starting ollama engine"
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.760Z level=INFO source=runner.go:929 msg="Server listening on 127.0.0.1:42271"
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.817Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.817Z level=WARN source=ggml.go:152 msg="key not found" key=general.name default=""
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.817Z level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.817Z level=INFO source=ggml.go:72 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=585 num_key_values=43
avril 30 00:04:08 jarvis-server ollama[1175]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
avril 30 00:04:08 jarvis-server ollama[1175]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
avril 30 00:04:08 jarvis-server ollama[1175]: ggml_cuda_init: found 1 CUDA devices:
avril 30 00:04:08 jarvis-server ollama[1175]:   Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
avril 30 00:04:08 jarvis-server ollama[1175]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
avril 30 00:04:08 jarvis-server ollama[1175]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.892Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.980Z level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="525.0 MiB"
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.980Z level=INFO source=ggml.go:298 msg="model weights" buffer=CUDA0 size="13.9 GiB"
avril 30 00:04:09 jarvis-server ollama[1175]: time=2025-04-30T00:04:09.002Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
avril 30 00:04:13 jarvis-server ollama[1175]: time=2025-04-30T00:04:13.101Z level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.pretokenizer default="[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
avril 30 00:04:13 jarvis-server ollama[1175]: time=2025-04-30T00:04:13.104Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.rope.freq_scale default=1
avril 30 00:04:13 jarvis-server ollama[1175]: time=2025-04-30T00:04:13.104Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.vision.attention.layer_norm_epsilon default=9.999999747378752e-06
avril 30 00:04:13 jarvis-server ollama[1175]: time=2025-04-30T00:04:13.104Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.vision.longest_edge default=1540
avril 30 00:04:13 jarvis-server ollama[1175]: time=2025-04-30T00:04:13.104Z level=WARN source=ggml.go:152 msg="key not found" key=mistral3.text_config.rms_norm_eps default=9.999999747378752e-06
avril 30 00:04:13 jarvis-server ollama[1175]: time=2025-04-30T00:04:13.125Z level=INFO source=ggml.go:556 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="152.0 MiB"
avril 30 00:04:13 jarvis-server ollama[1175]: time=2025-04-30T00:04:13.125Z level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="10.0 MiB"
avril 30 00:04:13 jarvis-server ollama[1175]: time=2025-04-30T00:04:13.266Z level=INFO source=server.go:619 msg="llama runner started in 4.52 seconds"
avril 30 00:04:13 jarvis-server ollama[1175]: [GIN] 2025/04/30 - 00:04:13 | 200 |  5.707655417s |       127.0.0.1 | POST     "/api/generate"
avril 30 00:04:16 jarvis-server ollama[1175]: [GIN] 2025/04/30 - 00:04:16 | 200 |       30.35µs |       127.0.0.1 | HEAD     "/"
avril 30 00:04:16 jarvis-server ollama[1175]: [GIN] 2025/04/30 - 00:04:16 | 200 |       28.42µs |       127.0.0.1 | GET      "/api/ps"
Author
Owner

@rick-github commented on GitHub (Apr 30, 2025):

This is performing as expected.

avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.690Z level=INFO source=server.go:138 msg=offload
 library=cuda layers.requested=41 layers.model=41 layers.offload=37 layers.split="" memory.available="[23.3 GiB]"
 memory.gpu_overhead="0 B" memory.required.full="24.4 GiB" memory.required.partial="23.0 GiB" memory.required.kv="640.0 MiB"
 memory.required.allocations="[23.0 GiB]" memory.weights.total="13.1 GiB" memory.weights.repeating="12.7 GiB"
 memory.weights.nonrepeating="360.0 MiB" memory.graph.full="426.7 MiB" memory.graph.partial="426.7 MiB"
 projector.weights="769.3 MiB" projector.graph="8.8 GiB"

The server's initial estimate was to offload 37 layers.

avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.750Z level=INFO source=server.go:405
 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-1fa8532d986d729117d6b5ac2c884824d0717c9468094554fd1d36412c740cfc
 --ctx-size 4096 --batch-size 512 --n-gpu-layers 41 --threads 6 --flash-attn --no-mmap --parallel 1 --port 42271"
avril 30 00:04:08 jarvis-server ollama[1175]: time=2025-04-30T00:04:08.980Z level=INFO source=ggml.go:298
 msg="model weights" buffer=CUDA0 size="13.9 GiB"

The runner actually offloaded all 41 layers because of the `num_gpu 41` instruction in the Modelfile.

The `ollama ps` output is incorrect because it shows the server's estimate, not the runner's actual allocation.
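A side note on the percentages: the CPU/GPU split that `ollama ps` prints is derived from the scheduler's estimate, not measured from the runner. As a rough illustration (the helper name is hypothetical, and Ollama's real split is computed from estimated memory sizes rather than raw layer counts), the `msg=offload` log line already contains enough to compute a layer-based split:

```python
import re

def split_from_offload_log(line: str) -> tuple[int, int]:
    """Derive a (CPU %, GPU %) split from an Ollama "msg=offload" log line.

    Hypothetical helper for illustration only: it uses layer counts,
    whereas the split shown by `ollama ps` is based on memory estimates.
    """
    total = int(re.search(r"layers\.model=(\d+)", line).group(1))
    offload = int(re.search(r"layers\.offload=(\d+)", line).group(1))
    gpu_pct = round(100 * offload / total)
    return (100 - gpu_pct, gpu_pct)

line = "msg=offload library=cuda layers.requested=41 layers.model=41 layers.offload=37"
print(split_from_offload_log(line))  # -> (10, 90)
```

Either way, the number to trust is the runner's own "model weights" buffer line (here, 13.9 GiB on CUDA0), not the `ollama ps` summary.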

@Burnarz commented on GitHub (Apr 30, 2025):

Thanks, working great 👍
My bad, I didn't check tokens/s and fully trusted `ollama ps`.
Can't it be fixed by forcing `num_gpu` in the mistral-small parameters (I mean for everyone, in the model download)?

@rick-github commented on GitHub (Apr 30, 2025):

Everybody has different hardware, so the layer offloading can't be enforced in the model's download parameters. There is work in progress on making the estimation more accurate, which will fix the issue for all models on all types of hardware.
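Until then, a per-machine override works: bake the offload count into a local model variant rather than the published one. A minimal Modelfile, assuming the `num_gpu 41` value from this thread's 3090 setup:

```
FROM mistral-small3.1:latest
PARAMETER num_gpu 41
PARAMETER num_ctx 4096
```

Build it with `ollama create mistral-small3.1-local -f Modelfile`. A machine with a different GPU would need a different `num_gpu` value, which is exactly why it can't ship with the model.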

Reference: github-starred/ollama#53357