[GH-ISSUE #10132] Very strange RAM behavior with v0.6.4 (memory leak?) #32409

Closed
opened 2026-04-22 13:37:52 -05:00 by GiteaMirror · 13 comments
Owner

Originally created by @nfsecurity on GitHub (Apr 4, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10132

[Two screenshot images attached to the original GitHub issue]

What is the issue?

I have been running Ollama with Gemma-3-12b-it 4-bit (quantized) since v0.6.3 with no issues on my RTX 4000 SFF Ada with 20 GB of VRAM (the model uses only 9.4 GB). Yesterday I upgraded to v0.6.4, and the Ollama process started consuming system RAM very quickly: over a period of 24 hours it went from 1 GB to 64 GB (my server has 64 GB of RAM), and my monitoring system (Zabbix) triggered a high-RAM-usage alert, something that has never happened since I started using Ollama many months ago.

My temporary solution is:

% sudo service ollama restart

But the Ollama process starts consuming system RAM again, even though the model was loaded into the VRAM of the RTX 4000 correctly.
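
To quantify how fast the resident memory grows between restarts, a small sampling loop like the one below can log the server's RSS once a minute. This is only a monitoring sketch: it assumes the main server process is named ollama (adjust the pgrep pattern if yours differs).

#!/usr/bin/env bash
# Log the resident memory (RSS) of the ollama server process once a minute.
# Assumes the server process is named "ollama"; adjust the pgrep pattern if needed.
while true; do
  pid=$(pgrep -o -x ollama)
  if [ -n "$pid" ]; then
    echo "$(date -Is) pid=$pid rss_kb=$(ps -o rss= -p "$pid" | tr -d ' ')"
  fi
  sleep 60
done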

Relevant log output

# ollama ps
NAME                 ID              SIZE     PROCESSOR    UNTIL   
gemma3-12b:latest    1038323a41a8    10 GB    100% GPU     Forever

# top -p {ollama-PID}
top - 17:41:52 up 140 days, 22:11,  1 user,  load average: 0.02, 0.10, 0.17
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :  64103.1 total,   4115.6 free,  43591.1 used,  16396.3 buff/cache
MiB Swap:  32735.0 total,  32718.2 free,     16.8 used.  14007.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                                                                               
2468000 ollama    20   0  197.4g  47.1g   5.8g S   0.0  75.2 115:18.93 ollama 

# cat /var/log/syslog
Apr  4 11:03:04 cygnus systemd[1]: Started Ollama Service.
Apr  4 11:03:04 cygnus ollama[2467940]: 2025/04/04 11:03:04 routes.go:1231: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Apr  4 11:03:04 cygnus ollama[2467940]: time=2025-04-04T11:03:04.853-05:00 level=INFO source=images.go:458 msg="total blobs: 15"
Apr  4 11:03:04 cygnus ollama[2467940]: time=2025-04-04T11:03:04.854-05:00 level=INFO source=images.go:465 msg="total unused blobs removed: 0"
Apr  4 11:03:04 cygnus ollama[2467940]: time=2025-04-04T11:03:04.854-05:00 level=INFO source=routes.go:1298 msg="Listening on 127.0.0.1:11434 (version 0.6.4)"
Apr  4 11:03:04 cygnus ollama[2467940]: time=2025-04-04T11:03:04.855-05:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Apr  4 11:03:05 cygnus ollama[2467940]: time=2025-04-04T11:03:05.016-05:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-52f63f85-0119-7965-7b5d-4581411d3e77 library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA RTX 4000 SFF Ada Generation" total="19.7 GiB" available="19.5 GiB"
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.281-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.block_count default=0
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.282-05:00 level=INFO source=sched.go:716 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-6313a660e782dac550b5b8eceb55d5ac3d7391b1530e066bd7028864dca6d096 gpu=GPU-52f63f85-0119-7965-7b5d-4581411d3e77 parallel=4 available=20952973312 required="9.8 GiB"
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.328-05:00 level=INFO source=server.go:105 msg="system memory" total="62.6 GiB" free="61.2 GiB" free_swap="32.0 GiB"
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.328-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.block_count default=0
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.328-05:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[19.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="9.8 GiB" memory.required.partial="9.8 GiB" memory.required.kv="1.9 GiB" memory.required.allocations="[9.8 GiB]" memory.weights.total="6.8 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.7 MiB" memory.graph.full="519.6 MiB" memory.graph.partial="1.3 GiB"
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.365-05:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.366-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.366-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.366-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.num_channels default=0
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.366-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.block_count default=0
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.366-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.embedding_length default=0
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.366-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.head_count default=0
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.366-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.366-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.366-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.370-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.370-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.370-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.370-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.371-05:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-6313a660e782dac550b5b8eceb55d5ac3d7391b1530e066bd7028864dca6d096 --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 14 --no-mmap --parallel 4 --port 32919"
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.371-05:00 level=INFO source=sched.go:451 msg="loaded runners" count=1
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.371-05:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.372-05:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.380-05:00 level=INFO source=runner.go:821 msg="starting ollama engine"
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.382-05:00 level=INFO source=runner.go:884 msg="Server listening on 127.0.0.1:32919"
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.413-05:00 level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.413-05:00 level=INFO source=ggml.go:66 msg="" architecture=gemma3 file_type=Q4_K_M name=Fused_Model description="" num_tensors=627 num_key_values=33
Apr  4 11:03:25 cygnus ollama[2467940]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Apr  4 11:03:25 cygnus ollama[2467940]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Apr  4 11:03:25 cygnus ollama[2467940]: ggml_cuda_init: found 1 CUDA devices:
Apr  4 11:03:25 cygnus ollama[2467940]:   Device 0: NVIDIA RTX 4000 SFF Ada Generation, compute capability 8.9, VMM: yes
Apr  4 11:03:25 cygnus ollama[2467940]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Apr  4 11:03:25 cygnus ollama[2467940]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-alderlake.so
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.503-05:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.543-05:00 level=INFO source=ggml.go:288 msg="model weights" buffer=CUDA0 size="6.8 GiB"
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.543-05:00 level=INFO source=ggml.go:288 msg="model weights" buffer=CPU size="540.1 MiB"
Apr  4 11:03:25 cygnus ollama[2467940]: time=2025-04-04T11:03:25.625-05:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.796-05:00 level=INFO source=ggml.go:380 msg="compute graph" backend=CUDA0 buffer_type=CUDA0
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.796-05:00 level=INFO source=ggml.go:380 msg="compute graph" backend=CPU buffer_type=CUDA_Host
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.798-05:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.800-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.800-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.800-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.num_channels default=0
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.800-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.block_count default=0
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.800-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.embedding_length default=0
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.800-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.head_count default=0
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.800-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.800-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.800-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.807-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.807-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.807-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.807-05:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
Apr  4 11:03:26 cygnus ollama[2467940]: time=2025-04-04T11:03:26.880-05:00 level=INFO source=server.go:619 msg="llama runner started in 1.51 seconds"

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.6.4

GiteaMirror added the bug label 2026-04-22 13:37:52 -05:00
Author
Owner

@bourquenoud commented on GitHub (Apr 5, 2025):

I noticed the same behavior recently, running gemma3:12b. RAM usage slowly increases over time until the process gets killed. However, I am running Ollama 0.6.2.

Author
Owner

@jetnet commented on GitHub (Apr 5, 2025):

Ollama 0.6.4 + gemma3:27b + the generate API is only able to serve a couple of requests. After that it crashes:

Apr 05 13:12:47 ai-server ollama[970617]: ggml.c:1584: GGML_ASSERT(view_src == NULL || data_size == 0 || data_size + view_offs <= ggml_nbytes(view_src)) failed
Apr 05 13:12:47 ai-server ollama[970617]: SIGSEGV: segmentation violation
Apr 05 13:12:47 ai-server ollama[970617]: PC=0x7f65853e2817 m=5 sigcode=1 addr=0x20d803fa0
Apr 05 13:12:47 ai-server ollama[970617]: signal arrived during cgo execution

Dev team, please let me know if you need a full stack trace.

podman container
podman run -d \
  --device nvidia.com/gpu=all --memory=100g \
  -v ollama:$HOME/.ollama \
  -v /opt/ollama/models:/models \
  -p 11434:11434 \
  -e OLLAMA_MODELS=/models \
  -e CUDA_VISIBLE_DEVICES=0,1 \
  -e OLLAMA_SCHED_SPREAD=1 \
  -e OLLAMA_GPU_LAYERS=12 \
  --name ollama ollama/ollama
nvidia-smi
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.06              Driver Version: 555.42.06      CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA L40                     Off |   00000000:06:10.0 Off |                    0 |
| N/A   49C    P0            162W /  300W |   17174MiB /  46068MiB |     27%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA L40                     Off |   00000000:06:11.0 Off |                    0 |
| N/A   70C    P0            239W /  300W |   27595MiB /  46068MiB |     84%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A    882543      C   /usr/local/bin/python                        6800MiB |
|    0   N/A  N/A    971496      C   /usr/bin/ollama                             10204MiB |
|    1   N/A  N/A    881828      C   /usr/bin/python3                             2242MiB |
|    1   N/A  N/A    881935      C   /usr/local/bin/python3.12                     714MiB |
|    1   N/A  N/A    882260      C   /usr/local/bin/python3.11                    4004MiB |
|    1   N/A  N/A    882692      C   /usr/local/bin/python                        5676MiB |
|    1   N/A  N/A    882835      C   python                                       2242MiB |
|    1   N/A  N/A    971496      C   /usr/bin/ollama                             12524MiB |
+-----------------------------------------------------------------------------------------+

P.S. Ollama 0.6.3 has a memory leak with gemma3:27b. It gets restarted by Podman automatically when the process's resident memory (RES) reaches the pre-configured 100 GB limit.
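
For systemd-managed installs, a comparable stopgap to the Podman --memory limit is a drop-in that caps the service's memory so the kernel OOM-kills it past the cap and systemd brings it back up. This is only a workaround sketch, not a fix: the 48G value is an arbitrary example, the drop-in path is illustrative, and it assumes the base ollama.service already sets Restart=always (which the stock Linux install normally does).

# Create a drop-in (example path) that caps the ollama service's memory:
sudo mkdir -p /etc/systemd/system/ollama.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/ollama.service.d/memory-limit.conf
[Service]
# Past this cap the kernel OOM-kills the process; systemd restarts it
# if the base unit has Restart=always.
MemoryMax=48G
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama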

Author
Owner

@alexzk1 commented on GitHub (Apr 5, 2025):

I have a plugin in my IDE that uses Ollama to propose code; I use qwen2.5-coder.
Today, with 0.6.4, it worked fine with the 7b model, but when I switched to 14b it ate all the RAM plus swap pretty fast.
I then stopped/closed everything and downgraded Ollama to 0.6.3. I started the 14b model again and RAM usage looks fine.
Now the model responds, memory used stays below full RAM, and I can even see it release some. No swap is used.
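
For what it's worth, on a standard Linux install the downgrade can be pinned to a specific release by passing OLLAMA_VERSION to the official install script. A rough sketch, assuming the curl-based install rather than a distro package:

# Reinstall a specific release (0.6.3 here) with the official install script,
# then restart the service. Assumes the curl-based Linux install.
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.6.3 sh
sudo systemctl restart ollama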

Author
Owner

@bourquenoud commented on GitHub (Apr 6, 2025):

I upgraded to 0.6.4, and the error is still present. At each request, the RAM usage increases. I tried with and without providing an image just to check, and in both cases there seems to be a memory leak. Also tried with 0.6.5-rc1: same issue.
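
One way to make the per-request growth visible is to send the same prompt in a loop and print the server's resident memory after each call. A rough sketch: the model name and prompt are placeholders, and it assumes the server process is named ollama.

# Fire the same request repeatedly and print ollama's RSS after each one.
# "gemma3:12b" and the prompt are placeholders; use whatever shows the growth.
for i in $(seq 1 20); do
  curl -s http://127.0.0.1:11434/api/generate \
    -d '{"model": "gemma3:12b", "prompt": "Describe a memory leak in one paragraph.", "stream": false}' \
    > /dev/null
  pid=$(pgrep -o -x ollama)
  echo "request $i: rss_kb=$(awk '/VmRSS/ {print $2}' "/proc/$pid/status")"
done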

Log
Apr  6 22:42:00 pop-os ollama[516957]: 2025/04/06 22:42:00 routes.go:1231: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Apr  6 22:42:00 pop-os ollama[516957]: time=2025-04-06T22:42:00.057+02:00 level=INFO source=images.go:458 msg="total blobs: 35"
Apr  6 22:42:00 pop-os ollama[516957]: time=2025-04-06T22:42:00.057+02:00 level=INFO source=images.go:465 msg="total unused blobs removed: 0"
Apr  6 22:42:00 pop-os ollama[516957]: time=2025-04-06T22:42:00.057+02:00 level=INFO source=routes.go:1298 msg="Listening on 127.0.0.1:11434 (version 0.6.4)"
Apr  6 22:42:00 pop-os ollama[516957]: time=2025-04-06T22:42:00.057+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Apr  6 22:42:00 pop-os ollama[516957]: time=2025-04-06T22:42:00.392+02:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Apr  6 22:42:00 pop-os ollama[516957]: time=2025-04-06T22:42:00.393+02:00 level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=0 total="512.0 MiB"
Apr  6 22:42:00 pop-os ollama[516957]: time=2025-04-06T22:42:00.393+02:00 level=INFO source=amd_linux.go:402 msg="no compatible amdgpu devices detected"
Apr  6 22:42:00 pop-os ollama[516957]: time=2025-04-06T22:42:00.393+02:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-11c2768b-7e47-3404-49ad-7979a4b6c3c8 library=cuda variant=v12 compute=8.6 driver=12.8 name="NVIDIA GeForce RTX 3090" total="23.5 GiB" available="22.3 GiB"
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.117+02:00 level=INFO source=sched.go:716 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de gpu=GPU-11c2768b-7e47-3404-49ad-7979a4b6c3c8 parallel=1 available=23949934592 required="10.6 GiB"
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.272+02:00 level=INFO source=server.go:105 msg="system memory" total="30.5 GiB" free="18.8 GiB" free_swap="0 B"
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.273+02:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[22.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.6 GiB" memory.required.partial="10.6 GiB" memory.required.kv="992.0 MiB" memory.required.allocations="[10.6 GiB]" memory.weights.total="6.8 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.5 MiB" memory.graph.full="519.5 MiB" memory.graph.partial="1.3 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.343+02:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.350+02:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.350+02:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.350+02:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.350+02:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.350+02:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.350+02:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 6 --parallel 1 --port 43091"
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.350+02:00 level=INFO source=sched.go:451 msg="loaded runners" count=1
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.351+02:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.351+02:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.359+02:00 level=INFO source=runner.go:821 msg="starting ollama engine"
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.359+02:00 level=INFO source=runner.go:884 msg="Server listening on 127.0.0.1:43091"
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.426+02:00 level=WARN source=ggml.go:149 msg="key not found" key=general.name default=""
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.426+02:00 level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.426+02:00 level=INFO source=ggml.go:66 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1065 num_key_values=37
Apr  6 22:42:16 pop-os ollama[516957]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Apr  6 22:42:16 pop-os ollama[516957]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Apr  6 22:42:16 pop-os ollama[516957]: ggml_cuda_init: found 1 CUDA devices:
Apr  6 22:42:16 pop-os ollama[516957]:   Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Apr  6 22:42:16 pop-os ollama[516957]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Apr  6 22:42:16 pop-os ollama[516957]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.486+02:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.586+02:00 level=INFO source=ggml.go:288 msg="model weights" buffer=CUDA0 size="7.6 GiB"
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.586+02:00 level=INFO source=ggml.go:288 msg="model weights" buffer=CPU size="787.5 MiB"
Apr  6 22:42:16 pop-os ollama[516957]: time=2025-04-06T22:42:16.603+02:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
Apr  6 22:42:24 pop-os ollama[516957]: time=2025-04-06T22:42:24.351+02:00 level=INFO source=ggml.go:380 msg="compute graph" backend=CUDA0 buffer_type=CUDA0
Apr  6 22:42:24 pop-os ollama[516957]: time=2025-04-06T22:42:24.351+02:00 level=INFO source=ggml.go:380 msg="compute graph" backend=CPU buffer_type=CUDA_Host
Apr  6 22:42:24 pop-os ollama[516957]: time=2025-04-06T22:42:24.353+02:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
Apr  6 22:42:24 pop-os ollama[516957]: time=2025-04-06T22:42:24.358+02:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
Apr  6 22:42:24 pop-os ollama[516957]: time=2025-04-06T22:42:24.358+02:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
Apr  6 22:42:24 pop-os ollama[516957]: time=2025-04-06T22:42:24.358+02:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
Apr  6 22:42:24 pop-os ollama[516957]: time=2025-04-06T22:42:24.358+02:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
Apr  6 22:42:24 pop-os ollama[516957]: time=2025-04-06T22:42:24.358+02:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
Apr  6 22:42:24 pop-os ollama[516957]: time=2025-04-06T22:42:24.376+02:00 level=INFO source=server.go:619 msg="llama runner started in 8.03 seconds"
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: loaded meta data with 36 key-value pairs and 1065 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de (version GGUF V3 (latest))
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv   0:                gemma3.attention.head_count u32              = 16
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv   1:             gemma3.attention.head_count_kv u32              = 8
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv   2:                gemma3.attention.key_length u32              = 256
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv   3:            gemma3.attention.sliding_window u32              = 1024
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv   4:              gemma3.attention.value_length u32              = 256
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv   5:                         gemma3.block_count u32              = 48
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv   6:                      gemma3.context_length u32              = 131072
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv   7:                    gemma3.embedding_length u32              = 3840
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv   8:                 gemma3.feed_forward_length u32              = 15360
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv   9:                 gemma3.mm.tokens_per_image u32              = 256
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  10:         gemma3.vision.attention.head_count u32              = 16
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  11: gemma3.vision.attention.layer_norm_epsilon f32              = 0.000001
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  12:                  gemma3.vision.block_count u32              = 27
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  13:             gemma3.vision.embedding_length u32              = 1152
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  14:          gemma3.vision.feed_forward_length u32              = 4304
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  15:                   gemma3.vision.image_size u32              = 896
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  16:                 gemma3.vision.num_channels u32              = 3
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  17:                   gemma3.vision.patch_size u32              = 14
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  18:                       general.architecture str              = gemma3
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {{ bos_token }}\n{%- if messages[0]['r...
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  22:           tokenizer.ggml.add_padding_token bool             = false
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  23:           tokenizer.ggml.add_unknown_token bool             = false
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  24:                tokenizer.ggml.bos_token_id u32              = 2
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  25:                tokenizer.ggml.eos_token_id u32              = 1
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  26:                      tokenizer.ggml.merges arr[str,514906]  = ["\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n", ...
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  27:                       tokenizer.ggml.model str              = llama
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  28:            tokenizer.ggml.padding_token_id u32              = 0
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  29:                         tokenizer.ggml.pre str              = default
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  30:                      tokenizer.ggml.scores arr[f32,262145]  = [0.000000, 0.000000, 0.000000, 0.0000...
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  31:                  tokenizer.ggml.token_type arr[i32,262145]  = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  32:                      tokenizer.ggml.tokens arr[str,262145]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  33:            tokenizer.ggml.unknown_token_id u32              = 3
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  34:               general.quantization_version u32              = 2
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv  35:                          general.file_type u32              = 15
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - type  f32:  563 tensors
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - type  f16:  165 tensors
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - type q4_K:  290 tensors
Apr  6 22:42:24 pop-os ollama[516957]: llama_model_loader: - type q6_K:   47 tensors
Apr  6 22:42:24 pop-os ollama[516957]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Apr  6 22:42:24 pop-os ollama[516957]: load: special tokens cache size = 7
Apr  6 22:42:24 pop-os ollama[516957]: load: token to piece cache size = 1.9446 MB

Apr 6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv 31: tokenizer.ggml.token_type arr[i32,262145] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ... Apr 6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv 32: tokenizer.ggml.tokens arr[str,262145] = ["<pad>", "<eos>", "<bos>", "<unk>", ... Apr 6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv 33: tokenizer.ggml.unknown_token_id u32 = 3 Apr 6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv 34: general.quantization_version u32 = 2 Apr 6 22:42:24 pop-os ollama[516957]: llama_model_loader: - kv 35: general.file_type u32 = 15 Apr 6 22:42:24 pop-os ollama[516957]: llama_model_loader: - type f32: 563 tensors Apr 6 22:42:24 pop-os ollama[516957]: llama_model_loader: - type f16: 165 tensors Apr 6 22:42:24 pop-os ollama[516957]: llama_model_loader: - type q4_K: 290 tensors Apr 6 22:42:24 pop-os ollama[516957]: llama_model_loader: - type q6_K: 47 tensors Apr 6 22:42:24 pop-os ollama[516957]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect Apr 6 22:42:24 pop-os ollama[516957]: load: special tokens cache size = 7 Apr 6 22:42:24 pop-os ollama[516957]: load: token to piece cache size = 1.9446 MB ``` </details>

@lasseedfast commented on GitHub (Apr 7, 2025):

I’m having the same problem. Ended up here as I was searching for documentation on how to restrict Ollama to only use VRAM. Is that possible? Would at least be a workaround (and I can’t see why I would ever want to use RAM for offloading models, but maybe something else happens here?).
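As far as I know there is no hard "VRAM only" switch, but the `num_gpu` option at least lets you request that every layer be offloaded to the GPU, so nothing is deliberately kept in system RAM. A minimal sketch against the HTTP API (the model tag and the deliberately high layer count are just examples, not values from this report):

```sh
# Ask for all layers on the GPU for this request (hypothetical workaround;
# Ollama caps the value at the model's actual layer count, and some buffers
# may still live in system RAM).
curl http://localhost:11434/api/generate -d '{
  "model": "gemma3:12b",
  "prompt": "Hello",
  "options": { "num_gpu": 99 }
}'
```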


@alexzk1 commented on GitHub (Apr 7, 2025):

> I’m having the same problem. Ended up here as I was searching for documentation on how to restrict Ollama to only use VRAM. Is that possible? Would at least be a workaround (and I can’t see why I would ever want to use RAM for offloading models, but maybe something else happens here?).

On my laptop, only 30% of the model fits in VRAM using CUDA. The problem is that it was working two or three weeks ago, and now it suddenly starts to swap. If Ollama were loading a second copy of the model, I would expect swapping like this.


@TheMasterFX commented on GitHub (Apr 7, 2025):

Same here with gemma3:27b (4 bit). It seems to be worse when flash attention is enabled, so I disabled it, and CPU load also goes down.
Ollama 0.6.4 (container), using one NVIDIA L40S.

![Image](https://github.com/user-attachments/assets/a8e17cff-60f6-4eb6-8ba0-5cc2f644c2ed)
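If anyone else wants to try the same mitigation: flash attention is controlled by the OLLAMA_FLASH_ATTENTION environment variable (visible in the server config dump above). A sketch for both install styles; the container name, volume, and port below are just the usual defaults, adjust to your setup:

```sh
# Container install: recreate the container with flash attention explicitly off.
docker run -d --gpus=all \
  -e OLLAMA_FLASH_ATTENTION=0 \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama

# Systemd install: add an override containing
#   [Service]
#   Environment="OLLAMA_FLASH_ATTENTION=0"
sudo systemctl edit ollama
sudo systemctl restart ollama
```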


@Kazunarit commented on GitHub (Apr 8, 2025):

![Image](https://github.com/user-attachments/assets/5414f571-4644-4184-9785-85af9130940f)
Same for v0.6.5. Gemma3:12b crashes when repeatedly prompted with "tell me a story" (#9791).
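A rough way to reproduce and measure this kind of growth (just a sketch; it assumes the default host/port, the `gemma3:12b` tag, and that both the server and runner processes are named `ollama`):

```sh
#!/bin/bash
# Send the same prompt repeatedly and log the combined RSS of all ollama
# processes after each request, so the per-inference growth becomes visible.
for i in $(seq 1 50); do
  curl -s http://localhost:11434/api/generate \
    -d '{"model": "gemma3:12b", "prompt": "tell me a story", "stream": false}' \
    > /dev/null
  ps -C ollama -o rss= | awk -v i="$i" '{s+=$1} END {printf "request %s: %.1f MiB\n", i, s/1024}'
done
```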


@Kurokabe commented on GitHub (Apr 8, 2025):

I also have the same issue with Gemma3:27B. After rebuilding my Docker container I had to reinstall Ollama. The regular install command `curl -fsSL https://ollama.com/install.sh | sh` installed 0.6.4, which gave me the same issue as described in https://github.com/ollama/ollama/issues/7748, so I downgraded to 0.6.3, which previously worked on my system, but now I also have this RAM issue.

I installed Ollama 0.6.3 with this command:
`curl -fsSL https://ollama.com/install.sh | sed 's#https://ollama.com/download#https://github.com/jmorganca/ollama/releases/download/v0.6.3#' | sh`
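For what it's worth, the install script also accepts an OLLAMA_VERSION variable, which should achieve the same downgrade without rewriting the download URL (untested here, so treat it as an alternative rather than the exact command I used):

```sh
# Pin the installer to a specific release instead of patching the URL with sed.
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.6.3 sh
```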

For now, I use the following script to restart Ollama every 30 minutes (if the service is stopped in the middle of a request, the client will receive an exception):

#!/bin/bash

while true; do
    # Kill any existing process
    pkill -f "ollama serve"

    # Start fresh
    nohup ollama serve > /tmp/ollama.log 2>&1 &

    # Wait 30 minutes
    sleep 1800
done
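A small variation on the same workaround, if the fixed 30-minute restart interrupts too many requests, is to restart only once the process actually crosses a memory threshold. This is just a sketch assuming a systemd-managed install (`ollama.service`); the 40 GiB limit is arbitrary:

```sh
#!/bin/bash
# Restart the ollama service only when its total resident memory exceeds a
# threshold, instead of restarting on a fixed timer.
LIMIT_KB=$((40 * 1024 * 1024))   # 40 GiB, expressed in KiB

while true; do
    used_kb=$(ps -C ollama -o rss= | awk '{s+=$1} END {print s+0}')
    if [ "${used_kb:-0}" -gt "$LIMIT_KB" ]; then
        sudo systemctl restart ollama
    fi
    sleep 300   # check every 5 minutes
done
```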

@nfsecurity commented on GitHub (Apr 8, 2025):

> Same here with gemma3:27b (4 bit). It seems to be worse when flash attention is enabled, so I disabled it, and CPU load also goes down. Ollama 0.6.4 (container), using one NVIDIA L40S.
>
> ![Image](https://github.com/user-attachments/assets/a8e17cff-60f6-4eb6-8ba0-5cc2f644c2ed)

Yes, this is true; I was able to reproduce the problem too. When OLLAMA_FLASH_ATTENTION is enabled, RAM (not VRAM) is consumed faster.
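One crude way to quantify that difference (a sketch; it assumes a systemd install where both the server and runner processes are named `ollama`) is to log total resident memory once a minute and compare a run with OLLAMA_FLASH_ATTENTION enabled against one with it disabled:

```sh
#!/bin/bash
# Append a timestamped total-RSS sample for all ollama processes every minute.
while true; do
  echo "$(date '+%F %T') $(ps -C ollama -o rss= | awk '{s+=$1} END {printf "%.1f MiB", s/1024}')"
  sleep 60
done >> /tmp/ollama-rss.log
```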


@ModernHooman commented on GitHub (Apr 8, 2025):

I am experiencing the same issue.
With v0.6.0 and Gemma3-27b the API throws InternalServerError with every request.
So I upgraded to v0.6.4 and now I am experiencing a memory leak: it leaks about 100 MB per inference until RAM fills up to the 64 GB maximum.
I have also downgraded to v0.6.3, but it has the memory leak too.

OS:
Windows Server 2025

GPU:
RTX 4090 OC 24GB


@jessegross commented on GitHub (Apr 8, 2025):

Please follow up in #10040


@Gregory-Colin commented on GitHub (Apr 10, 2025):

Same thing going on here with 0.6.5 and DeepSeek-R1:8b

Reference: github-starred/ollama#32409