[GH-ISSUE #12247] using embeddinggemma unloads other loaded llm models, even if there is VRAM available #33907

Closed
opened 2026-04-22 17:05:26 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @tomaszbk on GitHub (Sep 11, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12247

What is the issue?

I tried embeddinggemma alongside qwen2.5-vl and llama3.1 on a GPU with 8 GB of VRAM, and the LLM models get unloaded as soon as an embedding request comes in, even though there is enough VRAM for both. Using granite-embedding 300m does not unload the other models. A minimal reproduction sketch is included below, before the log output.
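
For reference, a minimal reproduction sketch against Ollama's OpenAI-compatible endpoints and the `/api/ps` listing of loaded models. The host, port, and model tags (`llama3.1`, `embeddinggemma`) are assumptions based on my setup; only the Python standard library is used.

```python
# Repro sketch: load an LLM via chat, then send an embedding request and
# check which models remain loaded. Host/port and model tags are assumptions.
import json
import urllib.request

OLLAMA = "http://localhost:11434"

def post(path, payload):
    # Small helper around urllib for JSON POST requests.
    req = urllib.request.Request(
        OLLAMA + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def loaded_models():
    # /api/ps lists the models Ollama currently keeps in memory.
    with urllib.request.urlopen(OLLAMA + "/api/ps") as resp:
        return [m["name"] for m in json.loads(resp.read()).get("models", [])]

# 1. Load the LLM with a chat request.
post("/v1/chat/completions", {
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "hello"}],
})
print("after chat:", loaded_models())

# 2. Send an embedding request. Expected: embeddinggemma loads next to llama3.1;
#    observed: llama3.1 is evicted even though VRAM is available.
post("/v1/embeddings", {
    "model": "embeddinggemma",
    "input": "hello world",
})
print("after embedding:", loaded_models())
```

On the 8 GB GPU the second print comes back with only the embedding model, matching the eviction messages in the log below; swapping in granite-embedding 300m keeps the LLM loaded.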

Relevant log output

time=2025-09-11T01:28:15.420Z level=INFO source=sched.go:540 msg="updated VRAM based on existing loaded models" gpu=GPU-98df2674-c0ca-b15b-1a24-bf2947db3290 library=cuda total="8.0 GiB" available="2.3 GiB"
time=2025-09-11T01:28:15.553Z level=INFO source=server.go:398 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-0800cbac9c2064dde519420e75e512a83cb360de3ad5df176185dc69652fc515 --port 33085"
time=2025-09-11T01:28:15.564Z level=INFO source=runner.go:1251 msg="starting ollama engine"
time=2025-09-11T01:28:15.566Z level=INFO source=runner.go:1286 msg="Server listening on 127.0.0.1:33085"
time=2025-09-11T01:28:15.645Z level=INFO source=server.go:503 msg="system memory" total="15.2 GiB" free="12.5 GiB" free_swap="3.5 GiB"
time=2025-09-11T01:28:15.645Z level=INFO source=server.go:510 msg="model requires more memory than is currently available, evicting a model to make space" estimate.library="" estimate.layers.requested=0 estimate.layers.model=0 estimate.layers.offload=0 estimate.layers.split=[] estimate.memory.available=[] estimate.memory.gpu_overhead="0 B" estimate.memory.required.full="0 B" estimate.memory.required.partial="0 B" estimate.memory.required.kv="0 B" estimate.memory.required.allocations=[] estimate.memory.weights.total="0 B" estimate.memory.weights.repeating="0 B" estimate.memory.weights.nonrepeating="0 B" estimate.memory.graph.full="0 B" estimate.memory.graph.partial="0 B"
time=2025-09-11T01:28:19.869Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.098100875 runner.size="5.7 GiB" runner.vram="5.7 GiB" runner.parallel=1 runner.pid=1647 runner.model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
time=2025-09-11T01:28:20.119Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.348006231 runner.size="5.7 GiB" runner.vram="5.7 GiB" runner.parallel=1 runner.pid=1647 runner.model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
time=2025-09-11T01:28:20.185Z level=INFO source=server.go:503 msg="system memory" total="15.2 GiB" free="12.9 GiB" free_swap="3.5 GiB"
time=2025-09-11T01:28:20.185Z level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/root/.ollama/models/blobs/sha256-0800cbac9c2064dde519420e75e512a83cb360de3ad5df176185dc69652fc515 library=cuda parallel=1 required="3.1 GiB" gpus=1
time=2025-09-11T01:28:20.185Z level=INFO source=server.go:543 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=25 layers.split=[25] memory.available="[6.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.1 GiB" memory.required.partial="3.1 GiB" memory.required.kv="58.0 MiB" memory.required.allocations="[3.1 GiB]" memory.weights.total="577.8 MiB" memory.weights.repeating="193.8 MiB" memory.weights.nonrepeating="384.0 MiB" memory.graph.full="2.0 GiB" memory.graph.partial="2.2 GiB"
time=2025-09-11T01:28:20.187Z level=INFO source=runner.go:1170 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:2048 FlashAttention:false KvSize:2048 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-98df2674-c0ca-b15b-1a24-bf2947db3290 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-09-11T01:28:20.207Z level=INFO source=ggml.go:131 msg="" architecture=gemma3 file_type=BF16 name="Embeddinggemma 300M" description="" num_tensors=316 num_key_values=37
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3070, compute capability 8.6, VMM: yes, ID: GPU-98df2674-c0ca-b15b-1a24-bf2947db3290
load_backend: loaded CUDA backend from /usr/lib/ollama/libggml-cuda.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2025-09-11T01:28:20.266Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-09-11T01:28:20.368Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.597301181 runner.size="5.7 GiB" runner.vram="5.7 GiB" runner.parallel=1 runner.pid=1647 runner.model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
time=2025-09-11T01:28:20.390Z level=INFO source=ggml.go:487 msg="offloading 24 repeating layers to GPU"
time=2025-09-11T01:28:20.390Z level=INFO source=ggml.go:493 msg="offloading output layer to GPU"
time=2025-09-11T01:28:20.390Z level=INFO source=ggml.go:498 msg="offloaded 25/25 layers to GPU"
time=2025-09-11T01:28:20.391Z level=INFO source=backend.go:310 msg="model weights" device=CUDA0 size="586.8 MiB"
time=2025-09-11T01:28:20.391Z level=INFO source=backend.go:315 msg="model weights" device=CPU size="384.0 MiB"
time=2025-09-11T01:28:20.391Z level=INFO source=backend.go:321 msg="kv cache" device=CUDA0 size="58.0 MiB"
time=2025-09-11T01:28:20.391Z level=INFO source=backend.go:332 msg="compute graph" device=CUDA0 size="108.0 MiB"
time=2025-09-11T01:28:20.391Z level=INFO source=backend.go:337 msg="compute graph" device=CPU size="6.0 MiB"
time=2025-09-11T01:28:20.391Z level=INFO source=backend.go:342 msg="total memory" size="1.1 GiB"
time=2025-09-11T01:28:20.391Z level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-09-11T01:28:20.391Z level=INFO source=server.go:1250 msg="waiting for llama runner to start responding"
time=2025-09-11T01:28:20.391Z level=INFO source=server.go:1284 msg="waiting for server to become available" status="llm server loading model"
time=2025-09-11T01:28:20.895Z level=INFO source=server.go:1288 msg="llama runner started in 6.22 seconds"
[GIN] 2025/09/11 - 01:28:21 | 200 |  6.694510783s |      172.18.0.6 | POST     "/v1/embeddings"
time=2025-09-11T01:28:21.232Z level=INFO source=sched.go:540 msg="updated VRAM based on existing loaded models" gpu=GPU-98df2674-c0ca-b15b-1a24-bf2947db3290 library=cuda total="8.0 GiB" available="4.9 GiB"
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 15
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.58 GiB (4.89 BPW) 
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 8.03 B
print_info: general.name     = Meta Llama 3.1 8B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-09-11T01:28:21.538Z level=INFO source=server.go:398 msg="starting runner" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 --port 45399"
time=2025-09-11T01:28:21.550Z level=INFO source=runner.go:864 msg="starting go runner"
time=2025-09-11T01:28:21.631Z level=INFO source=server.go:503 msg="system memory" total="15.2 GiB" free="12.2 GiB" free_swap="3.5 GiB"
time=2025-09-11T01:28:21.632Z level=INFO source=server.go:510 msg="model requires more memory than is currently available, evicting a model to make space" estimate.library="" estimate.layers.requested=0 estimate.layers.model=0 estimate.layers.offload=0 estimate.layers.split=[] estimate.memory.available=[] estimate.memory.gpu_overhead="0 B" estimate.memory.required.full="0 B" estimate.memory.required.partial="0 B" estimate.memory.required.kv="0 B" estimate.memory.required.allocations=[] estimate.memory.weights.total="0 B" estimate.memory.weights.repeating="0 B" estimate.memory.weights.nonrepeating="0 B" estimate.memory.graph.full="0 B" estimate.memory.graph.partial="0 B"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3070, compute capability 8.6, VMM: yes, ID: GPU-98df2674-c0ca-b15b-1a24-bf2947db3290
load_backend: loaded CUDA backend from /usr/lib/ollama/libggml-cuda.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2025-09-11T01:28:21.650Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-09-11T01:28:21.653Z level=INFO source=runner.go:900 msg="Server listening on 127.0.0.1:45399"
time=2025-09-11T01:28:26.720Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.087912251 runner.size="3.1 GiB" runner.vram="3.1 GiB" runner.parallel=1 runner.pid=1686 runner.model=/root/.ollama/models/blobs/sha256-0800cbac9c2064dde519420e75e512a83cb360de3ad5df176185dc69652fc515
time=2025-09-11T01:28:26.970Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.337829115 runner.size="3.1 GiB" runner.vram="3.1 GiB" runner.parallel=1 runner.pid=1686 runner.model=/root/.ollama/models/blobs/sha256-0800cbac9c2064dde519420e75e512a83cb360de3ad5df176185dc69652fc515
time=2025-09-11T01:28:27.030Z level=INFO source=server.go:503 msg="system memory" total="15.2 GiB" free="12.8 GiB" free_swap="3.5 GiB"
time=2025-09-11T01:28:27.030Z level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 library=cuda parallel=1 required="5.7 GiB" gpus=1
time=2025-09-11T01:28:27.030Z level=INFO source=server.go:543 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=33 layers.split=[33] memory.available="[6.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.7 GiB" memory.required.partial="5.7 GiB" memory.required.kv="512.0 MiB" memory.required.allocations="[5.7 GiB]" memory.weights.total="4.3 GiB" memory.weights.repeating="3.9 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="296.0 MiB" memory.graph.partial="677.5 MiB"
time=2025-09-11T01:28:27.031Z level=INFO source=runner.go:799 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:33[ID:GPU-98df2674-c0ca-b15b-1a24-bf2947db3290 Layers:33(0..32)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
time=2025-09-11T01:28:27.031Z level=INFO source=server.go:1250 msg="waiting for llama runner to start responding"
time=2025-09-11T01:28:27.032Z level=INFO source=server.go:1284 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3070) - 7098 MiB free
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 15
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.58 GiB (4.89 BPW) 
time=2025-09-11T01:28:27.220Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5878821720000005 runner.size="3.1 GiB" runner.vram="3.1 GiB" runner.parallel=1 runner.pid=1686 runner.model=/root/.ollama/models/blobs/sha256-0800cbac9c2064dde519420e75e512a83cb360de3ad5df176185dc69652fc515
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 14336
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: model type       = 8B
print_info: model params     = 8.03 B
print_info: general.name     = Meta Llama 3.1 8B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors:        CUDA0 model buffer size =  4403.49 MiB
load_tensors:   CPU_Mapped model buffer size =   281.81 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: kv_unified    = false
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     0.50 MiB
llama_kv_cache_unified:      CUDA0 KV buffer size =   512.00 MiB
llama_kv_cache_unified: size =  512.00 MiB (  4096 cells,  32 layers,  1/1 seqs), K (f16):  256.00 MiB, V (f16):  256.00 MiB
llama_context:      CUDA0 compute buffer size =   300.01 MiB
llama_context:  CUDA_Host compute buffer size =    20.01 MiB
llama_context: graph nodes  = 1126
llama_context: graph splits = 2
time=2025-09-11T01:28:28.036Z level=INFO source=server.go:1288 msg="llama runner started in 6.50 seconds"
time=2025-09-11T01:28:28.036Z level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-09-11T01:28:28.036Z level=INFO source=server.go:1250 msg="waiting for llama runner to start responding"
time=2025-09-11T01:28:28.037Z level=INFO source=server.go:1288 msg="llama runner started in 6.50 seconds"
[GIN] 2025/09/11 - 01:28:28 | 200 |  7.589501463s |      172.18.0.6 | POST     "/v1/chat/completions"
[GIN] 2025/09/11 - 01:28:28 | 200 |  343.092569ms |      172.18.0.6 | POST     "/v1/chat/completions"
time=2025-09-11T01:28:29.170Z level=INFO source=sched.go:540 msg="updated VRAM based on existing loaded models" gpu=GPU-98df2674-c0ca-b15b-1a24-bf2947db3290 library=cuda total="8.0 GiB" available="2.3 GiB"
time=2025-09-11T01:28:29.309Z level=INFO source=server.go:398 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-0800cbac9c2064dde519420e75e512a83cb360de3ad5df176185dc69652fc515 --port 35901"
time=2025-09-11T01:28:29.321Z level=INFO source=runner.go:1251 msg="starting ollama engine"
time=2025-09-11T01:28:29.324Z level=INFO source=runner.go:1286 msg="Server listening on 127.0.0.1:35901"
time=2025-09-11T01:28:29.407Z level=INFO source=server.go:503 msg="system memory" total="15.2 GiB" free="12.4 GiB" free_swap="3.5 GiB"
time=2025-09-11T01:28:29.407Z level=INFO source=server.go:510 msg="model requires more memory than is currently available, evicting a model to make space" estimate.library="" estimate.layers.requested=0 estimate.layers.model=0 estimate.layers.offload=0 estimate.layers.split=[] estimate.memory.available=[] estimate.memory.gpu_overhead="0 B" estimate.memory.required.full="0 B" estimate.memory.required.partial="0 B" estimate.memory.required.kv="0 B" estimate.memory.required.allocations=[] estimate.memory.weights.total="0 B" estimate.memory.weights.repeating="0 B" estimate.memory.weights.nonrepeating="0 B" estimate.memory.graph.full="0 B" estimate.memory.graph.partial="0 B"
time=2025-09-11T01:28:34.504Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.096350349 runner.size="5.7 GiB" runner.vram="5.7 GiB" runner.parallel=1 runner.pid=1742 runner.model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
time=2025-09-11T01:28:34.747Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.340107962 runner.size="5.7 GiB" runner.vram="5.7 GiB" runner.parallel=1 runner.pid=1742 runner.model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
time=2025-09-11T01:28:34.842Z level=INFO source=server.go:503 msg="system memory" total="15.2 GiB" free="12.9 GiB" free_swap="3.5 GiB"
time=2025-09-11T01:28:34.842Z level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/root/.ollama/models/blobs/sha256-0800cbac9c2064dde519420e75e512a83cb360de3ad5df176185dc69652fc515 library=cuda parallel=1 required="3.1 GiB" gpus=1
time=2025-09-11T01:28:34.842Z level=INFO source=server.go:543 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=25 layers.split=[25] memory.available="[6.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.1 GiB" memory.required.partial="3.1 GiB" memory.required.kv="58.0 MiB" memory.required.allocations="[3.1 GiB]" memory.weights.total="577.8 MiB" memory.weights.repeating="193.8 MiB" memory.weights.nonrepeating="384.0 MiB" memory.graph.full="2.0 GiB" memory.graph.partial="2.2 GiB"
time=2025-09-11T01:28:34.844Z level=INFO source=runner.go:1170 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:2048 FlashAttention:false KvSize:2048 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-98df2674-c0ca-b15b-1a24-bf2947db3290 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-09-11T01:28:34.867Z level=INFO source=ggml.go:131 msg="" architecture=gemma3 file_type=BF16 name="Embeddinggemma 300M" description="" num_tensors=316 num_key_values=37
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3070, compute capability 8.6, VMM: yes, ID: GPU-98df2674-c0ca-b15b-1a24-bf2947db3290
load_backend: loaded CUDA backend from /usr/lib/ollama/libggml-cuda.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2025-09-11T01:28:34.947Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-09-11T01:28:34.997Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.58967704 runner.size="5.7 GiB" runner.vram="5.7 GiB" runner.parallel=1 runner.pid=1742 runner.model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
time=2025-09-11T01:28:35.063Z level=INFO source=ggml.go:487 msg="offloading 24 repeating layers to GPU"
time=2025-09-11T01:28:35.063Z level=INFO source=ggml.go:493 msg="offloading output layer to GPU"
time=2025-09-11T01:28:35.063Z level=INFO source=ggml.go:498 msg="offloaded 25/25 layers to GPU"
time=2025-09-11T01:28:35.063Z level=INFO source=backend.go:310 msg="model weights" device=CUDA0 size="586.8 MiB"
time=2025-09-11T01:28:35.063Z level=INFO source=backend.go:315 msg="model weights" device=CPU size="384.0 MiB"
time=2025-09-11T01:28:35.063Z level=INFO source=backend.go:321 msg="kv cache" device=CUDA0 size="58.0 MiB"
time=2025-09-11T01:28:35.063Z level=INFO source=backend.go:332 msg="compute graph" device=CUDA0 size="108.0 MiB"
time=2025-09-11T01:28:35.063Z level=INFO source=backend.go:337 msg="compute graph" device=CPU size="6.0 MiB"
time=2025-09-11T01:28:35.063Z level=INFO source=backend.go:342 msg="total memory" size="1.1 GiB"
time=2025-09-11T01:28:35.063Z level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-09-11T01:28:35.063Z level=INFO source=server.go:1250 msg="waiting for llama runner to start responding"
time=2025-09-11T01:28:35.064Z level=INFO source=server.go:1284 msg="waiting for server to become available" status="llm server loading model"
time=2025-09-11T01:28:35.566Z level=INFO source=server.go:1288 msg="llama runner started in 6.26 seconds"
[GIN] 2025/09/11 - 01:28:35 | 200 |  6.940837124s |      172.18.0.6 | POST     "/v1/embeddings"
time=2025-09-11T01:28:36.133Z level=INFO source=sched.go:540 msg="updated VRAM based on existing loaded models" gpu=GPU-98df2674-c0ca-b15b-1a24-bf2947db3290 library=cuda total="8.0 GiB" available="4.9 GiB"
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 15
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.58 GiB (4.89 BPW) 
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 8.03 B
print_info: general.name     = Meta Llama 3.1 8B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-09-11T01:28:36.398Z level=INFO source=server.go:398 msg="starting runner" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 --port 38393"
time=2025-09-11T01:28:36.414Z level=INFO source=runner.go:864 msg="starting go runner"
time=2025-09-11T01:28:36.491Z level=INFO source=server.go:503 msg="system memory" total="15.2 GiB" free="11.9 GiB" free_swap="3.5 GiB"
time=2025-09-11T01:28:36.491Z level=INFO source=server.go:510 msg="model requires more memory than is currently available, evicting a model to make space" estimate.library="" estimate.layers.requested=0 estimate.layers.model=0 estimate.layers.offload=0 estimate.layers.split=[] estimate.memory.available=[] estimate.memory.gpu_overhead="0 B" estimate.memory.required.full="0 B" estimate.memory.required.partial="0 B" estimate.memory.required.kv="0 B" estimate.memory.required.allocations=[] estimate.memory.weights.total="0 B" estimate.memory.weights.repeating="0 B" estimate.memory.weights.nonrepeating="0 B" estimate.memory.graph.full="0 B" estimate.memory.graph.partial="0 B"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3070, compute capability 8.6, VMM: yes, ID: GPU-98df2674-c0ca-b15b-1a24-bf2947db3290
load_backend: loaded CUDA backend from /usr/lib/ollama/libggml-cuda.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2025-09-11T01:28:36.519Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-09-11T01:28:36.522Z level=INFO source=runner.go:900 msg="Server listening on 127.0.0.1:38393"
time=2025-09-11T01:28:41.587Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.095617305 runner.size="3.1 GiB" runner.vram="3.1 GiB" runner.parallel=1 runner.pid=1782 runner.model=/root/.ollama/models/blobs/sha256-0800cbac9c2064dde519420e75e512a83cb360de3ad5df176185dc69652fc515
time=2025-09-11T01:28:41.837Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.346106826 runner.size="3.1 GiB" runner.vram="3.1 GiB" runner.parallel=1 runner.pid=1782 runner.model=/root/.ollama/models/blobs/sha256-0800cbac9c2064dde519420e75e512a83cb360de3ad5df176185dc69652fc515
time=2025-09-11T01:28:41.916Z level=INFO source=server.go:503 msg="system memory" total="15.2 GiB" free="12.8 GiB" free_swap="3.5 GiB"
time=2025-09-11T01:28:41.916Z level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 library=cuda parallel=1 required="5.7 GiB" gpus=1
time=2025-09-11T01:28:41.916Z level=INFO source=server.go:543 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=33 layers.split=[33] memory.available="[6.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.7 GiB" memory.required.partial="5.7 GiB" memory.required.kv="512.0 MiB" memory.required.allocations="[5.7 GiB]" memory.weights.total="4.3 GiB" memory.weights.repeating="3.9 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="296.0 MiB" memory.graph.partial="677.5 MiB"
time=2025-09-11T01:28:41.917Z level=INFO source=runner.go:799 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:33[ID:GPU-98df2674-c0ca-b15b-1a24-bf2947db3290 Layers:33(0..32)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
time=2025-09-11T01:28:41.918Z level=INFO source=server.go:1250 msg="waiting for llama runner to start responding"
time=2025-09-11T01:28:41.918Z level=INFO source=server.go:1284 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3070) - 7098 MiB free
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 15
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.58 GiB (4.89 BPW) 
time=2025-09-11T01:28:42.087Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.595772987 runner.size="3.1 GiB" runner.vram="3.1 GiB" runner.parallel=1 runner.pid=1782 runner.model=/root/.ollama/models/blobs/sha256-0800cbac9c2064dde519420e75e512a83cb360de3ad5df176185dc69652fc515
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')

OS

Windows, Docker

GPU

Nvidia

CPU

AMD

Ollama version

0.11.10

time=2025-09-11T01:28:34.504Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.096350349 runner.size="5.7 GiB" runner.vram="5.7 GiB" runner.parallel=1 runner.pid=1742 runner.model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 time=2025-09-11T01:28:34.747Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.340107962 runner.size="5.7 GiB" runner.vram="5.7 GiB" runner.parallel=1 runner.pid=1742 runner.model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 time=2025-09-11T01:28:34.842Z level=INFO source=server.go:503 msg="system memory" total="15.2 GiB" free="12.9 GiB" free_swap="3.5 GiB" time=2025-09-11T01:28:34.842Z level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/root/.ollama/models/blobs/sha256-0800cbac9c2064dde519420e75e512a83cb360de3ad5df176185dc69652fc515 library=cuda parallel=1 required="3.1 GiB" gpus=1 time=2025-09-11T01:28:34.842Z level=INFO source=server.go:543 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=25 layers.split=[25] memory.available="[6.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.1 GiB" memory.required.partial="3.1 GiB" memory.required.kv="58.0 MiB" memory.required.allocations="[3.1 GiB]" memory.weights.total="577.8 MiB" memory.weights.repeating="193.8 MiB" memory.weights.nonrepeating="384.0 MiB" memory.graph.full="2.0 GiB" memory.graph.partial="2.2 GiB" time=2025-09-11T01:28:34.844Z level=INFO source=runner.go:1170 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:2048 FlashAttention:false KvSize:2048 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-98df2674-c0ca-b15b-1a24-bf2947db3290 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-09-11T01:28:34.867Z level=INFO source=ggml.go:131 msg="" architecture=gemma3 file_type=BF16 name="Embeddinggemma 300M" description="" num_tensors=316 num_key_values=37 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 3070, compute capability 8.6, VMM: yes, ID: GPU-98df2674-c0ca-b15b-1a24-bf2947db3290 load_backend: loaded CUDA backend from /usr/lib/ollama/libggml-cuda.so load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so time=2025-09-11T01:28:34.947Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) time=2025-09-11T01:28:34.997Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.58967704 runner.size="5.7 GiB" runner.vram="5.7 GiB" runner.parallel=1 runner.pid=1742 runner.model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 time=2025-09-11T01:28:35.063Z level=INFO source=ggml.go:487 msg="offloading 24 repeating layers to GPU" time=2025-09-11T01:28:35.063Z level=INFO source=ggml.go:493 msg="offloading output layer to GPU" time=2025-09-11T01:28:35.063Z level=INFO source=ggml.go:498 msg="offloaded 25/25 layers to GPU" time=2025-09-11T01:28:35.063Z level=INFO source=backend.go:310 
msg="model weights" device=CUDA0 size="586.8 MiB" time=2025-09-11T01:28:35.063Z level=INFO source=backend.go:315 msg="model weights" device=CPU size="384.0 MiB" time=2025-09-11T01:28:35.063Z level=INFO source=backend.go:321 msg="kv cache" device=CUDA0 size="58.0 MiB" time=2025-09-11T01:28:35.063Z level=INFO source=backend.go:332 msg="compute graph" device=CUDA0 size="108.0 MiB" time=2025-09-11T01:28:35.063Z level=INFO source=backend.go:337 msg="compute graph" device=CPU size="6.0 MiB" time=2025-09-11T01:28:35.063Z level=INFO source=backend.go:342 msg="total memory" size="1.1 GiB" time=2025-09-11T01:28:35.063Z level=INFO source=sched.go:473 msg="loaded runners" count=1 time=2025-09-11T01:28:35.063Z level=INFO source=server.go:1250 msg="waiting for llama runner to start responding" time=2025-09-11T01:28:35.064Z level=INFO source=server.go:1284 msg="waiting for server to become available" status="llm server loading model" time=2025-09-11T01:28:35.566Z level=INFO source=server.go:1288 msg="llama runner started in 6.26 seconds" [GIN] 2025/09/11 - 01:28:35 | 200 | 6.940837124s | 172.18.0.6 | POST "/v1/embeddings" time=2025-09-11T01:28:36.133Z level=INFO source=sched.go:540 msg="updated VRAM based on existing loaded models" gpu=GPU-98df2674-c0ca-b15b-1a24-bf2947db3290 library=cuda total="8.0 GiB" available="4.9 GiB" llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 8B Instruct llama_model_loader: - kv 3: general.finetune str = Instruct llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1 llama_model_loader: - kv 5: general.size_label str = 8B llama_model_loader: - kv 6: general.license str = llama3.1 llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam... llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ... llama_model_loader: - kv 9: llama.block_count u32 = 32 llama_model_loader: - kv 10: llama.context_length u32 = 131072 llama_model_loader: - kv 11: llama.embedding_length u32 = 4096 llama_model_loader: - kv 12: llama.feed_forward_length u32 = 14336 llama_model_loader: - kv 13: llama.attention.head_count u32 = 32 llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 17: general.file_type u32 = 15 llama_model_loader: - kv 18: llama.vocab_size u32 = 128256 llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... 
llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009 llama_model_loader: - kv 27: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ... llama_model_loader: - kv 28: general.quantization_version u32 = 2 llama_model_loader: - type f32: 66 tensors llama_model_loader: - type q4_K: 193 tensors llama_model_loader: - type q6_K: 33 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q4_K - Medium print_info: file size = 4.58 GiB (4.89 BPW) load: printing all EOG tokens: load: - 128001 ('<|end_of_text|>') load: - 128008 ('<|eom_id|>') load: - 128009 ('<|eot_id|>') load: special tokens cache size = 256 load: token to piece cache size = 0.7999 MB print_info: arch = llama print_info: vocab_only = 1 print_info: model type = ?B print_info: model params = 8.03 B print_info: general.name = Meta Llama 3.1 8B Instruct print_info: vocab type = BPE print_info: n_vocab = 128256 print_info: n_merges = 280147 print_info: BOS token = 128000 '<|begin_of_text|>' print_info: EOS token = 128009 '<|eot_id|>' print_info: EOT token = 128009 '<|eot_id|>' print_info: EOM token = 128008 '<|eom_id|>' print_info: LF token = 198 'Ċ' print_info: EOG token = 128001 '<|end_of_text|>' print_info: EOG token = 128008 '<|eom_id|>' print_info: EOG token = 128009 '<|eot_id|>' print_info: max token length = 256 llama_model_load: vocab only - skipping tensors time=2025-09-11T01:28:36.398Z level=INFO source=server.go:398 msg="starting runner" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 --port 38393" time=2025-09-11T01:28:36.414Z level=INFO source=runner.go:864 msg="starting go runner" time=2025-09-11T01:28:36.491Z level=INFO source=server.go:503 msg="system memory" total="15.2 GiB" free="11.9 GiB" free_swap="3.5 GiB" time=2025-09-11T01:28:36.491Z level=INFO source=server.go:510 msg="model requires more memory than is currently available, evicting a model to make space" estimate.library="" estimate.layers.requested=0 estimate.layers.model=0 estimate.layers.offload=0 estimate.layers.split=[] estimate.memory.available=[] estimate.memory.gpu_overhead="0 B" estimate.memory.required.full="0 B" estimate.memory.required.partial="0 B" estimate.memory.required.kv="0 B" estimate.memory.required.allocations=[] estimate.memory.weights.total="0 B" estimate.memory.weights.repeating="0 B" estimate.memory.weights.nonrepeating="0 B" estimate.memory.graph.full="0 B" estimate.memory.graph.partial="0 B" ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 3070, compute capability 8.6, VMM: yes, ID: GPU-98df2674-c0ca-b15b-1a24-bf2947db3290 load_backend: loaded CUDA backend from /usr/lib/ollama/libggml-cuda.so load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so time=2025-09-11T01:28:36.519Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) time=2025-09-11T01:28:36.522Z level=INFO source=runner.go:900 msg="Server listening on 127.0.0.1:38393" time=2025-09-11T01:28:41.587Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't 
recover within timeout" seconds=5.095617305 runner.size="3.1 GiB" runner.vram="3.1 GiB" runner.parallel=1 runner.pid=1782 runner.model=/root/.ollama/models/blobs/sha256-0800cbac9c2064dde519420e75e512a83cb360de3ad5df176185dc69652fc515 time=2025-09-11T01:28:41.837Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.346106826 runner.size="3.1 GiB" runner.vram="3.1 GiB" runner.parallel=1 runner.pid=1782 runner.model=/root/.ollama/models/blobs/sha256-0800cbac9c2064dde519420e75e512a83cb360de3ad5df176185dc69652fc515 time=2025-09-11T01:28:41.916Z level=INFO source=server.go:503 msg="system memory" total="15.2 GiB" free="12.8 GiB" free_swap="3.5 GiB" time=2025-09-11T01:28:41.916Z level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 library=cuda parallel=1 required="5.7 GiB" gpus=1 time=2025-09-11T01:28:41.916Z level=INFO source=server.go:543 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=33 layers.split=[33] memory.available="[6.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.7 GiB" memory.required.partial="5.7 GiB" memory.required.kv="512.0 MiB" memory.required.allocations="[5.7 GiB]" memory.weights.total="4.3 GiB" memory.weights.repeating="3.9 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="296.0 MiB" memory.graph.partial="677.5 MiB" time=2025-09-11T01:28:41.917Z level=INFO source=runner.go:799 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:33[ID:GPU-98df2674-c0ca-b15b-1a24-bf2947db3290 Layers:33(0..32)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}" time=2025-09-11T01:28:41.918Z level=INFO source=server.go:1250 msg="waiting for llama runner to start responding" time=2025-09-11T01:28:41.918Z level=INFO source=server.go:1284 msg="waiting for server to become available" status="llm server loading model" llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3070) - 7098 MiB free llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 8B Instruct llama_model_loader: - kv 3: general.finetune str = Instruct llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1 llama_model_loader: - kv 5: general.size_label str = 8B llama_model_loader: - kv 6: general.license str = llama3.1 llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam... llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ... 
llama_model_loader: - kv 9: llama.block_count u32 = 32 llama_model_loader: - kv 10: llama.context_length u32 = 131072 llama_model_loader: - kv 11: llama.embedding_length u32 = 4096 llama_model_loader: - kv 12: llama.feed_forward_length u32 = 14336 llama_model_loader: - kv 13: llama.attention.head_count u32 = 32 llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 17: general.file_type u32 = 15 llama_model_loader: - kv 18: llama.vocab_size u32 = 128256 llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009 llama_model_loader: - kv 27: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ... llama_model_loader: - kv 28: general.quantization_version u32 = 2 llama_model_loader: - type f32: 66 tensors llama_model_loader: - type q4_K: 193 tensors llama_model_loader: - type q6_K: 33 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q4_K - Medium print_info: file size = 4.58 GiB (4.89 BPW) time=2025-09-11T01:28:42.087Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.595772987 runner.size="3.1 GiB" runner.vram="3.1 GiB" runner.parallel=1 runner.pid=1782 runner.model=/root/.ollama/models/blobs/sha256-0800cbac9c2064dde519420e75e512a83cb360de3ad5df176185dc69652fc515 load: printing all EOG tokens: load: - 128001 ('<|end_of_text|>') load: - 128008 ('<|eom_id|>') load: - 128009 ('<|eot_id|>')

OS

Windows, Docker

GPU

Nvidia

CPU

AMD

Ollama version

0.11.10
GiteaMirror added the bug label 2026-04-22 17:05:26 -05:00
Author
Owner

@tomaszbk commented on GitHub (Sep 11, 2025):

My logs were taken after alternating llama 3.1 and embeddinggemma requests.
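
For context, a minimal sketch of that alternation, assuming Ollama's OpenAI-compatible API on the default localhost:11434 and the model tags listed later in this thread; the host, port, and prompt text here are illustrative, not taken from the reporter's setup:

```python
# Hypothetical reproduction sketch (not the reporter's exact script):
# alternate chat and embedding requests and watch the server log / `ollama ps`
# to see which runner gets evicted after each embedding call.
import requests

BASE = "http://localhost:11434/v1"  # assumed default Ollama address

for i in range(3):
    # chat request that keeps llama3.1 loaded
    chat = requests.post(
        f"{BASE}/chat/completions",
        json={
            "model": "llama3.1:8b",
            "messages": [{"role": "user", "content": f"ping {i}"}],
        },
        timeout=300,
    )
    chat.raise_for_status()

    # embedding request; with embeddinggemma this evicted the chat model,
    # while granite-embedding left it loaded
    emb = requests.post(
        f"{BASE}/embeddings",
        json={"model": "embeddinggemma", "input": f"sample text {i}"},
        timeout=300,
    )
    emb.raise_for_status()
    print(f"round {i}: chat {chat.status_code}, embeddings {emb.status_code}")
```

Running something like this while watching the server log should reproduce the repeated load/evict cycle visible in the log output above.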

Author
Owner

@tomaszbk commented on GitHub (Sep 11, 2025):

Seems embeddinggemma takes about 3 GB, but ollama reports it as only 600 MB.
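
The 600 MB figure is the weight blob reported by `ollama list`; the loaded footprint is what `ollama ps` (or GET /api/ps) reports. A quick sketch for comparing the two, assuming Ollama on its default port and the size / size_vram fields of the /api/ps response (exact fields may vary by version):

```python
# Sketch: compare the on-disk weight size with the loaded (VRAM) footprint.
# /api/tags lists downloaded models with their file size; /api/ps lists the
# currently loaded models with "size" and "size_vram" in bytes.
import requests

tags = requests.get("http://localhost:11434/api/tags", timeout=10).json()
loaded = requests.get("http://localhost:11434/api/ps", timeout=10).json()

disk = {m["name"]: m["size"] for m in tags.get("models", [])}
for m in loaded.get("models", []):
    name = m["name"]
    print(
        f"{name}: {disk.get(name, 0) / 1e9:.2f} GB on disk, "
        f"{m.get('size_vram', 0) / 1e9:.2f} GB in VRAM"
    )
```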

Author
Owner

@rick-github commented on GitHub (Sep 11, 2025):

A model requires VRAM for context and computation, so the loaded size of a model will always be greater than the size of just the weights. In this case, embeddinggemma needs 3.3 GB of VRAM when loaded, so there is not enough room on your 8 GB RTX 3070 to host both that model and another LLM at the same time.

Size of weights:

NAME                        ID              SIZE      MODIFIED   
embeddinggemma:latest       693ca723e5e7    621 MB    6 days ago    
granite-embedding:latest    eb4c533ba6f7     62 MB    4 minutes ago    
llama3.1:8b                 46e0c10c039e    4.9 GB    6 months ago     
qwen2.5vl:latest            5ced39dfa4ba    6.0 GB    3 months ago  

VRAM footprint:

NAME                        ID              SIZE      PROCESSOR    CONTEXT    UNTIL   
embeddinggemma:latest       693ca723e5e7    3.3 GB    100% GPU     2048       Forever    
granite-embedding:latest    eb4c533ba6f7    543 MB    100% GPU     4096       Forever    
llama3.1:8b                 46e0c10c039e    6.1 GB    100% GPU     4096       Forever    
qwen2.5vl:latest            5ced39dfa4ba    8.5 GB    100% GPU     4096       Forever    
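
The scheduler log above breaks the embeddinggemma budget into components: memory.weights.total = 577.8 MiB, memory.required.kv = 58 MiB, and memory.graph.full = 2.0 GiB, so the compute graph rather than the weights dominates. A rough back-of-the-envelope check, with the figures copied from the log; the gap up to the logged 3.1 GiB budget and the 3.3 GB shown by `ollama ps` is attributed here to runner/CUDA overhead and padding, which is an assumption, not a value from the log:

```python
# Back-of-the-envelope check of the embeddinggemma VRAM estimate using the
# component figures from the scheduler log above.
MIB = 1024 * 1024

weights = 577.8 * MIB        # memory.weights.total
kv_cache = 58.0 * MIB        # memory.required.kv (2048-token context)
graph = 2.0 * 1024 * MIB     # memory.graph.full (compute buffers)

subtotal = weights + kv_cache + graph
print(f"weights + KV + graph ≈ {subtotal / MIB / 1024:.2f} GiB")  # ≈ 2.62 GiB
print("scheduler budget (memory.required.full): 3.1 GiB")
print("ollama ps reports 3.3 GB once runner/CUDA overhead is included")
```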
Author
Owner

@tomaszbk commented on GitHub (Sep 11, 2025):

Thanks! It's a shame embeddinggemma was announced as a small model but turns out to be quite the opposite.

Reference: github-starred/ollama#33907