[GH-ISSUE #11008] gemma3:12b does not load onto Nvidia Card if AMD is Present but deepseek:12b does #53771

Closed
opened 2026-04-29 04:43:59 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @sto1 on GitHub (Jun 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11008

What is the issue?

I'm not able to load gemma3:12b on the Nvidia 3060 12GB card, but other models work, even if they have to partly use the CPU. I'm on Windows with version 0.9.0.
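
For reference, a minimal way to reproduce and check this (a PowerShell sketch; the model name is the one above, the commands are the standard Ollama CLI):

PS> ollama run gemma3:12b --verbose
PS> ollama ps    # the PROCESSOR column shows whether the loaded model sits on GPU, CPU, or a mix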

Relevant log output

time=2025-06-07T17:03:26.618+02:00 level=INFO source=routes.go:1234 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\storc\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-06-07T17:03:26.646+02:00 level=INFO source=images.go:479 msg="total blobs: 40"
time=2025-06-07T17:03:26.647+02:00 level=INFO source=images.go:486 msg="total unused blobs removed: 0"
time=2025-06-07T17:03:26.648+02:00 level=INFO source=routes.go:1287 msg="Listening on [::]:11434 (version 0.9.0)"
time=2025-06-07T17:03:26.648+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-06-07T17:03:26.648+02:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-06-07T17:03:26.649+02:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=0 threads=16
time=2025-06-07T17:03:26.804+02:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-cd38eb1c-290a-15c2-d573-b2c87845fde3 library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3060" overhead="841.2 MiB"
time=2025-06-07T17:03:27.184+02:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-cd38eb1c-290a-15c2-d573-b2c87845fde3 library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3060" total="12.0 GiB" available="11.0 GiB"
time=2025-06-07T17:03:27.184+02:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=rocm variant="" compute=gfx1030 driver=6.2 name="AMD Radeon RX 6900 XT" total="16.0 GiB" available="15.8 GiB"
[GIN] 2025/06/07 - 17:03:27 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/06/07 - 17:03:27 | 404 |      2.0025ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/06/07 - 17:03:27 | 200 |    445.1745ms |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/06/07 - 17:03:33 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/06/07 - 17:03:33 | 200 |     66.3675ms |       127.0.0.1 | POST     "/api/show"
time=2025-06-07T17:03:34.298+02:00 level=INFO source=sched.go:189 msg="one or more GPUs detected that are unable to accurately report free memory - disabling default concurrency"
time=2025-06-07T17:03:34.368+02:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\storc\.ollama\models\blobs\sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de gpu=0 parallel=2 available=16866869248 required="11.0 GiB"
time=2025-06-07T17:03:34.764+02:00 level=INFO source=server.go:135 msg="system memory" total="95.9 GiB" free="67.5 GiB" free_swap="58.4 GiB"
time=2025-06-07T17:03:34.765+02:00 level=INFO source=server.go:168 msg=offload library=rocm layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[15.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.0 GiB" memory.required.partial="11.0 GiB" memory.required.kv="1.3 GiB" memory.required.allocations="[11.0 GiB]" memory.weights.total="6.8 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.5 MiB" memory.graph.full="519.5 MiB" memory.graph.partial="1.3 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-06-07T17:03:34.826+02:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\storc\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\storc\\.ollama\\models\\blobs\\sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 8 --parallel 2 --port 61321"
time=2025-06-07T17:03:34.829+02:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-07T17:03:34.829+02:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-07T17:03:34.829+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-06-07T17:03:34.868+02:00 level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-06-07T17:03:34.891+02:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:61321"
time=2025-06-07T17:03:34.945+02:00 level=INFO source=ggml.go:92 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1065 num_key_values=37
load_backend: loaded CPU backend from C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-06-07T17:03:34.958+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-06-07T17:03:34.962+02:00 level=INFO source=ggml.go:351 msg="model weights" buffer=CPU size="8.3 GiB"
time=2025-06-07T17:03:35.080+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
time=2025-06-07T17:03:35.113+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="1.1 GiB"
time=2025-06-07T17:03:35.251+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="1.1 GiB"
time=2025-06-07T17:03:36.337+02:00 level=INFO source=server.go:630 msg="llama runner started in 1.51 seconds"
[GIN] 2025/06/07 - 17:03:36 | 200 |    2.4686285s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/06/07 - 17:05:41 | 200 |          2m1s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/06/07 - 17:05:51 | 200 |    5.3926998s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/06/07 - 17:06:16 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/06/07 - 17:06:16 | 200 |     26.0029ms |       127.0.0.1 | POST     "/api/show"
time=2025-06-07T17:06:16.969+02:00 level=INFO source=sched.go:548 msg="updated VRAM based on existing loaded models" gpu=GPU-cd38eb1c-290a-15c2-d573-b2c87845fde3 library=cuda total="12.0 GiB" available="11.0 GiB"
time=2025-06-07T17:06:16.969+02:00 level=INFO source=sched.go:548 msg="updated VRAM based on existing loaded models" gpu=0 library=rocm total="16.0 GiB" available="5.0 GiB"
time=2025-06-07T17:06:16.970+02:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\storc\.ollama\models\blobs\sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e gpu=GPU-cd38eb1c-290a-15c2-d573-b2c87845fde3 parallel=1 available=11793334272 required="9.7 GiB"
time=2025-06-07T17:06:17.346+02:00 level=INFO source=server.go:135 msg="system memory" total="95.9 GiB" free="57.8 GiB" free_swap="47.1 GiB"
time=2025-06-07T17:06:17.346+02:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[11.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="9.7 GiB" memory.required.partial="9.7 GiB" memory.required.kv="768.0 MiB" memory.required.allocations="[9.7 GiB]" memory.weights.total="8.0 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="348.0 MiB" memory.graph.partial="916.1 MiB"
llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from C:\Users\storc\.ollama\models\blobs\sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 14B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 14B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 48
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q4_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 8.37 GiB (4.87 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 14.77 B
print_info: general.name     = DeepSeek R1 Distill Qwen 14B
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token        = 151643 '<|end▁of▁sentence|>'
print_info: EOT token        = 151643 '<|end▁of▁sentence|>'
print_info: PAD token        = 151643 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|end▁of▁sentence|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-06-07T17:06:17.499+02:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\storc\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\storc\\.ollama\\models\\blobs\\sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e --ctx-size 4096 --batch-size 512 --n-gpu-layers 49 --threads 8 --no-mmap --parallel 1 --port 62012"
time=2025-06-07T17:06:17.502+02:00 level=INFO source=sched.go:483 msg="loaded runners" count=2
time=2025-06-07T17:06:17.502+02:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-07T17:06:17.502+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-06-07T17:06:17.538+02:00 level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
load_backend: loaded CUDA backend from C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-06-07T17:06:17.654+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-06-07T17:06:17.654+02:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:62012"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3060) - 11247 MiB free
time=2025-06-07T17:06:17.753+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from C:\Users\storc\.ollama\models\blobs\sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 14B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 14B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 48
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q4_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 8.37 GiB (4.87 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 5120
print_info: n_layer          = 48
print_info: n_head           = 40
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 5
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 13824
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = -1
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 14B
print_info: model params     = 14.77 B
print_info: general.name     = DeepSeek R1 Distill Qwen 14B
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token        = 151643 '<|end▁of▁sentence|>'
print_info: EOT token        = 151643 '<|end▁of▁sentence|>'
print_info: PAD token        = 151643 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|end▁of▁sentence|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 48 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 49/49 layers to GPU
load_tensors:        CUDA0 model buffer size =  8148.38 MiB
load_tensors:          CPU model buffer size =   417.66 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     0.60 MiB
llama_kv_cache_unified: kv_size = 4096, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1, padding = 32
llama_kv_cache_unified:      CUDA0 KV buffer size =   768.00 MiB
llama_kv_cache_unified: KV self size  =  768.00 MiB, K (f16):  384.00 MiB, V (f16):  384.00 MiB
llama_context:      CUDA0 compute buffer size =   368.00 MiB
llama_context:  CUDA_Host compute buffer size =    18.01 MiB
llama_context: graph nodes  = 1782
llama_context: graph splits = 2
time=2025-06-07T17:06:23.262+02:00 level=INFO source=server.go:630 msg="llama runner started in 5.76 seconds"
[GIN] 2025/06/07 - 17:06:23 | 200 |    6.7251425s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/06/07 - 17:06:46 | 200 |   20.7912827s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/06/07 - 17:16:16 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/06/07 - 17:16:42 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/06/07 - 17:16:43 | 200 |     65.5513ms |       127.0.0.1 | POST     "/api/show"
time=2025-06-07T17:16:43.509+02:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\storc\.ollama\models\blobs\sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de gpu=0 parallel=2 available=15978201088 required="11.0 GiB"
time=2025-06-07T17:16:43.890+02:00 level=INFO source=server.go:135 msg="system memory" total="95.9 GiB" free="67.3 GiB" free_swap="57.1 GiB"
time=2025-06-07T17:16:43.892+02:00 level=INFO source=server.go:168 msg=offload library=rocm layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[14.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.0 GiB" memory.required.partial="11.0 GiB" memory.required.kv="1.3 GiB" memory.required.allocations="[11.0 GiB]" memory.weights.total="6.8 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.5 MiB" memory.graph.full="519.5 MiB" memory.graph.partial="1.3 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-06-07T17:16:43.951+02:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\storc\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\storc\\.ollama\\models\\blobs\\sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 8 --parallel 2 --port 64646"
time=2025-06-07T17:16:43.954+02:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-07T17:16:43.954+02:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-07T17:16:43.954+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-06-07T17:16:43.992+02:00 level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-06-07T17:16:44.014+02:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:64646"
time=2025-06-07T17:16:44.066+02:00 level=INFO source=ggml.go:92 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1065 num_key_values=37
load_backend: loaded CPU backend from C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-06-07T17:16:44.080+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-06-07T17:16:44.084+02:00 level=INFO source=ggml.go:351 msg="model weights" buffer=CPU size="8.3 GiB"
time=2025-06-07T17:16:44.205+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
time=2025-06-07T17:16:44.234+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="1.1 GiB"
time=2025-06-07T17:16:44.368+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="1.1 GiB"
time=2025-06-07T17:16:45.460+02:00 level=INFO source=server.go:630 msg="llama runner started in 1.51 seconds"
[GIN] 2025/06/07 - 17:16:45 | 200 |    2.4218627s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/06/07 - 17:16:53 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/06/07 - 17:16:53 | 200 |     65.1856ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/06/07 - 17:16:53 | 200 |     34.9929ms |       127.0.0.1 | POST     "/api/generate"

OS

Windows

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-29 04:43:59 -05:00
Author
Owner

@sto1 commented on GitHub (Jun 7, 2025):

The same problem occurs if I use the Linux version under Ubuntu.

Author
Owner

@rick-github commented on GitHub (Jun 7, 2025):

time=2025-06-07T17:03:34.765+02:00 level=INFO source=server.go:168 msg=offload library=rocm layers.requested=-1
 layers.model=49 layers.offload=49 layers.split="" memory.available="[15.7 GiB]" memory.gpu_overhead="0 B"
 memory.required.full="11.0 GiB" memory.required.partial="11.0 GiB" memory.required.kv="1.3 GiB"
 memory.required.allocations="[11.0 GiB]" memory.weights.total="6.8 GiB" memory.weights.repeating="6.0 GiB"
 memory.weights.nonrepeating="787.5 MiB" memory.graph.full="519.5 MiB" memory.graph.partial="1.3 GiB"
 projector.weights="795.9 MiB" projector.graph="1.0 GiB"

ollama has determined that it can fit the entire model on the ROCm card.

load_backend: loaded CPU backend from C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-06-07T17:16:44.080+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)

However, it was unable to find a ROCm backend, so it loaded the model on the CPU instead. Is there a rocm directory in C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama?
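
A quick way to check that (a PowerShell sketch, assuming the default install path shown in the logs above):

PS> Test-Path "C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\rocm"
PS> Get-ChildItem "C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama"    # lists the backend directories/DLLs Ollama can load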

Author
Owner

@sto1 commented on GitHub (Jun 7, 2025):

Yes, it was able to load ROCm. But I renamed the library to force it to use the NVIDIA card (faster); then it used the CPU instead. I'm now able to run it on Ubuntu, and there it works with: CUDA_VISIBLE_DEVICES=0 HIP_VISIBLE_DEVICES="" ROCR_VISIBLE_DEVICES=""

Author
Owner

@sto1 commented on GitHub (Jun 7, 2025):

The reason I'm trying this: using the NVIDIA 3060 with part of the model on the CPU is faster than fitting the full model on the AMD card, even though it's a 6900.

Author
Owner

@rick-github commented on GitHub (Jun 7, 2025):

But I renamed the library to force it to use the NVIDIA card

Try setting OLLAMA_LLM_LIBRARY=cuda_v12 instead of breaking your installation.
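
For example (a sketch; note that $env: only affects the current PowerShell session, so the Ollama server has to be started from that same session, or the variable set persistently — the SetEnvironmentVariable call is plain .NET, nothing Ollama-specific):

PS> $env:OLLAMA_LLM_LIBRARY = "cuda_v12"
PS> ollama serve
# or persist it for the user account and restart Ollama afterwards:
PS> [Environment]::SetEnvironmentVariable("OLLAMA_LLM_LIBRARY", "cuda_v12", "User")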

Author
Owner

@sto1 commented on GitHub (Jun 7, 2025):

Thanks for your support, I will try tomorrow.

Author
Owner

@sto1 commented on GitHub (Jun 7, 2025):

I just followed Gemini 2.5 Flash ;-)

Author
Owner

@sto1 commented on GitHub (Jun 8, 2025):

I reverted my changes and set the variables:
PS I:\Users\storc> $env:OLLAMA_LLM_LIBRARY = "cuda_v12"
PS I:\Users\storc> $env:CUDA_VISIBLE_DEVICES="0"
PS I:\Users\storc> $env:HIP_VISIBLE_DEVICES=""
PS I:\Users\storc> ollama run gemma3:12b --verbose

The system tells me that it is running on the GPU, but it is not!
It does not use the GPU memory and it is too slow!
It now works fine under WSL Ubuntu, but not on Windows! But my Ubuntu has no access to the AMD card!

(base) stor@DESKTOP-NFL740H:$ ollama ps
NAME          ID              SIZE     PROCESSOR    UNTIL
gemma3:12b    f4031aab637d    11 GB    100% GPU     3 minutes from now
(base) stor@DESKTOP-NFL740H:$ nvidia-smi
Sun Jun 8 09:11:39 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.02 Driver Version: 560.94 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3060 On | 00000000:2D:00.0 Off | N/A |
| 0% 41C P8 13W / 170W | 27MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+

time=2025-06-08T08:25:37.011+02:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\Users\storc\AppData\Local\Programs\Ollama\ollama.exe runner --model I:\Benutzer\storc\.ollama\blobs\sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 4096 --batch-size 512 --n-gpu-layers 26 --threads 8 --parallel 1 --port 55998"
time=2025-06-08T08:25:37.014+02:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-08T08:25:37.014+02:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-08T08:25:37.014+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-06-08T08:25:37.049+02:00 level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-06-08T08:25:37.085+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-06-08T08:25:37.085+02:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:55998"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from I:\Benutzer\storc\.ollama\blobs\sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 39.59 GiB (4.82 BPW)
time=2025-06-08T08:25:37.265+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch = llama
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 8192
print_info: n_layer = 80
print_info: n_head = 64
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 8
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 28672
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = linear
print_info: freq_base_train = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 70B
print_info: model params = 70.55 B
print_info: general.name = DeepSeek R1 Distill Llama 70B
print_info: vocab type = BPE
print_info: n_vocab = 128256
print_info: n_merges = 280147
print_info: BOS token = 128000 '<|begin▁of▁sentence|>'
print_info: EOS token = 128001 '<|end▁of▁sentence|>'
print_info: EOT token = 128001 '<|end▁of▁sentence|>'
print_info: EOM token = 128008 '<|eom_id|>'
print_info: PAD token = 128001 '<|end▁of▁sentence|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 128001 '<|end▁of▁sentence|>'
print_info: EOG token = 128008 '<|eom_id|>'
print_info: EOG token = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: CPU_Mapped model buffer size = 40543.11 MiB
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch = 512
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 500000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context: CPU output buffer size = 0.52 MiB
llama_kv_cache_unified: kv_size = 4096, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1, padding = 32
llama_kv_cache_unified: CPU KV buffer size = 1280.00 MiB
llama_kv_cache_unified: KV self size = 1280.00 MiB, K (f16): 640.00 MiB, V (f16): 640.00 MiB
llama_context: CPU compute buffer size = 584.01 MiB
llama_context: graph nodes = 2726
llama_context: graph splits = 1
time=2025-06-08T08:25:51.786+02:00 level=INFO source=server.go:630 msg="llama runner started in 14.77 seconds"
[GIN] 2025/06/08 - 08:25:51 | 200 | 15.8432965s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/06/08 - 08:29:10 | 200 | 21.5046313s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/06/08 - 08:32:18 | 200 | 28.6268919s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/06/08 - 08:32:26 | 200 | 3.2012421s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/06/08 - 08:38:21 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:38:21 | 200 | 998.7µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/06/08 - 08:38:43 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:38:43 | 200 | 999.3µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/06/08 - 08:40:02 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:40:02 | 200 | 997.9µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/06/08 - 08:40:39 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:40:39 | 200 | 405.8µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/06/08 - 08:41:37 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:41:37 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2025/06/08 - 08:41:40 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:41:40 | 200 | 1.0022ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/06/08 - 08:45:00 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:45:00 | 200 | 500.2µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/06/08 - 08:46:51 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:46:51 | 200 | 497.1µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/06/08 - 08:47:33 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:47:33 | 404 | 499.7µs | 127.0.0.1 | POST "/api/show"
time=2025-06-08T08:47:34.877+02:00 level=INFO source=download.go:177 msg="downloading a99b7f834d75 in 16 373 MB part(s)"
time=2025-06-08T08:48:19.277+02:00 level=INFO source=download.go:177 msg="downloading a242d8dfdc8f in 1 487 B part(s)"
time=2025-06-08T08:48:20.607+02:00 level=INFO source=download.go:177 msg="downloading 75357d685f23 in 1 28 B part(s)"
time=2025-06-08T08:48:21.944+02:00 level=INFO source=download.go:177 msg="downloading 832dd9e00a68 in 1 11 KB part(s)"
time=2025-06-08T08:48:23.276+02:00 level=INFO source=download.go:177 msg="downloading 52d2a7aa3a38 in 1 23 B part(s)"
time=2025-06-08T08:48:24.606+02:00 level=INFO source=download.go:177 msg="downloading 83b9da835d9f in 1 567 B part(s)"
[GIN] 2025/06/08 - 08:48:31 | 200 | 57.758926s | 127.0.0.1 | POST "/api/pull"
[GIN] 2025/06/08 - 08:48:31 | 200 | 47.1552ms | 127.0.0.1 | POST "/api/show"
time=2025-06-08T08:48:32.142+02:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=I:\Benutzer\storc\.ollama\blobs\sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 gpu=GPU-cd38eb1c-290a-15c2-d573-b2c87845fde3 parallel=2 available=11793334272 required="8.4 GiB"
time=2025-06-08T08:48:32.531+02:00 level=INFO source=server.go:135 msg="system memory" total="95.9 GiB" free="77.9 GiB" free_swap="74.5 GiB"
time=2025-06-08T08:48:32.533+02:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[11.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.4 GiB" memory.required.partial="8.4 GiB" memory.required.kv="448.0 MiB" memory.required.allocations="[8.4 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="522.7 MiB" memory.graph.partial="522.7 MiB" projector.weights="1.2 GiB" projector.graph="1.6 GiB"
time=2025-06-08T08:48:32.564+02:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\Users\storc\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model I:\Benutzer\storc\.ollama\blobs\sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 8 --no-mmap --parallel 2 --port 60362"
time=2025-06-08T08:48:32.566+02:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-08T08:48:32.566+02:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-08T08:48:32.566+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-06-08T08:48:32.603+02:00 level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-06-08T08:48:32.626+02:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:60362"
time=2025-06-08T08:48:32.654+02:00 level=INFO source=ggml.go:92 msg="" architecture=qwen25vl file_type=Q4_K_M name="" description="" num_tensors=858 num_key_values=36
load_backend: loaded CPU backend from C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
load_backend: loaded CUDA backend from C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-06-08T08:48:32.757+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-06-08T08:48:32.817+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
time=2025-06-08T08:48:33.020+02:00 level=INFO source=ggml.go:351 msg="model weights" buffer=CPU size="292.4 MiB"
time=2025-06-08T08:48:33.020+02:00 level=INFO source=ggml.go:351 msg="model weights" buffer=CUDA0 size="5.3 GiB"
time=2025-06-08T08:48:33.271+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.7 GiB"
time=2025-06-08T08:48:33.271+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
time=2025-06-08T08:48:33.344+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.7 GiB"
time=2025-06-08T08:48:33.344+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="16.8 MiB"
time=2025-06-08T08:48:34.320+02:00 level=INFO source=server.go:630 msg="llama runner started in 1.75 seconds"
[GIN] 2025/06/08 - 08:48:34 | 200 | 2.6222462s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/06/08 - 08:49:02 | 200 | 7.0925569s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/06/08 - 08:49:32 | 200 | 611.0885ms | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/06/08 - 08:49:52 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:49:52 | 404 | 499.1µs | 127.0.0.1 | POST "/api/show"
time=2025-06-08T08:49:53.356+02:00 level=INFO source=download.go:177 msg="downloading a99b7f834d75 in 16 373 MB part(s)"
time=2025-06-08T08:50:37.711+02:00 level=INFO source=download.go:177 msg="downloading a242d8dfdc8f in 1 487 B part(s)"
time=2025-06-08T08:50:39.020+02:00 level=INFO source=download.go:177 msg="downloading 75357d685f23 in 1 28 B part(s)"
time=2025-06-08T08:50:40.383+02:00 level=INFO source=download.go:177 msg="downloading 832dd9e00a68 in 1 11 KB part(s)"
time=2025-06-08T08:50:41.693+02:00 level=INFO source=download.go:177 msg="downloading 52d2a7aa3a38 in 1 23 B part(s)"
time=2025-06-08T08:50:43.057+02:00 level=INFO source=download.go:177 msg="downloading 83b9da835d9f in 1 567 B part(s)"
[GIN] 2025/06/08 - 08:50:50 | 200 | 57.5681917s | 127.0.0.1 | POST "/api/pull"
[GIN] 2025/06/08 - 08:50:50 | 200 | 33.5409ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/06/08 - 08:50:50 | 200 | 17.9988ms | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/06/08 - 09:08:37 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 09:08:37 | 404 | 501.5µs | 127.0.0.1 | POST "/api/show"
[GIN] 2025/06/08 - 09:08:38 | 200 | 396.3477ms | 127.0.0.1 | POST "/api/pull"
[GIN] 2025/06/08 - 09:08:46 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 09:08:46 | 404 | 500.6µs | 127.0.0.1 | POST "/api/show"
time=2025-06-08T09:08:47.575+02:00 level=INFO source=download.go:177 msg="downloading e8ad13eff07a in 16 509 MB part(s)"
time=2025-06-08T09:08:52.474+02:00 level=INFO source=download.go:295 msg="e8ad13eff07a part 13 attempt 0 failed: unexpected EOF, retrying in 1s"
time=2025-06-08T09:09:47.101+02:00 level=INFO source=download.go:177 msg="downloading e0a42594d802 in 1 358 B part(s)"
time=2025-06-08T09:09:48.446+02:00 level=INFO source=download.go:177 msg="downloading dd084c7d92a3 in 1 8.4 KB part(s)"
time=2025-06-08T09:09:49.754+02:00 level=INFO source=download.go:177 msg="downloading 3116c5225075 in 1 77 B part(s)"
time=2025-06-08T09:09:51.060+02:00 level=INFO source=download.go:177 msg="downloading 6819964c2bcf in 1 490 B part(s)"
[GIN] 2025/06/08 - 09:10:00 | 200 | 1m13s | 127.0.0.1 | POST "/api/pull"
[GIN] 2025/06/08 - 09:10:00 | 200 | 64.1806ms | 127.0.0.1 | POST "/api/show"
time=2025-06-08T09:10:00.722+02:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=I:\Benutzer\storc\.ollama\blobs\sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de gpu=0 parallel=2 available=15385755648 required="11.0 GiB"
time=2025-06-08T09:10:01.103+02:00 level=INFO source=server.go:135 msg="system memory" total="95.9 GiB" free="78.0 GiB" free_swap="74.0 GiB"
time=2025-06-08T09:10:01.105+02:00 level=INFO source=server.go:168 msg=offload library=rocm layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[14.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.0 GiB" memory.required.partial="11.0 GiB" memory.required.kv="1.3 GiB" memory.required.allocations="[11.0 GiB]" memory.weights.total="6.8 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.5 MiB" memory.graph.full="519.5 MiB" memory.graph.partial="1.3 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-06-08T09:10:01.167+02:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\Users\storc\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model I:\Benutzer\storc\.ollama\blobs\sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 8 --parallel 2 --port 64366"
time=2025-06-08T09:10:01.169+02:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-08T09:10:01.169+02:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-08T09:10:01.170+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-06-08T09:10:01.203+02:00 level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-06-08T09:10:01.226+02:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:64366"
time=2025-06-08T09:10:01.286+02:00 level=INFO source=ggml.go:92 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1065 num_key_values=37
load_backend: loaded CPU backend from C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-06-08T09:10:01.420+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon RX 6900 XT, gfx1030 (0x1030), VMM: no, Wave Size: 32
load_backend: loaded ROCm backend from C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\rocm\ggml-hip.dll
time=2025-06-08T09:10:01.455+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.NO_PEER_COPY=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-06-08T09:10:04.252+02:00 level=INFO source=ggml.go:351 msg="model weights" buffer=ROCm0 size="7.6 GiB"
time=2025-06-08T09:10:04.252+02:00 level=INFO source=ggml.go:351 msg="model weights" buffer=CPU size="787.5 MiB"
time=2025-06-08T09:10:04.506+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=ROCm0 buffer_type=ROCm0 size="1.1 GiB"
time=2025-06-08T09:10:04.506+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
time=2025-06-08T09:10:04.908+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=ROCm0 buffer_type=ROCm0 size="1.1 GiB"
time=2025-06-08T09:10:04.908+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="7.5 MiB"
time=2025-06-08T09:10:05.935+02:00 level=INFO source=server.go:630 msg="llama runner started in 4.77 seconds"
[GIN] 2025/06/08 - 09:10:05 | 200 | 5.7064075s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/06/08 - 09:10:08 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 09:10:08 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2025/06/08 - 09:10:52 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 09:10:52 | 200 | 547µs | 127.0.0.1 | GET "/api/ps"
[GIN] 2025/06/08 - 09:11:36 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 09:11:36 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2025/06/08 - 09:14:37 | 200 | 4m5s | 127.0.0.1 | POST "/api/chat"

@sto1 commented on GitHub (Jun 8, 2025):

It is on the AMD card again.

@sto1 commented on GitHub (Jun 8, 2025):

I have shut everything down and started from scratch. Now it's also working on Windows. Thanks for your support!!!

Reference: github-starred/ollama#53771