[GH-ISSUE #11849] NVIDIA GPU (RTX 5080) falls back to CPU when AMD iGPU is present, while RTX 4060 with Intel iGPU works correctly #69924

Closed
opened 2026-05-04 19:47:31 -05:00 by GiteaMirror · 8 comments

Originally created by @king-66jack on GitHub (Aug 11, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11849

What is the issue?

Environment
System 1 (Working):
GPU: NVIDIA GeForce RTX 4060 Laptop GPU
CPU: Intel i7-14650HX (with Intel integrated GPU)
Ollama Version: 0.6.5
OS: Windows
System 2 (Issue):
GPU: NVIDIA GeForce RTX 5080
CPU: AMD 9900X3D (with AMD integrated GPU)
Ollama Version: 0.9.6
OS: Windows
Problem Description
I’m running the same model (Qwen2.5 3B Instruct) and the same query on both systems, but I’m observing inconsistent GPU utilization:

On System 1 (RTX 4060 + Intel iGPU), the model loads and runs on the NVIDIA GPU successfully (CUDA acceleration works).
On System 2 (RTX 5080 + AMD iGPU), the model falls back to CPU, even though the NVIDIA GPU is detected.
Key Observations from Logs
System 1 (RTX 4060 + Intel iGPU):
Successfully detects and initializes CUDA:
ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 4060 Laptop GPU, compute capability 8.9
load_backend: loaded CUDA backend from [path]/ggml-cuda.dll

Model layers are offloaded to GPU:
offload library=cuda layers.offload=37
load_tensors: offloaded 37/37 layers to GPU

System 2 (RTX 5080 + AMD iGPU):
Detects AMD iGPU but fails to find ROCm (expected, as it’s an NVIDIA system):
amdgpu detected, but no compatible rocm library found. Please install ROCm
unable to verify rocm library: no suitable rocm found, falling back to CPU

No CUDA initialization/backend loading logs (missing ggml_cuda_init or loaded CUDA backend entries).
Model loads entirely on CPU:
load_tensors: CPU model buffer size = 1834.82 MiB

Hypothesis

The presence of an AMD integrated GPU triggers Ollama’s ROCm detection logic. When ROCm is missing (which is expected on an NVIDIA-focused system), Ollama incorrectly falls back to CPU globally—ignoring the compatible NVIDIA GPU and its CUDA support.

In contrast, Intel integrated GPUs do not trigger this behavior; Ollama skips them and correctly uses the NVIDIA GPU with CUDA.
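One way to double-check which display adapters Windows exposes on each system (and therefore what Ollama's GPU discovery has to work with) is a generic Windows query like the one below; this is plain PowerShell, not an Ollama command:

```powershell
# List every display adapter Windows reports; on System 2 this should show
# both the AMD integrated GPU and the RTX 5080.
Get-CimInstance Win32_VideoController |
    Select-Object Name, DriverVersion, AdapterRAM |
    Format-Table -AutoSize
```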
Request
Could this be a compatibility issue with AMD iGPUs causing Ollama to bypass NVIDIA CUDA acceleration? Any fixes or workarounds to ensure NVIDIA GPUs are prioritized even when AMD iGPUs are present would be appreciated.

Logs for both systems are available for further debugging. Let me know if additional details are needed!
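As a workaround to experiment with, the server config line of the serve log later in this thread shows that Ollama reads GPU-related environment variables such as HIP_VISIBLE_DEVICES, ROCR_VISIBLE_DEVICES and OLLAMA_LLM_LIBRARY at startup. The sketch below only reuses those names; whether these values actually hide the AMD iGPU or force the CUDA backend on Windows is an assumption, not confirmed behavior:

```powershell
# Hypothetical workaround sketch (assumed semantics, not verified):
# try to hide the AMD iGPU from ROCm discovery before starting the server.
$env:ROCR_VISIBLE_DEVICES = "-1"        # assumed: an invalid ID hides AMD devices
$env:HIP_VISIBLE_DEVICES  = "-1"        # assumed: same for the HIP runtime
# $env:OLLAMA_LLM_LIBRARY = "cuda_v12"   # assumed: force the CUDA v12 backend
ollama serve
```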

Relevant log output


OS

Windows

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the "bug" and "needs more info" labels 2026-05-04 19:47:32 -05:00

@rick-github commented on GitHub (Aug 11, 2025):

Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will help in debugging.

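For a fresh capture on Windows, a minimal approach is sketched below; the log path assumes a default install, so treat the exact location as an assumption if your setup differs:

```powershell
# Quit the tray app first, then run the server in a console with verbose output.
$env:OLLAMA_DEBUG = "1"   # more detailed GPU discovery logging
ollama serve 2>&1 | Tee-Object -FilePath "$env:USERPROFILE\ollama-serve.log"

# The background/tray app also keeps its own log (assumed default path):
# Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 100
```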

@king-66jack commented on GitHub (Aug 11, 2025):

Here is the serve log; this is from the RTX 5080 system:
(base) PS C:\Users\admin> ollama serve
time=2025-08-11T15:21:27.240+08:00 level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\admin\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-08-11T15:21:27.248+08:00 level=INFO source=images.go:476 msg="total blobs: 9"
time=2025-08-11T15:21:27.248+08:00 level=INFO source=images.go:483 msg="total unused blobs removed: 0"
time=2025-08-11T15:21:27.249+08:00 level=INFO source=routes.go:1288 msg="Listening on 127.0.0.1:11434 (version 0.9.6)"
time=2025-08-11T15:21:27.249+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-11T15:21:27.249+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-08-11T15:21:27.249+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=12 efficiency=0 threads=24
time=2025-08-11T15:21:27.368+08:00 level=WARN source=amd_windows.go:172 msg="amdgpu detected, but no compatible rocm library found. Please install ROCm"
time=2025-08-11T15:21:27.368+08:00 level=WARN source=amd_windows.go:56 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
time=2025-08-11T15:21:27.369+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-033d8ea1-1777-5f05-2b68-fe21df894c31 library=cuda variant=v12 compute=12.0 driver=12.9 name="NVIDIA GeForce RTX 5080" total="15.9 GiB" available="14.5 GiB"
[GIN] 2025/08/11 - 15:21:49 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/08/11 - 15:21:49 | 200 | 27.0146ms | 127.0.0.1 | POST "/api/show"
time=2025-08-11T15:21:49.605+08:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\admin\.ollama\models\blobs\sha256-5ee4f07cdb9beadbbb293e85803c569b01bd37ed059d2715faa7bb405f31caa6 gpu=GPU-033d8ea1-1777-5f05-2b68-fe21df894c31 parallel=2 available=13880553472 required="2.9 GiB"
time=2025-08-11T15:21:49.626+08:00 level=INFO source=server.go:135 msg="system memory" total="47.1 GiB" free="26.8 GiB" free_swap="24.7 GiB"
time=2025-08-11T15:21:49.627+08:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=37 layers.offload=37 layers.split="" memory.available="[12.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="2.9 GiB" memory.required.partial="2.9 GiB" memory.required.kv="288.0 MiB" memory.required.allocations="[2.9 GiB]" memory.weights.total="1.8 GiB" memory.weights.repeating="1.6 GiB" memory.weights.nonrepeating="243.4 MiB" memory.graph.full="300.8 MiB" memory.graph.partial="544.2 MiB"
llama_model_loader: loaded meta data with 35 key-value pairs and 434 tensors from C:\Users\admin\.ollama\models\blobs\sha256-5ee4f07cdb9beadbbb293e85803c569b01bd37ed059d2715faa7bb405f31caa6 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 3B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5
llama_model_loader: - kv 5: general.size_label str = 3B
llama_model_loader: - kv 6: general.license str = other
llama_model_loader: - kv 7: general.license.name str = qwen-research
llama_model_loader: - kv 8: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-3...
llama_model_loader: - kv 9: general.base_model.count u32 = 1
llama_model_loader: - kv 10: general.base_model.0.name str = Qwen2.5 3B
llama_model_loader: - kv 11: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 12: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-3B
llama_model_loader: - kv 13: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 14: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 15: qwen2.block_count u32 = 36
llama_model_loader: - kv 16: qwen2.context_length u32 = 32768
llama_model_loader: - kv 17: qwen2.embedding_length u32 = 2048
llama_model_loader: - kv 18: qwen2.feed_forward_length u32 = 11008
llama_model_loader: - kv 19: qwen2.attention.head_count u32 = 16
llama_model_loader: - kv 20: qwen2.attention.head_count_kv u32 = 2
llama_model_loader: - kv 21: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 22: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 23: general.file_type u32 = 15
llama_model_loader: - kv 24: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 25: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 26: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 27: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 28: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 29: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 30: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 32: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 33: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 34: general.quantization_version u32 = 2
llama_model_loader: - type f32: 181 tensors
llama_model_loader: - type q4_K: 216 tensors
llama_model_loader: - type q6_K: 37 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 1.79 GiB (4.99 BPW)
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch = qwen2
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 3.09 B
print_info: general.name = Qwen2.5 3B Instruct
print_info: vocab type = BPE
print_info: n_vocab = 151936
print_info: n_merges = 151387
print_info: BOS token = 151643 '<|endoftext|>'
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151643 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-08-11T15:21:49.747+08:00 level=INFO source=server.go:438 msg="starting llama server" cmd="C:\Users\admin\AppData\Local\Programs\Ollama\ollama.exe runner --model C:\Users\admin\.ollama\models\blobs\sha256-5ee4f07cdb9beadbbb293e85803c569b01bd37ed059d2715faa7bb405f31caa6 --ctx-size 8192 --batch-size 512 --n-gpu-layers 37 --threads 12 --no-mmap --parallel 2 --port 57357"
time=2025-08-11T15:21:49.852+08:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-08-11T15:21:49.852+08:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-08-11T15:21:49.852+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
time=2025-08-11T15:21:49.980+08:00 level=INFO source=runner.go:815 msg="starting go runner"
time=2025-08-11T15:21:49.987+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(clang)
time=2025-08-11T15:21:49.990+08:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:57357"
llama_model_loader: loaded meta data with 35 key-value pairs and 434 tensors from C:\Users\admin\.ollama\models\blobs\sha256-5ee4f07cdb9beadbbb293e85803c569b01bd37ed059d2715faa7bb405f31caa6 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 3B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5
llama_model_loader: - kv 5: general.size_label str = 3B
llama_model_loader: - kv 6: general.license str = other
llama_model_loader: - kv 7: general.license.name str = qwen-research
llama_model_loader: - kv 8: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-3...
llama_model_loader: - kv 9: general.base_model.count u32 = 1
llama_model_loader: - kv 10: general.base_model.0.name str = Qwen2.5 3B
llama_model_loader: - kv 11: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 12: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-3B
llama_model_loader: - kv 13: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 14: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 15: qwen2.block_count u32 = 36
llama_model_loader: - kv 16: qwen2.context_length u32 = 32768
llama_model_loader: - kv 17: qwen2.embedding_length u32 = 2048
llama_model_loader: - kv 18: qwen2.feed_forward_length u32 = 11008
llama_model_loader: - kv 19: qwen2.attention.head_count u32 = 16
llama_model_loader: - kv 20: qwen2.attention.head_count_kv u32 = 2
llama_model_loader: - kv 21: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 22: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 23: general.file_type u32 = 15
llama_model_loader: - kv 24: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 25: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 26: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 27: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 28: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 29: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 30: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 32: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 33: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 34: general.quantization_version u32 = 2
llama_model_loader: - type f32: 181 tensors
llama_model_loader: - type q4_K: 216 tensors
llama_model_loader: - type q6_K: 37 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 1.79 GiB (4.99 BPW)
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch = qwen2
print_info: vocab_only = 0
print_info: n_ctx_train = 32768
print_info: n_embd = 2048
print_info: n_layer = 36
print_info: n_head = 16
print_info: n_head_kv = 2
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 8
print_info: n_embd_k_gqa = 256
print_info: n_embd_v_gqa = 256
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 11008
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = -1
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 32768
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 3B
print_info: model params = 3.09 B
print_info: general.name = Qwen2.5 3B Instruct
print_info: vocab type = BPE
print_info: n_vocab = 151936
print_info: n_merges = 151387
print_info: BOS token = 151643 '<|endoftext|>'
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151643 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: CPU model buffer size = 1834.82 MiB
time=2025-08-11T15:21:50.103+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
llama_context: constructing llama_context
llama_context: n_seq_max = 2
llama_context: n_ctx = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch = 1024
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 1000000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
llama_context: CPU output buffer size = 1.17 MiB
llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 36, can_shift = 1, padding = 32
llama_kv_cache_unified: CPU KV buffer size = 288.00 MiB
llama_kv_cache_unified: KV self size = 288.00 MiB, K (f16): 144.00 MiB, V (f16): 144.00 MiB
llama_context: CPU compute buffer size = 304.75 MiB
llama_context: graph nodes = 1338
llama_context: graph splits = 1
time=2025-08-11T15:21:50.604+08:00 level=INFO source=server.go:637 msg="llama runner started in 0.75 seconds"
[GIN] 2025/08/11 - 15:21:50 | 200 | 1.059186s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/08/11 - 15:22:24 | 200 | 18.5863319s | 127.0.0.1 | POST "/api/chat"


@rick-github commented on GitHub (Aug 11, 2025):

What's the output of

dir C:\Users\%USER%\AppData\Local\Programs\Ollama\lib\ollama

@king-66jack commented on GitHub (Aug 11, 2025):

just two files

2025/07/11  09:09    <DIR>          .
2025/07/11  09:09    <DIR>          ..
2025/07/08  14:51       113,720,768 cublas64_12.dll
2025/07/11  09:09       692,449,728 is-QVFB3.tmp


@rick-github commented on GitHub (Aug 11, 2025):

Re-install ollama. For some reason, your current install is missing the backend CPU and GPU libraries required for accelerated inference:

11/08/2025  00:37    <DIR>          .
11/08/2025  00:38    <DIR>          ..
08/08/2025  00:35       113,720,824 cublas64_12.dll
08/08/2025  00:35       692,449,784 cublasLt64_12.dll
08/08/2025  00:35           582,136 cudart64_12.dll
08/08/2025  00:35           834,040 ggml-base.dll
08/08/2025  00:35           824,312 ggml-cpu-alderlake.dll
08/08/2025  00:35           827,384 ggml-cpu-haswell.dll
08/08/2025  00:35         1,030,136 ggml-cpu-icelake.dll
08/08/2025  00:35           801,272 ggml-cpu-sandybridge.dll
08/08/2025  00:35         1,034,232 ggml-cpu-skylakex.dll
08/08/2025  00:35           697,336 ggml-cpu-sse42.dll
08/08/2025  00:35           687,096 ggml-cpu-x64.dll
08/08/2025  00:35     1,293,683,192 ggml-cuda.dll
08/08/2025  00:35       596,369,912 ggml-hip.dll
11/08/2025  00:38    <DIR>          rocm

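A quick way to verify that a reinstall produced a complete backend directory is to check for the files named in the listing above; the exact set of DLLs may differ between Ollama versions, so the list below is only illustrative:

```powershell
# Confirm the key backend libraries from the listing above are present.
$libDir = "$env:LOCALAPPDATA\Programs\Ollama\lib\ollama"
$expected = "ggml-base.dll", "ggml-cuda.dll", "ggml-hip.dll",
            "cublas64_12.dll", "cublasLt64_12.dll", "cudart64_12.dll"
foreach ($dll in $expected) {
    $status = if (Test-Path (Join-Path $libDir $dll)) { "OK" } else { "MISSING" }
    "{0,-22} {1}" -f $dll, $status
}
```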

@pdevine commented on GitHub (Aug 11, 2025):

@king-66jack was there a particular way you installed ollama? Also, did reinstalling solve the issue?


@king-66jack commented on GitHub (Aug 12, 2025):

I just upgraded Ollama and the problem has been solved. I found that it can be upgraded through the "Restart to update" option for Ollama in the status bar.


@rick-github commented on GitHub (Aug 12, 2025):

Due to is-QVFB3.tmp being close to the same size as cublasLt64_12.dll, I suspect that a previous install was interrupted while extracting the contents of lib\ollama.

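A simple check for that theory is to look for leftover installer temporary files in the backend directory; the is-*.tmp pattern is assumed from the file name seen above, not taken from installer documentation:

```powershell
# Any leftover is-*.tmp files here would be consistent with an interrupted
# extraction of lib\ollama (assumption based on the file name seen above).
Get-ChildItem "$env:LOCALAPPDATA\Programs\Ollama\lib\ollama" -Filter "is-*.tmp" -ErrorAction SilentlyContinue |
    Select-Object Name, Length, LastWriteTime
```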