[GH-ISSUE #13338] RTX 5090 on Windows – GPU sometimes detected, sometimes reported as “0 B VRAM”, causing CPU fallback + dolphin3 crash (graph_reserve: failed to allocate compute buffers) #70869

Closed
opened 2026-05-04 23:17:04 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @jason-witter on GitHub (Dec 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13338

What is the issue?

RTX 5090 on Windows – GPU intermittently detected, sometimes reported as “0 B VRAM”, causing CPU fallback + dolphin3 crash

Environment

  • OS: Windows 10 (build 19045)
  • Ollama version: 0.13.1
  • GPU: NVIDIA GeForce RTX 5090 (Blackwell), 31.8 GiB VRAM
  • NVIDIA Driver: 591.44
  • CUDA: using bundled Ollama runtime (no standalone CUDA toolkit installed)
  • Hardware: Single-GPU desktop

Expected Behavior

  • Ollama consistently detects and uses the RTX 5090 GPU.

  • Server log contains a CUDA entry similar to:
    inference compute id=GPU-xxxx library=CUDA name="NVIDIA GeForce RTX 5090"

  • Models (including dolphin3) run on GPU without crashing.

  • Ollama does not intermittently drop into CPU-only “low VRAM mode”.


Actual Behavior

GPU detection is intermittent: some launches detect the GPU and others do not, and dolphin3 crashes when it does attempt GPU execution.

Sometimes GPU is detected correctly:

inference compute id=GPU-7345c94c-464c-a8ab-fa48-9229002dda06 filter_id="" library=CUDA compute=12.0 name="NVIDIA GeForce RTX 5090" libdirs=ollama,cuda_v13 driver=13.0 pci_id=0000:0b:00.0 type=discrete total="31.8 GiB" available="30.8 GiB"

Other times, Ollama reports 0 B of VRAM and falls back to CPU:

discovering available GPUs...
inference compute id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="63.9 GiB" available="53.7 GiB"
entering low vram mode "total vram"="0 B" threshold="20.0 GiB"

At this point, all models run on CPU only, even if OLLAMA_LLM_LIBRARY=cuda is set.
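For reference, the log line implies a simple scheduling decision: when GPU discovery fails, total VRAM is reported as 0 B, which falls below the 20 GiB threshold, so the server enters low-VRAM (effectively CPU-only) mode. A minimal sketch of that decision as implied by the log output (the function name and structure are illustrative, not Ollama's actual code):

```python
def enters_low_vram_mode(total_vram_bytes: int,
                         threshold_bytes: int = 20 * 2**30) -> bool:
    """Model the 'entering low vram mode' log line: if total detected
    VRAM is below the threshold, all models are scheduled on CPU."""
    return total_vram_bytes < threshold_bytes

# Failed discovery reports "total vram"="0 B" -> low-VRAM mode engaged.
print(enters_low_vram_mode(0))                  # True
# A correctly detected 5090 (31.8 GiB) stays well above the threshold.
print(enters_low_vram_mode(int(31.8 * 2**30)))  # False
```

This also matches the observation that OLLAMA_LLM_LIBRARY=cuda has no effect: once discovery returns no GPU, there is no CUDA device for the scheduler to target regardless of the library preference.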

dolphin3 crash when GPU is detected:

llama_kv_cache: size = 16384.00 MiB (131072 cells, 32 layers, 1/1 seqs), K (f16): 8192.00 MiB, V (f16): 8192.00 MiB
graph_reserve: failed to allocate compute buffers
Exception 0xc0000005 ... signal arrived during external code execution
llama runner process has terminated: exit status 2
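The 16384 MiB KV-cache figure in the crash log is consistent with the model parameters reported elsewhere in the log (n_layer=32, n_ctx=131072, n_embd_k_gqa=n_embd_v_gqa=1024, f16 cache). A quick sanity check of the arithmetic (this is just the standard KV-cache size formula, not Ollama code):

```python
# KV cache size = layers * context * per-layer K/V width * element size
n_layer = 32          # llama.block_count
n_ctx = 131072        # requested 262144 is clamped to n_ctx_train
n_embd_k_gqa = 1024   # 8 KV heads * 128 head dim (from the log)
n_embd_v_gqa = 1024
bytes_f16 = 2         # OLLAMA_KV_CACHE_TYPE unset -> f16

k_bytes = n_layer * n_ctx * n_embd_k_gqa * bytes_f16
v_bytes = n_layer * n_ctx * n_embd_v_gqa * bytes_f16
print(k_bytes / 2**20, "MiB K")                   # 8192.0, matches log
print(v_bytes / 2**20, "MiB V")                   # 8192.0, matches log
print((k_bytes + v_bytes) / 2**20, "MiB total")   # 16384.0
```

Note that 16 GiB of KV cache plus ~4.6 GiB of weights and the ~8.3 GiB compute buffer seen in the CPU run approaches the 30.8 GiB reported available on the 5090, so the graph_reserve failure could plausibly be an out-of-memory condition at this context length rather than (or in addition to) a Blackwell-specific bug; a run with a smaller num_ctx would be a useful data point.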

After the crash:

  • GPU detection frequently stops working entirely.
  • Server log begins reporting total vram 0 B.
  • All models run exclusively on CPU.

Steps to Reproduce

  1. Install Ollama 0.13.1 on Windows 10.
  2. Pull dolphin3:
    ollama pull dolphin3
  3. Run dolphin3 repeatedly:
    ollama run dolphin3
  4. Observe the following pattern:
    • Sometimes the GPU is detected and dolphin3 attempts to load on GPU.
    • dolphin3 then crashes with graph_reserve / access violation.
    • After the crash, GPU detection fails and server log reports total vram 0 B.
    • Restarting Ollama does not always restore GPU detection.

Notes

  • Changing OLLAMA_* environment variables (flash attention, cuda graphs, force slow, etc.) does not influence GPU detection — the issue occurs even with all GPU-related env vars removed.
  • Other models (e.g., mistral) also run on CPU once Ollama enters the 0 B VRAM state.
  • GPU works normally in other CUDA applications.
  • Issue appears consistent with incomplete Blackwell (sm_120) support, based on other user reports.

Related Issues

  • #10402 – Official RTX 5090 Support
  • #13163 – RTX 5070 Ti (Blackwell) falling back to CPU
  • #12116 – Models not loading into VRAM on RTX 5090
  • #12895 – GPU discovery inconsistency on Windows
  • #13083 – Low GPU utilization on 5090 (Windows)

Summary

Ollama 0.13.1 on Windows intermittently fails to detect the RTX 5090 GPU. When dolphin3 attempts GPU execution, the runner sometimes crashes with a buffer allocation error. After the crash, Ollama frequently reports total vram 0 B and switches into CPU-only mode. This behavior appears related to early Blackwell GPU support on Windows.

I am happy to provide additional logs or run diagnostic builds if helpful.

Relevant log output

time=2025-12-04T23:29:48.783-08:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:262144 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY:cuda OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\Jason\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2025-12-04T23:29:48.785-08:00 level=INFO source=images.go:522 msg="total blobs: 6"
time=2025-12-04T23:29:48.785-08:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-12-04T23:29:48.786-08:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.13.1)"
time=2025-12-04T23:29:48.787-08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-12-04T23:29:48.805-08:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="63.9 GiB" available="53.7 GiB"
time=2025-12-04T23:29:48.805-08:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
[GIN] 2025/12/04 - 23:29:49 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/04 - 23:29:49 | 200 |      43.619ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/12/04 - 23:29:49 | 200 |     39.9348ms |       127.0.0.1 | POST     "/api/show"
time=2025-12-04T23:29:49.340-08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-12-04T23:29:49.340-08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=16 efficiency=0 threads=32
llama_model_loader: loaded meta data with 77 key-value pairs and 292 tensors from C:\Users\Jason\.ollama\models\blobs\sha256-1eee6953530837b2b17d61a4e6f71a5aa31c9714cfcf3cb141aa5c1972b5116b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Dolphin 3.0 Llama 3.1 8B
llama_model_loader: - kv   3:                       general.organization str              = Cognitivecomputations
llama_model_loader: - kv   4:                           general.basename str              = Dolphin-3.0-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                   general.base_model.count u32              = 1
llama_model_loader: - kv   8:                  general.base_model.0.name str              = Llama 3.1 8B
llama_model_loader: - kv   9:          general.base_model.0.organization str              = Meta Llama
llama_model_loader: - kv  10:              general.base_model.0.repo_url str              = https://huggingface.co/meta-llama/Lla...
llama_model_loader: - kv  11:                      general.dataset.count u32              = 13
llama_model_loader: - kv  12:                     general.dataset.0.name str              = Opc Sft Stage1
llama_model_loader: - kv  13:             general.dataset.0.organization str              = OpenCoder LLM
llama_model_loader: - kv  14:                 general.dataset.0.repo_url str              = https://huggingface.co/OpenCoder-LLM/...
llama_model_loader: - kv  15:                     general.dataset.1.name str              = Opc Sft Stage2
llama_model_loader: - kv  16:             general.dataset.1.organization str              = OpenCoder LLM
llama_model_loader: - kv  17:                 general.dataset.1.repo_url str              = https://huggingface.co/OpenCoder-LLM/...
llama_model_loader: - kv  18:                     general.dataset.2.name str              = Orca Agentinstruct 1M v1
llama_model_loader: - kv  19:                  general.dataset.2.version str              = v1
llama_model_loader: - kv  20:             general.dataset.2.organization str              = Microsoft
llama_model_loader: - kv  21:                 general.dataset.2.repo_url str              = https://huggingface.co/microsoft/orca...
llama_model_loader: - kv  22:                     general.dataset.3.name str              = Orca Math Word Problems 200k
llama_model_loader: - kv  23:             general.dataset.3.organization str              = Microsoft
llama_model_loader: - kv  24:                 general.dataset.3.repo_url str              = https://huggingface.co/microsoft/orca...
llama_model_loader: - kv  25:                     general.dataset.4.name str              = Hermes Function Calling v1
llama_model_loader: - kv  26:                  general.dataset.4.version str              = v1
llama_model_loader: - kv  27:             general.dataset.4.organization str              = NousResearch
llama_model_loader: - kv  28:                 general.dataset.4.repo_url str              = https://huggingface.co/NousResearch/h...
llama_model_loader: - kv  29:                     general.dataset.5.name str              = NuminaMath CoT
llama_model_loader: - kv  30:             general.dataset.5.organization str              = AI MO
llama_model_loader: - kv  31:                 general.dataset.5.repo_url str              = https://huggingface.co/AI-MO/NuminaMa...
llama_model_loader: - kv  32:                     general.dataset.6.name str              = NuminaMath TIR
llama_model_loader: - kv  33:             general.dataset.6.organization str              = AI MO
llama_model_loader: - kv  34:                 general.dataset.6.repo_url str              = https://huggingface.co/AI-MO/NuminaMa...
llama_model_loader: - kv  35:                     general.dataset.7.name str              = Tulu 3 Sft Mixture
llama_model_loader: - kv  36:             general.dataset.7.organization str              = Allenai
llama_model_loader: - kv  37:                 general.dataset.7.repo_url str              = https://huggingface.co/allenai/tulu-3...
llama_model_loader: - kv  38:                     general.dataset.8.name str              = Dolphin Coder
llama_model_loader: - kv  39:             general.dataset.8.organization str              = Cognitivecomputations
llama_model_loader: - kv  40:                 general.dataset.8.repo_url str              = https://huggingface.co/cognitivecompu...
llama_model_loader: - kv  41:                     general.dataset.9.name str              = Smoltalk
llama_model_loader: - kv  42:             general.dataset.9.organization str              = HuggingFaceTB
llama_model_loader: - kv  43:                 general.dataset.9.repo_url str              = https://huggingface.co/HuggingFaceTB/...
llama_model_loader: - kv  44:                    general.dataset.10.name str              = Samantha Data
llama_model_loader: - kv  45:            general.dataset.10.organization str              = Cognitivecomputations
llama_model_loader: - kv  46:                general.dataset.10.repo_url str              = https://huggingface.co/cognitivecompu...
llama_model_loader: - kv  47:                    general.dataset.11.name str              = CodeFeedback Filtered Instruction
llama_model_loader: - kv  48:            general.dataset.11.organization str              = M A P
llama_model_loader: - kv  49:                general.dataset.11.repo_url str              = https://huggingface.co/m-a-p/CodeFeed...
llama_model_loader: - kv  50:                    general.dataset.12.name str              = Code Feedback
llama_model_loader: - kv  51:            general.dataset.12.organization str              = M A P
llama_model_loader: - kv  52:                general.dataset.12.repo_url str              = https://huggingface.co/m-a-p/Code-Fee...
llama_model_loader: - kv  53:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  54:                          llama.block_count u32              = 32
llama_model_loader: - kv  55:                       llama.context_length u32              = 131072
llama_model_loader: - kv  56:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  57:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  58:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  59:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  60:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  61:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  62:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  63:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  64:                          general.file_type u32              = 15
llama_model_loader: - kv  65:                           llama.vocab_size u32              = 128258
llama_model_loader: - kv  66:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  67:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  68:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  69:                      tokenizer.ggml.tokens arr[str,128258]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  70:                  tokenizer.ggml.token_type arr[i32,128258]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  71:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  72:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  73:                tokenizer.ggml.eos_token_id u32              = 128256
llama_model_loader: - kv  74:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  75:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  76:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.58 GiB (4.89 BPW) 
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load:   - 128256 ('<|im_end|>')
load: special tokens cache size = 258
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 8.03 B
print_info: general.name     = Dolphin 3.0 Llama 3.1 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 128258
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128256 '<|im_end|>'
print_info: EOT token        = 128256 '<|im_end|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end_of_text|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: EOG token        = 128256 '<|im_end|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-12-04T23:29:49.553-08:00 level=WARN source=server.go:167 msg="requested context size too large for model" num_ctx=262144 n_ctx_train=131072
time=2025-12-04T23:29:49.559-08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\Jason\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\Jason\\.ollama\\models\\blobs\\sha256-1eee6953530837b2b17d61a4e6f71a5aa31c9714cfcf3cb141aa5c1972b5116b --port 52014"
time=2025-12-04T23:29:49.561-08:00 level=INFO source=sched.go:443 msg="system memory" total="63.9 GiB" free="53.7 GiB" free_swap="55.5 GiB"
time=2025-12-04T23:29:49.561-08:00 level=INFO source=server.go:459 msg="loading model" "model layers"=33 requested=-1
time=2025-12-04T23:29:49.561-08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="4.3 GiB"
time=2025-12-04T23:29:49.561-08:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="16.0 GiB"
time=2025-12-04T23:29:49.561-08:00 level=INFO source=device.go:272 msg="total memory" size="20.3 GiB"
time=2025-12-04T23:29:49.593-08:00 level=INFO source=runner.go:963 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\Jason\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-12-04T23:29:49.606-08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-12-04T23:29:49.607-08:00 level=INFO source=runner.go:999 msg="Server listening on 127.0.0.1:52014"
time=2025-12-04T23:29:49.615-08:00 level=INFO source=runner.go:893 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:131072 KvCacheType: NumThreads:16 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-04T23:29:49.615-08:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-04T23:29:49.615-08:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 77 key-value pairs and 292 tensors from C:\Users\Jason\.ollama\models\blobs\sha256-1eee6953530837b2b17d61a4e6f71a5aa31c9714cfcf3cb141aa5c1972b5116b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Dolphin 3.0 Llama 3.1 8B
llama_model_loader: - kv   3:                       general.organization str              = Cognitivecomputations
llama_model_loader: - kv   4:                           general.basename str              = Dolphin-3.0-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                   general.base_model.count u32              = 1
llama_model_loader: - kv   8:                  general.base_model.0.name str              = Llama 3.1 8B
llama_model_loader: - kv   9:          general.base_model.0.organization str              = Meta Llama
llama_model_loader: - kv  10:              general.base_model.0.repo_url str              = https://huggingface.co/meta-llama/Lla...
llama_model_loader: - kv  11:                      general.dataset.count u32              = 13
llama_model_loader: - kv  12:                     general.dataset.0.name str              = Opc Sft Stage1
llama_model_loader: - kv  13:             general.dataset.0.organization str              = OpenCoder LLM
llama_model_loader: - kv  14:                 general.dataset.0.repo_url str              = https://huggingface.co/OpenCoder-LLM/...
llama_model_loader: - kv  15:                     general.dataset.1.name str              = Opc Sft Stage2
llama_model_loader: - kv  16:             general.dataset.1.organization str              = OpenCoder LLM
llama_model_loader: - kv  17:                 general.dataset.1.repo_url str              = https://huggingface.co/OpenCoder-LLM/...
llama_model_loader: - kv  18:                     general.dataset.2.name str              = Orca Agentinstruct 1M v1
llama_model_loader: - kv  19:                  general.dataset.2.version str              = v1
llama_model_loader: - kv  20:             general.dataset.2.organization str              = Microsoft
llama_model_loader: - kv  21:                 general.dataset.2.repo_url str              = https://huggingface.co/microsoft/orca...
llama_model_loader: - kv  22:                     general.dataset.3.name str              = Orca Math Word Problems 200k
llama_model_loader: - kv  23:             general.dataset.3.organization str              = Microsoft
llama_model_loader: - kv  24:                 general.dataset.3.repo_url str              = https://huggingface.co/microsoft/orca...
llama_model_loader: - kv  25:                     general.dataset.4.name str              = Hermes Function Calling v1
llama_model_loader: - kv  26:                  general.dataset.4.version str              = v1
llama_model_loader: - kv  27:             general.dataset.4.organization str              = NousResearch
llama_model_loader: - kv  28:                 general.dataset.4.repo_url str              = https://huggingface.co/NousResearch/h...
llama_model_loader: - kv  29:                     general.dataset.5.name str              = NuminaMath CoT
llama_model_loader: - kv  30:             general.dataset.5.organization str              = AI MO
llama_model_loader: - kv  31:                 general.dataset.5.repo_url str              = https://huggingface.co/AI-MO/NuminaMa...
llama_model_loader: - kv  32:                     general.dataset.6.name str              = NuminaMath TIR
llama_model_loader: - kv  33:             general.dataset.6.organization str              = AI MO
llama_model_loader: - kv  34:                 general.dataset.6.repo_url str              = https://huggingface.co/AI-MO/NuminaMa...
llama_model_loader: - kv  35:                     general.dataset.7.name str              = Tulu 3 Sft Mixture
llama_model_loader: - kv  36:             general.dataset.7.organization str              = Allenai
llama_model_loader: - kv  37:                 general.dataset.7.repo_url str              = https://huggingface.co/allenai/tulu-3...
llama_model_loader: - kv  38:                     general.dataset.8.name str              = Dolphin Coder
llama_model_loader: - kv  39:             general.dataset.8.organization str              = Cognitivecomputations
llama_model_loader: - kv  40:                 general.dataset.8.repo_url str              = https://huggingface.co/cognitivecompu...
llama_model_loader: - kv  41:                     general.dataset.9.name str              = Smoltalk
llama_model_loader: - kv  42:             general.dataset.9.organization str              = HuggingFaceTB
llama_model_loader: - kv  43:                 general.dataset.9.repo_url str              = https://huggingface.co/HuggingFaceTB/...
llama_model_loader: - kv  44:                    general.dataset.10.name str              = Samantha Data
llama_model_loader: - kv  45:            general.dataset.10.organization str              = Cognitivecomputations
llama_model_loader: - kv  46:                general.dataset.10.repo_url str              = https://huggingface.co/cognitivecompu...
llama_model_loader: - kv  47:                    general.dataset.11.name str              = CodeFeedback Filtered Instruction
llama_model_loader: - kv  48:            general.dataset.11.organization str              = M A P
llama_model_loader: - kv  49:                general.dataset.11.repo_url str              = https://huggingface.co/m-a-p/CodeFeed...
llama_model_loader: - kv  50:                    general.dataset.12.name str              = Code Feedback
llama_model_loader: - kv  51:            general.dataset.12.organization str              = M A P
llama_model_loader: - kv  52:                general.dataset.12.repo_url str              = https://huggingface.co/m-a-p/Code-Fee...
llama_model_loader: - kv  53:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  54:                          llama.block_count u32              = 32
llama_model_loader: - kv  55:                       llama.context_length u32              = 131072
llama_model_loader: - kv  56:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  57:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  58:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  59:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  60:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  61:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  62:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  63:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  64:                          general.file_type u32              = 15
llama_model_loader: - kv  65:                           llama.vocab_size u32              = 128258
llama_model_loader: - kv  66:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  67:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  68:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  69:                      tokenizer.ggml.tokens arr[str,128258]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  70:                  tokenizer.ggml.token_type arr[i32,128258]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  71:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  72:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  73:                tokenizer.ggml.eos_token_id u32              = 128256
llama_model_loader: - kv  74:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  75:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  76:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.58 GiB (4.89 BPW) 
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load:   - 128256 ('<|im_end|>')
load: special tokens cache size = 258
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 14336
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: model type       = 8B
print_info: model params     = 8.03 B
print_info: general.name     = Dolphin 3.0 Llama 3.1 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 128258
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128256 '<|im_end|>'
print_info: EOT token        = 128256 '<|im_end|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end_of_text|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: EOG token        = 128256 '<|im_end|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors:          CPU model buffer size =  4685.32 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 131072
llama_context: n_ctx_per_seq = 131072
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = disabled
llama_context: kv_unified    = false
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context:        CPU  output buffer size =     0.50 MiB
llama_kv_cache:        CPU KV buffer size = 16384.00 MiB
llama_kv_cache: size = 16384.00 MiB (131072 cells,  32 layers,  1/1 seqs), K (f16): 8192.00 MiB, V (f16): 8192.00 MiB
llama_context:        CPU compute buffer size =  8484.01 MiB
llama_context: graph nodes  = 1158
llama_context: graph splits = 1
time=2025-12-04T23:29:52.367-08:00 level=INFO source=server.go:1332 msg="llama runner started in 2.81 seconds"
time=2025-12-04T23:29:52.367-08:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
time=2025-12-04T23:29:52.367-08:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-04T23:29:52.367-08:00 level=INFO source=server.go:1332 msg="llama runner started in 2.81 seconds"
[GIN] 2025/12/04 - 23:29:52 | 200 |    3.1097964s |       127.0.0.1 | POST     "/api/generate"

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.13.1

GiteaMirror added the bug label 2026-05-04 23:17:04 -05:00

@rick-github commented on GitHub (Dec 5, 2025):

Set `OLLAMA_DEBUG=2` in the server environment and post the log up to the line that says `msg="inference compute"`. This will give more detail about the GPU discovery process.

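The suggestion above can be sketched as follows for a manual reproduction. This is a hedged example: it assumes you start the server from a shell yourself; the installed Windows tray app instead reads user-level environment variables (set via `setx` or System Properties).

```shell
# Sketch: enable verbose GPU-discovery logging for a manual server run.
# On Windows PowerShell the equivalent is: $env:OLLAMA_DEBUG = "2"
export OLLAMA_DEBUG=2
echo "OLLAMA_DEBUG=$OLLAMA_DEBUG"
# Then start the server and capture output up to msg="inference compute":
#   ollama serve 2> server.log
```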

@jason-witter commented on GitHub (Dec 5, 2025):

time=2025-12-05T10:00:49.003-08:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY:cuda OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\Jason\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2025-12-05T10:00:49.016-08:00 level=INFO source=images.go:522 msg="total blobs: 6"
time=2025-12-05T10:00:49.017-08:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-12-05T10:00:49.019-08:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.13.1)"
time=2025-12-05T10:00:49.019-08:00 level=DEBUG source=sched.go:120 msg="starting llm scheduler"
time=2025-12-05T10:00:49.021-08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-12-05T10:00:49.038-08:00 level=DEBUG source=runner.go:98 msg="skipping available library at user's request" requested=cuda libDir=C:\Users\Jason\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
time=2025-12-05T10:00:49.038-08:00 level=DEBUG source=runner.go:98 msg="skipping available library at user's request" requested=cuda libDir=C:\Users\Jason\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13
time=2025-12-05T10:00:49.038-08:00 level=DEBUG source=runner.go:98 msg="skipping available library at user's request" requested=cuda libDir=C:\Users\Jason\AppData\Local\Programs\Ollama\lib\ollama\rocm
time=2025-12-05T10:00:49.038-08:00 level=DEBUG source=runner.go:98 msg="skipping available library at user's request" requested=cuda libDir=C:\Users\Jason\AppData\Local\Programs\Ollama\lib\ollama\vulkan
time=2025-12-05T10:00:49.038-08:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0
time=2025-12-05T10:00:49.039-08:00 level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[]
time=2025-12-05T10:00:49.039-08:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=19.5169ms
time=2025-12-05T10:00:49.039-08:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="63.9 GiB" available="47.0 GiB"
time=2025-12-05T10:00:49.039-08:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"

Posting for posterity, but the root of the problem was that I had set `OLLAMA_LLM_LIBRARY=cuda` while debugging, as an attempt to get out of CPU-only mode. Most likely I just needed to re-pull dolphin3 to solve my original problem, but I ended up causing more problems for myself along the way. Thank you for your time and attention.
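The debug log above suggests one plausible mechanism: the value of `OLLAMA_LLM_LIBRARY` is compared against each bundled backend directory name, and `cuda` matches neither `cuda_v12` nor `cuda_v13`, so every backend is skipped and discovery falls back to CPU. A minimal sketch of that reading (hypothetical, not Ollama's actual code):

```shell
# Hypothetical sketch of an exact-match library filter: requesting "cuda"
# skips cuda_v12 and cuda_v13, leaving zero backends and forcing CPU fallback.
requested=cuda
kept=0
for dir in cuda_v12 cuda_v13 rocm vulkan; do
  if [ "$dir" = "$requested" ]; then
    kept=$((kept+1))
  else
    echo "skipping available library at user's request requested=$requested libDir=$dir"
  fi
done
echo "kept=$kept"   # prints kept=0: no backend survives the filter
```

Under this reading, unsetting the variable (or using a name that actually matches a bundled directory) lets discovery proceed, consistent with the resolution described above.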


Reference: github-starred/ollama#70869