[GH-ISSUE #13212] Vulkan is enabled by default and can't be disabled with OLLAMA_VULKAN=0 #34495

Closed
opened 2026-04-22 18:06:45 -05:00 by GiteaMirror · 4 comments

Originally created by @PaulGrandperrin on GitHub (Nov 23, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13212

What is the issue?

According to the release notes for version v0.12.11, Vulkan should not be enabled by default:

Vulkan support (opt-in)
Ollama 0.12.11 includes support for Vulkan acceleration. Vulkan brings support for a broad range of GPUs from AMD, Intel, and iGPUs. Vulkan support is not yet enabled by default, and requires opting in by running Ollama with a custom environment variable:

OLLAMA_VULKAN=1 ollama serve

But it seems to always be enabled, even when explicitly disabled with OLLAMA_VULKAN=0.

I see this issue with both the v0.12.11 and v0.13.0 binaries.

This matters because, on some systems with integrated GPUs, Vulkan makes Ollama slower than running on the CPU alone.
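
A quick way to check which backends the runner actually loads is to grep the serve output for the backend lines (a minimal check; a Vulkan backend would presumably show up as its own load_backend line, by analogy with the CPU and CUDA lines in the log below):

```shell
# With Vulkan explicitly disabled, only the CPU and CUDA backends should
# load; any libggml-vulkan line here would confirm the bug.
OLLAMA_VULKAN=0 ./bin/ollama serve 2>&1 | grep -Ei 'load_backend|vulkan'
```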

Relevant log output

$ OLLAMA_VULKAN=0 ./bin/ollama serve
time=2025-11-23T11:38:41.941+01:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/paulg/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-11-23T11:38:41.942+01:00 level=INFO source=images.go:522 msg="total blobs: 5"
time=2025-11-23T11:38:41.942+01:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-11-23T11:38:41.942+01:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.13.0)"
time=2025-11-23T11:38:41.943+01:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-23T11:38:41.945+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/home/paulg/Downloads/ollama/bin/ollama runner --ollama-engine --port 46713"
time=2025-11-23T11:38:42.091+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/home/paulg/Downloads/ollama/bin/ollama runner --ollama-engine --port 42989"
time=2025-11-23T11:38:42.308+01:00 level=INFO source=runner.go:102 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2025-11-23T11:38:42.308+01:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-06f6a1f2-0ec1-1360-a8d4-b72add839377 filter_id="" library=CUDA compute=6.1 name=CUDA0 description="NVIDIA GeForce GTX 1050" libdirs=ollama,cuda_v12 driver=13.0 pci_id=0000:01:00.0 type=discrete total="4.0 GiB" available="3.9 GiB"
time=2025-11-23T11:38:42.308+01:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="4.0 GiB" threshold="20.0 GiB"
[GIN] 2025/11/23 - 11:38:48 | 200 |      68.093µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/11/23 - 11:38:48 | 200 |   63.250467ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/23 - 11:38:48 | 200 |   61.566648ms |       127.0.0.1 | POST     "/api/show"
time=2025-11-23T11:38:48.266+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/home/paulg/Downloads/ollama/bin/ollama runner --ollama-engine --port 40989"
llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /home/paulg/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 1.5B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 1.5B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 28
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 1536
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 8960
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 12
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 2
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  141 tensors
llama_model_loader: - type q4_K:  169 tensors
llama_model_loader: - type q6_K:   29 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 1.04 GiB (5.00 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 151643 ('<|end▁of▁sentence|>')
load:   - 151662 ('<|fim_pad|>')
load:   - 151663 ('<|repo_name|>')
load:   - 151664 ('<|file_sep|>')
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 1.78 B
print_info: general.name     = DeepSeek R1 Distill Qwen 1.5B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token        = 151643 '<|end▁of▁sentence|>'
print_info: EOT token        = 151643 '<|end▁of▁sentence|>'
print_info: PAD token        = 151643 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|end▁of▁sentence|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-11-23T11:38:48.795+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/home/paulg/Downloads/ollama/bin/ollama runner --model /home/paulg/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc --port 37021"
time=2025-11-23T11:38:48.795+01:00 level=INFO source=sched.go:443 msg="system memory" total="31.2 GiB" free="10.7 GiB" free_swap="22.6 GiB"
time=2025-11-23T11:38:48.795+01:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-06f6a1f2-0ec1-1360-a8d4-b72add839377 library=CUDA available="3.5 GiB" free="3.9 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-11-23T11:38:48.795+01:00 level=INFO source=server.go:459 msg="loading model" "model layers"=29 requested=-1
time=2025-11-23T11:38:48.796+01:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="934.7 MiB"
time=2025-11-23T11:38:48.796+01:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="112.0 MiB"
time=2025-11-23T11:38:48.796+01:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="299.8 MiB"
time=2025-11-23T11:38:48.796+01:00 level=INFO source=device.go:272 msg="total memory" size="1.3 GiB"
time=2025-11-23T11:38:48.807+01:00 level=INFO source=runner.go:963 msg="starting go runner"
load_backend: loaded CPU backend from /home/paulg/Downloads/ollama/lib/ollama/libggml-cpu-haswell.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce GTX 1050, compute capability 6.1, VMM: yes, ID: GPU-06f6a1f2-0ec1-1360-a8d4-b72add839377
load_backend: loaded CUDA backend from /home/paulg/Downloads/ollama/lib/ollama/cuda_v12/libggml-cuda.so
time=2025-11-23T11:38:48.874+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-11-23T11:38:48.874+01:00 level=INFO source=runner.go:999 msg="Server listening on 127.0.0.1:37021"
time=2025-11-23T11:38:48.881+01:00 level=INFO source=runner.go:893 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:4 GPULayers:29[ID:GPU-06f6a1f2-0ec1-1360-a8d4-b72add839377 Layers:29(0..28)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
time=2025-11-23T11:38:48.881+01:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-11-23T11:38:48.882+01:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_cuda_device_get_memory device GPU-06f6a1f2-0ec1-1360-a8d4-b72add839377 utilizing NVML memory reporting free: 4227399680 total: 4294967296
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce GTX 1050) (0000:01:00.0) - 4031 MiB free
llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /home/paulg/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 1.5B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 1.5B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 28
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 1536
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 8960
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 12
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 2
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  141 tensors
llama_model_loader: - type q4_K:  169 tensors
llama_model_loader: - type q6_K:   29 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 1.04 GiB (5.00 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 151643 ('<|end▁of▁sentence|>')
load:   - 151662 ('<|fim_pad|>')
load:   - 151663 ('<|repo_name|>')
load:   - 151664 ('<|file_sep|>')
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 1536
print_info: n_layer          = 28
print_info: n_head           = 12
print_info: n_head_kv        = 2
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 6
print_info: n_embd_k_gqa     = 256
print_info: n_embd_v_gqa     = 256
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 8960
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = -1
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: model type       = 1.5B
print_info: model params     = 1.78 B
print_info: general.name     = DeepSeek R1 Distill Qwen 1.5B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token        = 151643 '<|end▁of▁sentence|>'
print_info: EOT token        = 151643 '<|end▁of▁sentence|>'
print_info: PAD token        = 151643 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|end▁of▁sentence|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 28 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 29/29 layers to GPU
load_tensors:        CUDA0 model buffer size =   934.70 MiB
load_tensors:   CPU_Mapped model buffer size =   125.19 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = disabled
llama_context: kv_unified    = false
llama_context: freq_base     = 10000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     0.59 MiB
llama_kv_cache:      CUDA0 KV buffer size =   112.00 MiB
llama_kv_cache: size =  112.00 MiB (  4096 cells,  28 layers,  1/1 seqs), K (f16):   56.00 MiB, V (f16):   56.00 MiB
llama_context:      CUDA0 compute buffer size =   299.75 MiB
llama_context:  CUDA_Host compute buffer size =    12.01 MiB
llama_context: graph nodes  = 1098
llama_context: graph splits = 2
time=2025-11-23T11:38:49.884+01:00 level=INFO source=server.go:1332 msg="llama runner started in 1.09 seconds"
time=2025-11-23T11:38:49.885+01:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
time=2025-11-23T11:38:49.885+01:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-11-23T11:38:49.885+01:00 level=INFO source=server.go:1332 msg="llama runner started in 1.09 seconds"
[GIN] 2025/11/23 - 11:38:49 | 200 |   1.73076529s |       127.0.0.1 | POST     "/api/generate"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.13.0

GiteaMirror added the bug label 2026-04-22 18:06:45 -05:00

@rick-github commented on GitHub (Nov 23, 2025):

There's no indication in this log that the Vulkan backend is being used.

llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce GTX 1050) (0000:01:00.0) - 4031 MiB free

@PaulGrandperrin commented on GitHub (Nov 23, 2025):

Don't those lines indicate that the GPU is used?

load_tensors: offloading 28 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 29/29 layers to GPU
load_tensors:        CUDA0 model buffer size =   934.70 MiB
load_tensors:   CPU_Mapped model buffer size =   125.19 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = disabled
llama_context: kv_unified    = false
llama_context: freq_base     = 10000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     0.59 MiB
llama_kv_cache:      CUDA0 KV buffer size =   112.00 MiB
llama_kv_cache: size =  112.00 MiB (  4096 cells,  28 layers,  1/1 seqs), K (f16):   56.00 MiB, V (f16):   56.00 MiB
llama_context:      CUDA0 compute buffer size =   299.75 MiB
llama_context:  CUDA_Host compute buffer size =    12.01 MiB

nvidia-smi also sees ollama:

$ nvidia-smi       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.105.08             Driver Version: 580.105.08     CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1050        On  |   00000000:01:00.0 Off |                  N/A |
| N/A   76C    P0            N/A  / 5001W |    1415MiB /   4096MiB |     89%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A           80588      C   ...g/Downloads/ollama/bin/ollama       1412MiB |
+-----------------------------------------------------------------------------------------+


@rick-github commented on GitHub (Nov 23, 2025):

Don't those lines indicate that the GPU is used?

Yes, the CUDA backend is being used. Vulkan is not CUDA.
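
If the goal is to compare iGPU performance against plain CPU, the server config line in the log above also exposes per-backend device filters (CUDA_VISIBLE_DEVICES, GGML_VK_VISIBLE_DEVICES). Assuming the usual device-visibility semantics, something like this should force a CPU-only run:

```shell
# An invalid device ID such as -1 leaves CUDA discovery with no usable GPU,
# so the model falls back to the CPU backend. Assumes the standard
# CUDA_VISIBLE_DEVICES semantics; adjust the binary path as needed.
CUDA_VISIBLE_DEVICES=-1 ./bin/ollama serve
```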


@PaulGrandperrin commented on GitHub (Nov 23, 2025):

Ahhh sorry, indeed!

I was trying to understand a bug we have in the NixOS package of ollama: in that package, for some reason, when Vulkan support is built in it is always used, even when OLLAMA_VULKAN=0 is set.

I wanted to reproduce the bug with the official binaries, and I was only looking for "GPU" mentions in the logs, not realizing the GPU was being used through CUDA.

I'll close this issue as it's a NixOS/nixpkgs bug only.
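
For anyone debugging the NixOS side, one way to confirm Vulkan support is compiled in is to look for a Vulkan ggml library next to the CPU/CUDA backends (a rough sketch; the lib layout is taken from the official binaries above and may differ under the nix store, and the libggml-vulkan name is inferred from the libggml-cpu/libggml-cuda naming):

```shell
# List the ggml backend libraries shipped with the install; a Vulkan-enabled
# build should include a libggml-vulkan*.so alongside the CPU/CUDA ones.
find "$(dirname "$(readlink -f "$(command -v ollama)")")/.." -name 'libggml-*' 2>/dev/null
```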

Reference: github-starred/ollama#34495