[GH-ISSUE #9722] Ollama 0.6.0 not respecting CUDA_VISIBLE_DEVICES or CUDA_DEVICE_ORDER #6354

Open
opened 2026-04-12 17:51:59 -05:00 by GiteaMirror · 2 comments

Originally created by @rschaer on GitHub (Mar 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9722

What is the issue?

System:
Ollama Version 0.6.0
Ubuntu Server 24.04 LTS
AMD Ryzen 5700X on an Aorus X570 Master

3x Nvidia 30xx GPUs

Tested with CUDA_DEVICE_ORDER set to either FASTEST_FIRST or PCI_BUS_ID, with and without specifying CUDA_VISIBLE_DEVICES:

I've tried setting CUDA_VISIBLE_DEVICES to a specific device in the ollama.service file (tested with both the device number and the UUID), in an override.conf, and as a system-wide variable, to no effect. For me, Ollama ALWAYS chooses GPU0 as the first GPU to load up.

At the same time, InvokeAI (Stable Diffusion image generation), with the same system-wide variables set, follows both CUDA_DEVICE_ORDER and CUDA_VISIBLE_DEVICES exactly as expected.

To make things weirder, if I remove the 3rd GPU from the system, Ollama now defaults to GPU1 instead of the "new" GPU0, and still can't be convinced otherwise via CUDA_VISIBLE_DEVICES.

Also, using journalctl -u ollama -S 2025-03-13 | grep CUDA_VISIBLE_DEVICES, the variable appears to ALWAYS be set to 1, whether 2 or 3 GPUs are installed, and regardless of what the variable is set to in /etc/profile:

Mar 13 00:16:17 neuroforge ollama[1254]: 2025/03/13 00:16:17 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES:1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 13 07:42:17 neuroforge ollama[1206]: 2025/03/13 07:42:17 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES:1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 13 07:52:00 neuroforge ollama[1203]: 2025/03/13 07:52:00 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES:1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 13 08:12:28 neuroforge ollama[1201]: 2025/03/13 08:12:28 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES:1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 13 08:55:10 neuroforge ollama[1207]: 2025/03/13 08:55:10 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES:1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 13 09:08:13 neuroforge ollama[1209]: 2025/03/13 09:08:13 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES:1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 13 09:16:25 neuroforge ollama[1204]: 2025/03/13 09:16:25 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES:1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 13 09:35:41 neuroforge ollama[1201]: 2025/03/13 09:35:41 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES:1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 13 10:02:54 neuroforge ollama[942]: 2025/03/13 10:02:54 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES:1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 13 10:13:58 neuroforge ollama[2189]: 2025/03/13 10:13:58 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES:1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 13 10:15:44 neuroforge ollama[2407]: 2025/03/13 10:15:44 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES:1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
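
For what it's worth, one quick way to confirm which environment the systemd unit actually receives (independent of anything exported in /etc/profile) is something like the following; the grep pattern is just an example:

systemctl show ollama --property=Environment
sudo cat /proc/$(pgrep -f 'ollama serve' | head -n1)/environ | tr '\0' '\n' | grep -i cuda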

Curious if others can reproduce this issue?

Relevant log output

Mar 13 09:13:34 neuroforge ollama[1209]: print_info: freq_base_train  = 100000000.0
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: freq_scale_train = 1
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: n_ctx_orig_yarn  = 32768
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: rope_finetuned   = unknown
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: ssm_d_conv       = 0
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: ssm_d_inner      = 0
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: ssm_d_state      = 0
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: ssm_dt_rank      = 0
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: ssm_dt_b_c_rms   = 0
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: model type       = 13B
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: model params     = 23.57 B
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: general.name     = Dolphin3.0 R1 Mistral 24B
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: vocab type       = BPE
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: n_vocab          = 131074
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: n_merges         = 269443
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: BOS token        = 1 '<s>'
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: EOS token        = 131072 '<|im_end|>'
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: EOT token        = 131072 '<|im_end|>'
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: UNK token        = 0 '<unk>'
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: PAD token        = 11 '<pad>'
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: LF token         = 1010 'Ċ'
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: EOG token        = 131072 '<|im_end|>'
Mar 13 09:13:34 neuroforge ollama[1209]: print_info: max token length = 150
Mar 13 09:13:34 neuroforge ollama[1209]: load_tensors: loading model tensors, this can take a while... (mmap = true)
Mar 13 09:13:38 neuroforge ollama[1209]: load_tensors: offloading 40 repeating layers to GPU
Mar 13 09:13:38 neuroforge ollama[1209]: load_tensors: offloading output layer to GPU
Mar 13 09:13:38 neuroforge ollama[1209]: load_tensors: offloaded 41/41 layers to GPU
Mar 13 09:13:38 neuroforge ollama[1209]: load_tensors:        CUDA0 model buffer size = 12501.59 MiB
Mar 13 09:13:38 neuroforge ollama[1209]: load_tensors:   CPU_Mapped model buffer size =   360.01 MiB
Mar 13 09:13:43 neuroforge ollama[1209]: llama_init_from_model: n_seq_max     = 1
Mar 13 09:13:43 neuroforge ollama[1209]: llama_init_from_model: n_ctx         = 32768
Mar 13 09:13:43 neuroforge ollama[1209]: llama_init_from_model: n_ctx_per_seq = 32768
Mar 13 09:13:43 neuroforge ollama[1209]: llama_init_from_model: n_batch       = 512
Mar 13 09:13:43 neuroforge ollama[1209]: llama_init_from_model: n_ubatch      = 512
Mar 13 09:13:43 neuroforge ollama[1209]: llama_init_from_model: flash_attn    = 0
Mar 13 09:13:43 neuroforge ollama[1209]: llama_init_from_model: freq_base     = 100000000.0
Mar 13 09:13:43 neuroforge ollama[1209]: llama_init_from_model: freq_scale    = 1
Mar 13 09:13:43 neuroforge ollama[1209]: llama_kv_cache_init: kv_size = 32768, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 40, can_shift = 1
Mar 13 09:13:43 neuroforge ollama[1209]: llama_kv_cache_init:      CUDA0 KV buffer size =  5120.00 MiB
Mar 13 09:13:43 neuroforge ollama[1209]: llama_init_from_model: KV self size  = 5120.00 MiB, K (f16): 2560.00 MiB, V (f16): 2560.00 MiB
Mar 13 09:13:43 neuroforge ollama[1209]: llama_init_from_model:  CUDA_Host  output buffer size =     0.52 MiB
Mar 13 09:13:43 neuroforge ollama[1209]: llama_init_from_model:      CUDA0 compute buffer size =  2148.00 MiB
Mar 13 09:13:43 neuroforge ollama[1209]: llama_init_from_model:  CUDA_Host compute buffer size =    74.01 MiB
Mar 13 09:13:43 neuroforge ollama[1209]: llama_init_from_model: graph nodes  = 1286
Mar 13 09:13:43 neuroforge ollama[1209]: llama_init_from_model: graph splits = 2
Mar 13 09:13:43 neuroforge ollama[1209]: time=2025-03-13T09:13:43.272+01:00 level=INFO source=server.go:624 msg="llama runner started in 8.78 seconds"
Mar 13 09:13:43 neuroforge ollama[1209]: [GIN] 2025/03/13 - 09:13:43 | 200 |   9.20050059s |       127.0.0.1 | POST     "/api/generate"
Mar 13 09:13:56 neuroforge ollama[1209]: [GIN] 2025/03/13 - 09:13:56 | 200 |      18.439µs |       127.0.0.1 | HEAD     "/"
Mar 13 09:13:56 neuroforge ollama[1209]: [GIN] 2025/03/13 - 09:13:56 | 200 |     974.439µs |       127.0.0.1 | POST     "/api/generate"
Mar 13 09:15:47 neuroforge systemd[1]: Stopping ollama.service - Ollama Service...
Mar 13 09:15:47 neuroforge systemd[1]: ollama.service: Deactivated successfully.
Mar 13 09:15:47 neuroforge systemd[1]: Stopped ollama.service - Ollama Service.
Mar 13 09:15:47 neuroforge systemd[1]: ollama.service: Consumed 9.657s CPU time.
-- Boot 2cd7bfacbb7145958dd812d7885707c1 --
Mar 13 09:16:25 neuroforge systemd[1]: Started ollama.service - Ollama Service.
Mar 13 09:16:25 neuroforge ollama[1204]: 2025/03/13 09:16:25 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES:1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 13 09:16:25 neuroforge ollama[1204]: time=2025-03-13T09:16:25.505+01:00 level=INFO source=images.go:432 msg="total blobs: 94"
Mar 13 09:16:25 neuroforge ollama[1204]: time=2025-03-13T09:16:25.506+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
Mar 13 09:16:25 neuroforge ollama[1204]: time=2025-03-13T09:16:25.507+01:00 level=INFO source=routes.go:1292 msg="Listening on 127.0.0.1:11434 (version 0.6.0)"
Mar 13 09:16:25 neuroforge ollama[1204]: time=2025-03-13T09:16:25.507+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Mar 13 09:16:25 neuroforge ollama[1204]: time=2025-03-13T09:16:25.618+01:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-75c56b20-ce84-29a9-127c-018b7beb1cc0 library=cuda variant=v12 compute=8.6 driver=12.8 name="NVIDIA GeForce RTX 3090" total="23.8 GiB" available="23.5 GiB"

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

v0.6.0

GiteaMirror added the bug label 2026-04-12 17:51:59 -05:00

@rick-github commented on GitHub (Mar 13, 2025):

/etc/profile is not used when setting ollama environment variables. What's the output of:

systemctl cat ollama
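
For reference, a minimal sketch of setting the variable somewhere the service will actually pick it up is a systemd drop-in (the device value below is just a placeholder):

# /etc/systemd/system/ollama.service.d/override.conf  (e.g. created with: sudo systemctl edit ollama)
[Service]
Environment="CUDA_VISIBLE_DEVICES=0"

sudo systemctl daemon-reload
sudo systemctl restart ollama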

@rschaer commented on GitHub (Mar 14, 2025):

> /etc/profile is not used when setting ollama environment variables. What's the output of:
>
> systemctl cat ollama

I must have been very tired when I posted that: systemctl cat showed that I forgot to remove Environment="CUDA_VISIBLE_DEVICES=1" from the override.conf file before trying to set the variable elsewhere:

# /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/home/rusher/.local/bin:/home/rusher/.local/bin:/home/rusher/.pyenv/shims:/home/rusher/.pyenv/bin:/usr/local/cuda-12.8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"

[Install]
WantedBy=default.target

# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_MODELS=/data/ollama_models"
Environment="CUDA_VISIBLE_DEVICES=1"

Trying again via override.conf (instead of /etc/profile), I realized that I am able to affect which GPU is used, but there is some weirdness in the enumeration going on:

When no manual preference is set, Ollama defaults to GPU0, which in my system is on my 3rd PCIe slot (but has the lowest BUS ID)

If I only set CUDA_VISIBLE_DEVICES=X
X = 0, Ollama uses GPU1. (RTX 3090 Ti, BUS-ID: 09:00.0)
X = 1, Ollama uses GPU0. (RTX 3090 - 370W TDP, BUS-ID: 04:00.0)
X = 2, Ollama uses GPU2. (RTX 3090 - 350W TDP, BUS-ID: 0A:00.0)
...the enumeration order corresponds with FASTEST_FIRST, which would be the CUDA default AFAIK.

If I set CUDA_VISIBLE_DEVICES=X, and CUDA_DEVICE_ORDER=PCI_BUS_ID
X = 0, Ollama uses GPU0. (RTX 3090 - 370W TDP, BUS-ID: 04:00.0)
X = 1, Ollama uses GPU1. (RTX 3090Ti, BUS-ID: 09:00.0)
X = 2, Ollama uses GPU2. (RTX 3090 - 350W TDP, BUS-ID: 0A:00.0)
...the enumeration order corresponds with PCI_BUS_ID as expected.

If I set CUDA_VISIBLE_DEVICES=X, and CUDA_DEVICE_ORDER=FASTEST_FIRST
X = 0, Ollama uses GPU1. (RTX 3090 Ti, BUS-ID: 09:00.0)
X = 1, Ollama uses GPU0. (RTX 3090 - 370W TDP, BUS-ID: 04:00.0)
X = 2, Ollama uses GPU2. (RTX 3090 - 350W TDP, BUS-ID: 0A:00.0)
...the enumeration order corresponds with FASTEST_FIRST as expected.

If I only set CUDA_DEVICE_ORDER=FASTEST_FIRST
...Ollama uses GPU0 (RTX 3090 - 370W TDP, BUS-ID: 04:00.0) first, which would be PCI_BUS_ID order.

If I only set CUDA_DEVICE_ORDER=PCI_BUS_ID
...Ollama also uses GPU0 (RTX 3090 - 370W TDP, BUS-ID: 04:00.0) first, which would be PCI_BUS_ID order.

So, really the only thing that's not quite right is that when CUDA_VISIBLE_DEVICES and CUDA_DEVICE_ORDER are not set by the user, or when only CUDA_DEVICE_ORDER=FASTEST_FIRST is set, the order seems to erroneously revert to PCI_BUS_ID.

In many cases this would go unnoticed, since people usually have their strongest GPU in the first x16 slot, and in most setups it gets the lowest BUS ID of the GPUs. However, I suspect that the x4 PCIe CPU->X570 chipset downlink lanes get initialized before the regular x16 PCIe GPU lanes, so peripherals downstream of the chipset get a lower BUS ID, including the GPU in slot 3, which makes it GPU0 for CUDA when ordering by BUS ID.
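
One way to sanity-check that mapping is to list index, name and bus ID side by side with nvidia-smi, whose indices normally follow PCI bus order rather than CUDA's fastest-first default:

nvidia-smi --query-gpu=index,name,pci.bus_id,uuid --format=csv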

TL;DR: I'm an idiot and the problem isn't as big as I thought, but there is still some weirdness: with Ollama, CUDA seems to default to PCI_BUS_ID order instead of FASTEST_FIRST when nothing else is set, or when only CUDA_DEVICE_ORDER=FASTEST_FIRST is defined.

Is that expected behavior? To me it would make sense that, out of the box with nothing else specified, Ollama should always prioritize the strongest GPU, regardless of BUS ID, and especially if CUDA_DEVICE_ORDER=FASTEST_FIRST is set.
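
As a sketch of one way to sidestep the ordering question entirely (assuming the UUIDs reported by nvidia-smi -L), CUDA_VISIBLE_DEVICES also accepts GPU UUIDs, which are independent of either enumeration order:

nvidia-smi -L
# prints one line per GPU, including its UUID (GPU-xxxxxxxx-...)

# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="CUDA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"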


Reference: github-starred/ollama#6354