[GH-ISSUE #5093] Setting CUDA_VISIBLE_DEVICES to multiple IDs does not work #28974

Closed
opened 2026-04-22 07:33:50 -05:00 by GiteaMirror · 8 comments

Originally created by @wywself on GitHub (Jun 17, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5093

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

With CUDA_VISIBLE_DEVICES=2 set, nvidia-smi shows GPU 2 in use.
With CUDA_VISIBLE_DEVICES=2,3 set, nvidia-smi still shows only GPU 2 in use, and the log is as follows.
How can I use all GPUs? Thank you.

ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: Tesla M60, compute capability 5.2, VMM: yes

The complete log is as follows:

# OLLAMA_MAX_VRAM=17179869184 CUDA_VISIBLE_DEVICES=2,3 OLLAMA_MAX_LOADED_MODELS=2 ollama serve
2024/06/17 14:18:21 routes.go:1011: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST: OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:2 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:17179869184 OLLAMA_MODELS: OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
time=2024-06-17T14:18:21.827+08:00 level=INFO source=images.go:740 msg="total blobs: 5"
time=2024-06-17T14:18:21.827+08:00 level=INFO source=images.go:747 msg="total unused blobs removed: 0"
time=2024-06-17T14:18:21.828+08:00 level=INFO source=routes.go:1057 msg="Listening on 127.0.0.1:11434 (version 0.1.42)"
time=2024-06-17T14:18:21.828+08:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2205351867/runners
time=2024-06-17T14:18:26.179+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11 rocm_v60002 cpu cpu_avx]"
time=2024-06-17T14:18:28.045+08:00 level=INFO source=types.go:71 msg="inference compute" id=GPU-c9361d01-ab4c-ede1-c11a-dea0a78ed584 library=cuda compute=5.2 driver=12.4 name="Tesla M60" total="7.9 GiB" available="7.9 GiB"
time=2024-06-17T14:18:28.045+08:00 level=INFO source=types.go:71 msg="inference compute" id=GPU-e799f308-95a1-8a36-5bc4-4f93be7d80fe library=cuda compute=5.2 driver=12.4 name="Tesla M60" total="7.9 GiB" available="7.9 GiB"
time=2024-06-17T14:23:43.729+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=29 memory.available="16.0 GiB" memory.required.full="4.8 GiB" memory.required.partial="4.8 GiB" memory.required.kv="112.0 MiB" memory.weights.total="3.8 GiB" memory.weights.repeating="3.4 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="304.0 MiB" memory.graph.partial="730.4 MiB"
time=2024-06-17T14:23:43.730+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=29 memory.available="16.0 GiB" memory.required.full="4.8 GiB" memory.required.partial="4.8 GiB" memory.required.kv="112.0 MiB" memory.weights.total="3.8 GiB" memory.weights.repeating="3.4 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="304.0 MiB" memory.graph.partial="730.4 MiB"
time=2024-06-17T14:23:43.732+08:00 level=INFO source=server.go:341 msg="starting llama server" cmd="/tmp/ollama2205351867/runners/cuda_v11/ollama_llama_server --model /opt/unicloud/ollama/models/blobs/sha256-43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 29 --parallel 1 --port 45693"
time=2024-06-17T14:23:43.734+08:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-06-17T14:23:43.734+08:00 level=INFO source=server.go:529 msg="waiting for llama runner to start responding"
time=2024-06-17T14:23:43.735+08:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="5921b8f" tid="140540808175616" timestamp=1718605424
INFO [main] system info | n_threads=16 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140540808175616" timestamp=1718605424 total_threads=16
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="45693" tid="140540808175616" timestamp=1718605424
time=2024-06-17T14:23:44.238+08:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 21 key-value pairs and 339 tensors from /opt/unicloud/ollama/models/blobs/sha256-43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.name str              = Qwen2-7B-Instruct
llama_model_loader: - kv   2:                          qwen2.block_count u32              = 28
llama_model_loader: - kv   3:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv   4:                     qwen2.embedding_length u32              = 3584
llama_model_loader: - kv   5:                  qwen2.feed_forward_length u32              = 18944
llama_model_loader: - kv   6:                 qwen2.attention.head_count u32              = 28
llama_model_loader: - kv   7:              qwen2.attention.head_count_kv u32              = 4
llama_model_loader: - kv   8:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv   9:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  12:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% for message in messages %}{% if lo...
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  141 tensors
llama_model_loader: - type q4_0:  197 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens cache size = 421
llm_load_vocab: token to piece cache size = 1.8703 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 152064
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 3584
llm_load_print_meta: n_head           = 28
llm_load_print_meta: n_head_kv        = 4
llm_load_print_meta: n_layer          = 28
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 7
llm_load_print_meta: n_embd_k_gqa     = 512
llm_load_print_meta: n_embd_v_gqa     = 512
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 18944
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 7.62 B
llm_load_print_meta: model size       = 4.12 GiB (4.65 BPW) 
llm_load_print_meta: general.name     = Qwen2-7B-Instruct
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: EOT token        = 151643 '<|endoftext|>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: Tesla M60, compute capability 5.2, VMM: yes
llm_load_tensors: ggml ctx size =    0.32 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors:        CPU buffer size =   292.36 MiB
llm_load_tensors:      CUDA0 buffer size =  3928.07 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   112.00 MiB
llama_new_context_with_model: KV self size  =  112.00 MiB, K (f16):   56.00 MiB, V (f16):   56.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.59 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   304.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    11.01 MiB
llama_new_context_with_model: graph nodes  = 986
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="140540808175616" timestamp=1718605429
time=2024-06-17T14:23:49.259+08:00 level=INFO source=server.go:572 msg="llama runner started in 5.53 seconds"
[GIN] 2024/06/17 - 14:23:55 | 200 | 13.726004107s |       127.0.0.1 | POST     "/api/chat"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.42

GiteaMirror added the bug label 2026-04-22 07:33:50 -05:00

@dhiltgen commented on GitHub (Jun 18, 2024):

The system did detect your 2 GPUs; however, the model you loaded fits into one, and the current design favors a single GPU instead of spreading across multiple GPUs when the model fits in VRAM.

time=2024-06-17T14:18:28.045+08:00 level=INFO source=types.go:71 msg="inference compute" id=GPU-c9361d01-ab4c-ede1-c11a-dea0a78ed584 library=cuda compute=5.2 driver=12.4 name="Tesla M60" total="7.9 GiB" available="7.9 GiB"
time=2024-06-17T14:18:28.045+08:00 level=INFO source=types.go:71 msg="inference compute" id=GPU-e799f308-95a1-8a36-5bc4-4f93be7d80fe library=cuda compute=5.2 driver=12.4 name="Tesla M60" total="7.9 GiB" available="7.9 GiB"

You can change this behavior by setting OLLAMA_SCHED_SPREAD to 1.
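
For illustration, a server launch with spreading forced on could look like this (a sketch reusing the environment values from the report above, not a command taken from the thread):

# Force the scheduler to spread the model across all visible GPUs,
# even when it would fit on a single GPU
OLLAMA_SCHED_SPREAD=1 CUDA_VISIBLE_DEVICES=2,3 ollama serve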


@littlegirlpppp commented on GitHub (Jun 25, 2024):

I set OLLAMA_SCHED_SPREAD to 1, but it does not work.

Screenshots:
https://github.com/ollama/ollama/assets/9802301/07fd96c5-5d97-4541-aed6-e768208e1e5b
https://github.com/ollama/ollama/assets/9802301/12c4cc64-ac15-41c5-a54c-a47a1dfde839
https://github.com/ollama/ollama/assets/9802301/b2390b68-9aa6-48c9-9afb-a62d5b13f463


@dhiltgen commented on GitHub (Jun 25, 2024):

@littlegirlpppp please share your server log so I can see what went wrong. Also please make sure you've upgraded to the latest version, as OLLAMA_SCHED_SPREAD is new.


@dhiltgen commented on GitHub (Jul 3, 2024):

If you're still having trouble with the latest version, please share your server log and I'll reopen the issue. (In particular, the first line reports all the settings, which will confirm whether the spread setting is wired up correctly.)


@17Reset commented on GitHub (Jul 22, 2024):

I'm having a similar problem. I have 4 GPUs, and by default the model loads across all 4. I only want to use two of them, so I set 'CUDA_VISIBLE_DEVICES=0, 1', but the model only runs on one GPU.


@dhiltgen commented on GitHub (Jul 22, 2024):

@17Reset are you setting OLLAMA_SCHED_SPREAD=1? If not, the scheduler will try to load onto a single GPU if the model fits. If you want to force spreading over all the GPUs even when it isn't necessary, set that environment variable for the server. Also, we recommend using the UUIDs of the GPUs to avoid potential ambiguity: https://github.com/ollama/ollama/blob/main/docs/gpu.md#gpu-selection
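
For example, using the GPU UUIDs that appear in the log of the original report, a UUID-based selection would look like this (an illustrative sketch; note the comma-separated list contains no spaces):

# Select the two Tesla M60s by UUID rather than by index
CUDA_VISIBLE_DEVICES=GPU-c9361d01-ab4c-ede1-c11a-dea0a78ed584,GPU-e799f308-95a1-8a36-5bc4-4f93be7d80fe ollama serve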


@17Reset commented on GitHub (Jul 23, 2024):

I initially set 'CUDA_VISIBLE_DEVICES=uuid1, uuid2' and it didn't work, but when I changed it to 'CUDA_VISIBLE_DEVICES=0,1' it started to work. Could it be that there must be no spaces after the commas?


@dhiltgen commented on GitHub (Jul 26, 2024):

Correct, no spaces.
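
To make the difference concrete, here is a sketch of the two forms (illustrative only, not commands from this thread):

# Works: a strict comma-separated list with no spaces
CUDA_VISIBLE_DEVICES=0,1 ollama serve

# Broken: unquoted, the shell assigns only "0," to the variable and then tries to
# run "1" as the command; even quoted as "0, 1", the entry with a leading space may
# be rejected by the device-list parser, leaving only the first GPU visible, which
# is consistent with the single-GPU behavior reported above
CUDA_VISIBLE_DEVICES=0, 1 ollama serve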

Reference: github-starred/ollama#28974