[GH-ISSUE #7148] runner crashes with more than 15 GPUs #4537

Open
opened 2026-04-12 15:28:34 -05:00 by GiteaMirror · 5 comments

Originally created by @scriptbotprime on GitHub (Oct 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7148

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I have deployed ollama using the Docker image (version 0.3.10). Loading "big" models fails: llama3.1 and other "small" models (e.g. codestral) fit into one GPU and work fine, but llama3.1:70b is too big for one GPU and fails to load.

This is the output of docker logs:

2024/10/09 12:16:49 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:1h0m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:1h0m0s OLLAMA_MAX_LOADED_MODELS:3 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:5 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-10-09T12:16:49.993Z level=INFO source=images.go:753 msg="total blobs: 21"
time=2024-10-09T12:16:49.999Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-10-09T12:16:50.001Z level=INFO source=routes.go:1172 msg="Listening on [::]:11434 (version 0.3.10)"
time=2024-10-09T12:16:50.002Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1657300007/runners
time=2024-10-09T12:17:00.079Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [rocm_v60102 cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
time=2024-10-09T12:17:00.080Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2024-10-09T12:17:03.697Z level=INFO source=types.go:107 msg="inference compute" id=GPU-6f4b9a45-fcde-cd1a-0781-7246dcb622a6 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-f7d94192-d68a-1967-f062-3d7dfcb64aea library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-8bc1b166-ee57-ccbb-961a-967cb4dfe3ab library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-54192c94-2cf5-455b-7ebf-a137a92ad0e9 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-52f37851-1140-6ae4-955c-b550b2434d17 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-fabd1d82-7b92-d92a-56f0-af4fa8c23359 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-44a53644-d89b-bd78-cd7a-0456eb2fa1de library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-79a9983c-1b89-d512-d5cd-e72c9368033c library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-ccb73f64-5800-6fca-cb79-8bf896f6a47c library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-c2897dff-5316-1816-cdd8-976ca178f89c library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-bef71b66-e6a1-af72-917d-2e399d7f4543 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-5a4ba200-8e83-da15-b9c3-11cc5be911ea library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-241017b3-8811-dc04-8a36-b9cae1333b87 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-516ac106-2f5f-9fb2-7ef1-76c1790e6477 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-923baeaa-e06d-7ffe-7eab-82a02a87eef4 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-8b4d57a2-a60a-2da0-887a-581d3182e209 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
[GIN] 2024/10/09 - 12:17:03 | 200 |      80.006µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/10/09 - 12:17:03 | 200 |   28.728333ms |       127.0.0.1 | POST     "/api/show"
time=2024-10-09T12:17:06.297Z level=INFO source=sched.go:731 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574 library=cuda parallel=5 required="76.7 GiB"
time=2024-10-09T12:17:06.297Z level=INFO source=server.go:101 msg="system memory" total="1510.1 GiB" free="1320.9 GiB" free_swap="0 B"
time=2024-10-09T12:17:06.299Z level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=6,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5 memory.available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="76.7 GiB" memory.required.partial="76.7 GiB" memory.required.kv="3.1 GiB" memory.required.allocations="[5.5 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB]" memory.weights.total="39.0 GiB" memory.weights.repeating="38.2 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.4 GiB" memory.graph.partial="1.4 GiB"
time=2024-10-09T12:17:06.306Z level=INFO source=server.go:391 msg="starting llama server" cmd="/tmp/ollama1657300007/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574 --ctx-size 10240 --batch-size 512 --embedding --log-disable --n-gpu-layers 81 --parallel 5 --tensor-split 6,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5 --port 35043"
time=2024-10-09T12:17:06.306Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-10-09T12:17:06.306Z level=INFO source=server.go:590 msg="waiting for llama runner to start responding"
time=2024-10-09T12:17:06.307Z level=INFO source=server.go:624 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="8962422" tid="139858080260096" timestamp=1728476226
INFO [main] system info | n_threads=48 n_threads_batch=48 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139858080260096" timestamp=1728476226 total_threads=96
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="95" port="35043" tid="139858080260096" timestamp=1728476226
llama_model_loader: loaded meta data with 29 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 70B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 70B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 80
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 2
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  162 tensors
llama_model_loader: - type q4_0:  561 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-10-09T12:17:06.558Z level=INFO source=server.go:624 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 8192
llm_load_print_meta: n_layer          = 80
llm_load_print_meta: n_head           = 64
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 28672
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 70B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 70.55 B
llm_load_print_meta: model size       = 37.22 GiB (4.53 BPW)
llm_load_print_meta: general.name     = Meta Llama 3.1 70B Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 16 CUDA devices:
  Device 0: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
  Device 1: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
  Device 2: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
  Device 3: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
  Device 4: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
  Device 5: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
  Device 6: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
  Device 7: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
  Device 8: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
  Device 9: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
  Device 10: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
  Device 11: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
  Device 12: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
  Device 13: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
  Device 14: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
  Device 15: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
llm_load_tensors: ggml ctx size =    5.76 MiB
time=2024-10-09T12:17:08.014Z level=INFO source=server.go:624 msg="waiting for server to become available" status="llm server not responding"
time=2024-10-09T12:17:10.973Z level=INFO source=server.go:624 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors:        CPU buffer size =   563.62 MiB
llm_load_tensors:      CUDA0 buffer size =  2754.38 MiB
llm_load_tensors:      CUDA1 buffer size =  2295.31 MiB
llm_load_tensors:      CUDA2 buffer size =  2295.31 MiB
llm_load_tensors:      CUDA3 buffer size =  2295.31 MiB
llm_load_tensors:      CUDA4 buffer size =  2295.31 MiB
llm_load_tensors:      CUDA5 buffer size =  2295.31 MiB
llm_load_tensors:      CUDA6 buffer size =  2295.31 MiB
llm_load_tensors:      CUDA7 buffer size =  2295.31 MiB
llm_load_tensors:      CUDA8 buffer size =  2295.31 MiB
llm_load_tensors:      CUDA9 buffer size =  2295.31 MiB
llm_load_tensors:     CUDA10 buffer size =  2295.31 MiB
llm_load_tensors:     CUDA11 buffer size =  2295.31 MiB
llm_load_tensors:     CUDA12 buffer size =  2295.31 MiB
llm_load_tensors:     CUDA13 buffer size =  2295.31 MiB
llm_load_tensors:     CUDA14 buffer size =  2295.31 MiB
llm_load_tensors:     CUDA15 buffer size =  2658.24 MiB
llama_new_context_with_model: n_ctx      = 10240
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   240.00 MiB
llama_kv_cache_init:      CUDA1 KV buffer size =   200.00 MiB
llama_kv_cache_init:      CUDA2 KV buffer size =   200.00 MiB
llama_kv_cache_init:      CUDA3 KV buffer size =   200.00 MiB
llama_kv_cache_init:      CUDA4 KV buffer size =   200.00 MiB
llama_kv_cache_init:      CUDA5 KV buffer size =   200.00 MiB
llama_kv_cache_init:      CUDA6 KV buffer size =   200.00 MiB
llama_kv_cache_init:      CUDA7 KV buffer size =   200.00 MiB
llama_kv_cache_init:      CUDA8 KV buffer size =   200.00 MiB
llama_kv_cache_init:      CUDA9 KV buffer size =   200.00 MiB
llama_kv_cache_init:     CUDA10 KV buffer size =   200.00 MiB
llama_kv_cache_init:     CUDA11 KV buffer size =   200.00 MiB
llama_kv_cache_init:     CUDA12 KV buffer size =   200.00 MiB
llama_kv_cache_init:     CUDA13 KV buffer size =   200.00 MiB
llama_kv_cache_init:     CUDA14 KV buffer size =   200.00 MiB
llama_kv_cache_init:     CUDA15 KV buffer size =   160.00 MiB
llama_new_context_with_model: KV self size  = 3200.00 MiB, K (f16): 1600.00 MiB, V (f16): 1600.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     2.60 MiB
/go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-backend.c:1864: GGML_ASSERT(n_backends <= GGML_SCHED_MAX_BACKENDS) failed

OS

Linux, Docker

GPU

Nvidia

CPU

Intel

Ollama version

0.3.10

GiteaMirror added the feature request label 2026-04-12 15:28:34 -05:00

@rick-github commented on GitHub (Oct 9, 2024):

llama.cpp supports at most 16 devices, and unfortunately for you the CPU is counted as a device, so with your 16 V100s you have 17 devices. You should be able to work around this by listing only 15 devices in CUDA_VISIBLE_DEVICES (https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html), i.e. by setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7,8,9,10,11,12,13,14 in your container environment (https://github.com/ollama/ollama/issues/6799#issuecomment-2350957701).
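With plain docker that could look something like this (a sketch only; the image tag, volume, port mapping, and container name here are illustrative assumptions, not taken from your setup):

  # expose only 15 of the 16 GPUs inside the container,
  # so llama.cpp sees 15 GPUs + 1 CPU backend = 16 devices
  docker run -d --gpus all \
    -e CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7,8,9,10,11,12,13,14 \
    -v ollama:/root/.ollama \
    -p 11434:11434 \
    --name ollama \
    ollama/ollama:0.3.10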


@rick-github commented on GitHub (Oct 9, 2024):

Having looked at the code, it's not clear to me if ollama recognizes CUDA_VISIBLE_DEVICES the same way llama.cpp does. Fortunately you can achieve the same result by specifying the available GPUs to docker, either with --gpus 15 for plain docker (https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/docker-specialized.html) or by setting a count in the docker compose file:

services:
  ollama:
    image: ollama/ollama
    ports:
      - 11434:11434
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
              driver: nvidia
              count: 15
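
The plain-docker equivalent would be along these lines (again, the volume, port, and container name flags are illustrative assumptions):

  # let the NVIDIA runtime hand the container 15 of the 16 GPUs
  docker run -d --gpus 15 \
    -v ollama:/root/.ollama \
    -p 11434:11434 \
    --name ollama \
    ollama/ollama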

@scriptbotprime commented on GitHub (Oct 9, 2024):

Wow, that was quick. Limiting the GPUs solves my problem.

> Having looked at the code, it's not clear to me if ollama recognizes CUDA_VISIBLE_DEVICES the same way llama.cpp does.

I tried it and it does recognize the variable.

Thank you very much!


@dhiltgen commented on GitHub (Oct 9, 2024):

Ideally we should detect this scenario and reduce the number of GPUs we consider, so that we stay within the limit.


@fahadshery commented on GitHub (May 14, 2025):

I have 16 GPUs in my rig... with this workaround one GPU goes unused... is there any solution?


Reference: github-starred/ollama#4537