[GH-ISSUE #9264] Unable to load all GPUs #6038

Closed
opened 2026-04-12 17:22:27 -05:00 by GiteaMirror · 1 comment

Originally created by @sunt1009 on GitHub (Feb 21, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9264

What is the issue?

The host has four A40 GPUs with 48 GB of VRAM each. The current workload is DeepSeek-R1:70b. While the task runs, only one GPU sits at 98% utilization; the other three stay at 0%.

![Image](https://github.com/user-attachments/assets/15d5b458-24a3-47a9-a460-31ee2f88fb5b)
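
For reference, one way to confirm this utilization pattern from the shell (a diagnostic sketch, not part of the original report):

```shell
# Watch per-GPU utilization and memory once per second while a prompt runs.
nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv -l 1

# Ask Ollama where the model is resident; "100% GPU" means fully offloaded.
ollama ps
```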

Relevant log output

1. Ollama startup log:
INFO server config env="map[CUDA_VISIBLE_DEVICES:0,1,2,3 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY:cuda_v12 OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/deepseek-modules/70b OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
source=images.go:432 msg="total blobs: 5"
source=images.go:439 msg="total unused blobs removed: 0"
source=routes.go:1237 msg="Listening on [::]:11434 (version 0.5.11)"
source=gpu.go:217 msg="looking for compatible GPUs"
source=types.go:130 msg="inference compute" id=GPU-2b1e2aa8-a371-42ac-8f25-8f39c12e8707 library=cuda variant=v12 compute=8.6 driver=12.8 name="NVIDIA A40" total="44.4 GiB" available="44.1 GiB"
source=types.go:130 msg="inference compute" id=GPU-f1404a54-99bd-776c-b7c2-10f438641b38 library=cuda variant=v12 compute=8.6 driver=12.8 name="NVIDIA A40" total="44.4 GiB" available="44.2 GiB"
source=types.go:130 msg="inference compute" id=GPU-cb4c513b-6932-4aa6-181b-d49801a5f2e9 library=cuda variant=v12 compute=8.6 driver=12.8 name="NVIDIA A40" total="44.4 GiB" available="44.2 GiB"
source=types.go:130 msg="inference compute" id=GPU-475ec94b-5592-966f-dffd-557aad68f985 library=cuda variant=v12 compute=8.6 driver=12.8 name="NVIDIA A40" total="44.4 GiB" available="44.2 GiB"

2. Ollama log while running the task:
level=INFO source=server.go:596 msg="llama runner started in 8.28 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /data/deepseek-modules/70b/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv   4:                         general.size_label str              = 70B
llama_model_loader: - kv   5:                          llama.block_count u32              = 80
llama_model_loader: - kv   6:                       llama.context_length u32              = 131072
llama_model_loader: - kv   7:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv   8:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv   9:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv  10:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  12:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  14:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  15:                          general.file_type u32              = 15
llama_model_loader: - kv  16:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  17:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  18:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  19:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  20:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  21:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  22:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  25:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  26:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  27:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  162 tensors
llama_model_loader: - type q4_K:  441 tensors
llama_model_loader: - type q5_K:   40 tensors
llama_model_loader: - type q6_K:   81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 1
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = all F32
llm_load_print_meta: model params     = 70.55 B
llm_load_print_meta: model size       = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name     = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token        = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token        = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token        = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOG token        = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.11

GiteaMirror added the bug label 2026-04-12 17:22:27 -05:00

@rick-github commented on GitHub (Feb 21, 2025):

If the model fits on one GPU, only one GPU is used. There is [no performance improvement](https://github.com/ollama/ollama/issues/7648#issuecomment-2473561990) by using multiple GPUs for a single completion.
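
The startup log above shows OLLAMA_SCHED_SPREAD:false, which is why the scheduler packs the model onto a single card. If spreading across all four GPUs is still wanted (for example, to leave VRAM headroom per card), a minimal sketch of overriding that setting, assuming the stock systemd install on Linux:

```shell
# Sketch: force the scheduler to spread a model across all visible GPUs
# even when it would fit on one.
sudo systemctl edit ollama.service
# In the override file, add:
#   [Service]
#   Environment="OLLAMA_SCHED_SPREAD=1"
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Alternatively, when starting the server by hand:
OLLAMA_SCHED_SPREAD=1 ollama serve
```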


Reference: github-starred/ollama#6038