[GH-ISSUE #9989] how to accelerate the inference speed of the model #32305

Closed
opened 2026-04-22 13:26:10 -05:00 by GiteaMirror · 2 comments

Originally created by @Tu1231 on GitHub (Mar 26, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9989

What is the issue?

When I entered my ollama/ollama container terminal and ran deepseek-r1:32b, its inference speed was slow. Running `ollama ps` showed:

NAME               ID              SIZE      PROCESSOR    UNTIL
deepseek-r1:32b    63a233c0c27b    126 GB    100% GPU     2 minutes from now
bge-m3:latest      790764642607    1.7 GB    100% GPU     12 seconds from now

My total video memory is 160 GB, but in actual testing generation runs at only about 4-6 words per second, which doesn't feel smooth. Meanwhile, nvidia-smi shows each GPU at only about 20% utilization.
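Two details in this report are worth flagging, though neither is confirmed as the root cause here. First, `ollama ps` reports 126 GB for a model whose quantized weights are only around 20 GB, which usually means a very large context window (num_ctx) is inflating the KV cache and forcing the model to be split across all GPUs. Second, when layers are distributed across many GPUs they execute sequentially, so roughly 20% utilization per GPU is what a wide split tends to look like. A minimal sketch of checks one might run, assuming the container exposes Ollama on the default port 11434; the num_ctx value below is illustrative:

```shell
# Inspect the model's configured parameters; a large num_ctx set in the
# Modelfile or in client requests would explain the 126 GB allocation.
ollama show deepseek-r1:32b

# Re-run generation with a smaller, explicit context window and compare
# tokens/sec. 8192 is only an example; pick what your prompts need.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:32b",
  "prompt": "Why is the sky blue?",
  "options": { "num_ctx": 8192 }
}'

# Watch per-GPU utilization while the request runs; load spread thinly
# across every GPU suggests a wide layer split rather than the model
# sitting on one or two devices.
watch -n 1 nvidia-smi
```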

Relevant log output

load_tensors: offloaded 25/25 layers to GPU
load_tensors:        CUDA0 model buffer size =   577.22 MiB
load_tensors:   CPU_Mapped model buffer size =   520.30 MiB
llama_init_from_model: n_seq_max     = 1
llama_init_from_model: n_ctx         = 4096
llama_init_from_model: n_ctx_per_seq = 4096
llama_init_from_model: n_batch       = 512
llama_init_from_model: n_ubatch      = 512
llama_init_from_model: flash_attn    = 0
llama_init_from_model: freq_base     = 10000.0
llama_init_from_model: freq_scale    = 1
llama_init_from_model: n_ctx_per_seq (4096) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 4096, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 24, can_shift = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   384.00 MiB
llama_init_from_model: KV self size  =  384.00 MiB, K (f16):  192.00 MiB, V (f16):  192.00 MiB
llama_init_from_model:  CUDA_Host  output buffer size =     0.00 MiB
llama_init_from_model:      CUDA0 compute buffer size =    25.01 MiB
llama_init_from_model:  CUDA_Host compute buffer size =     5.01 MiB
llama_init_from_model: graph nodes  = 849
llama_init_from_model: graph splits = 4 (with bs=512), 2 (with bs=1)
time=2025-03-26T03:04:30.907Z level=INFO source=server.go:596 msg="llama runner started in 1.51 seconds"
llama_model_loader: loaded meta data with 33 key-value pairs and 389 tensors from /root/.ollama/models/blobs/sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = bert
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                         general.size_label str              = 567M
llama_model_loader: - kv   3:                            general.license str              = mit
llama_model_loader: - kv   4:                               general.tags arr[str,4]       = ["sentence-transformers", "feature-ex...
llama_model_loader: - kv   5:                           bert.block_count u32              = 24
llama_model_loader: - kv   6:                        bert.context_length u32              = 8192
llama_model_loader: - kv   7:                      bert.embedding_length u32              = 1024
llama_model_loader: - kv   8:                   bert.feed_forward_length u32              = 4096
llama_model_loader: - kv   9:                  bert.attention.head_count u32              = 16
llama_model_loader: - kv  10:          bert.attention.layer_norm_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                          general.file_type u32              = 1
llama_model_loader: - kv  12:                      bert.attention.causal bool             = false
llama_model_loader: - kv  13:                          bert.pooling_type u32              = 2
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = t5
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,250002]  = ["<s>", "<pad>", "</s>", "<unk>", ","...
llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,250002]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,250002]  = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:            tokenizer.ggml.add_space_prefix bool             = true
llama_model_loader: - kv  20:            tokenizer.ggml.token_type_count u32              = 1
llama_model_loader: - kv  21:    tokenizer.ggml.remove_extra_whitespaces bool             = true
llama_model_loader: - kv  22:        tokenizer.ggml.precompiled_charsmap arr[u8,237539]   = [0, 180, 2, 0, 0, 132, 0, 0, 0, 0, 0,...
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  25:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  26:          tokenizer.ggml.seperator_token_id u32              = 2
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  28:                tokenizer.ggml.cls_token_id u32              = 0
llama_model_loader: - kv  29:               tokenizer.ggml.mask_token_id u32              = 250001
llama_model_loader: - kv  30:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  31:               tokenizer.ggml.add_eos_token bool             = true
llama_model_loader: - kv  32:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  244 tensors
llama_model_loader: - type  f16:  145 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = F16
print_info: file size   = 1.07 GiB (16.25 BPW)
load: model vocab missing newline token, using special_pad_id instead
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 4
load: token to piece cache size = 2.1668 MB
print_info: arch             = bert
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 566.70 M
print_info: general.name     = n/a
print_info: vocab type       = UGM
print_info: n_vocab          = 250002
print_info: n_merges         = 0
print_info: BOS token        = 0 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 3 '<unk>'
print_info: SEP token        = 2 '</s>'
print_info: PAD token        = 1 '<pad>'
print_info: MASK token       = 250001 '[PAD250000]'
print_info: LF token         = 0 '<s>'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 48
llama_model_load: vocab only - skipping tensors
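Note that the log excerpt above covers the load of the bge-m3 embedding model (arch = bert, 566.70 M params), not the deepseek-r1:32b runner whose generation speed is in question. The relevant lines can be pulled from the container's server log; a sketch, assuming the container is named `ollama`:

```shell
# Dump the server log and keep the lines for the deepseek-r1:32b runner;
# the layer-offload, context-size, and GPU-split messages are what matter
# for diagnosing generation speed.
docker logs ollama 2>&1 | grep -iE 'deepseek|offloaded|graph splits' | tail -n 50
```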

OS

Docker

GPU

Nvidia

CPU

Intel

Ollama version

0.5.13

GiteaMirror added the question label 2026-04-22 13:26:10 -05:00

@rick-github commented on GitHub (Mar 26, 2025):

https://github.com/ollama/ollama/issues/7648#issuecomment-2473561990


@Tu1231 commented on GitHub (Mar 27, 2025):

thanks

Reference: github-starred/ollama#32305