[GH-ISSUE #9413] Running on CPU is faster than GPU? Ollama loading model to GPU but model not responding #6137

Closed
opened 2026-04-12 17:29:05 -05:00 by GiteaMirror · 4 comments

Originally created by @tzelalouzeir on GitHub (Feb 28, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9413

What is the issue?

Dear community, I'm confused about Ollama's behavior: I can load models on both GPU and CPU, but the performance is completely different. This is not my first time with Ollama or GPU configuration; it's the first time I'm seeing a problem like this. Setting keep_alive to 0 doesn't fix it. I created two Docker containers, ollama_cpu and ollama_gpu, with different ports and data directories. The CPU container works fine, but the GPU container gets stuck and never responds: the model is loaded onto the GPU (I can see it, and Ollama can unload it from the GPU), but the response is ZERO!
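
For reference, a minimal sketch of the two-container setup described above, assuming the standard `ollama/ollama` Docker image. The container names come from the post; the host ports and volume names are placeholders, not the reporter's actual values:

```bash
# CPU-only instance (no --gpus flag, so inference stays on the CPU)
docker run -d --name ollama_cpu \
  -v ollama_cpu:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama

# GPU instance; --gpus=all requires the NVIDIA Container Toolkit
docker run -d --name ollama_gpu --gpus=all \
  -v ollama_gpu:/root/.ollama \
  -p 11435:11434 \
  ollama/ollama
```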

OS

Linux / Ubuntu 24.04

GPU

NVIDIA H100 96GB (partitioned on a server; only 24GB of VRAM is accessible to this instance)

RAM

198GB

CPU

Intel

Ollama version

0.5.12

GiteaMirror added the "needs more info" and "bug" labels 2026-04-12 17:29:05 -05:00

@rick-github commented on GitHub (Feb 28, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may help in debugging.

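
For a Docker deployment like the one described above, the server logs can be collected roughly as follows (the container name is taken from the post; `journalctl` applies only to a native systemd install):

```bash
# Docker install: tail the server log of the GPU container
docker logs -f ollama_gpu

# Native Linux (systemd) install: the equivalent command
journalctl -u ollama -f
```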

@tzelalouzeir commented on GitHub (Mar 4, 2025):

> [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may help in debugging.

Nothing wrong in the logs, everything looks clean. I already set flash attention + q4 KV cache in the Ollama env, but for some reason Ollama hits a bug or a memory allocation problem, I don't know.

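
The "flash attention + q4" settings mentioned here presumably refer to Ollama's standard environment variables; a sketch of passing them to the GPU container (the port and volume are placeholders carried over from the setup sketch above):

```bash
# OLLAMA_FLASH_ATTENTION=1 enables flash attention;
# OLLAMA_KV_CACHE_TYPE=q4_0 quantizes the KV cache to q4_0.
# Both match the flash_attn=1 and type_k/type_v='q4_0' lines
# in the server log posted below.
docker run -d --name ollama_gpu --gpus=all \
  -e OLLAMA_FLASH_ATTENTION=1 \
  -e OLLAMA_KV_CACHE_TYPE=q4_0 \
  -v ollama_gpu:/root/.ollama \
  -p 11435:11434 \
  ollama/ollama
```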

@rick-github commented on GitHub (Mar 4, 2025):

> but for some reason Ollama hits a bug or a memory allocation problem, I don't know

It's too bad there's not some record of what ollama is doing that could be looked at to see if there's a possible clue.


@tzelalouzeir commented on GitHub (Mar 5, 2025):

> > but for some reason Ollama hits a bug or a memory allocation problem, I don't know
>
> It's too bad there's not some record of what ollama is doing that could be looked at to see if there's a possible clue.

```bash
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA H100L-23C, compute capability 9.0, VMM: no
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2025-03-04T13:02:35.063Z level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8
....................................................
llm_load_tensors: offloading 48 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 49/49 layers to GPU
llm_load_tensors:        CUDA0 model buffer size =  8148.38 MiB
llm_load_tensors:   CPU_Mapped model buffer size =   417.66 MiB
llama_new_context_with_model: n_seq_max     = 4
llama_new_context_with_model: n_ctx         = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 2048
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 1
llama_new_context_with_model: freq_base     = 1000000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'q4_0', type_v = 'q4_0', n_layer = 48, can_shift = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   432.00 MiB
llama_new_context_with_model: KV self size  =  432.00 MiB, K (q4_0):  216.00 MiB, V (q4_0):  216.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     2.40 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   307.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    26.01 MiB
llama_new_context_with_model: graph nodes  = 1495
llama_new_context_with_model: graph splits = 2
....................................................
time=2025-03-04T13:02:37.519Z level=INFO source=server.go:596 msg="llama runner started in 2.76 seconds"
llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-6xxxxxxxxxxxxxxxx1e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 14B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 14B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 48
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q4_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 152064
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 1
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = all F32
llm_load_print_meta: model params     = 14.77 B
llm_load_print_meta: model size       = 8.37 GiB (4.87 BPW)
llm_load_print_meta: general.name     = DeepSeek R1 Distill Qwen 14B
llm_load_print_meta: BOS token        = 151646 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/03/04 - 13:02:39 | 200 |   5.98293318s |  1xx.1xx.xxx.xx | POST     "/api/generate"
```
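
Note that the final GIN line above shows the POST to `/api/generate` returning 200 after roughly 6 seconds, so the runner did answer at least this request. For anyone trying to reproduce the report, a sketch of such a request with the `keep_alive: 0` setting the reporter mentioned; the model tag and port are assumptions based on the metadata above, not values given in the thread:

```bash
# keep_alive: 0 unloads the model immediately after the response,
# matching the reporter's description of their configuration.
curl http://localhost:11435/api/generate -d '{
  "model": "deepseek-r1:14b",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "keep_alive": 0
}'
```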
Reference: github-starred/ollama#6137