[GH-ISSUE #1917] GPU still used when offloading zero layers #63140

Closed
opened 2026-05-03 12:16:55 -05:00 by GiteaMirror · 4 comments

Originally created by @coder543 on GitHub (Jan 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/1917

Originally assigned to: @jmorganca on GitHub.

To try to work around https://github.com/jmorganca/ollama/issues/1907, I created a Modelfile that offloads zero layers. I noticed that ollama still takes up a few gigabytes of VRAM and spins up the GPU, even though I can't imagine _what_ it would be doing on the GPU when no layers are running there.

```
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_vocab: special tokens definition check successful ( 259/32000 ).
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: format           = GGUF V3 (latest)
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: arch             = llama
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: vocab type       = SPM
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_vocab          = 32000
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_merges         = 0
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_ctx_train      = 32768
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_embd           = 4096
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_head           = 32
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_head_kv        = 8
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_layer          = 32
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_rot            = 128
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_gqa            = 4
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_ff             = 14336
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_expert         = 8
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_expert_used    = 2
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: rope scaling     = linear
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: freq_base_train  = 1000000.0
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: freq_scale_train = 1
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: n_yarn_orig_ctx  = 32768
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: rope_finetuned   = unknown
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: model type       = 7B
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: model ftype      = Q3_K - Small
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: model params     = 46.70 B
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: model size       = 18.90 GiB (3.48 BPW)
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: general.name     = mistralai
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: BOS token        = 1 '<s>'
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: EOS token        = 2 '</s>'
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: UNK token        = 0 '<unk>'
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_print_meta: LF token         = 13 '<0x0A>'
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_tensors: ggml ctx size =    0.38 MiB
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_tensors: using CUDA for GPU acceleration
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_tensors: mem required  = 19351.65 MiB
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_tensors: offloading 0 repeating layers to GPU
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_tensors: offloaded 0/33 layers to GPU
Jan 11 04:10:05 cognicore ollama[3082453]: llm_load_tensors: VRAM used: 0.00 MiB
Jan 11 04:10:06 cognicore ollama[3082453]: ....................................................................................................
Jan 11 04:10:06 cognicore ollama[3082453]: llama_new_context_with_model: n_ctx      = 20000
Jan 11 04:10:06 cognicore ollama[3082453]: llama_new_context_with_model: freq_base  = 1000000.0
Jan 11 04:10:06 cognicore ollama[3082453]: llama_new_context_with_model: freq_scale = 1
Jan 11 04:10:07 cognicore ollama[3082453]: llama_new_context_with_model: KV self size  = 2500.00 MiB, K (f16): 1250.00 MiB, V (f16): 1250.00 MiB
Jan 11 04:10:07 cognicore ollama[3082453]: llama_build_graph: non-view tensors processed: 1124/1124
Jan 11 04:10:07 cognicore ollama[3082453]: llama_new_context_with_model: compute buffer total size = 1344.29 MiB
Jan 11 04:10:07 cognicore ollama[3082453]: llama_new_context_with_model: VRAM scratch buffer: 1341.10 MiB
Jan 11 04:10:07 cognicore ollama[3082453]: llama_new_context_with_model: total VRAM used: 1341.10 MiB (model: 0.00 MiB, context: 1341.10 MiB)
Jan 11 04:10:07 cognicore ollama[3082453]: 2024/01/11 04:10:07 ext_server_common.go:144: Starting internal llama main loop
Jan 11 04:10:07 cognicore ollama[3082453]: 2024/01/11 04:10:07 ext_server_common.go:158: loaded 0 images
```

```
Thu Jan 11 04:12:12 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3090        Off | 00000000:01:00.0 Off |                  N/A |
| 49%   58C    P2             126W / 420W |   2944MiB / 24576MiB |      6%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A   3082453      C   /usr/local/bin/ollama                      2930MiB |
+---------------------------------------------------------------------------------------+
```

The entire Modelfile:

```
FROM mixtral:8x7b-instruct-v0.1-q3_K_S
PARAMETER num_gpu 0
```
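
For reference, the same zero-offload setting can also be requested per call instead of being baked into a Modelfile, by passing `num_gpu` in the request options. A minimal sketch against the default local API endpoint (model name and prompt are placeholders):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Ask for zero offloaded layers for this request only, the per-call
	// equivalent of `PARAMETER num_gpu 0` in the Modelfile above.
	body, _ := json.Marshal(map[string]any{
		"model":   "mixtral:8x7b-instruct-v0.1-q3_K_S",
		"prompt":  "Hello",
		"stream":  false,
		"options": map[string]any{"num_gpu": 0},
	})

	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out map[string]any
	json.NewDecoder(resp.Body).Decode(&out)
	fmt.Println(out["response"])
}
```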

I believe in previous versions of ollama, it would revert to a CPU-only mode when it realized no layers were being offloaded.
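
For illustration only (this is not ollama's actual code), the fallback behavior described above would amount to a decision like the following, made before any CUDA state is created:

```go
package main

import "fmt"

// Hypothetical sketch (not ollama's actual code) of the fallback described
// above: if nothing will be offloaded, skip GPU initialization entirely,
// since even an idle CUDA context reserves a few hundred MiB of VRAM.
func chooseBackend(numGPULayers int, gpuAvailable bool) string {
	if !gpuAvailable || numGPULayers == 0 {
		return "cpu" // no CUDA context, no VRAM scratch buffer
	}
	return "cuda"
}

func main() {
	fmt.Println(chooseBackend(0, true)) // prints "cpu"
}
```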

GiteaMirror added the bug label 2026-05-03 12:16:55 -05:00

@coder543 commented on GitHub (Jan 11, 2024):

And... the GPU memory usage with zero layers offloaded continues to grow during this ~16k-token prompt... 🤔

```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3090        Off | 00000000:01:00.0 Off |                  N/A |
| 42%   60C    P2             153W / 420W |  21890MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A   3082453      C   /usr/local/bin/ollama                     21876MiB |
+---------------------------------------------------------------------------------------+
```

(EDIT: updated with an even higher number seen as processing continued.)
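
To watch that growth over time rather than from two snapshots, a small watcher that polls nvidia-smi's per-process compute memory can help (a sketch; the query fields are the standard `--query-compute-apps` ones and the interval is arbitrary):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Print per-process GPU memory every 5 seconds; the ollama process
	// (PID 3082453 in the output above) should show the growing figure.
	for {
		out, err := exec.Command("nvidia-smi",
			"--query-compute-apps=pid,process_name,used_memory",
			"--format=csv,noheader").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s\n%s", time.Now().Format("15:04:05"), out)
		time.Sleep(5 * time.Second)
	}
}
```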


@jmorganca commented on GitHub (Jan 11, 2024):

Thanks for the issue! It seems that with `num_gpu` set to 0, data may still be allocated on the GPU (the compute graph and KV cache). Will fix this in the upcoming release. Good catch!
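
The KV cache mentioned above has a predictable footprint: plugging the values from the original log into the usual llama.cpp f16 KV layout reproduces the log's "KV self size = 2500.00 MiB" line exactly (a back-of-the-envelope check, not ollama code):

```go
package main

import "fmt"

func main() {
	// Values from the llm_load_print_meta / llama_new_context_with_model
	// lines in the original report.
	nCtx := 20000        // n_ctx
	nLayer := 32         // n_layer
	nHeadKV := 8         // n_head_kv
	headDim := 4096 / 32 // n_embd / n_head = 128 (matches n_rot)

	// K and V tensors, f16 (2 bytes per element).
	kvBytes := 2 * nCtx * nLayer * nHeadKV * headDim * 2
	fmt.Printf("KV self size = %.2f MiB\n", float64(kvBytes)/(1024*1024))
	// Output: KV self size = 2500.00 MiB
}
```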


@jmorganca commented on GitHub (Jan 11, 2024):

This should be fixed as of version [0.1.20](https://github.com/jmorganca/ollama/releases/tag/v0.1.20) - please let me know if you see it again!


@coder543 commented on GitHub (Jan 11, 2024):

Thanks! I can confirm that this issue is fixed, although I'm still able to reproduce #1907.
