[GH-ISSUE #8776] Over VRAM allocation - Error in DeepSeek R1:671B model distribution on MI210 #67754

Open
opened 2026-05-04 11:34:58 -05:00 by GiteaMirror · 3 comments

Originally created by @itej89 on GitHub (Feb 2, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8776

What is the issue?

Hi, I would like to report an issue while running DeepSeek R1 on a 3x MI210 GPU configuration.

When I try to run DeepSeek R1:671B on 3x MI210 GPUs (64 GB each), Ollama tries to allocate ~70 GB on GPU0 and fails, which is more than any single GPU holds. Please find the error logs below.

--------------------------------------------------------------------------------------------------------------------------------------------
_load_backend: loaded ROCm backend from /home/master/vpolamre/ollama/ollama/build/lib/ollama/libggml-hip.so
llama_load_model_from_file: using device ROCm1 (AMD Instinct MI210) - 65430 MiB free
llama_load_model_from_file: using device ROCm2 (AMD Instinct MI210) - 65430 MiB free
llama_load_model_from_file: using device ROCm0 (AMD Instinct MI210) - 64468 MiB free
---
---
llm_load_print_meta: expert_weights_norm  = 1
llm_load_print_meta: expert_gating_func   = sigmoid
llm_load_print_meta: rope_yarn_log_mul    = 0.1000
llm_load_tensors: tensor 'token_embd.weight' (q4_K) (and 531 others) cannot be used with preferred buffer type ROCm_Host, using CPU instead
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 70139.08 MiB on device 0: cudaMalloc failed: out of memory
llama_model_load: error loading model: unable to allocate ROCm0 buffer
llama_load_model_from_file: failed to load model
panic: unable to load model: /root/.ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9_
------------------------------------------------------------------------------------------------------------------------------------------------

I tried debugging the issue by printing the per-GPU memory computed during the layer-count calculation (a sketch of the debug statement follows the log below); the output is as follows:

time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=1 "LAYER SIZE"=530100224 "GPU ID"=0 "REQUIRED MEMORY in GB"=2 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=2 "LAYER SIZE"=530100224 "GPU ID"=1 "REQUIRED MEMORY in GB"=2 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=3 "LAYER SIZE"=530100224 "GPU ID"=2 "REQUIRED MEMORY in GB"=2 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=4 "LAYER SIZE"=7619654656 "GPU ID"=0 "REQUIRED MEMORY in GB"=9 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=5 "LAYER SIZE"=7619654656 "GPU ID"=1 "REQUIRED MEMORY in GB"=9 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=6 "LAYER SIZE"=7619654656 "GPU ID"=2 "REQUIRED MEMORY in GB"=9 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=7 "LAYER SIZE"=7619654656 "GPU ID"=0 "REQUIRED MEMORY in GB"=16 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=8 "LAYER SIZE"=6646985728 "GPU ID"=1 "REQUIRED MEMORY in GB"=15 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=9 "LAYER SIZE"=6646985728 "GPU ID"=2 "REQUIRED MEMORY in GB"=15 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=10 "LAYER SIZE"=7619654656 "GPU ID"=0 "REQUIRED MEMORY in GB"=23 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=11 "LAYER SIZE"=6646985728 "GPU ID"=1 "REQUIRED MEMORY in GB"=21 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=12 "LAYER SIZE"=6646985728 "GPU ID"=2 "REQUIRED MEMORY in GB"=21 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=13 "LAYER SIZE"=7619654656 "GPU ID"=0 "REQUIRED MEMORY in GB"=30 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=14 "LAYER SIZE"=6646985728 "GPU ID"=1 "REQUIRED MEMORY in GB"=28 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=15 "LAYER SIZE"=6646985728 "GPU ID"=2 "REQUIRED MEMORY in GB"=28 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=16 "LAYER SIZE"=7619654656 "GPU ID"=0 "REQUIRED MEMORY in GB"=37 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=17 "LAYER SIZE"=6646985728 "GPU ID"=1 "REQUIRED MEMORY in GB"=34 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=18 "LAYER SIZE"=6646985728 "GPU ID"=2 "REQUIRED MEMORY in GB"=34 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=19 "LAYER SIZE"=7619654656 "GPU ID"=0 "REQUIRED MEMORY in GB"=45 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=20 "LAYER SIZE"=6646985728 "GPU ID"=1 "REQUIRED MEMORY in GB"=40 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=21 "LAYER SIZE"=6646985728 "GPU ID"=2 "REQUIRED MEMORY in GB"=40 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=22 "LAYER SIZE"=7619654656 "GPU ID"=0 "REQUIRED MEMORY in GB"=52 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=23 "LAYER SIZE"=6646985728 "GPU ID"=1 "REQUIRED MEMORY in GB"=46 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=24 "LAYER SIZE"=6646985728 "GPU ID"=2 "REQUIRED MEMORY in GB"=46 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=25 "LAYER SIZE"=7619654656 "GPU ID"=0 "REQUIRED MEMORY in GB"=59 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=26 "LAYER SIZE"=6646985728 "GPU ID"=1 "REQUIRED MEMORY in GB"=52 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=27 "LAYER SIZE"=6646985728 "GPU ID"=2 "REQUIRED MEMORY in GB"=52 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=28 "LAYER SIZE"=7619654656 "GPU ID"=1 "REQUIRED MEMORY in GB"=59 "GPU FREE MEMORY in GB"=63
time=2025-02-02T19:15:01.440Z level=DEBUG source=memory.go:235 msg="CUSTOM DEBUG PRINT:" "LAYER COUNT"=29 "LAYER SIZE"=6646985728 "GPU ID"=0 "REQUIRED MEMORY in GB"=59 "GPU FREE MEMORY in GB"=63
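
For reference, the lines above come from a temporary debug statement I added around llm/memory.go:235. A minimal sketch of it using Go's log/slog (the function and variable names here are mine, not the actual identifiers in memory.go):

```go
import "log/slog"

// debugPlacement mirrors the temporary print added inside the layer-count
// estimation loop in llm/memory.go. The parameter names are illustrative
// stand-ins for the loop state, not upstream's identifiers.
func debugPlacement(layerCount int, layerSize, requiredMem, freeMem uint64, gpuID int) {
	slog.Debug("CUSTOM DEBUG PRINT:",
		"LAYER COUNT", layerCount,
		"LAYER SIZE", layerSize,
		"GPU ID", gpuID,
		"REQUIRED MEMORY in GB", requiredMem/(1<<30),
		"GPU FREE MEMORY in GB", freeMem/(1<<30),
	)
}
```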

As per my experiment, only 24 layers in total can be successfully offloaded onto these GPUs through Ollama and llama.cpp. That is consistent with the per-layer sizes above: the trailing layers are ~6.6-7.6 GB each, so with ~63 GB free per GPU each MI210 fits roughly 8 of them, or about 24 layers across the 3 GPUs.

Ollama MI210 3X GPU Memory allocation fail.txt (https://github.com/user-attachments/files/18633774/Ollama.MI210.3X.GPU.Memory.allocation.fail.txt)

MI210 3xGPU - Force 24 layercount.txt (https://github.com/user-attachments/files/18633779/MI210.3xGPU.-.Force.24.layercount.txt)

Please let me know if I can share any more information that can help in resolving the issue. Appreciate your support!

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.5.7

GiteaMirror added the bug label 2026-05-04 11:34:58 -05:00

@mikessb commented on GitHub (Feb 17, 2025):

Have you resolved this issue? I encountered the same problem on Ollama version 0.5.9!


@itej89 commented on GitHub (Feb 17, 2025):

> Have you resolved this issue? I encountered the same problem on Ollama version 0.5.9!

I am still debugging this one. My initial observation is that Ollama follows two different schemes: one for calculating the tensor-split sizes and another for actually loading the tensors. Please correct me if I am not understanding this the right way.

Scheme 1: Tensor split calculation
https://github.com/ollama/ollama/blob/main/llm/memory.go#L213

Here the code calculates the GPU-offloadable layer count by assigning layers to each GPU in alternation (round-robin) until every GPU runs out of space. It starts with block one, i.e. the first layers of the model (see the sketch below).
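
A minimal Go sketch of that estimation strategy as I read it (simplified, with illustrative names rather than the actual identifiers in memory.go):

```go
// estimateOffloadLayers walks the layers front-to-back, placing each one on
// the next GPU in rotation that still has room, and returns how many layers
// fit. This mirrors Scheme 1: the estimate is driven by the *first* layers
// of the model.
func estimateOffloadLayers(layerSizes, freeMem []uint64) int {
	count, g := 0, 0
	for _, size := range layerSizes {
		placed := false
		for tries := 0; tries < len(freeMem); tries++ {
			if freeMem[g] >= size {
				freeMem[g] -= size
				placed = true
			}
			g = (g + 1) % len(freeMem)
			if placed {
				break
			}
		}
		if !placed {
			break // no GPU can take this layer; stop counting here
		}
		count++
	}
	return count
}
```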

Scheme 2: Loading of tensors
https://github.com/ollama/ollama/blob/main/llama/llama.cpp/src/llama.cpp#L330

Here the code assigns the final layers of the model to the GPUs, instead of the initial layers that the layer-count calculation above uses (rough sketch below).
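
For contrast, a rough Go rendering of what the loading side does (the real logic is C++ in llama.cpp; this is only to show the direction of assignment):

```go
// assignLayerBackends keeps the first (nLayers - nGPULayers) layers on the
// CPU and offloads only the *final* nGPULayers layers, which are then split
// among devices by the tensor-split ratios. So the layers that actually land
// on the GPUs are the large trailing ones, not the small initial ones.
func assignLayerBackends(nLayers, nGPULayers int) []string {
	backends := make([]string, nLayers)
	cutoff := nLayers - nGPULayers
	for i := range backends {
		if i < cutoff {
			backends[i] = "CPU"
		} else {
			backends[i] = "GPU"
		}
	}
	return backends
}
```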

As you can see from my first comment, the first 3 layers are 530100224 bytes each, while the last layers are all either 6646985728 or 7619654656 bytes. Since the initial calculation uses the small first layers, it estimates that more layers can fit than actually do.

When I change Scheme 1 to use the last layers for the computation, the model does load with 26 layers. But I am now facing a different ROCm error during inference, which I am debugging (the estimation change is sketched below).
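
The change I am experimenting with amounts to feeding the estimator the layer sizes in reverse, so the fit check sees the same large trailing layers that llama.cpp will actually offload. Roughly (reusing the estimateOffloadLayers sketch above; this is my illustration, not the exact patch):

```go
// estimateFromTail runs the same round-robin fit check as Scheme 1 but over
// the layer sizes in reverse order, matching the layers llama.cpp actually
// offloads (the large trailing ones) instead of the small initial ones.
func estimateFromTail(layerSizes, freeMem []uint64) int {
	reversed := make([]uint64, len(layerSizes))
	for i, s := range layerSizes {
		reversed[len(layerSizes)-1-i] = s
	}
	return estimateOffloadLayers(reversed, freeMem)
}
```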


@itej89 commented on GitHub (Feb 20, 2025):

> Have you resolved this issue? I encountered the same problem on Ollama version 0.5.9!

I have created this PR to resolve the issue.
https://github.com/ollama/ollama/pull/9243

Reference: github-starred/ollama#67754