[GH-ISSUE #8664] Wrong GPU size calculation for the command-r7b:7b model #5614

Closed
opened 2026-04-12 16:53:00 -05:00 by GiteaMirror · 17 comments
Owner

Originally created by @vvidovic on GitHub (Jan 29, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8664

What is the issue?

I wasn't able to run the command-r7b:7b model while all other, larger models were running successfully.
After some investigation and trial and error, I realized I could fix the issue by creating a new model that offloads fewer layers to the GPU.

Initial state:

$ nvidia-smi 
Wed Jan 29 15:33:17 2025       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01             Driver Version: 535.183.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX A1000 Laptop GPU    Off | 00000000:01:00.0  On |                  N/A |
| N/A   56C    P3               6W /  35W |    149MiB /  4096MiB |     16%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      4937      G   /usr/lib/xorg/Xorg                          143MiB |
+---------------------------------------------------------------------------------------+

Running model, error produced:

$ ollama run command-r7b:7b
Error: llama runner process has terminated: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 1531936768
llama_new_context_with_model: failed to allocate compute buffers

A new model with fewer GPU layers was created using the following modelfile:

# ollama create command-r7b-v:7b -f command-r7.modelfile
FROM command-r7b:7b
PARAMETER num_gpu 17

Successfully running newly created model:

$ ollama run command-r7b-v:7b
>>> /bye

Log information for the error and success cases, produced by journalctl -S today, is attached.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.7

GiteaMirror added the memory and bug labels 2026-04-12 16:53:00 -05:00
Author
Owner

@rick-github commented on GitHub (Jan 29, 2025):

Other workarounds: https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288

Author
Owner

@vvidovic commented on GitHub (Jan 29, 2025):

Other workarounds: https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288

None of those worked for me; only changing num_gpu worked:

$ OLLAMA_GPU_OVERHEAD=536870912 ollama run command-r7b:7b
Error: llama runner process has terminated: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 1531936768
llama_new_context_with_model: failed to allocate compute buffers

$ OLLAMA_FLASH_ATTENTION=1 ollama run command-r7b:7b
Error: llama runner process has terminated: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 1531936768
llama_new_context_with_model: failed to allocate compute buffers

$ GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 ollama run command-r7b:7b
Error: llama runner process has terminated: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 1531936768
llama_new_context_with_model: failed to allocate compute buffers
Author
Owner

@rick-github commented on GitHub (Jan 29, 2025):

These variables need to be set in the server environment (https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server). OLLAMA_GPU_OVERHEAD=536870912 is also just a suggestion; it needs to be adjusted per GPU/model. For example, command-7b:7b-12-2024-fp16 needs more: https://github.com/ollama/ollama/issues/8471#issuecomment-2604624681
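On a systemd-based install, one way to put these into the server environment is a drop-in created with `sudo systemctl edit ollama.service`. A minimal sketch (the values are the ones suggested above; OLLAMA_GPU_OVERHEAD must be tuned per GPU/model):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_GPU_OVERHEAD=536870912"
Environment="OLLAMA_FLASH_ATTENTION=1"
```

followed by `sudo systemctl daemon-reload && sudo systemctl restart ollama` so the runner picks them up.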

Author
Owner

@vvidovic commented on GitHub (Jan 30, 2025):

These variables need to be set in the server environment. OLLAMA_GPU_OVERHEAD=536870912 also just a suggestion, it needs to be adjusted per GPU/model. For example, command-7b:7b-12-2024-fp16 needs more: #8471 (comment)

Thanks a lot for your help; it makes sense that environment variables set for the client don't make any difference.

I did some testing and here are the results for my machine.

  • GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 seems to work best
  • reserving a fixed amount of memory (OLLAMA_GPU_OVERHEAD) works too, but it doesn't seem like a good choice in my case because that approach would also cause models that could fit entirely in the GPU to be split between CPU and GPU
  • I didn't notice any difference when using OLLAMA_FLASH_ATTENTION=1

The only "downside" of GGML_CUDA_ENABLE_UNIFIED_MEMORY is that nvidia-smi then seems to report "wrong" (much smaller) GPU usage for ollama; I'm not sure how that can be. I did quite a few measured experiments and didn't notice this setting affecting the speed of model inference. Here is the nvidia-smi output for comparison when running the ollama run granite3.1-moe:3b "Write 200 words about who you are." command:

# Using GGML_CUDA_ENABLE_UNIFIED_MEMORY
$ nvidia-smi
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      4937      G   /usr/lib/xorg/Xorg                          143MiB |
|    0   N/A  N/A    469496      C   ...rs/cuda_v12_avx/ollama_llama_server       96MiB |
+---------------------------------------------------------------------------------------+

# Without GGML_CUDA_ENABLE_UNIFIED_MEMORY
$ nvidia-smi
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      4937      G   /usr/lib/xorg/Xorg                          143MiB |
|    0   N/A  N/A    469601      C   ...rs/cuda_v12_avx/ollama_llama_server     2944MiB |
+---------------------------------------------------------------------------------------+

In both cases, ollama ps reports that everything is executed on GPU:

$ ollama ps
NAME                 ID              SIZE      PROCESSOR    UNTIL              
granite3.1-moe:3b    df6f6578dba8    3.4 GB    100% GPU     4 minutes from now

By the way, setting GGML_CUDA_ENABLE_UNIFIED_MEMORY to 1 or 0 results in the same behaviour.

Author
Owner

@CHN-STUDENT commented on GitHub (Feb 10, 2025):

@rick-github
Hi, I can't find any GPU settings for the Modelfile in the latest documentation. Could you give some advice?

https://github.com/ollama/ollama/blob/main/docs/modelfile.md#parameter

My server system configuration is EPYC 9654 * 2 | 32 * 24 RAM | 8 * NVIDIA L20 48G | 480G * 2 + 3.84T * 6

root@NF5468:~# nvidia-smi
Mon Feb 10 09:28:52 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.86.10              Driver Version: 570.86.10      CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA L20                     Off |   00000000:01:00.0 Off |                    0 |
| N/A   40C    P0             80W /  350W |   28030MiB /  46068MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA L20                     Off |   00000000:21:00.0 Off |                    0 |
| N/A   40C    P0             79W /  350W |   27296MiB /  46068MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  NVIDIA L20                     Off |   00000000:41:00.0 Off |                    0 |
| N/A   40C    P0             78W /  350W |   27296MiB /  46068MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   3  NVIDIA L20                     Off |   00000000:61:00.0 Off |                    0 |
| N/A   39C    P0             77W /  350W |   27296MiB /  46068MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   4  NVIDIA L20                     Off |   00000000:81:00.0 Off |                    0 |
| N/A   38C    P0             78W /  350W |   27296MiB /  46068MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   5  NVIDIA L20                     Off |   00000000:A1:00.0 Off |                    0 |
| N/A   40C    P0             78W /  350W |   27296MiB /  46068MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   6  NVIDIA L20                     Off |   00000000:C1:00.0 Off |                    0 |
| N/A   39C    P0             78W /  350W |   27296MiB /  46068MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   7  NVIDIA L20                     Off |   00000000:E1:00.0 Off |                    0 |
| N/A   39C    P0             77W /  350W |   27296MiB /  46068MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            5609      G   /usr/lib/xorg/Xorg                        4MiB |
|    0   N/A  N/A           67986      C   ./llama-cli                           28008MiB |
|    1   N/A  N/A            5609      G   /usr/lib/xorg/Xorg                        4MiB |
|    1   N/A  N/A           67986      C   ./llama-cli                           27274MiB |
|    2   N/A  N/A            5609      G   /usr/lib/xorg/Xorg                        4MiB |
|    2   N/A  N/A           67986      C   ./llama-cli                           27274MiB |
|    3   N/A  N/A            5609      G   /usr/lib/xorg/Xorg                        4MiB |
|    3   N/A  N/A           67986      C   ./llama-cli                           27274MiB |
|    4   N/A  N/A            5609      G   /usr/lib/xorg/Xorg                        4MiB |
|    4   N/A  N/A           67986      C   ./llama-cli                           27274MiB |
|    5   N/A  N/A            5609      G   /usr/lib/xorg/Xorg                        4MiB |
|    5   N/A  N/A           67986      C   ./llama-cli                           27274MiB |
|    6   N/A  N/A            5609      G   /usr/lib/xorg/Xorg                        4MiB |
|    6   N/A  N/A           67986      C   ./llama-cli                           27274MiB |
|    7   N/A  N/A            5609      G   /usr/lib/xorg/Xorg                        4MiB |
|    7   N/A  N/A           67986      C   ./llama-cli                           27274MiB |
+-----------------------------------------------------------------------------------------+

Author
Owner

@rick-github commented on GitHub (Feb 10, 2025):

Advice about what?

Author
Owner

@CHN-STUDENT commented on GitHub (Feb 10, 2025):

I followed this article (https://snowkylin.github.io/blogs/a-note-on-deepseek-r1.html) and tried to apply some GPU settings on my server, but I can't find any GPU settings for the Modelfile in the latest documentation. Should I use the default configuration?

Author
Owner

@rick-github commented on GitHub (Feb 10, 2025):

The page you linked documents the required settings. The ollama documentation is being refreshed and doesn't cover everything at the moment. If in doubt, read the source code.

Author
Owner

@CHN-STUDENT commented on GitHub (Feb 10, 2025):

@rick-github Thanks for your help! I'll try it out; if there is a problem I will look at the source code and raise issues!

Author
Owner

@XXXiby commented on GitHub (Feb 11, 2025):

@CHN-STUDENT Hi, have you successfully run it? I also have 8 L20 GPUs, but when using Ollama for inference the GPU didn't work.

Author
Owner

@rick-github commented on GitHub (Feb 11, 2025):

@XXXiby Open a new ticket and attach server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues).

Author
Owner

@XXXiby commented on GitHub (Feb 11, 2025):

@rick-github Okay, here is what I tried. I attempted to run the Qwen2.5-0.5b model with Ollama, but the GPU didn't work.

Image: https://github.com/user-attachments/assets/22c27ef0-f816-4211-83f7-10620c9e1947

Author
Owner

@rick-github commented on GitHub (Feb 11, 2025):

Open a new ticket and attach server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) in text format.

Author
Owner

@XXXiby commented on GitHub (Feb 11, 2025):

@rick-github Okay, here is the text format:
2月 11 20:58:09 node06 ollama[61217]: [GIN] 2025/02/11 - 20:58:09 | 200 | 76.87µs | 127.0.0.1 | HEAD "/"
2月 11 20:58:09 node06 ollama[61217]: [GIN] 2025/02/11 - 20:58:09 | 200 | 3.038298ms | 127.0.0.1 | GET "/api/tags"
2月 11 20:59:24 node06 ollama[61217]: [GIN] 2025/02/11 - 20:59:24 | 200 | 29.966µs | 127.0.0.1 | HEAD "/"
2月 11 20:59:24 node06 ollama[61217]: [GIN] 2025/02/11 - 20:59:24 | 200 | 3.464036ms | 127.0.0.1 | POST "/api/generate"
2月 11 20:59:26 node06 ollama[61217]: [GIN] 2025/02/11 - 20:59:26 | 200 | 1.667816298s | 127.0.0.1 | DELETE "/api/delete"
2月 11 20:59:47 node06 ollama[61217]: [GIN] 2025/02/11 - 20:59:47 | 200 | 53.255µs | 127.0.0.1 | HEAD "/"
2月 11 20:59:47 node06 ollama[61217]: [GIN] 2025/02/11 - 20:59:47 | 200 | 1.604822ms | 127.0.0.1 | GET "/api/tags"
2月 11 21:02:54 node06 ollama[61217]: [GIN] 2025/02/11 - 21:02:54 | 200 | 49.567µs | 127.0.0.1 | HEAD "/"
2月 11 21:03:34 node06 ollama[61217]: [GIN] 2025/02/11 - 21:03:34 | 200 | 19.489µs | 127.0.0.1 | HEAD "/"
2月 11 21:07:02 node06 ollama[61217]: [GIN] 2025/02/11 - 21:07:02 | 200 | 1.234315ms | 127.0.0.1 | POST "/api/blobs/sha256:f7c9b2dba4a296b1aa76c16a34b8225c0c118978400d4bb66bff0902d702f5b8"
2月 11 21:07:02 node06 ollama[61217]: [GIN] 2025/02/11 - 21:07:02 | 200 | 327.977µs | 127.0.0.1 | POST "/api/blobs/sha256:482bd979881423375ca5414e4e0d94cd7c5349dbb17fffd46b4d36d71e62a1bc"
2月 11 21:07:02 node06 ollama[61217]: [GIN] 2025/02/11 - 21:07:02 | 200 | 306.888µs | 127.0.0.1 | POST "/api/blobs/sha256:ca10d7e9fb3ed18575dd1e277a2579c16d108e32f27439684afa0e10b1440910"
2月 11 21:07:02 node06 ollama[61217]: [GIN] 2025/02/11 - 21:07:02 | 200 | 163.273µs | 127.0.0.1 | POST "/api/blobs/sha256:f4212639f8b6e105df9c2feebc2f8ebe6c1bb5cac3e721051b097a6bca76c183"
2月 11 21:07:02 node06 ollama[61217]: [GIN] 2025/02/11 - 21:07:02 | 200 | 199.269µs | 127.0.0.1 | POST "/api/blobs/sha256:f4a175206c507552ac2289934a6e65f4b3abee9d91273b0077a296d606342246"
2月 11 21:07:02 node06 ollama[61217]: [GIN] 2025/02/11 - 21:07:02 | 200 | 229.404µs | 127.0.0.1 | POST "/api/blobs/sha256:cd6fc45c13907b878a0430531c4e588ccc5469978ea56ea31bf4021720819957"
2月 11 21:07:02 node06 ollama[61217]: [GIN] 2025/02/11 - 21:07:02 | 200 | 159.08µs | 127.0.0.1 | POST "/api/blobs/sha256:5cf6fb81bd473eeb55678afbaee79e8d5f8b6e9bc2f942c30ee94721a0a91945"
2月 11 21:07:02 node06 ollama[61217]: [GIN] 2025/02/11 - 21:07:02 | 200 | 130.594µs | 127.0.0.1 | POST "/api/blobs/sha256:a5d3f5eaf7bd20ebe520861dc1dc77e59426ae99584743efc5a380448de9886b"
2月 11 21:07:02 node06 ollama[61217]: time=2025-02-11T21:07:02.437+08:00 level=ERROR source=create.go:467 msg="unsupported content type: text/plain; charset=utf-8"
2月 11 21:07:02 node06 ollama[61217]: [GIN] 2025/02/11 - 21:07:02 | 200 | 2.844018ms | 127.0.0.1 | POST "/api/create"
2月 11 21:11:56 node06 ollama[61217]: [GIN] 2025/02/11 - 21:11:56 | 200 | 49.559µs | 127.0.0.1 | HEAD "/"
2月 11 21:11:56 node06 ollama[61217]: [GIN] 2025/02/11 - 21:11:56 | 200 | 2.541733ms | 127.0.0.1 | GET "/api/tags"
2月 11 21:19:47 node06 ollama[61217]: [GIN] 2025/02/11 - 21:19:47 | 200 | 53.337µs | 127.0.0.1 | HEAD "/"
2月 11 21:23:22 node06 ollama[61217]: [GIN] 2025/02/11 - 21:23:22 | 201 | 4.278771504s | 127.0.0.1 | POST "/api/blobs/sha256:7671c0c304e6ce5a7fc577bcb12aba01e2c155cc2efd29b2213c95b18edaf6ed"
2月 11 21:23:22 node06 ollama[61217]: [GIN] 2025/02/11 - 21:23:22 | 200 | 301.07µs | 127.0.0.1 | POST "/api/blobs/sha256:5cf6fb81bd473eeb55678afbaee79e8d5f8b6e9bc2f942c30ee94721a0a91945"
2月 11 21:23:22 node06 ollama[61217]: [GIN] 2025/02/11 - 21:23:22 | 200 | 322.648µs | 127.0.0.1 | POST "/api/blobs/sha256:f4212639f8b6e105df9c2feebc2f8ebe6c1bb5cac3e721051b097a6bca76c183"
2月 11 21:23:22 node06 ollama[61217]: [GIN] 2025/02/11 - 21:23:22 | 200 | 235.477µs | 127.0.0.1 | POST "/api/blobs/sha256:cd6fc45c13907b878a0430531c4e588ccc5469978ea56ea31bf4021720819957"
2月 11 21:23:22 node06 ollama[61217]: [GIN] 2025/02/11 - 21:23:22 | 200 | 149.617µs | 127.0.0.1 | POST "/api/blobs/sha256:a5d3f5eaf7bd20ebe520861dc1dc77e59426ae99584743efc5a380448de9886b"
2月 11 21:23:22 node06 ollama[61217]: [GIN] 2025/02/11 - 21:23:22 | 200 | 195.862µs | 127.0.0.1 | POST "/api/blobs/sha256:f7c9b2dba4a296b1aa76c16a34b8225c0c118978400d4bb66bff0902d702f5b8"
2月 11 21:23:22 node06 ollama[61217]: [GIN] 2025/02/11 - 21:23:22 | 200 | 194.849µs | 127.0.0.1 | POST "/api/blobs/sha256:482bd979881423375ca5414e4e0d94cd7c5349dbb17fffd46b4d36d71e62a1bc"
2月 11 21:23:22 node06 ollama[61217]: [GIN] 2025/02/11 - 21:23:22 | 200 | 164.235µs | 127.0.0.1 | POST "/api/blobs/sha256:ca10d7e9fb3ed18575dd1e277a2579c16d108e32f27439684afa0e10b1440910"
2月 11 21:23:22 node06 ollama[61217]: [GIN] 2025/02/11 - 21:23:22 | 200 | 145.967µs | 127.0.0.1 | POST "/api/blobs/sha256:f4a175206c507552ac2289934a6e65f4b3abee9d91273b0077a296d606342246"
2月 11 21:23:22 node06 ollama[61217]: time=2025-02-11T21:23:22.032+08:00 level=ERROR source=create.go:467 msg="unsupported content type: text/plain; charset=utf-8"
2月 11 21:23:22 node06 ollama[61217]: [GIN] 2025/02/11 - 21:23:22 | 200 | 2.095143ms | 127.0.0.1 | POST "/api/create"
2月 11 21:24:00 node06 ollama[61217]: [GIN] 2025/02/11 - 21:24:00 | 200 | 48.434µs | 127.0.0.1 | HEAD "/"
2月 11 21:27:27 node06 ollama[61217]: [GIN] 2025/02/11 - 21:27:27 | 200 | 604.993µs | 127.0.0.1 | POST "/api/blobs/sha256:f4212639f8b6e105df9c2feebc2f8ebe6c1bb5cac3e721051b097a6bca76c183"
2月 11 21:27:27 node06 ollama[61217]: [GIN] 2025/02/11 - 21:27:27 | 200 | 180.954ms | 127.0.0.1 | POST "/api/create"
2月 11 21:27:45 node06 ollama[61217]: [GIN] 2025/02/11 - 21:27:45 | 200 | 63.134µs | 127.0.0.1 | HEAD "/"
2月 11 21:27:45 node06 ollama[61217]: [GIN] 2025/02/11 - 21:27:45 | 200 | 2.849958ms | 127.0.0.1 | GET "/api/tags"
2月 11 21:27:55 node06 ollama[61217]: [GIN] 2025/02/11 - 21:27:55 | 200 | 46.083µs | 127.0.0.1 | HEAD "/"
2月 11 21:27:55 node06 ollama[61217]: [GIN] 2025/02/11 - 21:27:55 | 200 | 17.096553ms | 127.0.0.1 | POST "/api/show"
2月 11 21:27:56 node06 ollama[61217]: time=2025-02-11T21:27:56.956+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/mnt/inaisfs/user-fs/node6_pengwenzhong/ollama_model/blobs/sha256-f4212639f8b6e105df9c2feebc2f8ebe6c1bb5cac3e721051b097a6bca76c183 gpu=GPU-88a11034-ed3a-d905-17cc-4bdc29a22c14 parallel=1 available=47370600448 required="29.4 GiB"
2月 11 21:27:58 node06 ollama[61217]: time=2025-02-11T21:27:58.156+08:00 level=INFO source=server.go:104 msg="system memory" total="755.5 GiB" free="745.5 GiB" free_swap="1.9 GiB"
2月 11 21:27:58 node06 ollama[61217]: time=2025-02-11T21:27:58.157+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=8 layers.model=62 layers.offload=8 layers.split="" memory.available="[44.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="252.4 GiB" memory.required.partial="29.4 GiB" memory.required.kv="38.1 GiB" memory.required.allocations="[29.4 GiB]" memory.weights.total="248.0 GiB" memory.weights.repeating="247.3 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="2.2 GiB" memory.graph.partial="3.0 GiB"
2月 11 21:27:58 node06 ollama[61217]: time=2025-02-11T21:27:58.160+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server runner --model /mnt/inaisfs/user-fs/node6_pengwenzhong/ollama_model/blobs/sha256-f4212639f8b6e105df9c2feebc2f8ebe6c1bb5cac3e721051b097a6bca76c183 --ctx-size 8192 --batch-size 512 --n-gpu-layers 8 --threads 64 --parallel 1 --port 44321"
2月 11 21:27:58 node06 ollama[61217]: time=2025-02-11T21:27:58.160+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
2月 11 21:27:58 node06 ollama[61217]: time=2025-02-11T21:27:58.160+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
2月 11 21:27:58 node06 ollama[61217]: time=2025-02-11T21:27:58.161+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
2月 11 21:27:58 node06 ollama[61217]: time=2025-02-11T21:27:58.165+08:00 level=INFO source=runner.go:936 msg="starting go runner"
2月 11 21:27:58 node06 ollama[61217]: time=2025-02-11T21:27:58.167+08:00 level=INFO source=runner.go:937 msg=system info="CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 cgo(gcc)" threads=64
2月 11 21:27:58 node06 ollama[61217]: time=2025-02-11T21:27:58.167+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:44321"
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: loaded meta data with 48 key-value pairs and 1025 tensors from /mnt/inaisfs/user-fs/node6_pengwenzhong/ollama_model/blobs/sha256-f4212639f8b6e105df9c2feebc2f8ebe6c1bb5cac3e721051b097a6bca76c183 (version GGUF V3 (latest))
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 0: general.architecture str = deepseek2
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 1: general.type str = model
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 2: general.name str = DeepSeek R1 BF16
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 3: general.quantized_by str = Unsloth
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 4: general.size_label str = 256x20B
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 5: general.repo_url str = https://huggingface.co/unsloth
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 6: deepseek2.block_count u32 = 61
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 7: deepseek2.context_length u32 = 163840
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 8: deepseek2.embedding_length u32 = 7168
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 9: deepseek2.feed_forward_length u32 = 18432
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 10: deepseek2.attention.head_count u32 = 128
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 11: deepseek2.attention.head_count_kv u32 = 128
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 12: deepseek2.rope.freq_base f32 = 10000.000000
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 13: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 14: deepseek2.expert_used_count u32 = 8
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 15: deepseek2.leading_dense_block_count u32 = 3
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 16: deepseek2.vocab_size u32 = 129280
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 17: deepseek2.attention.q_lora_rank u32 = 1536
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 18: deepseek2.attention.kv_lora_rank u32 = 512
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 19: deepseek2.attention.key_length u32 = 192
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 20: deepseek2.attention.value_length u32 = 128
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 21: deepseek2.expert_feed_forward_length u32 = 2048
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 22: deepseek2.expert_count u32 = 256
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 23: deepseek2.expert_shared_count u32 = 1
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 24: deepseek2.expert_weights_scale f32 = 2.500000
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 25: deepseek2.expert_weights_norm bool = true
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 26: deepseek2.expert_gating_func u32 = 2
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 27: deepseek2.rope.dimension_count u32 = 64
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 28: deepseek2.rope.scaling.type str = yarn
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 29: deepseek2.rope.scaling.factor f32 = 40.000000
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 30: deepseek2.rope.scaling.original_context_length u32 = 4096
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 31: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.100000
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 32: tokenizer.ggml.model str = gpt2
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 33: tokenizer.ggml.pre str = deepseek-v3
2月 11 21:27:58 node06 ollama[61217]: [132B blob data]
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 35: tokenizer.ggml.token_type arr[i32,129280] = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 36: tokenizer.ggml.merges arr[str,127741] = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 37: tokenizer.ggml.bos_token_id u32 = 0
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 38: tokenizer.ggml.eos_token_id u32 = 1
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 39: tokenizer.ggml.padding_token_id u32 = 128815
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 40: tokenizer.ggml.add_bos_token bool = true
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 41: tokenizer.ggml.add_eos_token bool = false
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 42: tokenizer.chat_template str = {% if not add_generation_prompt is de...
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 43: general.quantization_version u32 = 2
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 44: general.file_type u32 = 10
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 45: split.no u16 = 0
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 46: split.tensors.count i32 = 1025
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - kv 47: split.count u16 = 0
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - type f32: 361 tensors
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - type q2_K: 171 tensors
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - type q3_K: 3 tensors
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - type q4_K: 306 tensors
2月 11 21:27:58 node06 ollama[61217]: llama_model_loader: - type q6_K: 184 tensors
2月 11 21:27:58 node06 ollama[61217]: time=2025-02-11T21:27:58.412+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
2月 11 21:27:58 node06 ollama[61217]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2月 11 21:27:58 node06 ollama[61217]: llm_load_vocab: special tokens cache size = 819
2月 11 21:27:58 node06 ollama[61217]: llm_load_vocab: token to piece cache size = 0.8223 MB
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: format = GGUF V3 (latest)
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: arch = deepseek2
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: vocab type = BPE
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_vocab = 129280
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_merges = 127741
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: vocab_only = 0
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_ctx_train = 163840
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_embd = 7168
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_layer = 61
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_head = 128
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_head_kv = 128
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_rot = 64
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_swa = 0
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_embd_head_k = 192
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_embd_head_v = 128
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_gqa = 1
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_embd_k_gqa = 24576
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_embd_v_gqa = 16384
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: f_norm_eps = 0.0e+00
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: f_norm_rms_eps = 1.0e-06
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: f_logit_scale = 0.0e+00
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_ff = 18432
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_expert = 256
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_expert_used = 8
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: causal attn = 1
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: pooling type = 0
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: rope type = 0
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: rope scaling = yarn
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: freq_base_train = 10000.0
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: freq_scale_train = 0.025
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_ctx_orig_yarn = 4096
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: rope_finetuned = unknown
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: ssm_d_conv = 0
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: ssm_d_inner = 0
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: ssm_d_state = 0
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: ssm_dt_rank = 0
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: ssm_dt_b_c_rms = 0
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: model type = 671B
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: model ftype = Q2_K - Medium
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: model params = 671.03 B
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: model size = 211.03 GiB (2.70 BPW)
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: general.name = DeepSeek R1 BF16
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: BOS token = 0 '<|begin▁of▁sentence|>'
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: EOS token = 1 '<|end▁of▁sentence|>'
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: EOT token = 1 '<|end▁of▁sentence|>'
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: PAD token = 128815 '<|PAD▁TOKEN|>'
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: LF token = 131 'Ä'
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: FIM PRE token = 128801 '<|fim▁begin|>'
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: FIM SUF token = 128800 '<|fim▁hole|>'
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: FIM MID token = 128802 '<|fim▁end|>'
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: EOG token = 1 '<|end▁of▁sentence|>'
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: max token length = 256
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_layer_dense_lead = 3
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_lora_q = 1536
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_lora_kv = 512
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_ff_exp = 2048
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: n_expert_shared = 1
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: expert_weights_scale = 2.5
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: expert_weights_norm = 1
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: expert_gating_func = sigmoid
2月 11 21:27:58 node06 ollama[61217]: llm_load_print_meta: rope_yarn_log_mul = 0.1000
2月 11 21:32:58 node06 ollama[61217]: time=2025-02-11T21:32:58.252+08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - "
2月 11 21:32:58 node06 ollama[61217]: [GIN] 2025/02/11 - 21:32:58 | 500 | 5m2s | 127.0.0.1 | POST "/api/generate"
2月 11 21:33:03 node06 ollama[61217]: time=2025-02-11T21:33:03.424+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.172500198 model=/mnt/inaisfs/user-fs/node6_pengwenzhong/ollama_model/blobs/sha256-f4212639f8b6e105df9c2feebc2f8ebe6c1bb5cac3e721051b097a6bca76c183
2月 11 21:33:04 node06 ollama[61217]: time=2025-02-11T21:33:04.595+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=6.342834102 model=/mnt/inaisfs/user-fs/node6_pengwenzhong/ollama_model/blobs/sha256-f4212639f8b6e105df9c2feebc2f8ebe6c1bb5cac3e721051b097a6bca76c183
2月 11 21:33:05 node06 ollama[61217]: time=2025-02-11T21:33:05.774+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=7.522595338 model=/mnt/inaisfs/user-fs/node6_pengwenzhong/ollama_model/blobs/sha256-f4212639f8b6e105df9c2feebc2f8ebe6c1bb5cac3e721051b097a6bca76c183
2月 11 21:43:36 node06 systemd[1]: Stopping Ollama Service...
2月 11 21:43:39 node06 systemd[1]: ollama.service: Deactivated successfully.
2月 11 21:43:39 node06 systemd[1]: Stopped Ollama Service.
2月 11 21:43:39 node06 systemd[1]: ollama.service: Consumed 9min 59.836s CPU time.

<!-- gh-comment-id:2650990683 --> @XXXiby commented on GitHub (Feb 11, 2025): @rick-github Okay, here is the text format:
model=/mnt/inaisfs/user-fs/node6_pengwenzhong/ollama_odel/blobs/sha256-f4212639f8b6e105df9c2feebc2f8ebe6c1bb5cac3e721051b097a6bca76c183 2月 11 21:33:05 node06 ollama[61217]: time=2025-02-11T21:33:05.774+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=7.522595338 model=/mnt/inaisfs/user-fs/node6_pengwenzhong/ollama_odel/blobs/sha256-f4212639f8b6e105df9c2feebc2f8ebe6c1bb5cac3e721051b097a6bca76c183 2月 11 21:43:36 node06 systemd[1]: Stopping Ollama Service... 2月 11 21:43:39 node06 systemd[1]: ollama.service: Deactivated successfully. 2月 11 21:43:39 node06 systemd[1]: Stopped Ollama Service. 2月 11 21:43:39 node06 systemd[1]: ollama.service: Consumed 9min 59.836s CPU time.
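The log above shows the load aborting at Ollama's default five-minute limit (`timed out waiting for llama runner to start`, with the request failing after `5m2s`) while loading a 211 GiB model from network storage. One hedged workaround, assuming an Ollama build that honors the `OLLAMA_LOAD_TIMEOUT` environment variable, is a systemd drop-in that extends the load window:

```ini
# /etc/systemd/system/ollama.service.d/load-timeout.conf
# Sketch only: assumes this Ollama version supports OLLAMA_LOAD_TIMEOUT
# and was installed with the standard systemd unit.
[Service]
Environment="OLLAMA_LOAD_TIMEOUT=30m"
```

After creating the drop-in, run `sudo systemctl daemon-reload && sudo systemctl restart ollama` so the new environment takes effect.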
@XXXiby commented on GitHub (Feb 11, 2025):

@rick-github I just checked the information, is it because Ollama currently doesn't support L20 GPUs?
![Image](https://github.com/user-attachments/assets/a54f87ff-e259-4198-9358-c74c31e2c70c)


@CHN-STUDENT commented on GitHub (Feb 12, 2025):

@XXXiby Yes, I ran [deepseek R1-IQ2-XXS](https://huggingface.co/unsloth/DeepSeek-R1-GGUF) on our server with Ollama successfully.
This is my note: https://gist.github.com/CHN-STUDENT/965aa7f30f9734fb1271cbce3d69cd1f
Have you installed the CUDA toolkit and GPU drivers? You can use `nvidia-smi` to check the GPU status.

Since IQ2 filled the VRAM and wasn't very pleasant to use, I'm now downloading IQ1 to try.

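For reference, importing a merged GGUF quant like the one linked above into Ollama typically goes through a Modelfile. The file name and `num_gpu` value below are illustrative assumptions, not values from this thread:

```
# Modelfile (sketch; file name and layer count are hypothetical)
FROM ./DeepSeek-R1-UD-IQ1_S.gguf
PARAMETER num_gpu 32
```

Then `ollama create deepseek-r1-iq1 -f Modelfile` registers the model, and lowering `num_gpu` offloads fewer layers to the GPU if VRAM is tight.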

@XXXiby commented on GitHub (Feb 12, 2025):

@CHN-STUDENT Thank you for your reply.
It seems there were some issues with my Ollama installation.
I successfully ran inference on the R1-IQ2 model with llama.cpp and was able to serve its API.

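Serving the quant with llama.cpp, as described above, would look roughly like the following; the GGUF file name and layer count are assumptions, not details from this thread:

```shell
# Sketch of serving a GGUF quant with llama.cpp's llama-server.
# File name and --n-gpu-layers value are hypothetical; adjust to your
# hardware and the actual (merged) GGUF shards you downloaded.
./llama-server -m DeepSeek-R1-UD-IQ2_XXS-00001-of-00004.gguf \
  --n-gpu-layers 62 \
  --host 0.0.0.0 --port 8080
```

`llama-server` exposes an OpenAI-compatible HTTP API on the given host and port, which matches the "provide the API server" usage mentioned in the comment.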
Reference: github-starred/ollama#5614