[GH-ISSUE #9068] ps show 100%GPU but running on CPU #52416

Closed
opened 2026-04-28 23:11:32 -05:00 by GiteaMirror · 15 comments

Originally created by @GeofferyGeng on GitHub (Feb 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9068

What is the issue?

Description

ollama ps shows 100% GPU, but the model is actually running on the CPU

trying

  1. driver reload: failed (see the sketch after the log excerpt below)
~# sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm
rmmod: ERROR: Module nvidia_uvm is in use
  2. add envs, all failed
Environment="CUDA_VISIBLE_DEVICES=0"
Environment="OLLAMA_GPU_LAYER=cuda"
Environment="OLLAMA_DEBUG=1"
Environment="OLLAMA_MAX_LOADED_MODELS=1"
Environment="OLLAMA_LLM_LIBRARY=cuda_v12.4"
  3. check log
    I'm confused about why Ollama found the GPU but ultimately didn't use it.
Feb 13 17:52:15 aigc-host-2 ollama[210496]: time=2025-02-13T17:52:15.288+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="187.4 GiB" before.free="180.0 GiB" before.free_swap="0 B" now.total="187.4 GiB" now.free="180.0 GiB" now.free_swap="0 B"
Feb 13 17:52:15 aigc-host-2 ollama[210496]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.54.14
Feb 13 17:52:15 aigc-host-2 ollama[210496]: dlsym: cuInit - 0x7fd4356fcb50
Feb 13 17:52:15 aigc-host-2 ollama[210496]: dlsym: cuDriverGetVersion - 0x7fd4356fcb70
Feb 13 17:52:15 aigc-host-2 ollama[210496]: dlsym: cuDeviceGetCount - 0x7fd4356fcbb0
Feb 13 17:52:15 aigc-host-2 ollama[210496]: dlsym: cuDeviceGet - 0x7fd4356fcb90
Feb 13 17:52:15 aigc-host-2 ollama[210496]: dlsym: cuDeviceGetAttribute - 0x7fd4356fcc90
Feb 13 17:52:15 aigc-host-2 ollama[210496]: dlsym: cuDeviceGetUuid - 0x7fd4356fcbf0
Feb 13 17:52:15 aigc-host-2 ollama[210496]: dlsym: cuDeviceGetName - 0x7fd4356fcbd0
Feb 13 17:52:15 aigc-host-2 ollama[210496]: dlsym: cuCtxCreate_v3 - 0x7fd4356fce70
Feb 13 17:52:15 aigc-host-2 ollama[210496]: dlsym: cuMemGetInfo_v2 - 0x7fd435706af0
Feb 13 17:52:15 aigc-host-2 ollama[210496]: dlsym: cuCtxDestroy - 0x7fd435761770
Feb 13 17:52:15 aigc-host-2 ollama[210496]: calling cuInit
Feb 13 17:52:15 aigc-host-2 ollama[210496]: calling cuDriverGetVersion
Feb 13 17:52:15 aigc-host-2 ollama[210496]: raw version 0x2f08
Feb 13 17:52:15 aigc-host-2 ollama[210496]: CUDA driver version: 12.4
Feb 13 17:52:15 aigc-host-2 ollama[210496]: calling cuDeviceGetCount
Feb 13 17:52:15 aigc-host-2 ollama[210496]: device count 1
Feb 13 17:52:15 aigc-host-2 ollama[210496]: time=2025-02-13T17:52:15.508+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-8aecca6d-0c8a-630e-b8dc-c5f7276f0389 name="NVIDIA GeForce RTX 2080 Ti" overhead="0 B" before.total="10.6 GiB" before.free="10.4 GiB" now.total="10.6 GiB" now.free="10.4 GiB" now.used="157.8 MiB"
Feb 13 17:52:15 aigc-host-2 ollama[210496]: releasing cuda driver library
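For the driver reload attempt in step 1, here is a minimal sketch of what usually frees the module, assuming the only process holding nvidia_uvm is the ollama service itself (check with lsof first); the device paths are the stock ones and may differ on other systems:

sudo lsof /dev/nvidia-uvm /dev/nvidia0          # see which processes keep the module busy
sudo systemctl stop ollama                      # stop the holder
sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm
sudo systemctl start ollama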

Log

command

ollama run deepseek-r1-local:7b

ps

ollama ps
NAME                    ID              SIZE      PROCESSOR    UNTIL
deepseek-r1-local:7b    5fe3dc4023d1    6.0 GB    100% GPU     4 minutes from now

nvidia-smi

Thu Feb 13 19:48:20 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.14              Driver Version: 550.54.14      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 2080 Ti     Off |   00000000:65:00.0 Off |                  N/A |
| 22%   26C    P8              8W /  250W |       3MiB /  11264MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
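As a side-by-side check of the mismatch, keep a prompt running in one terminal and compare the two views in another; a rough sketch using the model from this report:

ollama run deepseek-r1-local:7b "hello"         # terminal 1: keep the model busy
ollama ps                                       # terminal 2: the scheduler's view
nvidia-smi                                      # terminal 2: the Processes table should list an ollama PID if the GPU is really used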

htop

Image: https://github.com/user-attachments/assets/ae0bc862-aff4-47a0-880d-9d466d092f57

env

  • OS: Ubuntu 22.04 5.15.0-124-generic
  • GPU: Nvidia RTX 2080 Ti
  • CPU: Intel
  • Ollama version: 0.5.8-rc7

installation

I installed Ollama using the following steps:

  1. download ollama-linux-amd64.tgz
  2. download install.sh
  3. modify install.sh to skip the download and extract the tarball directly (a sketch of the resulting layout is below)
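For reference, a minimal sketch of what the modified install ends up doing, assuming the tarball has bin/ and lib/ollama/ at its top level and /usr/local as the prefix (as in this report):

sudo tar -C /usr/local -xzf ollama-linux-amd64.tgz
ls /usr/local/bin/ollama /usr/local/lib/ollama    # binary and libraries must stay side by side
sudo systemctl restart ollama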

log

journalctl -u ollama --no-pager

debug.log: https://github.com/user-attachments/files/18783393/debug.log

Relevant log output


OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.8-rc7

GiteaMirror added the bug label 2026-04-28 23:11:32 -05:00

@rick-github commented on GitHub (Feb 13, 2025):

modify install.sh, skip download and tar directly

Did you move the location of ollama? Ollama finds runners relative to the path of the binary; if you modified the location of the binary or libraries, ollama will fall back to using a CPU runner.

/xxx/yyy/bin/ollama
/xxx/yyy/lib/ollama/runners

https://github.com/ollama/ollama/issues/8532#issuecomment-2616281903
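A quick way to check that the relative layout still holds for the installed binary; a sketch, assuming ollama is on PATH:

BIN=$(readlink -f "$(command -v ollama)")
ls "$(dirname "$BIN")/../lib/ollama"              # runner/library directories should appear here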


@GeofferyGeng commented on GitHub (Feb 14, 2025):

modify install.sh, skip download and tar directly

Did you move the location of ollama? Ollama finds runners relative to the path of the binary; if you modified the location of the binary or libraries, ollama will fall back to using a CPU runner.

/xxx/yyy/bin/ollama /xxx/yyy/lib/ollama/runners

#8532 (comment)

@rick-github Thank you for your response!

I checked my installation:

/usr/local/bin/ollama*
/usr/local/lib/ollama/
├── cuda_v11
│   ├── libcublas.so.11 -> libcublas.so.11.5.1.109
│   ├── libcublas.so.11.5.1.109
│   ├── libcublasLt.so.11 -> libcublasLt.so.11.5.1.109
│   ├── libcublasLt.so.11.5.1.109
│   ├── libcudart.so.11.0 -> libcudart.so.11.3.109
│   ├── libcudart.so.11.3.109
│   └── libggml-cuda.so
└── cuda_v12
    ├── libcublas.so.12 -> libcublas.so.12.4.5.8
    ├── libcublas.so.12.4.5.8
    ├── libcublasLt.so.12 -> libcublasLt.so.12.4.5.8
    ├── libcublasLt.so.12.4.5.8
    ├── libcudart.so.12 -> libcudart.so.12.4.127
    ├── libcudart.so.12.4.127
    └── libggml-cuda.so

The PATH environment is set in the service file:

[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/opt/mambaforge/condabin:/usr/local/sbin:/usr/local/bin:/usr/local/lib:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib:/usr/games:/usr/local/games:/snap/bin:/usr/local/cuda/bin:/usr/mpi/gcc/openmpi-4.1.7a1/bin:"
Environment="CUDA_VISIBLE_DEVICES=0"
Environment="OLLAMA_GPU_LAYER=cuda"
Environment="OLLAMA_DEBUG=1"
Environment="OLLAMA_MAX_LOADED_MODELS=1"
Environment="OLLAMA_LLM_LIBRARY=cuda_v12.4"

[Install]
WantedBy=default.target

Should I rename 'cuda_v12' to 'runners'?


@rick-github commented on GitHub (Feb 14, 2025):

No, the layout just changed with the 0.5.9 release (and maybe 0.5.8, I didn't install that one); /usr/local/lib/ollama/{cuda_v11,cuda_v12} looks right. If you can include a full server log, there might be relevant information.


@rick-github commented on GitHub (Feb 14, 2025):

Just noticed Environment="OLLAMA_LLM_LIBRARY=cuda_v12.4"; that's not the right way to set that variable. It should be:

Environment="OLLAMA_LLM_LIBRARY=cuda_v12"

@GeofferyGeng commented on GitHub (Feb 14, 2025):

No, the layout has just changed with the 0.5.9 (and maybe 0.5.8, didn't install that one) release, /usr/local/lib/ollama/{cuda_v11,cuda_v12} looks right. If you can include a full server log there might be relevant information.

This log was collected with journalctl -u ollama --no-pager:

debug.log: https://github.com/user-attachments/files/18792643/debug.log

I have checked the log and did not find the CPU-runner messages that are shown in issue #8532.

Furthermore, I corrected the OLLAMA_LLM_LIBRARY environment variable to "cuda_v12"; sadly, it didn't work.
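For anyone checking the same thing, a rough grep over the journal that surfaces the GPU-detection and runner-selection lines (the message texts are taken from the logs in this thread and may change between releases):

journalctl -u ollama --no-pager | grep -Ei 'looking for compatible GPUs|inference compute|load_backend|offload|llm server'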


@arkerwu commented on GitHub (Feb 14, 2025):

try 0.5.10


@driversti commented on GitHub (Feb 14, 2025):

I see similar behavior on an Nvidia Jetson Orin Nano with v0.5.10:

ollama ps
NAME                ID              SIZE      PROCESSOR    UNTIL
deepseek-r1:1.5b    a42b25d8c10a    1.6 GB    100% CPU     3 minutes from now
llama3.2:latest     a80c4f17acd5    3.5 GB    100% CPU     Stopping...

ollama -v
ollama version is 0.5.10

@GeofferyGeng commented on GitHub (Feb 14, 2025):

I have changed the installation file and it works.

v0.5.7 works well.


@GeofferyGeng commented on GitHub (Feb 14, 2025):

The v0.5.8 tgz file seems to have the wrong folder structure.
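A quick way to confirm the archive layout before extracting, i.e. whether bin/ and lib/ollama/ sit at the top level of the tgz:

tar -tzf ollama-linux-amd64.tgz | head -n 20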


@lixk666 commented on GitHub (Feb 14, 2025):

I have changed the installation file and it works.

v0.5.7 works well.

How did you do it? Can you describe the steps? I have had no success on 0.5.10 or 0.5.7; ollama ps shows 100% GPU, but it actually uses the CPU entirely. Thank you!


@bumblebee-code-gh commented on GitHub (Mar 4, 2025):

I have changed the installation file and it works.
v0.5.7 works well.

How did you do it? Can you describe the steps? I have had no success on 0.5.10 or 0.5.7; ollama ps shows 100% GPU, but it actually uses the CPU entirely. Thank you!

I also encountered the same problem. Did you solve it?


@rick-github commented on GitHub (Mar 4, 2025):

Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may help in debugging the problem.


@bumblebee-code-gh commented on GitHub (Mar 4, 2025):

Server logs may help in debugging the problem.

Hello, this is my service's log. I can't find where the problem is. Could you help me take a look?

2025/03/04 17:25:38 routes.go:1205: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:true ROCR_VISIBLE_DEVICES:]"
time=2025-03-04T17:25:38.369+08:00 level=INFO source=images.go:432 msg="total blobs: 12"
time=2025-03-04T17:25:38.371+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-04T17:25:38.374+08:00 level=INFO source=routes.go:1256 msg="Listening on [::]:11434 (version 0.5.12)"
time=2025-03-04T17:25:38.374+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-04T17:25:38.374+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=2
time=2025-03-04T17:25:38.374+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=64 efficiency=0 threads=128
time=2025-03-04T17:25:38.374+08:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=64 efficiency=0 threads=128
time=2025-03-04T17:25:38.922+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-eb5918f2-8024-dba1-a6b5-113a975124bc library=cuda compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" overhead="3.9 GiB"
time=2025-03-04T17:25:39.273+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-3c3feaa6-4a63-6892-5578-b41c9cc0a1c1 library=cuda compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" overhead="3.5 GiB"
time=2025-03-04T17:25:39.275+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-eb5918f2-8024-dba1-a6b5-113a975124bc library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" total="24.0 GiB" available="19.7 GiB"
time=2025-03-04T17:25:39.275+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-3c3feaa6-4a63-6892-5578-b41c9cc0a1c1 library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" total="24.0 GiB" available="19.7 GiB"
[GIN] 2025/03/04 - 17:25:41 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:25:41 | 200 |       497.7µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/03/04 - 17:25:42 | 200 |         498µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:25:42 | 200 |     51.2739ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/03/04 - 17:25:55 | 200 |      3.9825ms |   172.16.10.124 | GET      "/api/tags"
[GIN] 2025/03/04 - 17:25:55 | 200 |         497µs |   172.16.10.124 | GET      "/api/version"
[GIN] 2025/03/04 - 17:26:05 | 200 |      3.9824ms |   172.16.10.124 | GET      "/api/tags"
[GIN] 2025/03/04 - 17:26:09 | 200 |      1.9913ms |   172.16.10.124 | GET      "/api/tags"
[GIN] 2025/03/04 - 17:26:09 | 200 |       496.9µs |   172.16.10.124 | GET      "/api/version"
time=2025-03-04T17:26:21.800+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-04T17:26:21.800+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-04T17:26:21.801+08:00 level=INFO source=sched.go:731 msg="new model will fit in available VRAM, loading" model=D:\ollama\models\blobs\sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93 library=cuda parallel=1 required="21.9 GiB"
time=2025-03-04T17:26:21.819+08:00 level=INFO source=server.go:97 msg="system memory" total="511.7 GiB" free="360.9 GiB" free_swap="464.5 GiB"
time=2025-03-04T17:26:21.820+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-04T17:26:21.820+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-04T17:26:21.820+08:00 level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=65 layers.split=33,32 memory.available="[19.7 GiB 19.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="21.9 GiB" memory.required.partial="21.9 GiB" memory.required.kv="512.0 MiB" memory.required.allocations="[11.2 GiB 10.6 GiB]" memory.weights.total="18.0 GiB" memory.weights.repeating="17.4 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="916.1 MiB" memory.graph.partial="916.1 MiB"
time=2025-03-04T17:26:21.830+08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model D:\\ollama\\models\\blobs\\sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93 --ctx-size 2048 --batch-size 512 --n-gpu-layers 65 --threads 128 --no-mmap --parallel 1 --tensor-split 33,32 --port 62844"
time=2025-03-04T17:26:21.846+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-04T17:26:21.846+08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-03-04T17:26:21.847+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-03-04T17:26:21.889+08:00 level=INFO source=runner.go:932 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
  Device 1: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
time=2025-03-04T17:26:23.838+08:00 level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | cgo(clang)" threads=128
time=2025-03-04T17:26:23.840+08:00 level=INFO source=runner.go:993 msg="Server listening on 127.0.0.1:62844"
time=2025-03-04T17:26:23.853+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 4090) - 20140 MiB free
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 4090) - 20140 MiB free
llama_model_loader: loaded meta data with 26 key-value pairs and 771 tensors from D:\ollama\models\blobs\sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 32B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 32B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 64
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 27648
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = deepseek-r1-qwen
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  321 tensors
llama_model_loader: - type q4_K:  385 tensors
llama_model_loader: - type q6_K:   65 tensors
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 152064
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 5120
llm_load_print_meta: n_layer          = 64
llm_load_print_meta: n_head           = 40
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 5
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 27648
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 32B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 32.76 B
llm_load_print_meta: model size       = 18.48 GiB (4.85 BPW) 
llm_load_print_meta: general.name     = DeepSeek R1 Distill Qwen 32B
llm_load_print_meta: BOS token        = 151646 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 64 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 65/65 layers to GPU
llm_load_tensors:        CUDA0 model buffer size =  9211.25 MiB
llm_load_tensors:        CUDA1 model buffer size =  9297.10 MiB
llm_load_tensors:          CPU model buffer size =   417.66 MiB
llama_new_context_with_model: n_seq_max     = 1
llama_new_context_with_model: n_ctx         = 2048
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 512
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 1000000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 64, can_shift = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   264.00 MiB
llama_kv_cache_init:      CUDA1 KV buffer size =   248.00 MiB
llama_new_context_with_model: KV self size  =  512.00 MiB, K (f16):  256.00 MiB, V (f16):  256.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.60 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model:      CUDA0 compute buffer size =   256.01 MiB
llama_new_context_with_model:      CUDA1 compute buffer size =   363.02 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    26.02 MiB
llama_new_context_with_model: graph nodes  = 2246
llama_new_context_with_model: graph splits = 3
time=2025-03-04T17:26:30.126+08:00 level=INFO source=server.go:596 msg="llama runner started in 8.28 seconds"
[GIN] 2025/03/04 - 17:26:44 | 200 |   22.5157695s |   172.16.10.124 | POST     "/api/chat"
[GIN] 2025/03/04 - 17:26:46 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:26:46 | 200 |      3.9828ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/03/04 - 17:26:47 | 200 |       498.4µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:26:47 | 200 |       497.8µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/03/04 - 17:26:51 | 200 |       535.4µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:26:51 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/03/04 - 17:26:51 | 200 |   30.0247217s |   172.16.10.124 | POST     "/api/chat"
time=2025-03-04T17:26:52.661+08:00 level=WARN source=ggml.go:132 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-04T17:26:52.661+08:00 level=WARN source=ggml.go:132 msg="key not found" key=bert.attention.key_length default=64
time=2025-03-04T17:26:52.661+08:00 level=WARN source=ggml.go:132 msg="key not found" key=bert.attention.value_length default=64
time=2025-03-04T17:26:52.662+08:00 level=WARN source=ggml.go:132 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-04T17:26:52.662+08:00 level=INFO source=sched.go:731 msg="new model will fit in available VRAM, loading" model=D:\ollama\models\blobs\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c library=cuda parallel=1 required="2.1 GiB"
time=2025-03-04T17:26:52.679+08:00 level=INFO source=server.go:97 msg="system memory" total="511.7 GiB" free="360.8 GiB" free_swap="464.5 GiB"
time=2025-03-04T17:26:52.680+08:00 level=WARN source=ggml.go:132 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-04T17:26:52.680+08:00 level=WARN source=ggml.go:132 msg="key not found" key=bert.attention.key_length default=64
time=2025-03-04T17:26:52.680+08:00 level=WARN source=ggml.go:132 msg="key not found" key=bert.attention.value_length default=64
time=2025-03-04T17:26:52.681+08:00 level=WARN source=ggml.go:132 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-04T17:26:52.681+08:00 level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=25 layers.split=13,12 memory.available="[19.7 GiB 19.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="2.1 GiB" memory.required.partial="2.1 GiB" memory.required.kv="12.0 MiB" memory.required.allocations="[1.3 GiB 808.2 MiB]" memory.weights.total="589.2 MiB" memory.weights.repeating="100.9 MiB" memory.weights.nonrepeating="488.3 MiB" memory.graph.full="32.0 MiB" memory.graph.partial="32.0 MiB"
time=2025-03-04T17:26:52.690+08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model D:\\ollama\\models\\blobs\\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c --ctx-size 2048 --batch-size 512 --n-gpu-layers 25 --threads 128 --no-mmap --parallel 1 --tensor-split 13,12 --port 62859"
time=2025-03-04T17:26:52.701+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-04T17:26:52.701+08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-03-04T17:26:52.701+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-03-04T17:26:52.752+08:00 level=INFO source=runner.go:932 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
  Device 1: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
time=2025-03-04T17:26:53.206+08:00 level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | cgo(clang)" threads=128
time=2025-03-04T17:26:53.207+08:00 level=INFO source=runner.go:993 msg="Server listening on 127.0.0.1:62859"
time=2025-03-04T17:26:53.456+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 4090) - 20140 MiB free
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 4090) - 20140 MiB free
llama_model_loader: loaded meta data with 33 key-value pairs and 389 tensors from D:\ollama\models\blobs\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = bert
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                         general.size_label str              = 567M
llama_model_loader: - kv   3:                            general.license str              = mit
llama_model_loader: - kv   4:                               general.tags arr[str,4]       = ["sentence-transformers", "feature-ex...
llama_model_loader: - kv   5:                           bert.block_count u32              = 24
llama_model_loader: - kv   6:                        bert.context_length u32              = 8192
llama_model_loader: - kv   7:                      bert.embedding_length u32              = 1024
llama_model_loader: - kv   8:                   bert.feed_forward_length u32              = 4096
llama_model_loader: - kv   9:                  bert.attention.head_count u32              = 16
llama_model_loader: - kv  10:          bert.attention.layer_norm_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                          general.file_type u32              = 1
llama_model_loader: - kv  12:                      bert.attention.causal bool             = false
llama_model_loader: - kv  13:                          bert.pooling_type u32              = 2
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = t5
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,250002]  = ["<s>", "<pad>", "</s>", "<unk>", ","...
llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,250002]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,250002]  = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:            tokenizer.ggml.add_space_prefix bool             = true
llama_model_loader: - kv  20:            tokenizer.ggml.token_type_count u32              = 1
llama_model_loader: - kv  21:    tokenizer.ggml.remove_extra_whitespaces bool             = true
llama_model_loader: - kv  22:        tokenizer.ggml.precompiled_charsmap arr[u8,237539]   = [0, 180, 2, 0, 0, 132, 0, 0, 0, 0, 0,...
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  25:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  26:          tokenizer.ggml.seperator_token_id u32              = 2
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  28:                tokenizer.ggml.cls_token_id u32              = 0
llama_model_loader: - kv  29:               tokenizer.ggml.mask_token_id u32              = 250001
llama_model_loader: - kv  30:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  31:               tokenizer.ggml.add_eos_token bool             = true
llama_model_loader: - kv  32:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  244 tensors
llama_model_loader: - type  f16:  145 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 4
llm_load_vocab: token to piece cache size = 2.1668 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = bert
llm_load_print_meta: vocab type       = UGM
llm_load_print_meta: n_vocab          = 250002
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 1024
llm_load_print_meta: n_layer          = 24
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 16
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 64
llm_load_print_meta: n_embd_head_v    = 64
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 1.0e-05
llm_load_print_meta: f_norm_rms_eps   = 0.0e+00
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 4096
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 0
llm_load_print_meta: pooling type     = 2
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 335M
llm_load_print_meta: model ftype      = F16
llm_load_print_meta: model params     = 566.70 M
llm_load_print_meta: model size       = 1.07 GiB (16.25 BPW) 
llm_load_print_meta: general.name     = n/a
llm_load_print_meta: BOS token        = 0 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: SEP token        = 2 '</s>'
llm_load_print_meta: PAD token        = 1 '<pad>'
llm_load_print_meta: CLS token        = 0 '<s>'
llm_load_print_meta: MASK token       = 250001 '[PAD250000]'
llm_load_print_meta: LF token         = 6 '▁'
llm_load_print_meta: EOG token        = 2 '</s>'
llm_load_print_meta: max token length = 48
llm_load_tensors: offloading 24 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 25/25 layers to GPU
llm_load_tensors:    CUDA_Host model buffer size =   520.30 MiB
llm_load_tensors:        CUDA0 model buffer size =   312.66 MiB
llm_load_tensors:        CUDA1 model buffer size =   264.56 MiB
llama_new_context_with_model: n_seq_max     = 1
llama_new_context_with_model: n_ctx         = 2048
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 512
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 10000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 24, can_shift = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   104.00 MiB
llama_kv_cache_init:      CUDA1 KV buffer size =    88.00 MiB
llama_new_context_with_model: KV self size  =  192.00 MiB, K (f16):   96.00 MiB, V (f16):   96.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.00 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model:      CUDA0 compute buffer size =    44.05 MiB
llama_new_context_with_model:      CUDA1 compute buffer size =    36.01 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =     8.02 MiB
llama_new_context_with_model: graph nodes  = 849
llama_new_context_with_model: graph splits = 5 (with bs=512), 3 (with bs=1)
time=2025-03-04T17:26:54.711+08:00 level=INFO source=server.go:596 msg="llama runner started in 2.01 seconds"
llama_model_loader: loaded meta data with 33 key-value pairs and 389 tensors from D:\ollama\models\blobs\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = bert
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                         general.size_label str              = 567M
llama_model_loader: - kv   3:                            general.license str              = mit
llama_model_loader: - kv   4:                               general.tags arr[str,4]       = ["sentence-transformers", "feature-ex...
llama_model_loader: - kv   5:                           bert.block_count u32              = 24
llama_model_loader: - kv   6:                        bert.context_length u32              = 8192
llama_model_loader: - kv   7:                      bert.embedding_length u32              = 1024
llama_model_loader: - kv   8:                   bert.feed_forward_length u32              = 4096
llama_model_loader: - kv   9:                  bert.attention.head_count u32              = 16
llama_model_loader: - kv  10:          bert.attention.layer_norm_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                          general.file_type u32              = 1
llama_model_loader: - kv  12:                      bert.attention.causal bool             = false
llama_model_loader: - kv  13:                          bert.pooling_type u32              = 2
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = t5
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = default
[GIN] 2025/03/04 - 17:26:54 | 200 |         249µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:26:54 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,250002]  = ["<s>", "<pad>", "</s>", "<unk>", ","...
llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,250002]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,250002]  = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:            tokenizer.ggml.add_space_prefix bool             = true
llama_model_loader: - kv  20:            tokenizer.ggml.token_type_count u32              = 1
llama_model_loader: - kv  21:    tokenizer.ggml.remove_extra_whitespaces bool             = true
llama_model_loader: - kv  22:        tokenizer.ggml.precompiled_charsmap arr[u8,237539]   = [0, 180, 2, 0, 0, 132, 0, 0, 0, 0, 0,...
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  25:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  26:          tokenizer.ggml.seperator_token_id u32              = 2
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  28:                tokenizer.ggml.cls_token_id u32              = 0
llama_model_loader: - kv  29:               tokenizer.ggml.mask_token_id u32              = 250001
llama_model_loader: - kv  30:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  31:               tokenizer.ggml.add_eos_token bool             = true
llama_model_loader: - kv  32:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  244 tensors
llama_model_loader: - type  f16:  145 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 4
llm_load_vocab: token to piece cache size = 2.1668 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = bert
llm_load_print_meta: vocab type       = UGM
llm_load_print_meta: n_vocab          = 250002
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 1
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = all F32
llm_load_print_meta: model params     = 566.70 M
llm_load_print_meta: model size       = 1.07 GiB (16.25 BPW) 
llm_load_print_meta: general.name     = n/a
llm_load_print_meta: BOS token        = 0 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: SEP token        = 2 '</s>'
llm_load_print_meta: PAD token        = 1 '<pad>'
llm_load_print_meta: CLS token        = 0 '<s>'
llm_load_print_meta: MASK token       = 250001 '[PAD250000]'
llm_load_print_meta: LF token         = 6 '▁'
llm_load_print_meta: EOG token        = 2 '</s>'
llm_load_print_meta: max token length = 48
llama_model_load: vocab only - skipping tensors
[GIN] 2025/03/04 - 17:27:11 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:27:11 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/03/04 - 17:27:11 | 200 |   26.8011527s |   172.16.10.124 | POST     "/api/embed"
[GIN] 2025/03/04 - 17:27:29 | 200 |   17.8122472s |   172.16.10.124 | POST     "/api/embed"
[GIN] 2025/03/04 - 17:27:46 | 200 |   16.5012803s |   172.16.10.124 | POST     "/api/embed"
[GIN] 2025/03/04 - 17:27:46 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:27:46 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/03/04 - 17:27:48 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:27:48 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/03/04 - 17:27:50 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:27:50 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
time=2025-03-04T17:27:51.182+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0203752 model=D:\ollama\models\blobs\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c
time=2025-03-04T17:27:51.275+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-04T17:27:51.275+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-04T17:27:51.277+08:00 level=INFO source=sched.go:731 msg="new model will fit in available VRAM, loading" model=D:\ollama\models\blobs\sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93 library=cuda parallel=1 required="21.9 GiB"
time=2025-03-04T17:27:51.301+08:00 level=INFO source=server.go:97 msg="system memory" total="511.7 GiB" free="360.8 GiB" free_swap="464.5 GiB"
time=2025-03-04T17:27:51.302+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-04T17:27:51.302+08:00 level=WARN source=ggml.go:132 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-04T17:27:51.304+08:00 level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=65 layers.split=33,32 memory.available="[19.7 GiB 19.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="21.9 GiB" memory.required.partial="21.9 GiB" memory.required.kv="512.0 MiB" memory.required.allocations="[11.2 GiB 10.6 GiB]" memory.weights.total="18.0 GiB" memory.weights.repeating="17.4 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="916.1 MiB" memory.graph.partial="916.1 MiB"
time=2025-03-04T17:27:51.310+08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model D:\\ollama\\models\\blobs\\sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93 --ctx-size 2048 --batch-size 512 --n-gpu-layers 65 --threads 128 --no-mmap --parallel 1 --tensor-split 33,32 --port 62875"
time=2025-03-04T17:27:51.333+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-04T17:27:51.333+08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-03-04T17:27:51.334+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-03-04T17:27:51.379+08:00 level=INFO source=runner.go:932 msg="starting go runner"
time=2025-03-04T17:27:51.432+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2702739 model=D:\ollama\models\blobs\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
  Device 1: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
time=2025-03-04T17:27:51.682+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.520182 model=D:\ollama\models\blobs\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c
load_backend: loaded CUDA backend from C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
time=2025-03-04T17:27:51.830+08:00 level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | cgo(clang)" threads=128
time=2025-03-04T17:27:51.831+08:00 level=INFO source=runner.go:993 msg="Server listening on 127.0.0.1:62875"
time=2025-03-04T17:27:51.837+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 4090) - 20140 MiB free
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 4090) - 20140 MiB free
llama_model_loader: loaded meta data with 26 key-value pairs and 771 tensors from D:\ollama\models\blobs\sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 32B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 32B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 64
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 27648
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = deepseek-r1-qwen
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  321 tensors
llama_model_loader: - type q4_K:  385 tensors
llama_model_loader: - type q6_K:   65 tensors
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 152064
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 5120
llm_load_print_meta: n_layer          = 64
llm_load_print_meta: n_head           = 40
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 5
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 27648
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 32B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 32.76 B
llm_load_print_meta: model size       = 18.48 GiB (4.85 BPW) 
llm_load_print_meta: general.name     = DeepSeek R1 Distill Qwen 32B
llm_load_print_meta: BOS token        = 151646 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 64 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 65/65 layers to GPU
llm_load_tensors:        CUDA0 model buffer size =  9211.25 MiB
llm_load_tensors:        CUDA1 model buffer size =  9297.10 MiB
llm_load_tensors:          CPU model buffer size =   417.66 MiB
[GIN] 2025/03/04 - 17:27:54 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:27:54 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/03/04 - 17:27:55 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:27:55 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
llama_new_context_with_model: n_seq_max     = 1
llama_new_context_with_model: n_ctx         = 2048
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 512
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 1000000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 64, can_shift = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   264.00 MiB
llama_kv_cache_init:      CUDA1 KV buffer size =   248.00 MiB
llama_new_context_with_model: KV self size  =  512.00 MiB, K (f16):  256.00 MiB, V (f16):  256.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.60 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model:      CUDA0 compute buffer size =   256.01 MiB
llama_new_context_with_model:      CUDA1 compute buffer size =   363.02 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    26.02 MiB
llama_new_context_with_model: graph nodes  = 2246
llama_new_context_with_model: graph splits = 3
time=2025-03-04T17:27:58.109+08:00 level=INFO source=server.go:596 msg="llama runner started in 6.78 seconds"
[GIN] 2025/03/04 - 17:28:07 | 200 |   21.4091504s |   172.16.10.124 | POST     "/api/chat"
[GIN] 2025/03/04 - 17:28:08 | 200 |       496.6µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:28:08 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/03/04 - 17:28:10 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:28:10 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/03/04 - 17:28:12 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:28:12 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/03/04 - 17:28:15 | 200 |      1.9913ms |   172.16.10.124 | GET      "/api/tags"
[GIN] 2025/03/04 - 17:28:15 | 200 |            0s |   172.16.10.124 | GET      "/api/version"
[GIN] 2025/03/04 - 17:28:18 | 200 |   11.3758686s |   172.16.10.124 | POST     "/api/chat"
[GIN] 2025/03/04 - 17:28:30 | 200 |   11.6859997s |   172.16.10.124 | POST     "/api/chat"
[GIN] 2025/03/04 - 17:28:37 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/04 - 17:28:37 | 200 |      4.9458ms |       127.0.0.1 | POST     "/api/generate"

@rick-github commented on GitHub (Mar 4, 2025):

llm_load_tensors: offloading 64 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 65/65 layers to GPU
llm_load_tensors:        CUDA0 model buffer size =  9211.25 MiB
llm_load_tensors:        CUDA1 model buffer size =  9297.10 MiB
llm_load_tensors:          CPU model buffer size =   417.66 MiB

ollama has offloaded the entire model to GPU. According to the log, there is no problem. What problem are you seeing?
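
For anyone reading along later, an easy way to cross-check this is to query the `/api/ps` endpoint directly (the same data that `ollama ps` formats) and compare `size_vram` against `size`. A minimal sketch, assuming the server is listening on the default `127.0.0.1:11434` and that `curl` and `jq` are available:

```
# List the running models and show how much of each is resident in VRAM.
# size_vram == size  -> fully offloaded to GPU (reported as "100% GPU")
# size_vram == 0     -> running entirely on CPU
curl -s http://127.0.0.1:11434/api/ps | jq '.models[] | {name, size, size_vram}'
```

If `size_vram` matches `size`, the model really is resident on the GPU, which is also what the offload lines in the log above show.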

@bumblebee-code-gh commented on GitHub (Mar 5, 2025):

> llm_load_tensors: offloading 64 repeating layers to GPU
> llm_load_tensors: offloading output layer to GPU
> llm_load_tensors: offloaded 65/65 layers to GPU
> llm_load_tensors:        CUDA0 model buffer size =  9211.25 MiB
> llm_load_tensors:        CUDA1 model buffer size =  9297.10 MiB
> llm_load_tensors:          CPU model buffer size =   417.66 MiB
>
> ollama has offloaded the entire model to GPU. According to the log, there is no problem. What problem are you seeing?

Thank you, I found the cause and it has been resolved.

Reference: github-starred/ollama#52416