[GH-ISSUE #7323] ollama ps reporting "100% GPU" while model is running on CPU only. #51164

Closed
opened 2026-04-28 18:50:15 -05:00 by GiteaMirror · 15 comments
Owner

Originally created by @Liu-Eroteme on GitHub (Oct 22, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7323

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Running llama 3.1 70b q3_K_M on 2x 4090s when there is already a ColBERT retriever loaded (taking up ~2800 MiB VRAM) should work, but doesn't - `ollama ps` reports that the model is running and using the GPU:

`llama3.1:70b-instruct-q3_K_M 0e97a7709799 40 GB 100% GPU Less than a second from now`

but my GPUs are unloaded, the VRAM is empty, and my CPU is fully loaded.

what gives?

OS

Linux, Docker

GPU

Nvidia

CPU

AMD

Ollama version

0.3.10

GiteaMirror added the needs more info, bug, nvidia labels 2026-04-28 18:50:22 -05:00
Author
Owner

@rick-github commented on GitHub (Oct 22, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

Author
Owner

@Liu-Eroteme commented on GitHub (Oct 24, 2024):

Found out today what made the difference:
setting `OLLAMA_KEEP_ALIVE` to anything below 5s is what causes it.
I can reliably reproduce the bug at 1s and 2s, sometimes at 3s and 4s, and never at 5s.

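The reproduction described above can be condensed into a few commands. This is a hypothetical sketch: the image name, volume, port, and model tag are assumed defaults, not confirmed details from this thread.

```shell
# Start the server with a sub-5s keep-alive, the setting reported to trigger
# the misreport (values below 5s reproduce it; 5s and above do not).
docker run -d --gpus=all -e OLLAMA_KEEP_ALIVE=1s \
  -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# In one terminal, watch the reported placement once per second.
watch -n 1 ollama ps

# In another, fire a request, then compare the "100% GPU" claim in
# `ollama ps` against what the GPU actually holds.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.1:70b-instruct-q3_K_M", "prompt": "hi"}'
nvidia-smi --query-gpu=memory.used --format=csv
```

If the bug reproduces, `ollama ps` shows "100% GPU" while `nvidia-smi` reports near-zero memory used and CPU load spikes.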
Author
Owner

@rick-github commented on GitHub (Oct 24, 2024):

It sounds like the models are just unloaded because you have a short period of residency - `but my GPUs are unloaded, the VRAM is empty, and my CPU is fully loaded` because the model has just been unloaded.

Author
Owner

@Liu-Eroteme commented on GitHub (Nov 5, 2024):

> It sounds like the models are just unloaded

That's the thing though, no.

When I start the Docker container with keep-alive set to 1s and request a completion from the API endpoint with `watch -n 1 ollama ps` running, `ollama ps` shows no activity until the request comes in, as expected, and then shows the requested model, listing it as running 100% on the GPU. The generation is running and the API server will respond, just... later.
That's because while `ollama ps` keeps displaying "100% GPU", the model is actually being loaded into CPU memory, and the generation is taking place on the CPU.

So no, the model has not just been unloaded; it is still actively loaded and generating, just on the CPU.

PS:

> 'and my CPU is fully loaded' because the model has just been unloaded.

If ollama / llama.cpp actually managed to load 128 threads at 90-100% and pull ~400W while "unloading a model from GPU memory", I'd be even more worried.

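The mismatch described above - `ollama ps` claiming "100% GPU" while VRAM sits near empty - can be checked mechanically. A minimal sketch; the function name, the 512 MiB slack tolerance, and the field plumbing are my own, not part of ollama's API:

```python
import re

def gpu_claim_vs_vram(processor: str, vram_used_mib: int,
                      model_size_gib: float, slack_mib: int = 512) -> bool:
    """Return True when the PROCESSOR column of `ollama ps` claims GPU
    residency but measured VRAM usage is far too small to hold the model.

    `processor` is the ps column text (e.g. "100% GPU"); `vram_used_mib`
    comes from nvidia-smi; `slack_mib` is an arbitrary tolerance.
    """
    m = re.match(r"(\d+)% GPU", processor)
    if not m:
        return False  # ps claims CPU or mixed placement; nothing to contradict
    expected_mib = model_size_gib * 1024 * int(m.group(1)) / 100
    return vram_used_mib + slack_mib < expected_mib

# The case above: ps claims "100% GPU" for a 40 GB model, VRAM near empty.
print(gpu_claim_vs_vram("100% GPU", vram_used_mib=4, model_size_gib=40))  # True
```

Polling this once per second alongside `watch -n 1 ollama ps` would catch the misreport the moment it appears.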
Author
Owner

@rick-github commented on GitHub (Nov 5, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

Author
Owner

@EduTecher commented on GitHub (Nov 26, 2024):

I get the same problem on Arch Linux, kernel 6.12.1-arch1-1.
The Ollama version is 0.4.4-1, but it works well when I downgrade Ollama to v0.3.12-6.
Is it a problem with Arch Linux's package?

```
# ollama ps
NAME              ID              SIZE      PROCESSOR    UNTIL
qwen2.5:latest    845dbda0ea48    6.0 GB    100% GPU     4 minutes from now
```

```
# nvidia-smi
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.57.01              Driver Version: 565.57.01      CUDA Version: 12.7     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4060 Ti    Off  |   00000000:01:00.0 Off |                  N/A |
|  0%   31C    P8              3W /  165W |       4MiB /  16380MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
```

Author
Owner

@rick-github commented on GitHub (Nov 26, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

Author
Owner

@EduTecher commented on GitHub (Nov 27, 2024):

> [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

Thanks for your reply. But there are no server logs in my Arch Linux edition. In the end I found it was a problem caused by Arch Linux's package. It's fixed with the official installation: `curl -fsSL https://ollama.com/install.sh | sh`.

Author
Owner

@ghost commented on GitHub (Jan 27, 2025):

Same error here, but in my case it's because my CPU doesn't support AVX.

Author
Owner

@rick-github commented on GitHub (Jan 27, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

Author
Owner

@PandaWei commented on GitHub (Feb 24, 2025):

I hit the same problem with ollama 0.5.11 and CUDA 12.8.

Author
Owner

@rick-github commented on GitHub (Feb 24, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

Author
Owner

@PandaWei commented on GitHub (Feb 24, 2025):

Feb 24 16:26:15 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:26:15.723+08:00 level=DEBUG source=sched.go:575 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49
Feb 24 16:26:15 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:26:15.723+08:00 level=DEBUG source=routes.go:1461 msg="chat request" images=0 prompt=<|User|>请使用python编写快速排序程序<|Assistant|>
Feb 24 16:26:15 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:26:15.725+08:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=10 used=0 remaining=10

Feb 24 16:28:49 PC-BUGFLYFLY ollama[16860]: [GIN] 2025/02/24 - 16:28:49 | 200 | 19.4µs | 127.0.0.1 | HEAD "/"
Feb 24 16:28:49 PC-BUGFLYFLY ollama[16860]: [GIN] 2025/02/24 - 16:28:49 | 200 | 24.1µs | 127.0.0.1 | GET "/api/ps"
Feb 24 16:28:56 PC-BUGFLYFLY ollama[16860]: [GIN] 2025/02/24 - 16:28:56 | 200 | 23µs | 127.0.0.1 | HEAD "/"
Feb 24 16:28:56 PC-BUGFLYFLY ollama[16860]: [GIN] 2025/02/24 - 16:28:56 | 200 | 18.7µs | 127.0.0.1 | GET "/api/ps"
Feb 24 16:29:33 PC-BUGFLYFLY ollama[16860]: [GIN] 2025/02/24 - 16:29:33 | 200 | 21.4µs | 127.0.0.1 | HEAD "/"
Feb 24 16:29:34 PC-BUGFLYFLY ollama[16860]: [GIN] 2025/02/24 - 16:29:33 | 200 | 19.2µs | 127.0.0.1 | GET "/api/ps"
Feb 24 16:30:01 PC-BUGFLYFLY CRON[18300]: (root) CMD ([ -x /etc/init.d/anacron ] && if [ ! -d /run/systemd/system ]; then /usr/sbin/invoke-rc.d anacron start >/dev/null; fi)
Feb 24 16:31:20 PC-BUGFLYFLY systemd[1]: Started Run anacron jobs.
Feb 24 16:31:20 PC-BUGFLYFLY anacron[18726]: Anacron 2.3 started on 2025-02-24
Feb 24 16:31:20 PC-BUGFLYFLY anacron[18726]: Normal exit (0 jobs run)
Feb 24 16:31:20 PC-BUGFLYFLY systemd[1]: anacron.service: Succeeded.
Feb 24 16:31:59 PC-BUGFLYFLY ollama[16860]: [GIN] 2025/02/24 - 16:31:59 | 200 | 48.1µs | 127.0.0.1 | GET "/api/version"
Feb 24 16:32:23 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:23.584+08:00 level=DEBUG source=sched.go:407 msg="context for request finished"
Feb 24 16:32:23 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:23.584+08:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 duration=5m0s
Feb 24 16:32:23 PC-BUGFLYFLY ollama[16860]: [GIN] 2025/02/24 - 16:32:23 | 200 | 6m7s | 127.0.0.1 | POST "/api/chat"
Feb 24 16:32:23 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:23.584+08:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 refCount=0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: [GIN] 2025/02/24 - 16:32:30 | 200 | 18.7µs | 127.0.0.1 | HEAD "/"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: [GIN] 2025/02/24 - 16:32:30 | 200 | 96.825699ms | 127.0.0.1 | POST "/api/show"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.136+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="9.7 GiB" before.free="8.7 GiB" before.free_swap="3.0 GiB" now.total="9.7 GiB" now.free="8.2 GiB" now.free_swap="3.0 GiB"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: initializing /usr/lib/wsl/lib/libcuda.so
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuInit - 0x7fedb40579f0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuDriverGetVersion - 0x7fedb40579b0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuDeviceGetCount - 0x7fedb4057a2d
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuDeviceGet - 0x7fedb4057a27
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuDeviceGetAttribute - 0x7fedb4057910
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuDeviceGetUuid - 0x7fedb4057a39
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuDeviceGetName - 0x7fedb4057a33
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuCtxCreate_v3 - 0x7fedb4057aab
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuMemGetInfo_v2 - 0x7fedb4057bad
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuCtxDestroy - 0x7fedb4057abd
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: calling cuInit
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: calling cuDriverGetVersion
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: raw version 0x2f30
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: CUDA driver version: 12.8
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: calling cuDeviceGetCount
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: device count 1
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.270+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-187355f1-e545-5c98-f198-6273bb7e0ee9 name="NVIDIA GeForce RTX 2060" overhead="0 B" before.total="12.0 GiB" before.free="10.9 GiB" now.total="12.0 GiB" now.free="10.9 GiB" now.used="1.1 GiB"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: releasing cuda driver library
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.299+08:00 level=DEBUG source=sched.go:496 msg="gpu reported" gpu=GPU-187355f1-e545-5c98-f198-6273bb7e0ee9 library=cuda available="10.9 GiB"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.299+08:00 level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-187355f1-e545-5c98-f198-6273bb7e0ee9 library=cuda total="12.0 GiB" available="6.4 GiB"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.299+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[6.4 GiB]"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.299+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc gpu=GPU-187355f1-e545-5c98-f198-6273bb7e0ee9 parallel=4 available=6897942528 required="1.9 GiB"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.299+08:00 level=DEBUG source=sched.go:249 msg="new model fits with existing models, loading"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.299+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="9.7 GiB" before.free="8.2 GiB" before.free_swap="3.0 GiB" now.total="9.7 GiB" now.free="8.2 GiB" now.free_swap="3.0 GiB"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: initializing /usr/lib/wsl/lib/libcuda.so
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuInit - 0x7fedb40579f0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuDriverGetVersion - 0x7fedb40579b0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuDeviceGetCount - 0x7fedb4057a2d
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuDeviceGet - 0x7fedb4057a27
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuDeviceGetAttribute - 0x7fedb4057910
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuDeviceGetUuid - 0x7fedb4057a39
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuDeviceGetName - 0x7fedb4057a33
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuCtxCreate_v3 - 0x7fedb4057aab
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuMemGetInfo_v2 - 0x7fedb4057bad
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: dlsym: cuCtxDestroy - 0x7fedb4057abd
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: calling cuInit
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: calling cuDriverGetVersion
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: raw version 0x2f30
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: CUDA driver version: 12.8
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: calling cuDeviceGetCount
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: device count 1
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.419+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-187355f1-e545-5c98-f198-6273bb7e0ee9 name="NVIDIA GeForce RTX 2060" overhead="0 B" before.total="12.0 GiB" before.free="10.9 GiB" now.total="12.0 GiB" now.free="10.9 GiB" now.used="1.1 GiB"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: releasing cuda driver library
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.419+08:00 level=INFO source=server.go:100 msg="system memory" total="9.7 GiB" free="8.2 GiB" free_swap="3.0 GiB"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.419+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[6.4 GiB]"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.419+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[6.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.9 GiB" memory.required.partial="1.9 GiB" memory.required.kv="224.0 MiB" memory.required.allocations="[1.9 GiB]" memory.weights.total="976.1 MiB" memory.weights.repeating="793.5 MiB" memory.weights.nonrepeating="182.6 MiB" memory.graph.full="299.8 MiB" memory.graph.partial="482.3 MiB"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.419+08:00 level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible=[]
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.420+08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --verbose --threads 6 --parallel 4 --port 41839"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.420+08:00 level=DEBUG source=server.go:398 msg=subprocess environment="[PATH=/usr/local/cuda-12.8/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/wsl/lib:/mnt/c/Program Files (x86)/VMware/VMware Workstation/bin/:/mnt/c/WINDOWS/system32:/mnt/c/WINDOWS:/mnt/c/WINDOWS/System32/Wbem:/mnt/c/WINDOWS/System32/WindowsPowerShell/v1.0/:/mnt/c/WINDOWS/System32/OpenSSH/:/mnt/c/Program Files (x86)/NetSarang/Xftp 7/:/mnt/c/Program Files/TortoiseSVN/bin:/mnt/c/Program Files/NVIDIA Corporation/NVIDIA app/NvDLISR:/mnt/c/Program Files (x86)/NVIDIA Corporation/PhysX/Common:/mnt/c/Program Files (x86)/SSH Communications Security/SSH Secure Shell:/mnt/c/Program Files/qemu:/mnt/c/Program Files/cpolar/:/mnt/c/Users/Administrator/AppData/Local/Programs/Microsoft VS Code/bin:/mnt/c/Program Files/Seelen/Seelen UI:/snap/bin: CUDA_ERROR_LEVEL=50 LD_LIBRARY_PATH=/usr/local/bin CUDA_VISIBLE_DEVICES=GPU-187355f1-e545-5c98-f198-6273bb7e0ee9]"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.420+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=2
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.420+08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.420+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.433+08:00 level=INFO source=runner.go:936 msg="starting go runner"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.433+08:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=6
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.433+08:00 level=DEBUG source=ggml.go:89 msg="ggml backend load all from path" path=/usr/local/bin
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.433+08:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:41839"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc (version GGUF V3 (latest))
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 0: general.architecture str = qwen2
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 1: general.type str = model
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 1.5B
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 4: general.size_label str = 1.5B
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 5: qwen2.block_count u32 = 28
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 6: qwen2.context_length u32 = 131072
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 7: qwen2.embedding_length u32 = 1536
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 8960
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 12
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 2
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 10000.000000
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 13: general.file_type u32 = 15
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,151936] = ["!", """, "#", "$", "%", "&", "'", ...
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - kv 25: general.quantization_version u32 = 2
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - type f32: 141 tensors
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - type q4_K: 169 tensors
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llama_model_loader: - type q6_K: 29 tensors
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.671+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: control token: 151660 '<|fim_middle|>' is not marked as EOG
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: control token: 151659 '<|fim_prefix|>' is not marked as EOG
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: control token: 151653 '<|vision_end|>' is not marked as EOG
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: control token: 151645 '<|Assistant|>' is not marked as EOG
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: control token: 151644 '<|User|>' is not marked as EOG
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: control token: 151655 '<|image_pad|>' is not marked as EOG
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: control token: 151651 '<|quad_end|>' is not marked as EOG
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: control token: 151646 '<|begin▁of▁sentence|>' is not marked as EOG
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: control token: 151643 '<|end▁of▁sentence|>' is not marked as EOG
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: control token: 151652 '<|vision_start|>' is not marked as EOG
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: control token: 151647 '<|EOT|>' is not marked as EOG
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: control token: 151654 '<|vision_pad|>' is not marked as EOG
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: control token: 151656 '<|video_pad|>' is not marked as EOG
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: control token: 151661 '<|fim_suffix|>' is not marked as EOG
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: control token: 151650 '<|quad_start|>' is not marked as EOG
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: special tokens cache size = 22
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_vocab: token to piece cache size = 0.9310 MB
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: format = GGUF V3 (latest)
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: arch = qwen2
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: vocab type = BPE
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_vocab = 151936
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_merges = 151387
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: vocab_only = 0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_ctx_train = 131072
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_embd = 1536
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_layer = 28
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_head = 12
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_head_kv = 2
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_rot = 128
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_swa = 0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_embd_head_k = 128
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_embd_head_v = 128
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_gqa = 6
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_embd_k_gqa = 256
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_embd_v_gqa = 256
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: f_norm_eps = 0.0e+00
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: f_norm_rms_eps = 1.0e-06
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: f_logit_scale = 0.0e+00
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_ff = 8960
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_expert = 0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_expert_used = 0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: causal attn = 1
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: pooling type = 0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: rope type = 2
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: rope scaling = linear
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: freq_base_train = 10000.0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: freq_scale_train = 1
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: n_ctx_orig_yarn = 131072
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: rope_finetuned = unknown
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: ssm_d_conv = 0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: ssm_d_inner = 0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: ssm_d_state = 0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: ssm_dt_rank = 0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: model type = 1.5B
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: model ftype = Q4_K - Medium
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: model params = 1.78 B
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: model size = 1.04 GiB (5.00 BPW)
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: general.name = DeepSeek R1 Distill Qwen 1.5B
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: BOS token = 151646 '<|begin▁of▁sentence|>'
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: EOS token = 151643 '<|end▁of▁sentence|>'
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: EOT token = 151643 '<|end▁of▁sentence|>'
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: PAD token = 151643 '<|end▁of▁sentence|>'
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: LF token = 148848 'ÄĬ'
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>'
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>'
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>'
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>'
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>'
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>'
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: EOG token = 151643 '<|end▁of▁sentence|>'
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: EOG token = 151662 '<|fim_pad|>'
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: EOG token = 151663 '<|repo_name|>'
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: EOG token = 151664 '<|file_sep|>'
Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: llm_load_print_meta: max token length = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llm_load_tensors: CPU_Mapped model buffer size = 1059.89 MiB
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_new_context_with_model: n_seq_max = 4
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_new_context_with_model: n_ctx = 8192
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_new_context_with_model: n_ctx_per_seq = 2048
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_new_context_with_model: n_batch = 2048
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_new_context_with_model: n_ubatch = 512
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_new_context_with_model: flash_attn = 0
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_new_context_with_model: freq_base = 10000.0
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_new_context_with_model: freq_scale = 1
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 0: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 1: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 2: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 3: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 4: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 5: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 6: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 7: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 8: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 9: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 10: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 11: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 12: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 13: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 14: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 15: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 16: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 17: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 18: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 19: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 20: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 21: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 22: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 23: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 24: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 25: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 26: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: layer 27: n_embd_k_gqa = 256, n_embd_v_gqa = 256
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_kv_cache_init: CPU KV buffer size = 224.00 MiB
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_new_context_with_model: KV self size = 224.00 MiB, K (f16): 112.00 MiB, V (f16): 112.00 MiB
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_new_context_with_model: CPU output buffer size = 2.34 MiB
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_new_context_with_model: CPU compute buffer size = 302.75 MiB
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_new_context_with_model: graph nodes = 986
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: llama_new_context_with_model: graph splits = 1
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:42.243+08:00 level=INFO source=server.go:596 msg="llama runner started in 11.82 seconds"
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:42.243+08:00 level=DEBUG source=sched.go:462 msg="finished setting up runner" model=/usr/share/ollama/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: [GIN] 2025/02/24 - 16:32:42 | 200 | 12.122311984s | 127.0.0.1 | POST "/api/generate"
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:42.244+08:00 level=DEBUG source=sched.go:466 msg="context for request finished"
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:42.244+08:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc duration=5m0s
Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:42.244+08:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc refCount=0
Feb 24 16:33:02 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:33:02.976+08:00 level=DEBUG source=sched.go:575 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc
Feb 24 16:33:02 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:33:02.977+08:00 level=DEBUG source=routes.go:1461 msg="chat request" images=0 prompt=<|User|>请使用python语言编写快速排序程序<|Assistant|>
Feb 24 16:33:02 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:33:02.979+08:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=11 used=0 remaining=11
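The `CPU_Mapped model buffer size`, `CPU KV buffer size`, and `CPU compute buffer size` lines above show the runner allocated everything in system RAM, even though `ollama ps` can still report a GPU percentage. A quick way to cross-check what `ollama ps` claims is to compute the GPU fraction yourself from the `/api/ps` endpoint, which reports each loaded model's total `size` and its VRAM-resident share `size_vram`. A minimal sketch, assuming a default server at `127.0.0.1:11434`:

```python
import json
import urllib.request


def gpu_fraction(model: dict) -> float:
    """Fraction of the model resident in VRAM, from /api/ps's size/size_vram fields."""
    size = model.get("size", 0)
    vram = model.get("size_vram", 0)
    return vram / size if size else 0.0


def check_running_models(host: str = "http://127.0.0.1:11434") -> None:
    # /api/ps lists currently loaded models; size is total bytes,
    # size_vram is the portion actually held in GPU memory.
    with urllib.request.urlopen(f"{host}/api/ps") as resp:
        data = json.load(resp)
    for m in data.get("models", []):
        pct = 100 * gpu_fraction(m)
        print(f"{m['name']}: {pct:.0f}% GPU ({m['size_vram']} / {m['size']} bytes)")


if __name__ == "__main__":
    check_running_models()
```

If `size_vram` is 0 while `ollama ps` still prints "100% GPU", the discrepancy is in the CLI's reporting rather than the actual placement, which is the behavior this issue describes.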

modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc duration=5m0s Feb 24 16:32:42 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:42.244+08:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc refCount=0 Feb 24 16:33:02 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:33:02.976+08:00 level=DEBUG source=sched.go:575 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc Feb 24 16:33:02 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:33:02.977+08:00 level=DEBUG source=routes.go:1461 msg="chat request" images=0 prompt=<|User|>请使用python语言编写快速排序程序<|Assistant|> Feb 24 16:33:02 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:33:02.979+08:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=11 used=0 remaining=11
Author
Owner

@rick-github commented on GitHub (Feb 24, 2025):

Feb 24 16:32:30 PC-BUGFLYFLY ollama[16860]: time=2025-02-24T16:32:30.419+08:00 level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible=[]

No GPU enabled runners. How did you install ollama?
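For anyone checking the same thing in their own logs, here is a minimal sketch, assuming a systemd install (`journalctl -u ollama`) and that the grep patterns below match your ollama version's log wording:

```shell
# Pull the GPU-discovery lines out of the server log. An empty
# compatible=[] list means no GPU runner was found, so inference
# silently falls back to the CPU even if `ollama ps` says otherwise.
journalctl -u ollama --no-pager 2>/dev/null \
  | grep -E 'compatible gpu libraries|inference compute' \
  || echo "no GPU discovery lines found (set OLLAMA_DEBUG=1 and restart ollama)"
```

Note that the `compatible gpu libraries` line is logged at DEBUG level, so it only appears when `OLLAMA_DEBUG=1` is set in the service environment.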

<!-- gh-comment-id:2677744085 -->
Author
Owner

@wuyukai0403 commented on GitHub (Sep 20, 2025):

Server logs will aid in debugging.

Thanks for your reply. There are no server logs in my Arch Linux edition. I eventually found it was a problem caused by the Arch Linux package; it's fixed by using the official installation: `curl -fsSL https://ollama.com/install.sh | sh`.

I think you should install the ollama-cuda package, as Arch Linux packages the CUDA backend separately. I solved this problem that way.
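After switching to a GPU-enabled build (ollama-cuda on Arch, or the official install script), it's worth cross-checking the offload rather than trusting `ollama ps` alone, since this issue is precisely about `ollama ps` misreporting. A rough sketch (the install commands are shown as comments; package and service names are the ones Arch uses and may differ on your system):

```shell
# On Arch, the CUDA backend is a separate package:
#   sudo pacman -S ollama-cuda && sudo systemctl restart ollama
# Then check what ollama claims about the offload; verify it against
# actual VRAM usage with nvidia-smi before believing "100% GPU".
ps_out=$(ollama ps 2>/dev/null || true)
case "$ps_out" in
  *"100% GPU"*) echo "ollama ps reports full GPU offload" ;;
  *)            echo "ollama ps does not report full GPU offload" ;;
esac
```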

<!-- gh-comment-id:3314390533 -->

Reference: github-starred/ollama#51164