[GH-ISSUE #8623] Ollama does inference on CPU, despite ollama ps saying 100% GPU #67637

Closed
opened 2026-05-04 11:08:19 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @akamaus on GitHub (Jan 28, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8623

What is the issue?

I'm experiencing a weird problem, a bit similar to #8606.
Basically ollama ps says inference runs on the GPU, but in reality VRAM is not utilized, the ollama process uses ~600% CPU, and everything is very slow.

I enabled debug logging and found a suspicious line:

Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.750+03:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"

Can you please give me some ideas on how to investigate this further?
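
A minimal way to cross-check the "100% GPU" claim against what the driver actually sees (a sketch using standard nvidia-smi query fields; run it while a prompt is generating):

```
# Poll the driver once per second during generation. If inference really ran
# on a GPU, memory.used and utilization.gpu on one of the cards should climb
# well above the idle values in the nvidia-smi output below (~15 MiB, 0%).
watch -n 1 nvidia-smi --query-gpu=index,memory.used,utilization.gpu --format=csv
```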

$ ollama ps
marco-o1:latest    007603b83a96    6.0 GB    100% GPU     4 minutes from now
% nvidia-smi -l
Tue Jan 28 07:51:00 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.77                 Driver Version: 565.77         CUDA Version: 12.7     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1080 Ti     Off |   00000000:03:00.0 Off |                  N/A |
| 28%   47C    P8             13W /  250W |      15MiB /  11264MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA GeForce GTX 1080 Ti     Off |   00000000:04:00.0  On |                  N/A |
| 37%   56C    P8             20W /  250W |    1687MiB /  11264MiB |      4%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
$ top
1125797 ollama    20   0 7558420   4.9g   4.4g S 566.7   7.8   1:14.58 .ollama-wrapped
$ journalctl -u ollama
Jan 28 07:32:50 maunix systemd[1]: Started Server for local large language models.
Jan 28 07:32:50 maunix ollama[1125741]: 2025/01/28 07:32:50 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES:0,1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/lib/ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.843+03:00 level=INFO source=images.go:757 msg="total blobs: 53"
Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.844+03:00 level=INFO source=images.go:764 msg="total unused blobs removed: 0"
Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.844+03:00 level=INFO source=routes.go:1310 msg="Listening on 127.0.0.1:11434 (version 0.5.4)"
Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.844+03:00 level=DEBUG source=common.go:85 msg="no dynamic runners detected, using only built-in"
Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.844+03:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners=[cpu]
Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.844+03:00 level=DEBUG source=routes.go:1340 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.844+03:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.844+03:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.845+03:00 level=DEBUG source=gpu.go:99 msg="searching for GPU discovery libraries for NVIDIA"
Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.845+03:00 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=libcuda.so*
Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.845+03:00 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[libcuda.so* /run/opengl-driver/lib/libcuda.so* /nix/store/b5arfaz3xqrspi891903m9p66q4vhl5p-cuda_cudart-12.4.99-lib/lib/libcuda.so* /nix/store/5x4cw7hfnky0cb4jhkrzv4gk25m82dg4-libcublas-12.4.2.65-lib/lib/libcuda.so* /nix/store/apiqf75jalgdj4nvpnswn815x37r35bx-cuda_cccl-12.4.99/lib/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.845+03:00 level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[/nix/store/w9kgp7i2ihnb672vxmv3d6ax5dh91i2m-nvidia-x11-565.77-6.6.71/lib/libcuda.so.565.77]
Jan 28 07:32:50 maunix ollama[1125741]: initializing /nix/store/w9kgp7i2ihnb672vxmv3d6ax5dh91i2m-nvidia-x11-565.77-6.6.71/lib/libcuda.so.565.77
Jan 28 07:32:50 maunix ollama[1125741]: dlsym: cuInit - 0x7f21e92d4cc0
Jan 28 07:32:50 maunix ollama[1125741]: dlsym: cuDriverGetVersion - 0x7f21e92d4ce0
Jan 28 07:32:50 maunix ollama[1125741]: dlsym: cuDeviceGetCount - 0x7f21e92d4d20
Jan 28 07:32:50 maunix ollama[1125741]: dlsym: cuDeviceGet - 0x7f21e92d4d00
Jan 28 07:32:50 maunix ollama[1125741]: dlsym: cuDeviceGetAttribute - 0x7f21e92d4e00
Jan 28 07:32:50 maunix ollama[1125741]: dlsym: cuDeviceGetUuid - 0x7f21e92d4d60
Jan 28 07:32:50 maunix ollama[1125741]: dlsym: cuDeviceGetName - 0x7f21e92d4d40
Jan 28 07:32:50 maunix ollama[1125741]: dlsym: cuCtxCreate_v3 - 0x7f21e92d4fe0
Jan 28 07:32:50 maunix ollama[1125741]: dlsym: cuMemGetInfo_v2 - 0x7f21e92d5760
Jan 28 07:32:50 maunix ollama[1125741]: dlsym: cuCtxDestroy - 0x7f21e93213a0
Jan 28 07:32:50 maunix ollama[1125741]: calling cuInit
Jan 28 07:32:50 maunix ollama[1125741]: calling cuDriverGetVersion
Jan 28 07:32:50 maunix ollama[1125741]: raw version 0x2f26
Jan 28 07:32:50 maunix ollama[1125741]: CUDA driver version: 12.7
Jan 28 07:32:50 maunix ollama[1125741]: calling cuDeviceGetCount
Jan 28 07:32:50 maunix ollama[1125741]: device count 2
Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.865+03:00 level=DEBUG source=gpu.go:134 msg="detected GPUs" count=2 library=/nix/store/w9kgp7i2ihnb672vxmv3d6ax5dh91i2m-nvidia-x11-565.77-6.6.71/lib/libcuda.so.565.77
Jan 28 07:32:50 maunix ollama[1125741]: [GPU-6e35f211-e159-474a-495a-9da4fcd53cbe] CUDA totalMem 11165 mb
Jan 28 07:32:50 maunix ollama[1125741]: [GPU-6e35f211-e159-474a-495a-9da4fcd53cbe] CUDA freeMem 11015 mb
Jan 28 07:32:50 maunix ollama[1125741]: [GPU-6e35f211-e159-474a-495a-9da4fcd53cbe] Compute Capability 6.1
Jan 28 07:32:51 maunix ollama[1125741]: [GPU-f1134a8c-4a14-0895-b87a-a0c760a19e97] CUDA totalMem 11165 mb
Jan 28 07:32:51 maunix ollama[1125741]: [GPU-f1134a8c-4a14-0895-b87a-a0c760a19e97] CUDA freeMem 9549 mb
Jan 28 07:32:51 maunix ollama[1125741]: [GPU-f1134a8c-4a14-0895-b87a-a0c760a19e97] Compute Capability 6.1
Jan 28 07:32:51 maunix ollama[1125741]: time=2025-01-28T07:32:51.167+03:00 level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
Jan 28 07:32:51 maunix ollama[1125741]: releasing cuda driver library
Jan 28 07:32:51 maunix ollama[1125741]: time=2025-01-28T07:32:51.167+03:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-6e35f211-e159-474a-495a-9da4fcd53cbe library=cuda variant=v12 compute=6.1 driver=12.7 name="NVIDIA GeForce GTX 1080 Ti" total="10.9 GiB" available="10.8 GiB"
Jan 28 07:32:51 maunix ollama[1125741]: time=2025-01-28T07:32:51.167+03:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-f1134a8c-4a14-0895-b87a-a0c760a19e97 library=cuda variant=v12 compute=6.1 driver=12.7 name="NVIDIA GeForce GTX 1080 Ti" total="10.9 GiB" available="9.3 GiB"
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.182+03:00 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="62.6 GiB" before.free="47.8 GiB" before.free_swap="64.0 GiB" now.total="62.6 GiB" now.free="47.6 GiB" now.free_swap="64.0 GiB"
Jan 28 07:32:59 maunix ollama[1125741]: initializing /nix/store/w9kgp7i2ihnb672vxmv3d6ax5dh91i2m-nvidia-x11-565.77-6.6.71/lib/libcuda.so.565.77
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuInit - 0x7f21e92d4cc0
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuDriverGetVersion - 0x7f21e92d4ce0
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuDeviceGetCount - 0x7f21e92d4d20
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuDeviceGet - 0x7f21e92d4d00
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuDeviceGetAttribute - 0x7f21e92d4e00
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuDeviceGetUuid - 0x7f21e92d4d60
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuDeviceGetName - 0x7f21e92d4d40
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuCtxCreate_v3 - 0x7f21e92d4fe0
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuMemGetInfo_v2 - 0x7f21e92d5760
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuCtxDestroy - 0x7f21e93213a0
Jan 28 07:32:59 maunix ollama[1125741]: calling cuInit
Jan 28 07:32:59 maunix ollama[1125741]: calling cuDriverGetVersion
Jan 28 07:32:59 maunix ollama[1125741]: raw version 0x2f26
Jan 28 07:32:59 maunix ollama[1125741]: CUDA driver version: 12.7
Jan 28 07:32:59 maunix ollama[1125741]: calling cuDeviceGetCount
Jan 28 07:32:59 maunix ollama[1125741]: device count 2
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.325+03:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-6e35f211-e159-474a-495a-9da4fcd53cbe name="NVIDIA GeForce GTX 1080 Ti" overhead="0 B" before.total="10.9 GiB" before.free="10.8 GiB" now.total="10.9 GiB" now.free="10.8 GiB" now.used="150.1 MiB"
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.469+03:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-f1134a8c-4a14-0895-b87a-a0c760a19e97 name="NVIDIA GeForce GTX 1080 Ti" overhead="0 B" before.total="10.9 GiB" before.free="9.3 GiB" now.total="10.9 GiB" now.free="9.3 GiB" now.used="1.6 GiB"
Jan 28 07:32:59 maunix ollama[1125741]: releasing cuda driver library
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.469+03:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x846b40 gpu_count=2
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.507+03:00 level=DEBUG source=sched.go:224 msg="loading first model" model=/var/lib/ollama/models/blobs/sha256-234ea779a388b986a6c961440f175e75c7cd336f70614d7c5b043c09a5931ad7
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.507+03:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[10.8 GiB]"
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.507+03:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/var/lib/ollama/models/blobs/sha256-234ea779a388b986a6c961440f175e75c7cd336f70614d7c5b043c09a5931ad7 gpu=GPU-6e35f211-e159-474a-495a-9da4fcd53cbe parallel=4 available=11550457856 required="5.6 GiB"
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.507+03:00 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="62.6 GiB" before.free="47.6 GiB" before.free_swap="64.0 GiB" now.total="62.6 GiB" now.free="47.6 GiB" now.free_swap="64.0 GiB"
Jan 28 07:32:59 maunix ollama[1125741]: initializing /nix/store/w9kgp7i2ihnb672vxmv3d6ax5dh91i2m-nvidia-x11-565.77-6.6.71/lib/libcuda.so.565.77
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuInit - 0x7f21e92d4cc0
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuDriverGetVersion - 0x7f21e92d4ce0
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuDeviceGetCount - 0x7f21e92d4d20
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuDeviceGet - 0x7f21e92d4d00
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuDeviceGetAttribute - 0x7f21e92d4e00
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuDeviceGetUuid - 0x7f21e92d4d60
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuDeviceGetName - 0x7f21e92d4d40
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuCtxCreate_v3 - 0x7f21e92d4fe0
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuMemGetInfo_v2 - 0x7f21e92d5760
Jan 28 07:32:59 maunix ollama[1125741]: dlsym: cuCtxDestroy - 0x7f21e93213a0
Jan 28 07:32:59 maunix ollama[1125741]: calling cuInit
Jan 28 07:32:59 maunix ollama[1125741]: calling cuDriverGetVersion
Jan 28 07:32:59 maunix ollama[1125741]: raw version 0x2f26
Jan 28 07:32:59 maunix ollama[1125741]: CUDA driver version: 12.7
Jan 28 07:32:59 maunix ollama[1125741]: calling cuDeviceGetCount
Jan 28 07:32:59 maunix ollama[1125741]: device count 2
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.627+03:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-6e35f211-e159-474a-495a-9da4fcd53cbe name="NVIDIA GeForce GTX 1080 Ti" overhead="0 B" before.total="10.9 GiB" before.free="10.8 GiB" now.total="10.9 GiB" now.free="10.8 GiB" now.used="150.1 MiB"
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.748+03:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-f1134a8c-4a14-0895-b87a-a0c760a19e97 name="NVIDIA GeForce GTX 1080 Ti" overhead="0 B" before.total="10.9 GiB" before.free="9.3 GiB" now.total="10.9 GiB" now.free="9.3 GiB" now.used="1.6 GiB"
Jan 28 07:32:59 maunix ollama[1125741]: releasing cuda driver library
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.748+03:00 level=INFO source=server.go:104 msg="system memory" total="62.6 GiB" free="47.6 GiB" free_swap="64.0 GiB"
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.748+03:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[10.8 GiB]"
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.749+03:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[10.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.6 GiB" memory.required.partial="5.6 GiB" memory.required.kv="448.0 MiB" memory.required.allocations="[5.6 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="478.0 MiB" memory.graph.partial="730.4 MiB"
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.749+03:00 level=DEBUG source=gpu.go:714 msg="no filter required for library cpu"
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.749+03:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/nix/store/y7jkic6a3ybzh2n2cxhwas2f1hhmcvsb-ollama-0.5.4/bin/.ollama-wrapped runner --model /var/lib/ollama/models/blobs/sha256-234ea779a388b986a6c961440f175e75c7cd336f70614d7c5b043c09a5931ad7 --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --verbose --threads 6 --parallel 4 --port 35481"
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.749+03:00 level=DEBUG source=server.go:393 msg=subprocess environment="[CUDA_VISIBLE_DEVICES=0,1 LD_LIBRARY_PATH=.:/nix/store/y7jkic6a3ybzh2n2cxhwas2f1hhmcvsb-ollama-0.5.4/bin:/run/opengl-driver/lib:/nix/store/b5arfaz3xqrspi891903m9p66q4vhl5p-cuda_cudart-12.4.99-lib/lib:/nix/store/5x4cw7hfnky0cb4jhkrzv4gk25m82dg4-libcublas-12.4.2.65-lib/lib:/nix/store/apiqf75jalgdj4nvpnswn815x37r35bx-cuda_cccl-12.4.99/lib PATH=/nix/store/6wgd8c9vq93mqxzc7jhkl86mv6qbc360-coreutils-9.5/bin:/nix/store/r99d2m4swgmrv9jvm4l9di40hvanq1aq-findutils-4.10.0/bin:/nix/store/vniy1y5n8g28c55y7788npwc4h09fh7c-gnugrep-3.11/bin:/nix/store/yq39xdwm4z0fhx7dsm8mlpgvcz3vbfg3-gnused-4.9/bin:/nix/store/bl5dgjbbr9y4wpdw6k959mkq4ig0jwyg-systemd-256.10/bin:/nix/store/6wgd8c9vq93mqxzc7jhkl86mv6qbc360-coreutils-9.5/sbin:/nix/store/r99d2m4swgmrv9jvm4l9di40hvanq1aq-findutils-4.10.0/sbin:/nix/store/vniy1y5n8g28c55y7788npwc4h09fh7c-gnugrep-3.11/sbin:/nix/store/yq39xdwm4z0fhx7dsm8mlpgvcz3vbfg3-gnused-4.9/sbin:/nix/store/bl5dgjbbr9y4wpdw6k959mkq4ig0jwyg-systemd-256.10/sbin]"
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.750+03:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.750+03:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.750+03:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.764+03:00 level=INFO source=runner.go:945 msg="starting go runner"
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.764+03:00 level=INFO source=runner.go:946 msg=system info="CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=6
Jan 28 07:32:59 maunix ollama[1125741]: time=2025-01-28T07:32:59.765+03:00 level=INFO source=runner.go:1004 msg="Server listening on 127.0.0.1:35481"
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: loaded meta data with 25 key-value pairs and 339 tensors from /var/lib/ollama/models/blobs/sha256-234ea779a388b986a6c961440f175e75c7cd336f70614d7c5b043c09a5931ad7 (version GGUF V3 (latest))
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv   0:                       general.architecture str              = qwen2
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv   1:                               general.type str              = model
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv   2:                               general.name str              = Macro O1
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv   3:                       general.organization str              = AIDC AI
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv   4:                         general.size_label str              = 7.6B
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv   5:                            general.license str              = apache-2.0
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv   6:                          qwen2.block_count u32              = 28
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv   7:                       qwen2.context_length u32              = 32768
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv   8:                     qwen2.embedding_length u32              = 3584
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv   9:                  qwen2.feed_forward_length u32              = 18944
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv  10:                 qwen2.attention.head_count u32              = 28
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv  11:              qwen2.attention.head_count_kv u32              = 4
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv  12:                       qwen2.rope.freq_base f32              = 1000000.000000
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv  13:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv  14:                          general.file_type u32              = 15
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {% for message in messages %}{% if lo...
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - kv  24:               general.quantization_version u32              = 2
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - type  f32:  141 tensors
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - type q4_K:  169 tensors
Jan 28 07:32:59 maunix ollama[1125741]: llama_model_loader: - type q6_K:   29 tensors
Jan 28 07:33:00 maunix ollama[1125741]: time=2025-01-28T07:33:00.002+03:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: control token: 151660 '<|fim_middle|>' is not marked as EOG
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: control token: 151659 '<|fim_prefix|>' is not marked as EOG
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: control token: 151653 '<|vision_end|>' is not marked as EOG
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: control token: 151648 '<|box_start|>' is not marked as EOG
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: control token: 151646 '<|object_ref_start|>' is not marked as EOG
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: control token: 151649 '<|box_end|>' is not marked as EOG
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: control token: 151655 '<|image_pad|>' is not marked as EOG
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: control token: 151651 '<|quad_end|>' is not marked as EOG
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: control token: 151647 '<|object_ref_end|>' is not marked as EOG
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: control token: 151652 '<|vision_start|>' is not marked as EOG
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: control token: 151654 '<|vision_pad|>' is not marked as EOG
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: control token: 151656 '<|video_pad|>' is not marked as EOG
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: control token: 151644 '<|im_start|>' is not marked as EOG
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: control token: 151661 '<|fim_suffix|>' is not marked as EOG
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: control token: 151650 '<|quad_start|>' is not marked as EOG
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: special tokens cache size = 22
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_vocab: token to piece cache size = 0.9310 MB
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: format           = GGUF V3 (latest)
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: arch             = qwen2
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: vocab type       = BPE
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_vocab          = 152064
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_merges         = 151387
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: vocab_only       = 0
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_ctx_train      = 32768
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_embd           = 3584
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_layer          = 28
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_head           = 28
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_head_kv        = 4
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_rot            = 128
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_swa            = 0
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_embd_head_k    = 128
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_embd_head_v    = 128
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_gqa            = 7
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_embd_k_gqa     = 512
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_embd_v_gqa     = 512
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_ff             = 18944
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_expert         = 0
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_expert_used    = 0
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: causal attn      = 1
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: pooling type     = 0
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: rope type        = 2
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: rope scaling     = linear
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: freq_base_train  = 1000000.0
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: freq_scale_train = 1
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: n_ctx_orig_yarn  = 32768
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: rope_finetuned   = unknown
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: ssm_d_conv       = 0
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: ssm_d_inner      = 0
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: ssm_d_state      = 0
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: ssm_dt_rank      = 0
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: model type       = 7B
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: model ftype      = Q4_K - Medium
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: model params     = 7.62 B
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: model size       = 4.36 GiB (4.91 BPW)
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: general.name     = Macro O1
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: LF token         = 148848 'ÄĬ'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: EOG token        = 151643 '<|endoftext|>'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: EOG token        = 151645 '<|im_end|>'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
Jan 28 07:33:00 maunix ollama[1125741]: llm_load_print_meta: max token length = 256

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.4

GiteaMirror added the bug label 2026-05-04 11:08:19 -05:00

@rick-github commented on GitHub (Jan 28, 2025):

```
Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.844+03:00 level=DEBUG source=common.go:85 msg="no dynamic runners detected, using only built-in"
Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.844+03:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners=[cpu]
```

You don't have any GPU enabled runners. How did you install ollama?
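For reference, a quick way to check which runner libraries were detected, and to reinstall so the GPU runners get pulled in again (a sketch assuming a standard systemd install; adjust for your setup):

```
# The startup log should list more than just the CPU runner on a GPU-enabled install.
journalctl -u ollama --no-pager | grep "Dynamic LLM libraries"

# Re-running the official installer normally restores the CUDA runners when an
# NVIDIA driver is present; restart the service afterwards.
curl -fsSL https://ollama.com/install.sh | sh
sudo systemctl restart ollama
```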


@cleverunit commented on GitHub (Jan 28, 2025):

I encountered the same issue.
I installed ollama using the command `curl -fsSL https://ollama.com/install.sh | sh`.

When I ran `ollama serve`, the log was:
+--------------------------------------------------AutoDL--------------------------------------------------------+
root@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# ollama serve
2025/01/28 17:08:57 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:6006 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-01-28T17:08:57.364+08:00 level=INFO source=images.go:432 msg="total blobs: 5"
time=2025-01-28T17:08:57.364+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.

  • using env: export GIN_MODE=release
  • using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-01-28T17:08:57.365+08:00 level=INFO source=routes.go:1238 msg="Listening on [::]:6006 (version 0.5.7)"
time=2025-01-28T17:08:57.365+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners=[cpu]
time=2025-01-28T17:08:57.365+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-28T17:08:57.661+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-85fa3824-d8bd-9487-b237-e98ece392daf library=cuda variant=v12 compute=7.5 driver=12.4 name="NVIDIA GeForce RTX 2080 Ti" total="21.7 GiB" available="21.5 GiB"

[GIN] 2025/01/28 - 17:09:57 | 200 | 113.835µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/01/28 - 17:09:57 | 200 | 338.765µs | 127.0.0.1 | GET "/api/ps"
[GIN] 2025/01/28 - 17:10:04 | 200 | 65.109µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/01/28 - 17:10:04 | 200 | 42.647µs | 127.0.0.1 | GET "/api/ps"
[GIN] 2025/01/28 - 17:10:32 | 200 | 49.28µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/01/28 - 17:10:32 | 200 | 43.033144ms | 127.0.0.1 | POST "/api/show"
time=2025-01-28T17:10:32.960+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e gpu=GPU-85fa3824-d8bd-9487-b237-e98ece392daf parallel=4 available=23097704448 required="10.8 GiB"
time=2025-01-28T17:10:33.104+08:00 level=INFO source=server.go:104 msg="system memory" total="440.5 GiB" free="337.6 GiB" free_swap="0 B"
time=2025-01-28T17:10:33.105+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[21.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.8 GiB" memory.required.partial="10.8 GiB" memory.required.kv="1.5 GiB" memory.required.allocations="[10.8 GiB]" memory.weights.total="8.9 GiB" memory.weights.repeating="8.3 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="676.0 MiB" memory.graph.partial="916.1 MiB"
time=2025-01-28T17:10:33.105+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 48 --parallel 4 --port 45269"
time=2025-01-28T17:10:33.106+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-01-28T17:10:33.106+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-01-28T17:10:33.106+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-01-28T17:10:33.152+08:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-01-28T17:10:33.152+08:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=48
time=2025-01-28T17:10:33.153+08:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:45269"
llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 14B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv 4: general.size_label str = 14B
llama_model_loader: - kv 5: qwen2.block_count u32 = 48
llama_model_loader: - kv 6: qwen2.context_length u32 = 131072
llama_model_loader: - kv 7: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 13824
llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,152064] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 241 tensors
llama_model_loader: - type q4_K: 289 tensors
llama_model_loader: - type q6_K: 49 tensors
time=2025-01-28T17:10:33.358+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 5120
llm_load_print_meta: n_layer = 48
llm_load_print_meta: n_head = 40
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 5
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 13824
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 14B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 14.77 B
llm_load_print_meta: model size = 8.37 GiB (4.87 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Qwen 14B
llm_load_print_meta: BOS token = 151646 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: CPU_Mapped model buffer size = 8566.04 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
llama_kv_cache_init: CPU KV buffer size = 1536.00 MiB
llama_new_context_with_model: KV self size = 1536.00 MiB, K (f16): 768.00 MiB, V (f16): 768.00 MiB
llama_new_context_with_model: CPU output buffer size = 2.40 MiB
llama_new_context_with_model: CPU compute buffer size = 696.01 MiB
llama_new_context_with_model: graph nodes = 1686
llama_new_context_with_model: graph splits = 1
time=2025-01-28T17:10:38.883+08:00 level=INFO source=server.go:594 msg="llama runner started in 5.78 seconds"
[GIN] 2025/01/28 - 17:10:38 | 200 | 6.209691367s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/01/28 - 17:16:10 | 200 | 4m30s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/01/28 - 17:20:28 | 200 | 34.841µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/01/28 - 17:20:28 | 200 | 67.697µs | 127.0.0.1 | GET "/api/ps"
time=2025-01-28T17:21:15.888+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.182585985 model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e
time=2025-01-28T17:21:16.139+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.432914803 model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e
time=2025-01-28T17:21:16.388+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.681943502 model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e
^Croot@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# ^C
root@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# ^C
root@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# ^C
root@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# ^C
root@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# ^C
root@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# source /etc/profile
root@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# ollama serve
2025/01/28 17:25:54 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:6006 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-01-28T17:25:54.324+08:00 level=INFO source=images.go:432 msg="total blobs: 5"
time=2025-01-28T17:25:54.324+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.

  • using env: export GIN_MODE=release
  • using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-01-28T17:25:54.325+08:00 level=INFO source=routes.go:1238 msg="Listening on [::]:6006 (version 0.5.7)"
time=2025-01-28T17:25:54.325+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners=[cpu]
time=2025-01-28T17:25:54.325+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-28T17:25:54.603+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-85fa3824-d8bd-9487-b237-e98ece392daf library=cuda variant=v12 compute=7.5 driver=12.4 name="NVIDIA GeForce RTX 2080 Ti" total="21.7 GiB" available="21.5 GiB"
^Croot@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# ^C
root@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# ^C
root@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# ^C
root@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# ^C
root@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# ^C
root@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# ^C
root@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# export GIN_MODE=release
root@autodl-container-6d25459816-0bbc0370:/autodl-pub/data# ollama serve
2025/01/28 17:26:18 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:6006 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-01-28T17:26:18.836+08:00 level=INFO source=images.go:432 msg="total blobs: 5"
time=2025-01-28T17:26:18.836+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.

  • using env: export GIN_MODE=release
  • using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-01-28T17:26:18.837+08:00 level=INFO source=routes.go:1238 msg="Listening on [::]:6006 (version 0.5.7)"
time=2025-01-28T17:26:18.837+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners=[cpu]
time=2025-01-28T17:26:18.837+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-28T17:26:19.107+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-85fa3824-d8bd-9487-b237-e98ece392daf library=cuda variant=v12 compute=7.5 driver=12.4 name="NVIDIA GeForce RTX 2080 Ti" total="21.7 GiB" available="21.5 GiB"
[GIN] 2025/01/28 - 17:27:35 | 200 | 123.383µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/01/28 - 17:27:35 | 200 | 30.589508ms | 127.0.0.1 | POST "/api/show"
time=2025-01-28T17:27:36.066+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e gpu=GPU-85fa3824-d8bd-9487-b237-e98ece392daf parallel=4 available=23097704448 required="10.8 GiB"
time=2025-01-28T17:27:36.213+08:00 level=INFO source=server.go:104 msg="system memory" total="440.5 GiB" free="339.6 GiB" free_swap="0 B"
time=2025-01-28T17:27:36.216+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[21.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.8 GiB" memory.required.partial="10.8 GiB" memory.required.kv="1.5 GiB" memory.required.allocations="[10.8 GiB]" memory.weights.total="8.9 GiB" memory.weights.repeating="8.3 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="676.0 MiB" memory.graph.partial="916.1 MiB"
time=2025-01-28T17:27:36.217+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 48 --parallel 4 --port 42553"
time=2025-01-28T17:27:36.218+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-01-28T17:27:36.218+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-01-28T17:27:36.218+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-01-28T17:27:36.287+08:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-01-28T17:27:36.287+08:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=48
time=2025-01-28T17:27:36.287+08:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:42553"
llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 14B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv 4: general.size_label str = 14B
llama_model_loader: - kv 5: qwen2.block_count u32 = 48
llama_model_loader: - kv 6: qwen2.context_length u32 = 131072
llama_model_loader: - kv 7: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 13824
llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,152064] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 241 tensors
llama_model_loader: - type q4_K: 289 tensors
llama_model_loader: - type q6_K: 49 tensors
time=2025-01-28T17:27:36.470+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 5120
llm_load_print_meta: n_layer = 48
llm_load_print_meta: n_head = 40
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 5
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 13824
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 14B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 14.77 B
llm_load_print_meta: model size = 8.37 GiB (4.87 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Qwen 14B
llm_load_print_meta: BOS token = 151646 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: CPU_Mapped model buffer size = 8566.04 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
llama_kv_cache_init: CPU KV buffer size = 1536.00 MiB
llama_new_context_with_model: KV self size = 1536.00 MiB, K (f16): 768.00 MiB, V (f16): 768.00 MiB
llama_new_context_with_model: CPU output buffer size = 2.40 MiB
llama_new_context_with_model: CPU compute buffer size = 696.01 MiB
llama_new_context_with_model: graph nodes = 1686
llama_new_context_with_model: graph splits = 1
time=2025-01-28T17:27:38.731+08:00 level=INFO source=server.go:594 msg="llama runner started in 2.51 seconds"
[GIN] 2025/01/28 - 17:27:38 | 200 | 2.904614003s | 127.0.0.1 | POST "/api/generate"
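
Note that although the scheduler reported "offload to cuda" with layers.offload=49, every buffer in the load above is CPU-side (CPU_Mapped model buffer size, CPU KV buffer size), consistent with the runners=[cpu] line. A quick way to spot this (a sketch; `ollama-serve.log` is a hypothetical file holding the output of `ollama serve`):

```
# All-CPU buffers mean the model never actually landed on the GPU;
# a real GPU load shows CUDA-backed buffers here instead.
grep -i "buffer size" ollama-serve.log

# Cross-check live VRAM usage while the model is loaded.
nvidia-smi
```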

26 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = qwen2 llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 14B llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen llama_model_loader: - kv 4: general.size_label str = 14B llama_model_loader: - kv 5: qwen2.block_count u32 = 48 llama_model_loader: - kv 6: qwen2.context_length u32 = 131072 llama_model_loader: - kv 7: qwen2.embedding_length u32 = 5120 llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 13824 llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 40 llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 8 llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 1000000.000000 llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 13: general.file_type u32 = 15 llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2 llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646 llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643 llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643 llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de... 
llama_model_loader: - kv 25: general.quantization_version u32 = 2 llama_model_loader: - type f32: 241 tensors llama_model_loader: - type q4_K: 289 tensors llama_model_loader: - type q6_K: 49 tensors time=2025-01-28T17:27:36.470+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model" llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect llm_load_vocab: special tokens cache size = 22 llm_load_vocab: token to piece cache size = 0.9310 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = qwen2 llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 152064 llm_load_print_meta: n_merges = 151387 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 131072 llm_load_print_meta: n_embd = 5120 llm_load_print_meta: n_layer = 48 llm_load_print_meta: n_head = 40 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 5 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 13824 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 2 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 1000000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 131072 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: ssm_dt_b_c_rms = 0 llm_load_print_meta: model type = 14B llm_load_print_meta: model ftype = Q4_K - Medium llm_load_print_meta: model params = 14.77 B llm_load_print_meta: model size = 8.37 GiB (4.87 BPW) llm_load_print_meta: general.name = DeepSeek R1 Distill Qwen 14B llm_load_print_meta: BOS token = 151646 '<|begin▁of▁sentence|>' llm_load_print_meta: EOS token = 151643 '<|end▁of▁sentence|>' llm_load_print_meta: EOT token = 151643 '<|end▁of▁sentence|>' llm_load_print_meta: PAD token = 151643 '<|end▁of▁sentence|>' llm_load_print_meta: LF token = 148848 'ÄĬ' llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>' llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>' llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>' llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>' llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>' llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>' llm_load_print_meta: EOG token = 151643 '<|end▁of▁sentence|>' llm_load_print_meta: EOG token = 151662 '<|fim_pad|>' llm_load_print_meta: EOG token = 151663 '<|repo_name|>' llm_load_print_meta: EOG token = 151664 '<|file_sep|>' llm_load_print_meta: max token length = 256 llm_load_tensors: CPU_Mapped model buffer size = 8566.04 MiB llama_new_context_with_model: n_seq_max = 4 llama_new_context_with_model: n_ctx = 8192 llama_new_context_with_model: n_ctx_per_seq = 2048 llama_new_context_with_model: n_batch = 2048 llama_new_context_with_model: n_ubatch = 512 
llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 1000000.0 llama_new_context_with_model: freq_scale = 1 llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1 llama_kv_cache_init: CPU KV buffer size = 1536.00 MiB llama_new_context_with_model: KV self size = 1536.00 MiB, K (f16): 768.00 MiB, V (f16): 768.00 MiB llama_new_context_with_model: CPU output buffer size = 2.40 MiB llama_new_context_with_model: CPU compute buffer size = 696.01 MiB llama_new_context_with_model: graph nodes = 1686 llama_new_context_with_model: graph splits = 1 time=2025-01-28T17:27:38.731+08:00 level=INFO source=server.go:594 msg="llama runner started in 2.51 seconds" [GIN] 2025/01/28 - 17:27:38 | 200 | 2.904614003s | 127.0.0.1 | POST "/api/generate"
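
The giveaway in the log above is the mismatch between the scheduler and the runner: the scheduler plans to offload all 49 layers (msg="offload to cuda"), but the runner's system info reports only CPU features and every buffer is allocated host-side (CPU_Mapped model buffer, CPU KV buffer, CPU compute buffer), because only the cpu runner is available (runners=[cpu]). A quick way to pull the tell-tale lines out of a serve log (serve.log here is just a placeholder for wherever the output was captured, e.g. ollama serve 2>&1 | tee serve.log):

$ grep -E 'Dynamic LLM libraries|offload to cuda|model buffer size|KV buffer size|compute buffer size' serve.log

If the buffer lines say CPU/CPU_Mapped rather than CUDA0 and the runners list contains only cpu, inference runs on the CPU no matter what ollama ps reports.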

@rick-github commented on GitHub (Jan 28, 2025):

What's the result of:

command -v ollama
ls $(dirname $(dirname $(command -v ollama)))/lib/ollama/runners
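
On an install with working GPU support, that runners directory normally contains a CUDA (or ROCm) entry alongside the CPU variants. A hypothetical healthy result, assuming a standard Linux install under /usr/local (exact runner names vary by ollama version and build):

$ command -v ollama
/usr/local/bin/ollama
$ ls $(dirname $(dirname $(command -v ollama)))/lib/ollama/runners
cpu_avx  cpu_avx2  cuda_v12_avx

If no cuda_* (or rocm_*) entry shows up, or the directory is missing entirely, the server can only fall back to the built-in CPU runner.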

@akamaus commented on GitHub (Jan 28, 2025):

> Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.844+03:00 level=DEBUG source=common.go:85 msg="no dynamic runners detected, using only built-in"
> Jan 28 07:32:50 maunix ollama[1125741]: time=2025-01-28T07:32:50.844+03:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners=[cpu]
>
> You don't have any GPU enabled runners. How did you install ollama?

Oh, looks like that was indeed the case. I installed ollama as a service on nixos-unstable, and it looks like its build expression was buggy at the time. I updated nixpkgs and now I see the runners in place. The GPUs are utilized too.

# ls /nix/store/v4q2igd5rw2l6407bn8ldlhb3wk6r4pl-ollama-0.5.7/lib/ollama/runners/
cuda_v12_avx

Thanks for pointing me straight at the problem. I must say, the logs are a bit misleading: lots of verbose chatter about CUDA being detected and all, and only an innocent notice about the absence of dynamic runners.
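
For anyone else hitting this: a quick way to check which runners a given install registers at startup, assuming ollama runs as a systemd service as above, is to grep the journal for that one line:

$ journalctl -u ollama -b | grep "Dynamic LLM libraries"

runners=[cpu] means CPU-only inference; a GPU-capable install should also list a cuda or rocm runner there.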

By the way, I still see the following:

Jan 28 17:41:41 maunix ollama[1266758]: time=2025-01-28T17:41:41.144+03:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
Jan 28 17:41:41 maunix ollama[1266758]: time=2025-01-28T17:41:41.145+03:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
Jan 28 17:41:41 maunix ollama[1266758]: time=2025-01-28T17:41:41.145+03:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
Jan 28 17:41:41 maunix ollama[1266758]: time=2025-01-28T17:41:41.369+03:00 level=INFO source=runner.go:936 msg="starting go runner"
Jan 28 17:41:41 maunix ollama[1266758]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Jan 28 17:41:41 maunix ollama[1266758]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Jan 28 17:41:41 maunix ollama[1266758]: ggml_cuda_init: found 1 CUDA devices:
Jan 28 17:41:41 maunix ollama[1266758]:   Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes
Jan 28 17:41:41 maunix ollama[1266758]: time=2025-01-28T17:41:41.379+03:00 level=INFO source=runner.go:937 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=6
Jan 28 17:41:41 maunix ollama[1266758]: time=2025-01-28T17:41:41.379+03:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:39945"
Jan 28 17:41:41 maunix ollama[1266758]: time=2025-01-28T17:41:41.396+03:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
Jan 28 17:41:41 maunix ollama[1266758]: llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce GTX 1080 Ti) - 11015 MiB free

what does status="llm server loading model" mean?


@rick-github commented on GitHub (Jan 28, 2025):

> what does status="llm server loading model" mean?

The ollama server, the bit that responds to the API requests, starts the runner, or llm server, which does the actual inference. When the llm server starts, it loads the model weights, the artificial neurons of the LLM, into the storage area that will be computed on, in your case the GTX 1080.
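
In other words, the status field just tracks that startup sequence, and it can be watched live in the journal, assuming the same systemd service as above:

$ journalctl -u ollama -f | grep -E 'waiting for server to become available|llama runner started'

"llm server error" appears briefly while the runner process has not answered yet, "llm server loading model" means the runner is up and copying the weights into GPU/CPU memory, and "llama runner started in ... seconds" means the load finished and requests can be served.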

Reference: github-starred/ollama#67637