[GH-ISSUE #7597] detect missing GPU runners and don't report incorrect GPU info/logs #66901

Open
opened 2026-05-04 08:44:48 -05:00 by GiteaMirror · 20 comments

Originally created by @kaleocheng on GitHub (Nov 10, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7597

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

$ ollama -v 
ollama version is 0.4.1

$ ollama  run llama3.2-vision:latest
$ ollama ps 
NAME                      ID              SIZE     PROCESSOR    UNTIL              
llama3.2-vision:latest    38107a0cd119    12 GB    100% GPU     2 minutes from now    

from the logs, it also says ollama is offloading to cuda:

ollama[1773]: [GIN] 2024/11/10 - 21:32:56 | 200 |   22.078108ms |       127.0.0.1 | POST     "/api/show"
ollama[1773]: time=2024-11-10T21:32:56.205+08:00 level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
ollama[1773]: time=2024-11-10T21:32:56.342+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/var/lib/ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 gpu=GPU-957abb1f-e95c-db43-ee81-b345b6e60491 parallel=1 available=16139026432 required="11.3 GiB"
ollama[1773]: time=2024-11-10T21:32:56.440+08:00 level=INFO source=server.go:105 msg="system memory" total="15.4 GiB" free="11.3 GiB" free_swap="12.2 GiB"
ollama[1773]: time=2024-11-10T21:32:56.442+08:00 level=INFO source=memory.go:343 msg="offload to cuda" projector.weights="1.8 GiB" projector.graph="2.8 GiB" layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[15.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.3 GiB" memory.required.partial="11.3 GiB" memory.required.kv="656.2 MiB" memory.required.allocations="[11.3 GiB]" memory.weights.total="5.5 GiB" memory.weights.repeating="5.1 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="669.5 MiB"
ollama[1773]: time=2024-11-10T21:32:56.443+08:00 level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama1704822012/runners/cpu_avx2/ollama_llama_server --model /var/lib/ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 --ctx-size 2048 --batch-size 512 --n-gpu-layers 41 --mmproj /var/lib/ollama/models/blobs/sha256-ece5e659647a20a5c28ab9eea1c12a1ad430bc0f2a27021d00ad103b3bf5206f --threads 6 --no-mmap --parallel 1 --port 40225"
ollama[1773]: time=2024-11-10T21:32:56.443+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
ollama[1773]: time=2024-11-10T21:32:56.443+08:00 level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
ollama[1773]: time=2024-11-10T21:32:56.444+08:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
ollama[1773]: time=2024-11-10T21:32:56.446+08:00 level=INFO source=runner.go:863 msg="starting go runner"
ollama[1773]: time=2024-11-10T21:32:56.446+08:00 level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=6
ollama[1773]: time=2024-11-10T21:32:56.446+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:40225"
ollama[1773]: llama_model_loader: loaded meta data with 27 key-value pairs and 396 tensors from /var/lib/ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 (version GGUF V3 (latest))

but nvidia-smi shows nothing there:

$  nvidia-smi 
Sun Nov 10 21:38:22 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03              Driver Version: 560.35.03      CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4060 Ti     Off |   00000000:01:00.0  On |                  N/A |
|  0%   35C    P8             14W /  165W |     498MiB /  16380MiB |      7%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      2054      G   ...nim4annni-xorg-server-21.1.13/bin/X        252MiB |
|    0   N/A  N/A      3315      G   ...bcvgsdr9v5mjmr-picom-12.3/bin/picom         94MiB |
|    0   N/A  N/A     10451      G   ...irefox-132.0.1/bin/.firefox-wrapped        118MiB |
+-----------------------------------------------------------------------------------------+
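
One way to double-check which runner is actually serving the model (a hypothetical session; the PID is illustrative) is to look at the running process path. Here it matches the cpu_avx2 runner from the "starting llama server" line above, not a cuda runner:

$ pgrep -af ollama_llama_server
1785 /tmp/ollama1704822012/runners/cpu_avx2/ollama_llama_server --model /var/lib/ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 --ctx-size 2048 --batch-size 512 --n-gpu-layers 41 --mmproj /var/lib/ollama/models/blobs/sha256-ece5e659647a20a5c28ab9eea1c12a1ad430bc0f2a27021d00ad103b3bf5206f --threads 6 --no-mmap --parallel 1 --port 40225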

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.4.1

GiteaMirror added the feature request label 2026-05-04 08:44:48 -05:00

@rick-github commented on GitHub (Nov 10, 2024):

Post full server log.


@kaleocheng commented on GitHub (Nov 10, 2024):

this is the full server log:

systemd[1]: Started Server for local large language models.
ollama[1667]: 2024/11/10 22:38:56 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:8100 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/lib/ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
ollama[1667]: time=2024-11-10T22:38:56.217+08:00 level=INFO source=images.go:755 msg="total blobs: 15"
ollama[1667]: time=2024-11-10T22:38:56.217+08:00 level=INFO source=images.go:762 msg="total unused blobs removed: 0"
ollama[1667]: time=2024-11-10T22:38:56.219+08:00 level=INFO source=routes.go:1240 msg="Listening on 127.0.0.1:8100 (version 0.4.1)"
ollama[1667]: time=2024-11-10T22:38:56.221+08:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama3247001074/runners
ollama[1667]: time=2024-11-10T22:38:56.281+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2]"
ollama[1667]: time=2024-11-10T22:38:56.281+08:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
ollama[1667]: time=2024-11-10T22:38:56.282+08:00 level=WARN source=gpu.go:732 msg="unable to locate gpu dependency libraries"
ollama[1667]: time=2024-11-10T22:38:56.282+08:00 level=WARN source=gpu.go:732 msg="unable to locate gpu dependency libraries"
ollama[1667]: time=2024-11-10T22:38:56.282+08:00 level=WARN source=gpu.go:732 msg="unable to locate gpu dependency libraries"
ollama[1667]: time=2024-11-10T22:38:56.455+08:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-957abb1f-e95c-db43-ee81-b345b6e60491 library=cuda variant=v12 compute=8.9 driver=12.6 name="NVIDIA GeForce RTX 4060 Ti" total="15.6 GiB" available="15.5 GiB"
ollama[1667]: [GIN] 2024/11/10 - 22:40:25 | 200 |     410.662µs |       127.0.0.1 | HEAD     "/"
ollama[1667]: [GIN] 2024/11/10 - 22:40:25 | 200 |   31.601568ms |       127.0.0.1 | POST     "/api/show"
ollama[1667]: time=2024-11-10T22:40:25.907+08:00 level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
ollama[1667]: time=2024-11-10T22:40:26.042+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/var/lib/ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 gpu=GPU-957abb1f-e95c-db43-ee81-b345b6e60491 parallel=1 available=16307781632 required="11.3 GiB"
ollama[1667]: time=2024-11-10T22:40:26.141+08:00 level=INFO source=server.go:105 msg="system memory" total="15.4 GiB" free="9.0 GiB" free_swap="17.0 GiB"
ollama[1667]: time=2024-11-10T22:40:26.146+08:00 level=INFO source=memory.go:343 msg="offload to cuda" projector.weights="1.8 GiB" projector.graph="2.8 GiB" layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[15.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.3 GiB" memory.required.partial="11.3 GiB" memory.required.kv="656.2 MiB" memory.required.allocations="[11.3 GiB]" memory.weights.total="5.5 GiB" memory.weights.repeating="5.1 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="669.5 MiB"
ollama[1667]: time=2024-11-10T22:40:26.146+08:00 level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama3247001074/runners/cpu_avx2/ollama_llama_server --model /var/lib/ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 --ctx-size 2048 --batch-size 512 --n-gpu-layers 41 --mmproj /var/lib/ollama/models/blobs/sha256-ece5e659647a20a5c28ab9eea1c12a1ad430bc0f2a27021d00ad103b3bf5206f --threads 6 --no-mmap --parallel 1 --port 44021"
ollama[1667]: time=2024-11-10T22:40:26.147+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
ollama[1667]: time=2024-11-10T22:40:26.147+08:00 level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
ollama[1667]: time=2024-11-10T22:40:26.147+08:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
ollama[1667]: time=2024-11-10T22:40:26.149+08:00 level=INFO source=runner.go:863 msg="starting go runner"
ollama[1667]: time=2024-11-10T22:40:26.149+08:00 level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=6
ollama[1667]: time=2024-11-10T22:40:26.149+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:44021"
ollama[1667]: llama_model_loader: loaded meta data with 27 key-value pairs and 396 tensors from /var/lib/ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 (version GGUF V3 (latest))
ollama[1667]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama[1667]: llama_model_loader: - kv   0:                       general.architecture str              = mllama
ollama[1667]: llama_model_loader: - kv   1:                               general.type str              = model
ollama[1667]: llama_model_loader: - kv   2:                               general.name str              = Model
ollama[1667]: llama_model_loader: - kv   3:                         general.size_label str              = 10B
ollama[1667]: llama_model_loader: - kv   4:                         mllama.block_count u32              = 40
ollama[1667]: llama_model_loader: - kv   5:                      mllama.context_length u32              = 131072
ollama[1667]: llama_model_loader: - kv   6:                    mllama.embedding_length u32              = 4096
ollama[1667]: llama_model_loader: - kv   7:                 mllama.feed_forward_length u32              = 14336
ollama[1667]: llama_model_loader: - kv   8:                mllama.attention.head_count u32              = 32
ollama[1667]: llama_model_loader: - kv   9:             mllama.attention.head_count_kv u32              = 8
ollama[1667]: llama_model_loader: - kv  10:                      mllama.rope.freq_base f32              = 500000.000000
ollama[1667]: llama_model_loader: - kv  11:    mllama.attention.layer_norm_rms_epsilon f32              = 0.000010
ollama[1667]: llama_model_loader: - kv  12:                          general.file_type u32              = 15
ollama[1667]: llama_model_loader: - kv  13:                          mllama.vocab_size u32              = 128256
ollama[1667]: llama_model_loader: - kv  14:                mllama.rope.dimension_count u32              = 128
ollama[1667]: llama_model_loader: - kv  15:    mllama.attention.cross_attention_layers arr[i32,8]       = [3, 8, 13, 18, 23, 28, 33, 38]
ollama[1667]: llama_model_loader: - kv  16:               tokenizer.ggml.add_bos_token bool             = true
ollama[1667]: llama_model_loader: - kv  17:                       tokenizer.ggml.model str              = gpt2
ollama[1667]: llama_model_loader: - kv  18:                         tokenizer.ggml.pre str              = llama-bpe
ollama[1667]: llama_model_loader: - kv  19:                      tokenizer.ggml.tokens arr[str,128257]  = ["!", "\"", "#", "$", "%", "&", "'", ...
ollama[1667]: llama_model_loader: - kv  20:                  tokenizer.ggml.token_type arr[i32,128257]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
ollama[1667]: llama_model_loader: - kv  21:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
ollama[1667]: llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 128000
ollama[1667]: llama_model_loader: - kv  23:                tokenizer.ggml.eos_token_id u32              = 128009
ollama[1667]: llama_model_loader: - kv  24:            tokenizer.ggml.padding_token_id u32              = 128004
ollama[1667]: llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
ollama[1667]: llama_model_loader: - kv  26:               general.quantization_version u32              = 2
ollama[1667]: llama_model_loader: - type  f32:  114 tensors
ollama[1667]: llama_model_loader: - type q4_K:  245 tensors
ollama[1667]: llama_model_loader: - type q6_K:   37 tensors
ollama[1667]: llm_load_vocab: special tokens cache size = 257
ollama[1667]: time=2024-11-10T22:40:26.398+08:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
ollama[1667]: llm_load_vocab: token to piece cache size = 0.7999 MB
ollama[1667]: llm_load_print_meta: format           = GGUF V3 (latest)
ollama[1667]: llm_load_print_meta: arch             = mllama
ollama[1667]: llm_load_print_meta: vocab type       = BPE
ollama[1667]: llm_load_print_meta: n_vocab          = 128256
ollama[1667]: llm_load_print_meta: n_merges         = 280147
ollama[1667]: llm_load_print_meta: vocab_only       = 0
ollama[1667]: llm_load_print_meta: n_ctx_train      = 131072
ollama[1667]: llm_load_print_meta: n_embd           = 4096
ollama[1667]: llm_load_print_meta: n_layer          = 40
ollama[1667]: llm_load_print_meta: n_head           = 32
ollama[1667]: llm_load_print_meta: n_head_kv        = 8
ollama[1667]: llm_load_print_meta: n_rot            = 128
ollama[1667]: llm_load_print_meta: n_swa            = 0
ollama[1667]: llm_load_print_meta: n_embd_head_k    = 128
ollama[1667]: llm_load_print_meta: n_embd_head_v    = 128
ollama[1667]: llm_load_print_meta: n_gqa            = 4
ollama[1667]: llm_load_print_meta: n_embd_k_gqa     = 1024
ollama[1667]: llm_load_print_meta: n_embd_v_gqa     = 1024
ollama[1667]: llm_load_print_meta: f_norm_eps       = 0.0e+00
ollama[1667]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
ollama[1667]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
ollama[1667]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
ollama[1667]: llm_load_print_meta: f_logit_scale    = 0.0e+00
ollama[1667]: llm_load_print_meta: n_ff             = 14336
ollama[1667]: llm_load_print_meta: n_expert         = 0
ollama[1667]: llm_load_print_meta: n_expert_used    = 0
ollama[1667]: llm_load_print_meta: causal attn      = 1
ollama[1667]: llm_load_print_meta: pooling type     = 0
ollama[1667]: llm_load_print_meta: rope type        = 0
ollama[1667]: llm_load_print_meta: rope scaling     = linear
ollama[1667]: llm_load_print_meta: freq_base_train  = 500000.0
ollama[1667]: llm_load_print_meta: freq_scale_train = 1
ollama[1667]: llm_load_print_meta: n_ctx_orig_yarn  = 131072
ollama[1667]: llm_load_print_meta: rope_finetuned   = unknown
ollama[1667]: llm_load_print_meta: ssm_d_conv       = 0
ollama[1667]: llm_load_print_meta: ssm_d_inner      = 0
ollama[1667]: llm_load_print_meta: ssm_d_state      = 0
ollama[1667]: llm_load_print_meta: ssm_dt_rank      = 0
ollama[1667]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
ollama[1667]: llm_load_print_meta: model type       = 11B
ollama[1667]: llm_load_print_meta: model ftype      = Q4_K - Medium
ollama[1667]: llm_load_print_meta: model params     = 9.78 B
ollama[1667]: llm_load_print_meta: model size       = 5.55 GiB (4.87 BPW)
ollama[1667]: llm_load_print_meta: general.name     = Model
ollama[1667]: llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
ollama[1667]: llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
ollama[1667]: llm_load_print_meta: PAD token        = 128004 '<|finetune_right_pad_id|>'
ollama[1667]: llm_load_print_meta: LF token         = 128 'Ä'
ollama[1667]: llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
ollama[1667]: llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
ollama[1667]: llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
ollama[1667]: llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
ollama[1667]: llm_load_print_meta: max token length = 256
ollama[1667]: llama_model_load: vocab mismatch 128256 !- 128257 ...
ollama[1667]: llm_load_tensors: ggml ctx size =    0.18 MiB
ollama[1667]: llm_load_tensors:        CPU buffer size =  5679.34 MiB
ollama[1667]: llama_new_context_with_model: n_ctx      = 2048
ollama[1667]: llama_new_context_with_model: n_batch    = 512
ollama[1667]: llama_new_context_with_model: n_ubatch   = 512
ollama[1667]: llama_new_context_with_model: flash_attn = 0
ollama[1667]: llama_new_context_with_model: freq_base  = 500000.0
ollama[1667]: llama_new_context_with_model: freq_scale = 1
ollama[1667]: llama_kv_cache_init:        CPU KV buffer size =   656.25 MiB
ollama[1667]: llama_new_context_with_model: KV self size  =  656.25 MiB, K (f16):  328.12 MiB, V (f16):  328.12 MiB
ollama[1667]: llama_new_context_with_model:        CPU  output buffer size =     0.50 MiB
ollama[1667]: llama_new_context_with_model:        CPU compute buffer size =   258.50 MiB
ollama[1667]: llama_new_context_with_model: graph nodes  = 1030
ollama[1667]: llama_new_context_with_model: graph splits = 1
ollama[1667]: mllama_model_load: model name:   Llama-3.2-11B-Vision-Instruct
ollama[1667]: mllama_model_load: description:  vision encoder for Mllama
ollama[1667]: mllama_model_load: GGUF version: 3
ollama[1667]: mllama_model_load: alignment:    32
ollama[1667]: mllama_model_load: n_tensors:    512
ollama[1667]: mllama_model_load: n_kv:         17
ollama[1667]: mllama_model_load: ftype:        f16
ollama[1667]: mllama_model_load:
ollama[1667]: mllama_model_load: vision using CPU backend
ollama[1667]: mllama_model_load: compute allocated memory: 2853.34 MB
ollama[1667]: time=2024-11-10T22:40:31.424+08:00 level=INFO source=server.go:601 msg="llama runner started in 5.27 seconds"
ollama[1667]: [GIN] 2024/11/10 - 22:40:31 | 200 |  5.534962907s |       127.0.0.1 | POST     "/api/generate"
ollama[1667]: [GIN] 2024/11/10 - 22:40:41 | 200 |     762.152µs |       127.0.0.1 | HEAD     "/"
ollama[1667]: [GIN] 2024/11/10 - 22:40:41 | 200 |     620.835µs |       127.0.0.1 | GET      "/api/ps"


@FinoVM commented on GitHub (Nov 10, 2024):

ollama[1667]: time=2024-11-10T22:38:56.282+08:00 level=WARN source=gpu.go:732 msg="unable to locate gpu dependency libraries"

That bug should have been fixed in 0.4.1.


@rick-github commented on GitHub (Nov 10, 2024):

GPU detected, but there's no runner to use it:

ollama[1667]: time=2024-11-10T22:38:56.281+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2]"
ollama[1667]: time=2024-11-10T22:38:56.455+08:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-957abb1f-e95c-db43-ee81-b345b6e60491 library=cuda variant=v12 compute=8.9 driver=12.6 name="NVIDIA GeForce RTX 4060 Ti" total="15.6 GiB" available="15.5 GiB"

What's the output of:

find /tmp/ollama3247001074/

How did you install ollama?
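
On a working install you'd expect a cuda_v12 runner directory next to the CPU ones, something like this (illustrative output, not from your machine):

find /tmp/ollama3247001074/runners -maxdepth 1 -type d
/tmp/ollama3247001074/runners
/tmp/ollama3247001074/runners/cpu
/tmp/ollama3247001074/runners/cpu_avx
/tmp/ollama3247001074/runners/cpu_avx2
/tmp/ollama3247001074/runners/cuda_v12

If cuda_v12 is missing from that tree, the build never produced a GPU runner to extract.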


@mastoca commented on GitHub (Nov 10, 2024):

@rick-github he's also a Nix user who tried a similar approach to mine last night. I'm still in the same situation, where it appears the CUDA libs aren't being linked in. I'm currently trying to switch to CUDA 11 to see if there's a difference. The nixpkgs PR being worked on is here: https://github.com/NixOS/nixpkgs/pull/354969

EDIT: CUDA 11 is also a no-go. :) Additionally, I set up and tested the Docker build of 0.4.1 and, with the proper config to allow GPU access, it fully worked on my NixOS; however, I don't want to run it dockerized. I wonder if the documentation on the developer build page (which was updated recently) is still missing some flag or setting we need for CUDA and ROCm.


@kaleocheng commented on GitHub (Nov 11, 2024):

Yes, I installed it using the Nixpkgs PR mentioned above. Essentially it follows the steps from the Ollama development guide for Linux (https://github.com/ollama/ollama/blob/v0.4.1/docs/development.md#linux):

make -j 5
go build . 

It also injects the CUDA lib paths like this:

LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+':'$LD_LIBRARY_PATH':'}
if [[ $LD_LIBRARY_PATH != *':''/run/opengl-driver/lib'':'* ]]; then
    LD_LIBRARY_PATH=$LD_LIBRARY_PATH'/run/opengl-driver/lib'
fi
LD_LIBRARY_PATH=${LD_LIBRARY_PATH#':'}
LD_LIBRARY_PATH=${LD_LIBRARY_PATH%':'}
export LD_LIBRARY_PATH
LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+':'$LD_LIBRARY_PATH':'}
if [[ $LD_LIBRARY_PATH != *':''/nix/store/7a5ss8a7sakx3lr58j8c6fmqgdmyxpg0-cuda_cudart-12.4.99-lib/lib'':'* ]]; then
    LD_LIBRARY_PATH=$LD_LIBRARY_PATH'/nix/store/7a5ss8a7sakx3lr58j8c6fmqgdmyxpg0-cuda_cudart-12.4.99-lib/lib'
fi
LD_LIBRARY_PATH=${LD_LIBRARY_PATH#':'}
LD_LIBRARY_PATH=${LD_LIBRARY_PATH%':'}
export LD_LIBRARY_PATH
LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+':'$LD_LIBRARY_PATH':'}
if [[ $LD_LIBRARY_PATH != *':''/nix/store/scyk8cyngav231czzdm2yk6964k7qfhg-libcublas-12.4.2.65-lib/lib'':'* ]]; then
    LD_LIBRARY_PATH=$LD_LIBRARY_PATH'/nix/store/scyk8cyngav231czzdm2yk6964k7qfhg-libcublas-12.4.2.65-lib/lib'
fi
LD_LIBRARY_PATH=${LD_LIBRARY_PATH#':'}
LD_LIBRARY_PATH=${LD_LIBRARY_PATH%':'}
export LD_LIBRARY_PATH
LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+':'$LD_LIBRARY_PATH':'}
if [[ $LD_LIBRARY_PATH != *':''/nix/store/qphb3m7an3d0i1wv5wzcf6418f3rpv7i-cuda_cccl-12.4.99/lib'':'* ]]; then
    LD_LIBRARY_PATH=$LD_LIBRARY_PATH'/nix/store/qphb3m7an3d0i1wv5wzcf6418f3rpv7i-cuda_cccl-12.4.99/lib'
fi
LD_LIBRARY_PATH=${LD_LIBRARY_PATH#':'}
LD_LIBRARY_PATH=${LD_LIBRARY_PATH%':'}
export LD_LIBRARY_PATH
exec -a "$0" "/nix/store/vd6jib79sbciq74qrj9jrln7mw0pn05h-ollama-0.4.1/bin/.ollama-wrapped"  "$@"

(.ollama-wrapped is the original Ollama binary.)
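
The repeated blocks above boil down to this equivalent sketch (same effect, written once, not the generated wrapper itself): append each directory to LD_LIBRARY_PATH unless it's already there, then exec the real binary:

# Equivalent sketch of the generated wrapper above.
for dir in \
    /run/opengl-driver/lib \
    /nix/store/7a5ss8a7sakx3lr58j8c6fmqgdmyxpg0-cuda_cudart-12.4.99-lib/lib \
    /nix/store/scyk8cyngav231czzdm2yk6964k7qfhg-libcublas-12.4.2.65-lib/lib \
    /nix/store/qphb3m7an3d0i1wv5wzcf6418f3rpv7i-cuda_cccl-12.4.99/lib
do
    case ":$LD_LIBRARY_PATH:" in
        *":$dir:"*) ;;  # already on the path, skip
        *) LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$dir" ;;
    esac
done
export LD_LIBRARY_PATH
exec -a "$0" "/nix/store/vd6jib79sbciq74qrj9jrln7mw0pn05h-ollama-0.4.1/bin/.ollama-wrapped" "$@"

Note that this only affects runtime library lookup; it can't add GPU runners that were never built, which turned out to be the real problem (see the next comments).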


@kaleocheng commented on GitHub (Nov 11, 2024):

Ollama introduced new variables in the Makefile (https://github.com/ollama/ollama/blob/v0.4.1/llama/Makefile#L16-L18; compare v0.3.12 with v0.4.1) to append specific runner targets. This might be why the build skips runner targets like cuda_v11 and cuda_v12: those paths don't exist on Nix. I'm currently patching the build script in my Nix configuration to see if this resolves the issue.
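
A sketch of the pattern (variable and target names here are illustrative, not copied from the real llama/Makefile): GPU runner targets are appended only when the toolkit is found at its conventional path, and on Nix there is no /usr/local/cuda, so the build silently falls back to CPU-only runners.

# Illustrative sketch only -- not the actual llama/Makefile contents.
CUDA_PATH ?= /usr/local/cuda

RUNNER_TARGETS := default
ifneq ($(wildcard $(CUDA_PATH)/bin/nvcc),)
  RUNNER_TARGETS += cuda_v12
endif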


@kaleocheng commented on GitHub (Nov 11, 2024):

Confirmed! After setting CUDA_PATH and applying some path patches, the GPU is now working without any issues. No more errors in the server log:

ollama[138684]: 2024/11/11 15:09:56 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:8100 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/lib/ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
ollama[138684]: time=2024-11-11T15:09:56.977+08:00 level=INFO source=images.go:755 msg="total blobs: 15"
ollama[138684]: time=2024-11-11T15:09:56.977+08:00 level=INFO source=images.go:762 msg="total unused blobs removed: 0"
ollama[138684]: time=2024-11-11T15:09:56.977+08:00 level=INFO source=routes.go:1240 msg="Listening on 127.0.0.1:8100 (version 0.4.1)"
ollama[138684]: time=2024-11-11T15:09:56.977+08:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama1634729669/runners
ollama[138684]: time=2024-11-11T15:09:57.058+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v12]"
ollama[138684]: time=2024-11-11T15:09:57.058+08:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
ollama[138684]: time=2024-11-11T15:09:57.255+08:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-957abb1f-e95c-db43-ee81-b345b6e60491 library=cuda variant=v12 compute=8.9 driver=12.6 name="NVIDIA GeForce RTX 4060 Ti" total="15.6 GiB" available="14.9 GiB"
ollama[138684]: [GIN] 2024/11/11 - 15:10:47 | 200 |      45.235µs |       127.0.0.1 | HEAD     "/"
ollama[138684]: [GIN] 2024/11/11 - 15:10:47 | 200 |     115.749µs |       127.0.0.1 | GET      "/api/ps"
ollama[138684]: [GIN] 2024/11/11 - 15:10:52 | 200 |       27.03µs |       127.0.0.1 | HEAD     "/"
ollama[138684]: [GIN] 2024/11/11 - 15:10:52 | 200 |    35.83532ms |       127.0.0.1 | POST     "/api/show"
ollama[138684]: time=2024-11-11T15:10:52.874+08:00 level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
ollama[138684]: time=2024-11-11T15:10:53.020+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/var/lib/ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 gpu=GPU-957abb1f-e95c-db43-ee81-b345b6e60491 parallel=1 available=15918104576 required="11.3 GiB"
ollama[138684]: time=2024-11-11T15:10:53.123+08:00 level=INFO source=server.go:105 msg="system memory" total="15.4 GiB" free="9.2 GiB" free_swap="11.9 GiB"
ollama[138684]: time=2024-11-11T15:10:53.125+08:00 level=INFO source=memory.go:343 msg="offload to cuda" projector.weights="1.8 GiB" projector.graph="2.8 GiB" layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[14.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.3 GiB" memory.required.partial="11.3 GiB" memory.required.kv="656.2 MiB" memory.required.allocations="[11.3 GiB]" memory.weights.total="5.5 GiB" memory.weights.repeating="5.1 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="669.5 MiB"
ollama[138684]: time=2024-11-11T15:10:53.126+08:00 level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama1634729669/runners/cuda_v12/ollama_llama_server --model /var/lib/ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 --ctx-size 2048 --batch-size 512 --n-gpu-layers 41 --mmproj /var/lib/ollama/models/blobs/sha256-ece5e659647a20a5c28ab9eea1c12a1ad430bc0f2a27021d00ad103b3bf5206f --threads 6 --no-mmap --parallel 1 --port 39541"
ollama[138684]: time=2024-11-11T15:10:53.126+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
ollama[138684]: time=2024-11-11T15:10:53.126+08:00 level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
ollama[138684]: time=2024-11-11T15:10:53.126+08:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
ollama[138684]: time=2024-11-11T15:10:53.275+08:00 level=INFO source=runner.go:863 msg="starting go runner"
ollama[138684]: time=2024-11-11T15:10:53.275+08:00 level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=6
ollama[138684]: time=2024-11-11T15:10:53.275+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:39541"

I can also see it in the nvidia-smi output:

$ nvidia-smi  
Mon Nov 11 15:27:36 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03              Driver Version: 560.35.03      CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4060 Ti     Off |   00000000:01:00.0  On |                  N/A |
|  0%   38C    P0             36W /  165W |    6556MiB /  16380MiB |      2%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      2064      G   ...nim4annni-xorg-server-21.1.13/bin/X        313MiB |
|    0   N/A  N/A      3667      G   ...bcvgsdr9v5mjmr-picom-12.3/bin/picom        139MiB |
|    0   N/A  N/A      4638      G   ...irefox-132.0.1/bin/.firefox-wrapped        122MiB |
|    0   N/A  N/A    107282      G   ...seed-version=20241108-130108.678000         38MiB |
|    0   N/A  N/A    107518      G   ...erProcess --variations-seed-version          2MiB |
|    0   N/A  N/A    108261      G   ...an,WebOTP --variations-seed-version         28MiB |
|    0   N/A  N/A    147541      C   ...unners/cuda_v12/ollama_llama_server       5860MiB |
+-----------------------------------------------------------------------------------------+

There's still some work needed on the Nixpkgs side, but I'm going to close this ticket now.
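
For anyone else hitting this, the shape of the fix is roughly as follows (the path is hypothetical; the real patches live in the nixpkgs PR linked above):

# Point the build at the Nix-provided toolkit instead of the
# nonexistent /usr/local/cuda, then rebuild. Hypothetical path below.
export CUDA_PATH=/path/to/nix-provided-cuda-toolkit
make -j 5    # should now list cuda_v12 among the built runners
go build .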

Author
Owner

@kaleocheng commented on GitHub (Nov 11, 2024):

Although it's not a blocker for me now, I'm still curious about the discrepancy between the initial logs claiming GPU usage and the actual behavior.


@rick-github commented on GitHub (Nov 12, 2024):

ollama detects a GPU and figures out what percentage of the VRAM allocation is going to be handled by the GPU. It then launches the runner, but it doesn't expect the runner binary to be missing, so the VRAM allocation estimate never gets updated.
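In rough pseudocode, that flow looks something like this (an illustrative sketch with made-up names, not the actual ollama code):

```go
package main

import "fmt"

func estimateOffload() int { return 41 } // decides up front that all 41 layers fit on the GPU

func pickRunner(available []string) string {
	// falls back to the best CPU runner when no GPU runner exists on disk
	return available[len(available)-1]
}

func main() {
	gpuLayers := estimateOffload()                    // "100% GPU" decided here
	runner := pickRunner([]string{"cpu", "cpu_avx2"}) // cuda runner missing
	fmt.Printf("launching %s with --n-gpu-layers %d\n", runner, gpuLayers)
	// the estimate is never recomputed, so `ollama ps` keeps reporting 100% GPU
}
```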


@kaleocheng commented on GitHub (Nov 13, 2024):

We should be able to fix that edge case by checking the available runners at https://github.com/ollama/ollama/blob/v0.4.1/llm/server.go#L108-L110, with something like:

```go
if opts.NumGPU == 0 || hasNoGPUServers(runners.GetAvailableServers()) {
    gpus = discover.GetCPUInfo()
}
```
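For reference, a minimal sketch of what such a `hasNoGPUServers` helper could look like (the helper is hypothetical; it assumes GPU runners can be recognized by a `cuda`/`rocm` name prefix, as in the runner lists logged above, and that `GetAvailableServers` returns a map keyed by runner name):

```go
import "strings"

// hasNoGPUServers is a hypothetical helper: it reports whether none of the
// available runners is a GPU backend, assuming GPU runners are named with a
// "cuda" or "rocm" prefix (e.g. cuda_v11, cuda_v12).
func hasNoGPUServers(servers map[string]string) bool {
	for name := range servers {
		if strings.HasPrefix(name, "cuda") || strings.HasPrefix(name, "rocm") {
			return false
		}
	}
	return true
}
```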

Would you be open to accepting a PR with this change? I'd be happy to contribute if so.


@rick-github commented on GitHub (Nov 13, 2024):

There's another way this fails: if `OLLAMA_LLM_LIBRARY` is set and doesn't include a runner with a GPU, the output of `ollama ps` will still indicate GPU usage. So the estimation logic should be filtered by the available-runner processing at line 166, which would fix both problems.
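In sketch form, that filtering could look like this (hypothetical names again; the point is that the GPU list used for estimation is reduced to CPU-only whenever no GPU runner survives the `OLLAMA_LLM_LIBRARY` override):

```go
// Hypothetical sketch: apply the OLLAMA_LLM_LIBRARY override before estimating,
// so the estimate matches the runner that can actually be launched.
available := runners.GetAvailableServers()
if lib := os.Getenv("OLLAMA_LLM_LIBRARY"); lib != "" {
	available = map[string]string{lib: available[lib]}
}
if hasNoGPUServers(available) {
	gpus = discover.GetCPUInfo() // estimate (and report) CPU-only
}
```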


@kaleocheng commented on GitHub (Nov 13, 2024):

I'm not sure I follow that logic.

If there is no GPU runner in place, putting the filter at line [108](https://github.com/ollama/ollama/blob/v0.4.1/llm/server.go#L108) will exclude the GPU servers. As a result, the estimation will only consider the CPU at line [113](https://github.com/ollama/ollama/blob/v0.4.1/llm/server.go#L113). Even if the user sets `OLLAMA_LLM_LIBRARY`, line [166](https://github.com/ollama/ollama/blob/v0.4.1/llm/server.go#L166) will either ignore an invalid value or continue using the CPU runner, which is expected.


@rick-github commented on GitHub (Nov 13, 2024):

The failure case is that there is a GPU runner available, but the `OLLAMA_LLM_LIBRARY` override excludes it from the list of potential runners. Fire up an ollama server with `OLLAMA_LLM_LIBRARY=cpu_avx` and then load a model:

```console
$ docker compose logs ollama | grep "Dynamic LLM libraries"
ollama  | time=2024-11-13T10:26:25.297Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
$ ps $(pidof ollama_llama_server)
    PID TTY      STAT   TIME COMMAND
2504152 ?        Sl     0:04 /usr/lib/ollama/runners/cpu_avx/ollama_llama_server --model /root/.ollama/models/blobs/sha256-c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515 --ctx-size 6144 --batch-size 512 --embedding --verbose --threads 8 --parallel 3 --port 38917
$ ollama ps
NAME            ID              SIZE      PROCESSOR    UNTIL
qwen2.5:0.5b    a8b0c5157701    1.3 GB    100% GPU     Forever
```

@dhiltgen commented on GitHub (Nov 13, 2024):

This is a corner case we don't currently handle well.


@kripper commented on GitHub (Nov 14, 2024):

Here is another case where `llama3.2-vision` (Llama-3.2-11B-Vision-Instruct) is not using the GPU.
With Llama 3.1 in the same environment, the GPU was used and worked fine.


@rick-github commented on GitHub (Nov 14, 2024):

You're missing the bit of the log that contains relevant information. The most likely answer is that the model doesn't fit: llama3.2-vision has a larger VRAM footprint than llama3.1.

```console
$ ollama ps
NAME                    ID              SIZE    PROCESSOR    UNTIL
llama3.2-vision:latest  38107a0cd119    12 GB   100% GPU     Forever
llama3.1:latest         42182419e950    5.5 GB  100% GPU     Forever
```

@kripper commented on GitHub (Nov 14, 2024):

> The most likely answer is that the model doesn't fit

Actually, I'm also experiencing https://github.com/ollama/ollama/issues/7673


@kripper commented on GitHub (Nov 14, 2024):

> You're missing the bit of the log that contains relevant information

I updated the logs in my comment: https://github.com/ollama/ollama/issues/7597#issuecomment-2477510247
Why can't the GPU be used at all? Isn't the CPU using the same memory as the GPU?


@rick-github commented on GitHub (Nov 14, 2024):

The problem you are experiencing is not related to this issue; let's discuss it in #7673.
