[GH-ISSUE #9842] Ollama not using GPU (RTX 3090) anymore on Ubuntu 20.04 – (it previously worked) #32203

Closed
opened 2026-04-22 13:15:01 -05:00 by GiteaMirror · 27 comments

Originally created by @antonkratz on GitHub (Mar 18, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9842

What is the issue?

**Problem description**: ollama does not seem to utilize the GPU (GeForce RTX 3090) at all anymore; it simply ignores that the GPU is there. In the past it ran successfully, reaching around 30 tokens/s. Now I barely reach 4 t/s, and running `watch nvidia-smi` while ollama is generating clearly shows that nothing is loaded on the GPU: it is not aware of any ollama process and does not accelerate it. **Strangely, when I execute my own PyTorch scripts on the same node, they are clearly accelerated, report the GPU being present, and nvidia-smi shows them running.** The models I use with ollama clearly fit into VRAM. Again, I had this in a working state before, so I am dumbfounded as to how it can have stopped working.

I have uninstalled and reinstalled ollama to no effect, and restarted the node, also to no effect.

This is on a cluster running Ubuntu 20.04.2 LTS.
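
For reference, a minimal sketch of the comparison described above (the exact invocations here are illustrative, not copied from the cluster):

```shell
# Watch GPU processes while ollama generates; in the failing state,
# no ollama process ever shows up and VRAM usage stays flat.
watch -n 1 nvidia-smi

# A PyTorch process on the same node does see the GPU:
python3 -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"
```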

Relevant log output

2025/03/10 16:06:33 routes.go:1205: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/anonymized/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-03-10T16:06:33.571+09:00 level=INFO source=images.go:432 msg="total blobs: 34"
time=2025-03-10T16:06:33.572+09:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-10T16:06:33.573+09:00 level=INFO source=routes.go:1256 msg="Listening on 127.0.0.1:11434 (version 0.5.12)"
time=2025-03-10T16:06:33.573+09:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-10T16:06:33.590+09:00 level=INFO source=gpu.go:612 msg="Unable to load cudart library /usr/lib/x86_64-linux-gnu/libcuda.so.460.32.03: symbol lookup for cuCtxCreate_v3 failed: /usr/lib/x86_64-linux-gnu/libcuda.so.460.32.03: undefined symbol: cuCtxCreate_v3"
time=2025-03-10T16:06:33.723+09:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7 library=cuda variant=v11 compute=8.6 driver=0.0 name="" total="23.7 GiB" available="23.4 GiB"
[GIN] 2025/03/10 - 16:06:48 | 200 |     105.353µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/10 - 16:06:48 | 200 |    7.023744ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/03/10 - 16:08:24 | 200 |      42.941µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/10 - 16:08:24 | 200 |    9.817317ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/03/10 - 16:08:38 | 200 |      51.031µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/10 - 16:08:38 | 200 |   61.895518ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-10T16:08:39.016+09:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/home/anonymized/.ollama/models/blobs/sha256-d7e4b00a7d7a8d03d4eed9b0f3f61a427e9f0fc5dea6aeb414e41dee23dc8ecc gpu=GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7 parallel=4 available=25178079232 required="18.8 GiB"
time=2025-03-10T16:08:39.110+09:00 level=INFO source=server.go:97 msg="system memory" total="503.6 GiB" free="496.6 GiB" free_swap="22.9 GiB"
time=2025-03-10T16:08:39.111+09:00 level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=47 layers.offload=47 layers.split="" memory.available="[23.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="18.8 GiB" memory.required.partial="18.8 GiB" memory.required.kv="2.9 GiB" memory.required.allocations="[18.8 GiB]" memory.weights.total="16.5 GiB" memory.weights.repeating="15.6 GiB" memory.weights.nonrepeating="922.9 MiB" memory.graph.full="562.0 MiB" memory.graph.partial="1.4 GiB"
time=2025-03-10T16:08:39.112+09:00 level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /home/anonymized/.ollama/models/blobs/sha256-d7e4b00a7d7a8d03d4eed9b0f3f61a427e9f0fc5dea6aeb414e41dee23dc8ecc --ctx-size 8192 --batch-size 512 --n-gpu-layers 47 --threads 32 --parallel 4 --port 34723"
time=2025-03-10T16:08:39.112+09:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-10T16:08:39.112+09:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-03-10T16:08:39.113+09:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-03-10T16:08:39.132+09:00 level=INFO source=runner.go:932 msg="starting go runner"
time=2025-03-10T16:08:39.136+09:00 level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=32
time=2025-03-10T16:08:39.136+09:00 level=INFO source=runner.go:993 msg="Server listening on 127.0.0.1:34723"
llama_model_loader: loaded meta data with 29 key-value pairs and 508 tensors from /home/anonymized/.ollama/models/blobs/sha256-d7e4b00a7d7a8d03d4eed9b0f3f61a427e9f0fc5dea6aeb414e41dee23dc8ecc (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma2
llama_model_loader: - kv   1:                               general.name str              = gemma-2-27b-it
llama_model_loader: - kv   2:                      gemma2.context_length u32              = 8192
llama_model_loader: - kv   3:                    gemma2.embedding_length u32              = 4608
llama_model_loader: - kv   4:                         gemma2.block_count u32              = 46
llama_model_loader: - kv   5:                 gemma2.feed_forward_length u32              = 36864
llama_model_loader: - kv   6:                gemma2.attention.head_count u32              = 32
llama_model_loader: - kv   7:             gemma2.attention.head_count_kv u32              = 16
llama_model_loader: - kv   8:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                gemma2.attention.key_length u32              = 128
llama_model_loader: - kv  10:              gemma2.attention.value_length u32              = 128
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:              gemma2.attn_logit_softcapping f32              = 50.000000
llama_model_loader: - kv  13:             gemma2.final_logit_softcapping f32              = 30.000000
llama_model_loader: - kv  14:            gemma2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
time=2025-03-10T16:08:39.365+09:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  18:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  20:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  22:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  25:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
llama_model_loader: - kv  27:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  185 tensors
llama_model_loader: - type q4_0:  322 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 108
llm_load_vocab: token to piece cache size = 1.6014 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = gemma2
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 256000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 4608
llm_load_print_meta: n_layer          = 46
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 16
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 4096
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 2
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 36864
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 27B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 27.23 B
llm_load_print_meta: model size       = 14.55 GiB (4.59 BPW)
llm_load_print_meta: general.name     = gemma-2-27b-it
llm_load_print_meta: BOS token        = 2 '<bos>'
llm_load_print_meta: EOS token        = 1 '<eos>'
llm_load_print_meta: EOT token        = 107 '<end_of_turn>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: PAD token        = 0 '<pad>'
llm_load_print_meta: LF token         = 227 '<0x0A>'
llm_load_print_meta: EOG token        = 1 '<eos>'
llm_load_print_meta: EOG token        = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 93
llm_load_tensors:   CPU_Mapped model buffer size = 14898.60 MiB
llama_new_context_with_model: n_seq_max     = 4
llama_new_context_with_model: n_ctx         = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 2048
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 10000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 46, can_shift = 1
llama_kv_cache_init:        CPU KV buffer size =  2944.00 MiB
llama_new_context_with_model: KV self size  = 2944.00 MiB, K (f16): 1472.00 MiB, V (f16): 1472.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     3.98 MiB
llama_new_context_with_model:        CPU compute buffer size =   578.01 MiB
llama_new_context_with_model: graph nodes  = 1850
llama_new_context_with_model: graph splits = 1
time=2025-03-10T16:08:40.871+09:00 level=INFO source=server.go:596 msg="llama runner started in 1.76 seconds"
[GIN] 2025/03/10 - 16:08:40 | 200 |  2.105569897s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/03/10 - 16:09:03 | 200 | 15.045887735s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/03/10 - 16:09:12 | 200 |      48.562µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/10 - 16:09:12 | 200 |   57.931317ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/03/10 - 16:09:12 | 200 |   52.717533ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/03/10 - 16:09:34 | 200 | 16.390822111s |       127.0.0.1 | POST     "/api/chat"
time=2025-03-10T16:14:39.784+09:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.117816902 model=/home/anonymized/.ollama/models/blobs/sha256-d7e4b00a7d7a8d03d4eed9b0f3f61a427e9f0fc5dea6aeb414e41dee23dc8ecc
time=2025-03-10T16:14:40.033+09:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.366878677 model=/home/anonymized/.ollama/models/blobs/sha256-d7e4b00a7d7a8d03d4eed9b0f3f61a427e9f0fc5dea6aeb414e41dee23dc8ecc
time=2025-03-10T16:14:40.283+09:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.617509757 model=/home/anonymized/.ollama/models/blobs/sha256-d7e4b00a7d7a8d03d4eed9b0f3f61a427e9f0fc5dea6aeb414e41dee23dc8ecc
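
Two things stand out in this log: the driver library `/usr/lib/x86_64-linux-gnu/libcuda.so.460.32.03` lacks `cuCtxCreate_v3` (that symbol only appears in driver branches newer than the 460 series), and although the scheduler plans a full 47-layer offload, the runner reports CPU-only system info and allocates every buffer on the CPU. A quick way to confirm which driver library and kernel module are actually installed (a minimal sketch; paths assume a standard Linux driver install):

```shell
# Driver version as reported by the running kernel module
nvidia-smi --query-gpu=driver_version,name --format=csv

# Which libcuda the dynamic linker resolves
ldconfig -p | grep libcuda

# Kernel module version, independent of nvidia-smi
cat /proc/driver/nvidia/version
```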

OS

Ubuntu 20.04.2 LTS

GPU

GeForce RTX 3090

CPU

No response

Ollama version

0.5.12

GiteaMirror added the bug label 2026-04-22 13:15:01 -05:00

@OmegaHaze commented on GitHub (Mar 18, 2025):

I have the exact same issue.


@rick-github commented on GitHub (Mar 18, 2025):

Looks like it didn't find the GPU backend. Set `OLLAMA_DEBUG=1` in the server environment and post the resulting log.

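For a systemd-managed install, a typical way to do that (a sketch of the usual procedure; adjust if ollama is started some other way):

```shell
# Add the variable to the service environment
sudo systemctl edit ollama.service
# In the override that opens, add:
#   [Service]
#   Environment="OLLAMA_DEBUG=1"

sudo systemctl restart ollama

# Follow the server log
journalctl -u ollama -f
```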

@antonkratz commented on GitHub (Mar 18, 2025):

> Looks like it didn't find the GPU backend. Set `OLLAMA_DEBUG=1` in the server environment and post the resulting log.

2025/03/18 11:15:09 routes.go:1205: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/anonymized/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-03-18T11:15:09.303+09:00 level=INFO source=images.go:432 msg="total blobs: 34"
time=2025-03-18T11:15:09.304+09:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-18T11:15:09.306+09:00 level=INFO source=routes.go:1256 msg="Listening on 127.0.0.1:11434 (version 0.5.12)"
time=2025-03-18T11:15:09.306+09:00 level=DEBUG source=sched.go:106 msg="starting llm scheduler"
time=2025-03-18T11:15:09.306+09:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-18T11:15:09.310+09:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-03-18T11:15:09.310+09:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-03-18T11:15:09.310+09:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/local/bin/libcuda.so* /home/anonymized/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-03-18T11:15:09.319+09:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.460.32.03]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.460.32.03
dlsym: cuInit - 0x7fffa4ba3f40
dlsym: cuDriverGetVersion - 0x7fffa4ba3d60
dlsym: cuDeviceGetCount - 0x7fffa4ba3930
dlsym: cuDeviceGet - 0x7fffa4ba3b30
dlsym: cuDeviceGetAttribute - 0x7fffa4ba2990
dlsym: cuDeviceGetUuid - 0x7fffa4ba3440
dlsym: cuDeviceGetName - 0x7fffa4ba36f0
dlerr: /usr/lib/x86_64-linux-gnu/libcuda.so.460.32.03: undefined symbol: cuCtxCreate_v3
time=2025-03-18T11:15:09.319+09:00 level=INFO source=gpu.go:612 msg="Unable to load cudart library /usr/lib/x86_64-linux-gnu/libcuda.so.460.32.03: symbol lookup for cuCtxCreate_v3 failed: /usr/lib/x86_64-linux-gnu/libcuda.so.460.32.03: undefined symbol: cuCtxCreate_v3"
time=2025-03-18T11:15:09.319+09:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcudart.so*
time=2025-03-18T11:15:09.319+09:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/local/bin/libcudart.so* /home/anonymized/libcudart.so* /usr/local/bin/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
time=2025-03-18T11:15:09.326+09:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/local/cuda/lib64/libcudart.so.11.2.72]
CUDA driver version: 11-2
time=2025-03-18T11:15:09.335+09:00 level=DEBUG source=gpu.go:140 msg="detected GPUs" library=/usr/local/cuda/lib64/libcudart.so.11.2.72 count=1
[GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7] CUDA totalMem 25447170048
[GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7] CUDA freeMem 23761453056
[GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7] CUDA usedMem 0
[GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7] Compute Capability 8.6
time=2025-03-18T11:15:09.393+09:00 level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cudart library
time=2025-03-18T11:15:09.417+09:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7 library=cuda variant=v11 compute=8.6 driver=0.0 name="" total="23.7 GiB" available="22.1 GiB"
[GIN] 2025/03/18 - 11:15:14 | 200 |      92.924µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/18 - 11:15:14 | 200 |    3.884945ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/03/18 - 11:15:57 | 200 |      48.602µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/18 - 11:15:57 | 200 |   15.591976ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-18T11:15:57.559+09:00 level=WARN source=types.go:512 msg="invalid option provided" option=rope_frequency_base
time=2025-03-18T11:15:57.559+09:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="503.6 GiB" before.free="494.0 GiB" before.free_swap="22.9 GiB" now.total="503.6 GiB" now.free="496.5 GiB" now.free_swap="22.9 GiB"
CUDA driver version: 11-2
[GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7] CUDA totalMem 25447170048
[GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7] CUDA freeMem 25178079232
[GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7] CUDA usedMem 0
[GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7] Compute Capability 8.6
time=2025-03-18T11:15:57.630+09:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7 name="" overhead="0 B" before.total="23.7 GiB" before.free="22.1 GiB" now.total="23.7 GiB" now.free="23.4 GiB" now.used="0 B"
releasing cudart library
time=2025-03-18T11:15:57.665+09:00 level=DEBUG source=sched.go:182 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-03-18T11:15:57.676+09:00 level=DEBUG source=sched.go:225 msg="loading first model" model=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac
time=2025-03-18T11:15:57.676+09:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[23.4 GiB]"
time=2025-03-18T11:15:57.676+09:00 level=WARN source=ggml.go:132 msg="key not found" key=llama.attention.key_length default=128
time=2025-03-18T11:15:57.676+09:00 level=WARN source=ggml.go:132 msg="key not found" key=llama.attention.value_length default=128
time=2025-03-18T11:15:57.676+09:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac gpu=GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7 parallel=4 available=25178079232 required="8.7 GiB"
time=2025-03-18T11:15:57.677+09:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="503.6 GiB" before.free="496.5 GiB" before.free_swap="22.9 GiB" now.total="503.6 GiB" now.free="496.5 GiB" now.free_swap="22.9 GiB"
CUDA driver version: 11-2
[GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7] CUDA totalMem 25447170048
[GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7] CUDA freeMem 25178079232
[GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7] CUDA usedMem 0
[GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7] Compute Capability 8.6
time=2025-03-18T11:15:57.749+09:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7 name="" overhead="0 B" before.total="23.7 GiB" before.free="23.4 GiB" now.total="23.7 GiB" now.free="23.4 GiB" now.used="0 B"
releasing cudart library
time=2025-03-18T11:15:57.783+09:00 level=INFO source=server.go:97 msg="system memory" total="503.6 GiB" free="496.5 GiB" free_swap="22.9 GiB"
time=2025-03-18T11:15:57.783+09:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[23.4 GiB]"
time=2025-03-18T11:15:57.783+09:00 level=WARN source=ggml.go:132 msg="key not found" key=llama.attention.key_length default=128
time=2025-03-18T11:15:57.783+09:00 level=WARN source=ggml.go:132 msg="key not found" key=llama.attention.value_length default=128
time=2025-03-18T11:15:57.784+09:00 level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[23.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.7 GiB" memory.required.partial="8.7 GiB" memory.required.kv="4.0 GiB" memory.required.allocations="[8.7 GiB]" memory.weights.total="7.4 GiB" memory.weights.repeating="7.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="681.0 MiB"
time=2025-03-18T11:15:57.784+09:00 level=DEBUG source=server.go:259 msg="compatible gpu libraries" compatible=[]
time=2025-03-18T11:15:57.784+09:00 level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --verbose --threads 32 --parallel 4 --port 41263"
time=2025-03-18T11:15:57.784+09:00 level=DEBUG source=server.go:398 msg=subprocess environment="[PATH=/home/anonymized/.local/bin:/home/anonymized/anaconda3/bin:/home/anonymized/anaconda3/condabin:/home/anonymized/.sdkman/candidates/java/current/bin:/usr/local/spack-0.15.1/bin:/usr/local/UGE/bin/lx-amd64:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/anonymized/.local/bin LD_LIBRARY_PATH=/usr/local/bin CUDA_VISIBLE_DEVICES=GPU-04c4165d-6fbb-7e5e-9215-de652d767bd7]"
time=2025-03-18T11:15:57.785+09:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-18T11:15:57.785+09:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-03-18T11:15:57.785+09:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-03-18T11:15:57.806+09:00 level=INFO source=runner.go:932 msg="starting go runner"
time=2025-03-18T11:15:57.806+09:00 level=DEBUG source=ggml.go:89 msg="ggml backend load all from path" path=/usr/local/bin
time=2025-03-18T11:15:57.809+09:00 level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=32
time=2025-03-18T11:15:57.809+09:00 level=INFO source=runner.go:993 msg="Server listening on 127.0.0.1:41263"
llama_model_loader: loaded meta data with 20 key-value pairs and 291 tensors from /home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = codellama
llama_model_loader: - kv   2:                       llama.context_length u32              = 16384
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32016]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32016]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32016]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  19:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: control token:      2 '</s>' is not marked as EOG
llm_load_vocab: control token:      1 '<s>' is not marked as EOG
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 3
llm_load_vocab: token to piece cache size = 0.1686 MB
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32016
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 16384
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 16384
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name     = codellama
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_print_meta: EOG token        = 2 '</s>'
llm_load_print_meta: max token length = 48
llm_load_tensors:   CPU_Mapped model buffer size =  3647.95 MiB
llama_new_context_with_model: n_seq_max     = 4
llama_new_context_with_model: n_ctx         = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 2048
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 1000000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (16384) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1
llama_kv_cache_init: layer 0: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 1: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 2: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 3: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 4: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 5: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 6: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 7: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 8: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 9: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 10: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 11: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 12: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 13: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 14: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 15: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 16: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 17: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 18: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 19: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 20: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 21: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 22: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 23: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 24: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 25: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 26: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 27: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 28: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 29: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 30: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
llama_kv_cache_init: layer 31: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096
time=2025-03-18T11:15:58.037+09:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
time=2025-03-18T11:15:58.037+09:00 level=DEBUG source=server.go:602 msg="model load progress 1.00"
time=2025-03-18T11:15:58.288+09:00 level=DEBUG source=server.go:605 msg="model load completed, waiting for server to become available" status="llm server loading model"
llama_kv_cache_init:        CPU KV buffer size =  4096.00 MiB
llama_new_context_with_model: KV self size  = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.55 MiB
llama_new_context_with_model:        CPU compute buffer size =   560.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
time=2025-03-18T11:15:59.040+09:00 level=INFO source=server.go:596 msg="llama runner started in 1.25 seconds"
time=2025-03-18T11:15:59.040+09:00 level=DEBUG source=sched.go:463 msg="finished setting up runner" model=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac
[GIN] 2025/03/18 - 11:15:59 | 200 |  1.495643437s |       127.0.0.1 | POST     "/api/generate"
time=2025-03-18T11:15:59.040+09:00 level=DEBUG source=sched.go:467 msg="context for request finished"
time=2025-03-18T11:15:59.040+09:00 level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac duration=5m0s
time=2025-03-18T11:15:59.040+09:00 level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac refCount=0
time=2025-03-18T11:16:22.378+09:00 level=WARN source=types.go:512 msg="invalid option provided" option=rope_frequency_base
time=2025-03-18T11:16:22.378+09:00 level=DEBUG source=sched.go:576 msg="evaluating already loaded" model=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac
time=2025-03-18T11:16:22.379+09:00 level=DEBUG source=routes.go:1480 msg="chat request" images=0 prompt="[INST] <<SYS>><</SYS>>\n\nclassify handwritten letters from MNIST  [/INST]\n"
time=2025-03-18T11:16:22.380+09:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=30 used=0 remaining=30
time=2025-03-18T11:16:33.841+09:00 level=DEBUG source=sched.go:408 msg="context for request finished"
time=2025-03-18T11:16:33.841+09:00 level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac duration=5m0s
time=2025-03-18T11:16:33.841+09:00 level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac refCount=0
[GIN] 2025/03/18 - 11:16:33 | 200 | 11.476571586s |       127.0.0.1 | POST     "/api/chat"
^Ctime=2025-03-18T11:16:40.890+09:00 level=DEBUG source=sched.go:120 msg="shutting down scheduler pending loop"
time=2025-03-18T11:16:40.890+09:00 level=DEBUG source=sched.go:799 msg="shutting down runner" model=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac
time=2025-03-18T11:16:40.890+09:00 level=DEBUG source=server.go:1081 msg="stopping llama server"
time=2025-03-18T11:16:40.890+09:00 level=DEBUG source=server.go:1087 msg="waiting for llama server to exit"
time=2025-03-18T11:16:40.890+09:00 level=DEBUG source=sched.go:319 msg="shutting down scheduler completed loop"
time=2025-03-18T11:16:41.058+09:00 level=DEBUG source=server.go:1091 msg="llama server stopped"
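
Note the lines `msg="compatible gpu libraries" compatible=[]`, `LD_LIBRARY_PATH=/usr/local/bin`, and `msg="ggml backend load all from path" path=/usr/local/bin` in this debug log: discovery finds the GPU, but the runner has no CUDA ggml backend library to load, so inference stays on the CPU. One thing worth checking is whether the CUDA runner libraries are actually present in the install (a sketch assuming the default layout of recent Linux installs; the exact path can differ):

```shell
# Recent Linux installs place the backend libraries here
ls -l /usr/local/lib/ollama

# Anything CUDA/ggml-related next to the binary?
ls /usr/local/bin | grep -i -e cuda -e ggml

# If the backend files are missing, re-running the official installer
# normally restores them:
curl -fsSL https://ollama.com/install.sh | sh
```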
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 19: general.quantization_version u32 = 2 llama_model_loader: - type f32: 65 tensors llama_model_loader: - type q4_0: 225 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: control token: 2 '</s>' is not marked as EOG llm_load_vocab: control token: 1 '<s>' is not marked as EOG llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect llm_load_vocab: special tokens cache size = 3 llm_load_vocab: token to piece cache size = 0.1686 MB llm_load_print_meta: format = GGUF V2 llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32016 llm_load_print_meta: n_merges = 0 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 16384 llm_load_print_meta: n_embd = 4096 llm_load_print_meta: n_layer = 32 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 32 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 1 llm_load_print_meta: n_embd_k_gqa = 4096 llm_load_print_meta: n_embd_v_gqa = 4096 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 11008 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 1000000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 16384 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: ssm_dt_b_c_rms = 0 llm_load_print_meta: model type = 7B llm_load_print_meta: model ftype = Q4_0 llm_load_print_meta: model params = 6.74 B llm_load_print_meta: model size = 3.56 GiB (4.54 BPW) llm_load_print_meta: general.name = codellama llm_load_print_meta: BOS token = 1 '<s>' llm_load_print_meta: EOS token = 2 '</s>' llm_load_print_meta: UNK token = 0 '<unk>' llm_load_print_meta: LF token = 13 '<0x0A>' llm_load_print_meta: EOG token = 2 '</s>' llm_load_print_meta: max token length = 48 llm_load_tensors: CPU_Mapped model buffer size = 3647.95 MiB llama_new_context_with_model: n_seq_max = 4 llama_new_context_with_model: n_ctx = 8192 llama_new_context_with_model: n_ctx_per_seq = 2048 llama_new_context_with_model: n_batch = 2048 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 1000000.0 llama_new_context_with_model: freq_scale = 1 llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (16384) -- the full capacity of the model will not be utilized llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1 llama_kv_cache_init: layer 0: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 1: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 2: n_embd_k_gqa = 
4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 3: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 4: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 5: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 6: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 7: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 8: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 9: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 10: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 11: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 12: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 13: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 14: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 15: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 16: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 17: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 18: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 19: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 20: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 21: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 22: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 23: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 24: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 25: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 26: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 27: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 28: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 29: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 30: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 llama_kv_cache_init: layer 31: n_embd_k_gqa = 4096, n_embd_v_gqa = 4096 time=2025-03-18T11:15:58.037+09:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model" time=2025-03-18T11:15:58.037+09:00 level=DEBUG source=server.go:602 msg="model load progress 1.00" time=2025-03-18T11:15:58.288+09:00 level=DEBUG source=server.go:605 msg="model load completed, waiting for server to become available" status="llm server loading model" llama_kv_cache_init: CPU KV buffer size = 4096.00 MiB llama_new_context_with_model: KV self size = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB llama_new_context_with_model: CPU output buffer size = 0.55 MiB llama_new_context_with_model: CPU compute buffer size = 560.01 MiB llama_new_context_with_model: graph nodes = 1030 llama_new_context_with_model: graph splits = 1 time=2025-03-18T11:15:59.040+09:00 level=INFO source=server.go:596 msg="llama runner started in 1.25 seconds" time=2025-03-18T11:15:59.040+09:00 level=DEBUG source=sched.go:463 msg="finished setting up runner" model=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac [GIN] 2025/03/18 - 11:15:59 | 200 | 1.495643437s | 127.0.0.1 | POST "/api/generate" time=2025-03-18T11:15:59.040+09:00 level=DEBUG source=sched.go:467 msg="context for request finished" time=2025-03-18T11:15:59.040+09:00 level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" 
modelPath=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac duration=5m0s time=2025-03-18T11:15:59.040+09:00 level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac refCount=0 time=2025-03-18T11:16:22.378+09:00 level=WARN source=types.go:512 msg="invalid option provided" option=rope_frequency_base time=2025-03-18T11:16:22.378+09:00 level=DEBUG source=sched.go:576 msg="evaluating already loaded" model=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac time=2025-03-18T11:16:22.379+09:00 level=DEBUG source=routes.go:1480 msg="chat request" images=0 prompt="[INST] <<SYS>><</SYS>>\n\nclassify handwritten letters from MNIST [/INST]\n" time=2025-03-18T11:16:22.380+09:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=30 used=0 remaining=30 time=2025-03-18T11:16:33.841+09:00 level=DEBUG source=sched.go:408 msg="context for request finished" time=2025-03-18T11:16:33.841+09:00 level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac duration=5m0s time=2025-03-18T11:16:33.841+09:00 level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac refCount=0 [GIN] 2025/03/18 - 11:16:33 | 200 | 11.476571586s | 127.0.0.1 | POST "/api/chat" ^Ctime=2025-03-18T11:16:40.890+09:00 level=DEBUG source=sched.go:120 msg="shutting down scheduler pending loop" time=2025-03-18T11:16:40.890+09:00 level=DEBUG source=sched.go:799 msg="shutting down runner" model=/home/anonymized/.ollama/models/blobs/sha256-3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac time=2025-03-18T11:16:40.890+09:00 level=DEBUG source=server.go:1081 msg="stopping llama server" time=2025-03-18T11:16:40.890+09:00 level=DEBUG source=server.go:1087 msg="waiting for llama server to exit" time=2025-03-18T11:16:40.890+09:00 level=DEBUG source=sched.go:319 msg="shutting down scheduler completed loop" time=2025-03-18T11:16:41.058+09:00 level=DEBUG source=server.go:1091 msg="llama server stopped" ```

@rick-github commented on GitHub (Mar 18, 2025):

```
time=2025-03-18T11:15:57.784+09:00 level=DEBUG source=server.go:259 msg="compatible gpu libraries" compatible=[]
```

No GPU runners found. What's the output of:

```
find /usr/local/lib/ollama
```

@antonkratz commented on GitHub (Mar 18, 2025):

> ```
> time=2025-03-18T11:15:57.784+09:00 level=DEBUG source=server.go:259 msg="compatible gpu libraries" compatible=[]
> ```
>
> No GPU runners found. What's the output of:
>
> ```
> find /usr/local/lib/ollama
> ```

`find: ‘/usr/local/lib/ollama’: No such file or directory`


@antonkratz commented on GitHub (Mar 18, 2025):

P.S.: I was wondering, which ollama am I using then? `which ollama` gives me `/usr/local/bin/ollama`.


@rick-github commented on GitHub (Mar 18, 2025):

Your installation has no runners at all, so ollama falls back to using plain non-vector CPU for inference. What install method did you use?

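For anyone verifying this on their own machine, a quick sanity check is to list the runner directory directly (paths reflect the standard Linux install as discussed later in this thread; the exact layout varies by version):

```
# List whichever runner directory exists; a GPU-capable install should
# contain libggml-* files plus cuda_v11/ or cuda_v12/ subdirectories.
ls -l /usr/local/lib/ollama /usr/lib/ollama 2>/dev/null
```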

@antonkratz commented on GitHub (Mar 18, 2025):

`curl -fsSL https://ollama.com/install.sh | sh`

I copied and pasted this directly from https://ollama.com/download.


@rick-github commented on GitHub (Mar 18, 2025):

Did you notice any errors during the install? Plenty of disk space?


@antonkratz commented on GitHub (Mar 18, 2025):

> Did you notice any errors during the install? Plenty of disk space?

I did not notice any errors, and there is an extremely large amount of free storage.

(I believe a manual install, which I tried earlier, showed the same phenomenon as described in the original issue, but I am not absolutely sure anymore.)


@antonkratz commented on GitHub (Mar 18, 2025):

Ah! I just remembered something that may be important: the GPU node is not connected to the internet. So I ran `curl -fsSL https://ollama.com/install.sh | sh` on a non-GPU, internet-connected node, then switched to the GPU node. (An offline-friendly alternative is sketched below.)

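If the GPU node itself has no internet access, transferring the release tarball avoids running install.sh on a different machine entirely (a sketch following the manual steps in docs/linux.md; the filename assumes an x86-64 node):

```
# On the internet-connected node:
curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
# Copy the tarball to the GPU node (scp, shared filesystem, ...), then on the GPU node:
sudo tar -C /usr -xzf ollama-linux-amd64.tgz   # yields /usr/bin/ollama and /usr/lib/ollama
```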

@antonkratz commented on GitHub (Mar 18, 2025):

It seems the runners end up under `/usr/lib/ollama/`, not `/usr/local/lib/ollama/`.


@antonkratz commented on GitHub (Mar 18, 2025):

I found a fix.

First I did a manual install as described here: https://github.com/ollama/ollama/blob/main/docs/linux.md.

But the problem is that the runners end up under `/usr/lib/ollama/`, not `/usr/local/lib/ollama/`. (Isn't that a problem with the way the tarball is organized, or a problem with the installer?)

So I fixed that like this: `sudo ln -s /usr/lib/ollama /usr/local/lib/ollama`.

Now GPU acceleration works again!

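With the symlink in place, the check from earlier in the thread should stop failing:

```
find /usr/local/lib/ollama
# should now list the runner libraries instead of "No such file or directory"
```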

@noskill commented on GitHub (Mar 18, 2025):

Is there a way to install ollama into home directory?

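Regarding the question above: the release tarball appears relocatable, so a user-local install can work in principle (a hedged sketch, assuming the tarball's `bin/` and `lib/ollama/` layout; this is not an officially documented setup):

```
# Extract the release tarball under the home directory:
mkdir -p ~/ollama
tar -C ~/ollama -xzf ollama-linux-amd64.tgz
# Run the server from there; the runners sit in lib/ollama next to bin/:
~/ollama/bin/ollama serve
```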

@tzhanghan commented on GitHub (Mar 19, 2025):

> I found a fix.
>
> First I did a manual install as described here: https://github.com/ollama/ollama/blob/main/docs/linux.md.
>
> But the problem is that the runners end up under `/usr/lib/ollama/`, not `/usr/local/lib/ollama/`. (Isn't that a problem with the way the tarball is organized, or a problem with the installer?)
>
> So I fixed that like this: `sudo ln -s /usr/lib/ollama /usr/local/lib/ollama`.
>
> Now GPU acceleration works again!

Thank you for sharing this solution! I encountered the same issue when upgrading from Ollama 0.5.7 to 0.6.1. In my case, both directories (`/usr/lib/ollama/` and `/usr/local/lib/ollama/`) actually existed, but `/usr/local/lib/ollama/` was missing the necessary libggml library files and contained only the CUDA directories.

I followed your approach with a slight modification (the full sequence is consolidated below):

  1. Backed up the existing incomplete directory: `sudo mv /usr/local/lib/ollama /usr/local/lib/ollama.bak`
  2. Created the symbolic link: `sudo ln -s /usr/lib/ollama /usr/local/lib/ollama`

After restarting Ollama, GPU acceleration worked perfectly! This seems to be a common issue when upgrading to newer versions. Thanks again for pointing out the solution!

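Consolidated, the workaround above looks like this (the `ollama` unit name matches the one created by the install script; adjust if yours differs):

```
sudo mv /usr/local/lib/ollama /usr/local/lib/ollama.bak   # keep the incomplete directory around
sudo ln -s /usr/lib/ollama /usr/local/lib/ollama          # point at the directory that has the runners
sudo systemctl restart ollama
```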

@OmegaHaze commented on GitHub (Mar 19, 2025):

I set all of the right flags, and I didn't hit this issue when using https://ollama.com/install.sh; in fact, install.sh is what I used to solve the problem. I got this error with the official ollama Docker image `ollama:latest`. When I check for the CUDA binaries, it says there are none in that build, even though the GPU shows up in the logs. I'm not sure I was using the right command to check, though (`ldd $(which ollama) | grep -iE "cublas|cudart|cuda"`). Either way, I tried a hundred different possibilities and none of them enabled the GPU. I ended up building the image from install.sh in my Dockerfile, as sketched below. It works perfectly now.

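For reference, a hypothetical Dockerfile in the spirit of that workaround (the base image, tag, and the choice to run install.sh at build time are all assumptions, not the official image's build; the container still needs GPU access at runtime, e.g. `docker run --gpus=all`):

```
# Hypothetical sketch, not the official ollama image build.
FROM nvidia/cuda:12.4.1-runtime-ubuntu22.04
RUN apt-get update && apt-get install -y curl ca-certificates \
 && curl -fsSL https://ollama.com/install.sh | sh
EXPOSE 11434
ENTRYPOINT ["ollama", "serve"]
```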

@rick-github commented on GitHub (Mar 19, 2025):

The runner backends are loaded dynamically at runtime, so they won't show up in `ldd` output.

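Since `ldd` won't reveal them, a more direct check is to list the runner directory itself inside the container (the path is an assumption; it may differ between the Docker image and the script install):

```
# Hypothetical check against a container named "ollama":
docker exec -it ollama ls -l /usr/lib/ollama
```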

@Mohamed0Hegazi commented on GitHub (Mar 19, 2025):

I want an invite code for Manus.


@510076394 commented on GitHub (Mar 19, 2025):

Downgrading to version 0.5.12 also made it work for me; this seems to be due to a bug.


@la1ty commented on GitHub (Mar 21, 2025):

I compiled ollama from source code on Linux and met a similar problem.

When I run `ollama serve` directly, it uses the GPU backend. But when I run `systemctl start ollama`, it still uses the CPU backend to run models. It's quite confusing.

Thanks to the hints from https://github.com/ollama/ollama/issues/9266, it turns out that when ollama runs a model, it calls a subprocess `ollama runner` to find the best backend to run it.

Then I found the function `load_backend` in the source code, though I still haven't found the caller: https://github.com/ollama/ollama/blob/0fbfcf3c9c7bfdbf4616238595eafd7eca2a916c/ml/backend/ggml/ggml/src/ggml-backend-reg.cpp#L224

It seems that it loads libraries from a specific path. So I added an environment variable in the service file:

```
LD_LIBRARY_PATH=/path/to/lib/ollama:/usr/local/cuda-12.4/lib64:$LD_LIBRARY_PATH
```

And it works!

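For anyone applying the same fix, a systemd drop-in keeps the change out of the unit file itself (a sketch; `/path/to/lib/ollama` is a placeholder for your actual build-output directory, and note that systemd does not expand `$LD_LIBRARY_PATH` the way a shell does, so spell out the full value):

```
sudo systemctl edit ollama
# In the override that opens, add:
#   [Service]
#   Environment="LD_LIBRARY_PATH=/path/to/lib/ollama:/usr/local/cuda-12.4/lib64"
sudo systemctl restart ollama
```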

@Mikhail42 commented on GitHub (Jul 30, 2025):

I solved a similar problem with `export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu` in my `~/.profile` (or in the service file). But you need to check the logs of the ollama service for your own case (e.g., `systemctl status ollama`, or `journalctl ...`).


@cyb-s commented on GitHub (Aug 8, 2025):

I have this problem with an RTX 4090. It was working fine three days ago without a problem.


@rick-github commented on GitHub (Aug 8, 2025):

Open a new issue, include [system logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues).


@Anyeos commented on GitHub (Aug 19, 2025):

Hello, in my case it is not working at all even though everything looks OK. It detects the CPU and GPU, and `ollama ps` says it is using the GPU, but nvidia-smi does not show any process using it (and the GPU stays cold at 0% activity). All the CPU cores go to 100%, so it is clearly using the CPU only.

I looked at the logs: there is a runner, but it does not use the GPU (I don't know why; there is no obvious reason). It only reports the CPU backend and the famous `msg="compatible gpu libraries" compatible=[]`.

I was using the version downloaded from the Releases page of this GitHub project. So I decided to build my own version with CUDA 12 and NVIDIA driver 575.64.03 (Ubuntu 24.04.2). It builds without problems, no errors, the CUDA SDK is detected, and it works. So there is a problem with your build.

I am running my build with `go run . serve` together with your original `ollama run` binary, and it works. The problem is in the serve part.


@rick-github commented on GitHub (Aug 19, 2025):

Open a new issue, include [system logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues).


@jo-neves commented on GitHub (Dec 3, 2025):

I am on Arch (BTW), so for me I just had to install the package "ollama-cuda" instead of "ollama".

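For other Arch users, the swap is a single package transaction (assuming the `ollama-cuda` package from the official repositories; pacman will handle replacing `ollama`):

```
sudo pacman -S ollama-cuda
sudo systemctl restart ollama
```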

@thkinh commented on GitHub (Dec 13, 2025):

> I am on Arch (BTW), so for me I just had to install the package "ollama-cuda" instead of "ollama".

Thank you, I also use Arch (BTW) and your comment helped me!

Reference: github-starred/ollama#32203