[GH-ISSUE #8845] NVIDIA GPU not being used for unknown reason #67788

Closed
opened 2026-05-04 11:41:48 -05:00 by GiteaMirror · 21 comments
Owner

Originally created by @sapphirepro on GitHub (Feb 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8845

What is the issue?

Hello.

I am a newbie here. I wanted to run deepseek, and it runs, but only on the CPU. Despite an available NVIDIA GPU with CUDA installed (I even recompiled everything to make sure CUDA works), when loading the deepseek-r1 model it prints something odd about "llama" failing to start, and everything is computed on the CPU instead of the GPU.

The logs are too opaque for me to understand what is wrong or how to make it use GPU compute. Please, can someone help me solve this? Thanks a lot in advance. I even tried running as root, thinking some privileges might be missing, but it was still the same.

```
ollama serve
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is:

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAbkO2N0BssJZyK8MOi9GpUT5gb4Ilwd75I8brHoilay

2025/02/05 14:17:52 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-05T14:17:52.893+01:00 level=INFO source=images.go:432 msg="total blobs: 0"
time=2025-02-05T14:17:52.893+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.

  • using env: export GIN_MODE=release
  • using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-02-05T14:17:52.893+01:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7-0-ga420a45)"
time=2025-02-05T14:17:52.893+01:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners=[cpu]
time=2025-02-05T14:17:52.893+01:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-05T14:17:52.995+01:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-830dad3a-2751-63ee-2299-65a3bb9dcf1e library=cuda variant=v12 compute=6.1 driver=12.8 name="Quadro P3000" total="5.9 GiB" available="4.7 GiB"
[GIN] 2025/02/05 - 14:17:59 | 200 | 52.014µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/05 - 14:17:59 | 404 | 319.351µs | 127.0.0.1 | POST "/api/show"
time=2025-02-05T14:18:00.603+01:00 level=INFO source=download.go:175 msg="downloading aabd4debf0c8 in 12 100 MB part(s)"
time=2025-02-05T14:18:12.941+01:00 level=INFO source=download.go:175 msg="downloading 369ca498f347 in 1 387 B part(s)"
time=2025-02-05T14:18:14.278+01:00 level=INFO source=download.go:175 msg="downloading 6e4c38e1172f in 1 1.1 KB part(s)"
time=2025-02-05T14:18:15.611+01:00 level=INFO source=download.go:175 msg="downloading f4d24e9138dd in 1 148 B part(s)"
time=2025-02-05T14:18:16.954+01:00 level=INFO source=download.go:175 msg="downloading a85fe2a2e58e in 1 487 B part(s)"
[GIN] 2025/02/05 - 14:18:21 | 200 | 21.358537284s | 127.0.0.1 | POST "/api/pull"
[GIN] 2025/02/05 - 14:18:21 | 200 | 19.450722ms | 127.0.0.1 | POST "/api/show"
time=2025-02-05T14:18:21.399+01:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc gpu=GPU-830dad3a-2751-63ee-2299-65a3bb9dcf1e parallel=4 available=5028446208 required="1.9 GiB"
time=2025-02-05T14:18:21.480+01:00 level=INFO source=server.go:104 msg="system memory" total="62.7 GiB" free="45.3 GiB" free_swap="0 B"
time=2025-02-05T14:18:21.481+01:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[4.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.9 GiB" memory.required.partial="1.9 GiB" memory.required.kv="224.0 MiB" memory.required.allocations="[1.9 GiB]" memory.weights.total="976.1 MiB" memory.weights.repeating="793.5 MiB" memory.weights.nonrepeating="182.6 MiB" memory.graph.full="299.8 MiB" memory.graph.partial="482.3 MiB"
time=2025-02-05T14:18:21.482+01:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 4 --parallel 4 --port 33681"
time=2025-02-05T14:18:21.482+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-05T14:18:21.482+01:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-05T14:18:21.483+01:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-05T14:18:21.496+01:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-02-05T14:18:21.496+01:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=4
time=2025-02-05T14:18:21.496+01:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:33681"
llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /root/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 1.5B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv 4: general.size_label str = 1.5B
llama_model_loader: - kv 5: qwen2.block_count u32 = 28
llama_model_loader: - kv 6: qwen2.context_length u32 = 131072
llama_model_loader: - kv 7: qwen2.embedding_length u32 = 1536
llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 8960
llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 12
llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 2
llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,151936] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_K: 169 tensors
llama_model_loader: - type q6_K: 29 tensors
time=2025-02-05T14:18:21.734+01:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 151936
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 1536
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 12
llm_load_print_meta: n_head_kv = 2
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 6
llm_load_print_meta: n_embd_k_gqa = 256
llm_load_print_meta: n_embd_v_gqa = 256
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 8960
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 1.5B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 1.78 B
llm_load_print_meta: model size = 1.04 GiB (5.00 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Qwen 1.5B
llm_load_print_meta: BOS token = 151646 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: CPU_Mapped model buffer size = 1059.89 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1
llama_kv_cache_init: CPU KV buffer size = 224.00 MiB
llama_new_context_with_model: KV self size = 224.00 MiB, K (f16): 112.00 MiB, V (f16): 112.00 MiB
llama_new_context_with_model: CPU output buffer size = 2.34 MiB
llama_new_context_with_model: CPU compute buffer size = 302.75 MiB
llama_new_context_with_model: graph nodes = 986
llama_new_context_with_model: graph splits = 1
time=2025-02-05T14:18:21.984+01:00 level=INFO source=server.go:594 msg="llama runner started in 0.50 seconds"
[GIN] 2025/02/05 - 14:18:21 | 200 | 749.275895ms | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/02/05 - 14:19:20 | 200 | 21.93435584s | 127.0.0.1 | POST "/api/chat"
```

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.7 (latest)

GiteaMirror added the bug label 2026-05-04 11:41:48 -05:00
Author
Owner

@rick-github commented on GitHub (Feb 5, 2025):

time=2025-02-05T14:17:52.893+01:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners=[cpu]

You have no GPU-enabled runners. How did you install ollama?
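
For anyone checking the same thing: the quickest test is the server's startup log and its runner discovery lines. A rough sketch for a systemd install (log location and exact wording vary by version and install method):

```
# Dump the service log and pull out the runner/GPU discovery lines.
journalctl -u ollama --no-pager | grep -E 'Dynamic LLM libraries|looking for compatible GPUs|inference compute'

# runners=[cpu]                                -> only the CPU runner is installed/built; inference stays on the CPU
# runners="[cpu cpu_avx ... cuda_v12_avx ...]" -> CUDA runners are present and the GPU can be used
```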

Author
Owner

@mxyng commented on GitHub (Feb 5, 2025):

I even recompiled everything making sure cuda works

You may have missed steps when building from source that enables CUDA. You should see something like this if GPU support is detected:

time=2025-02-05T19:05:45.253Z level=INFO source=routes.go:1238 msg="Listening on [::]:54321 (version 0.5.7)"
time=2025-02-05T19:05:45.254Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]"

How did you install Ollama?
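
One way to verify whether a given install actually shipped GPU runners is to look for the runner libraries next to the binary. A rough sketch; the paths below are examples, since the layout depends on the version and how it was installed:

```
# Locate the binary.
which ollama                                   # e.g. /usr/local/bin/ollama or /usr/bin/ollama

# Recent Linux releases unpack runner libraries into a lib/ollama directory;
# a CPU-only build will be missing the cuda_* entries.
ls /usr/local/lib/ollama 2>/dev/null || ls /usr/lib/ollama 2>/dev/null

# If the CUDA runners are missing, the simplest fix is the official install script
# rather than a hand-rolled build:
curl -fsSL https://ollama.com/install.sh | sh
```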

Author
Owner

@JohnYehyo commented on GitHub (Feb 6, 2025):

I encountered the same issue. I manually installed the Linux version of Ollama. The NVIDIA driver and CUDA are already installed on my machine and working properly. Before running the model with Ollama, I set the environment variable `export CUDA_VISIBLE_DEVICES=0`. However, during model execution I watched nvidia-smi, and it seems the GPU is not being used: its load stays at 0%, and no related processes are listed in the processes section. The log contains the following:

2025/02/06 14:10:03 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES:0 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: >
2月 06 14:10:03 ollama[29668]: time=2025-02-06T14:10:03.585+08:00 level=INFO source=images.go:432 msg="total blobs: 5"
2月 06 14:10:03 ollama[29668]: time=2025-02-06T14:10:03.585+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
2月 06 14:10:03 ollama[29668]: time=2025-02-06T14:10:03.585+08:00 level=INFO source=routes.go:1237 msg="Listening on [::]:11434 (version 0.5.8-rc7)"
2月 06 14:10:03 ollama[29668]: time=2025-02-06T14:10:03.585+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
2月 06 14:10:03 ollama[29668]: time=2025-02-06T14:10:03.653+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-c2b546f1-2717-0ea5-0c1c-dfec4ee6cc99 library=cuda variant>
2月 06 14:10:18 ollama[29668]: [GIN] 2025/02/06 - 14:10:18 | 200 | 24.335µs | 127.0.0.1 | HEAD "/"
2月 06 14:10:18 ollama[29668]: [GIN] 2025/02/06 - 14:10:18 | 200 | 8.384619ms | 127.0.0.1 | POST "/api/show"
2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.203+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollam>
2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.252+08:00 level=INFO source=server.go:100 msg="system memory" total="62.6 GiB" free="59.9 GiB" free_swap="2.0 GiB"
2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.252+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.spli>
2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.252+08:00 level=INFO source=server.go:381 msg="starting llama server" cmd="/usr/bin/ollama runner --model /usr/share/ollama/.olla>
2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.253+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.253+08:00 level=INFO source=server.go:558 msg="waiting for llama runner to start responding"
2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.253+08:00 level=INFO source=server.go:592 msg="waiting for server to become available" status="llm server error"
2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.262+08:00 level=INFO source=runner.go:936 msg="starting go runner"
2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.262+08:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=8
2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.262+08:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:38295"
2月 06 14:10:18 ollama[29668]: llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25>
2月 06 14:10:18 ollama[29668]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
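
A quick way to confirm whether a loaded model is actually running on the GPU (rough sketch; column layout and wording vary by version):

```
# Ask ollama where the loaded model is placed; the PROCESSOR column shows the split.
ollama ps
#   NAME               ...   PROCESSOR   ...
#   deepseek-r1:1.5b   ...   100% GPU    ...   <- expected when offload works ("100% CPU" matches this report)

# Watch the GPU while a prompt is generating; an active runner should appear as a
# process and show non-zero utilization.
nvidia-smi
```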

Author
Owner

@rick-github commented on GitHub (Feb 6, 2025):

The log is incomplete and truncated on the right. Please follow the instructions at https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues.
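
For anyone following along, the short version of that troubleshooting page is to capture the complete server log, ideally with debug output enabled. A rough sketch (adjust for how you run ollama):

```
# systemd install: save the recent server log to a file you can attach to the issue.
journalctl -u ollama --no-pager > ollama_server.log

# Or run the server in the foreground with debug logging and capture stderr.
OLLAMA_DEBUG=1 ollama serve 2> ollama_debug.log
```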

Author
Owner

@ChenZihua-cn commented on GitHub (Feb 6, 2025):

![Image](https://github.com/user-attachments/assets/2eccd058-4434-4028-9965-a01f2cc9f7cc)
I also hit this problem, but I am on WSL.

Author
Owner

@rick-github commented on GitHub (Feb 6, 2025):

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues
Author
Owner

@ByePastHub commented on GitHub (Feb 7, 2025):

> I encountered the same issue. I manually installed the Linux version of Ollama. The NVIDIA driver and CUDA are already installed on my machine and working properly. Before running the model with Ollama, I set the environment variable:export CUDA_VISIBLE_DEVICES=0, However, during model execution, I observed nvidia-smi, and it seems that the GPU is not being used. Its load remains at 0%, and there are no related processes listed in the processes section. The log contains information:
>
> 2025/02/06 14:10:03 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES:0 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: >
> 2月 06 14:10:03 ollama[29668]: time=2025-02-06T14:10:03.585+08:00 level=INFO source=images.go:432 msg="total blobs: 5"
> 2月 06 14:10:03 ollama[29668]: time=2025-02-06T14:10:03.585+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
> 2月 06 14:10:03 ollama[29668]: time=2025-02-06T14:10:03.585+08:00 level=INFO source=routes.go:1237 msg="Listening on [::]:11434 (version 0.5.8-rc7)"
> 2月 06 14:10:03 ollama[29668]: time=2025-02-06T14:10:03.585+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
> 2月 06 14:10:03 ollama[29668]: time=2025-02-06T14:10:03.653+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-c2b546f1-2717-0ea5-0c1c-dfec4ee6cc99 library=cuda variant>
> 2月 06 14:10:18 ollama[29668]: [GIN] 2025/02/06 - 14:10:18 | 200 | 24.335µs | 127.0.0.1 | HEAD "/"
> 2月 06 14:10:18 ollama[29668]: [GIN] 2025/02/06 - 14:10:18 | 200 | 8.384619ms | 127.0.0.1 | POST "/api/show"
> 2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.203+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollam>
> 2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.252+08:00 level=INFO source=server.go:100 msg="system memory" total="62.6 GiB" free="59.9 GiB" free_swap="2.0 GiB"
> 2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.252+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.spli>
> 2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.252+08:00 level=INFO source=server.go:381 msg="starting llama server" cmd="/usr/bin/ollama runner --model /usr/share/ollama/.olla>
> 2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.253+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
> 2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.253+08:00 level=INFO source=server.go:558 msg="waiting for llama runner to start responding"
> 2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.253+08:00 level=INFO source=server.go:592 msg="waiting for server to become available" status="llm server error"
> 2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.262+08:00 level=INFO source=runner.go:936 msg="starting go runner"
> 2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.262+08:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=8
> 2月 06 14:10:18 ollama[29668]: time=2025-02-06T14:10:18.262+08:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:38295"
> 2月 06 14:10:18 ollama[29668]: llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25>
> 2月 06 14:10:18 ollama[29668]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.

I am encountering the same problem: after a manual installation the GPU cannot be used, and answers are computed on the CPU.

Author
Owner

@rick-github commented on GitHub (Feb 7, 2025):

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues
Author
Owner

@mxyng commented on GitHub (Feb 7, 2025):

@JohnYehyo You're using a different release (0.5.8-rc7) which has a bug with build artifacts that prevent loading cuda libraries. This has been fixed in later release candidates.

@sapphirepro @ChenZihua-cn please attach logs. there's not enough information to identify an issue. Follow @rick-github's link to get the logs. If you don't provide more information, I would have no choice but to close this issue.
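
Since behaviour differs between releases, it is worth confirming exactly which build is in use before comparing logs. A quick sketch:

```
# Print the client and (if reachable) server version.
ollama -v

# The server also logs its version at startup, e.g.:
#   msg="Listening on 127.0.0.1:11434 (version 0.5.7-0-ga420a45)"
```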

Author
Owner

@ChenZihua-cn commented on GitHub (Feb 8, 2025):

[ollama_logs.txt](https://github.com/user-attachments/files/18719248/ollama_logs.txt)
I am very sorry, but I can't find the specific log lines where the trouble is, so I am posting the whole log, which has 16670 lines.
I don't know whether it helps or not; very sorry about that.
However, I found that if I start ollama on Windows first and then run the model from the WSL CLI, it uses my GPU instead of the CPU.
Though I don't know why that works. (・-・*)

> @JohnYehyo You're using a different release (0.5.8-rc7) which has a bug with build artifacts that prevent loading cuda libraries. This has been fixed in later release candidates.
>
> @sapphirepro @ChenZihua-cn please attach logs. there's not enough information to identify an issue. Follow @rick-github's link to get the logs. If you don't provide more information, I would have no choice but to close this issue.

Author
Owner

@ChenZihua-cn commented on GitHub (Feb 8, 2025):

[ollama_logs.txt](https://github.com/user-attachments/files/18719282/ollama_logs.txt)
This one includes timestamps for the serve logs, so it may be much longer than the previous file; I hope it helps.

Author
Owner

@rick-github commented on GitHub (Feb 8, 2025):

Neither of these logs contains useful information, only that the ollama server exited with code 217, which I can't find in the source code.

If you can run ollama natively, why do you want to do it inside WSL?

Author
Owner

@sapphirepro commented on GitHub (Feb 9, 2025):

Hmm... What is the command to install the latest RC, then? The command provided on the site only updates to the latest release, not the RC. How do I install the latest RC?

Author
Owner

@ChenZihua-cn commented on GitHub (Feb 9, 2025):

I heard that ollama on Linux can be fine-tuned by yourself, so I am trying to use Ubuntu to run ollama.

Author
Owner

@rick-github commented on GitHub (Feb 9, 2025):

@sapphirepro

`curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.5.8-rc12 sh`
Author
Owner

@rick-github commented on GitHub (Feb 9, 2025):

@ChenZihua-cn
ollama is only used to run the results of fine-tuning; you don't need to run ollama in Linux to do it. You use Linux to run the tools that perform the fine-tuning.

Author
Owner

@JohnYehyo commented on GitHub (Feb 10, 2025):

> @JohnYehyo You're using a different release (0.5.8-rc7) which has a bug with build artifacts that prevent loading cuda libraries. This has been fixed in later release candidates.
>
> @sapphirepro @ChenZihua-cn please attach logs. there's not enough information to identify an issue. Follow @rick-github's link to get the logs. If you don't provide more information, I would have no choice but to close this issue.

Yes, with version 0.5.7 instead of the 0.5.8-rc7 preview, the software runs normally on the GPU.

Author
Owner

@sapphirepro commented on GitHub (Feb 10, 2025):

> @sapphirepro
>
> `curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.5.8-rc12 sh`

thanks a lot! It helped

Author
Owner

@sapphirepro commented on GitHub (Feb 11, 2025):

> @sapphirepro
>
> `curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.5.8-rc12 sh`

Last question: how do I build a version similar to that ready-to-install one? The one I tried to build is much smaller and has no ollama executable. I would simply like CUDA 12.8 support, which in theory should give better performance. Thanks in advance.

Author
Owner

@rick-github commented on GitHub (Feb 11, 2025):

The build procedure has changed and I haven't got around to doing a custom build myself yet, so unfortunately I don't know the answer to your question.

Author
Owner

@mxyng commented on GitHub (Feb 12, 2025):

The updated build instructions are [here](https://github.com/ollama/ollama/blob/main/docs/development.md).

TL;DR:

You'll need CMake, a C/C++ compiler, and the CUDA Toolkit in this case.

  1. `cmake -B build`
  2. `cmake --build build -j`
  3. Finally, copy/install the build artifacts in `build/lib/ollama` into your `lib/ollama` directory.
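
Putting those steps together, a from-source CUDA build looks roughly like the sketch below. It assumes CMake, a C/C++ compiler, the CUDA Toolkit, and Go are installed; the install paths at the end are examples, not the only valid layout:

```
git clone https://github.com/ollama/ollama.git
cd ollama

# Configure and build the native runner libraries; CMake should pick up the CUDA
# Toolkit automatically when it is available.
cmake -B build
cmake --build build -j

# Build the ollama executable itself (the repository's main package is at the root).
go build -o ollama .

# Install: put the binary on PATH and the runner libraries where the binary can
# find them (example paths).
sudo install -m 0755 ollama /usr/local/bin/ollama
sudo mkdir -p /usr/local/lib/ollama
sudo cp -r build/lib/ollama/* /usr/local/lib/ollama/
```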

Reference: github-starred/ollama#67788