[GH-ISSUE #8393] Unable to enable GPU for models #5389

Closed
opened 2026-04-12 16:36:56 -05:00 by GiteaMirror · 16 comments

Originally created by @hyongaa on GitHub (Jan 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8393

### What is the issue?

I don't have much knowledge in this area, so I followed the instructions and installed Ollama. But when I run a model, only the CPU is working and my NVIDIA GPU isn't used at all. How can I enable the GPU for Ollama? Thanks a lot.

### OS

Windows

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.5.4

GiteaMirror added the bug label 2026-04-12 16:36:56 -05:00

@rick-github commented on GitHub (Jan 12, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.
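
A quick way to capture those logs (a sketch, assuming the default systemd service on Linux or the standard installer on Windows):

```
# Linux (systemd): view the most recent server log entries
journalctl -e -u ollama

# Windows: the server log is written to %LOCALAPPDATA%\Ollama\server.log
explorer %LOCALAPPDATA%\Ollama
```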


@knoopx commented on GitHub (Jan 12, 2025):

Just updated, and Ollama no longer uses my GPU either, despite it being detected and having plenty of VRAM:

```
Jan 13 00:27:28 desktop systemd[1]: Starting Server for local large language models...
Jan 13 00:27:28 desktop systemd[1]: Started Server for local large language models.
Jan 13 00:27:29 desktop ollama[1253]: 2025/01/13 00:27:29 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://[::]:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/lib/ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Jan 13 00:27:29 desktop ollama[1253]: time=2025-01-13T00:27:29.021+01:00 level=INFO source=images.go:757 msg="total blobs: 54"
Jan 13 00:27:29 desktop ollama[1253]: time=2025-01-13T00:27:29.021+01:00 level=INFO source=images.go:764 msg="total unused blobs removed: 0"
Jan 13 00:27:29 desktop ollama[1253]: time=2025-01-13T00:27:29.021+01:00 level=INFO source=routes.go:1310 msg="Listening on [::]:11434 (version 0.5.4)"
Jan 13 00:27:29 desktop ollama[1253]: time=2025-01-13T00:27:29.021+01:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners=[cpu]
Jan 13 00:27:29 desktop ollama[1253]: time=2025-01-13T00:27:29.021+01:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
Jan 13 00:27:29 desktop ollama[1253]: time=2025-01-13T00:27:29.123+01:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-92dd72a7-d9f3-c9dd-18c3-2ff50ce7de6b library=cuda variant=v12 compute=8.6 driver=12.7 name="NVIDIA GeForce RTX 3090" total="23.8 GiB" available="23.5 GiB"
Jan 13 00:28:10 desktop ollama[1253]: [GIN] 2025/01/13 - 00:28:10 | 200 |      39.256µs |       127.0.0.1 | HEAD     "/"
Jan 13 00:28:10 desktop ollama[1253]: [GIN] 2025/01/13 - 00:28:10 | 200 |   18.410441ms |       127.0.0.1 | POST     "/api/show"
Jan 13 00:28:10 desktop ollama[1253]: time=2025-01-13T00:28:10.984+01:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/var/lib/ollama/models/blobs/sha256-ac9bc7a69dab38da1c790838955f1293420b55ab555ef6b4615efa1c1507b1ed gpu=GPU-92dd72a7-d9f3-c9dd-18c3-2ff50ce7de6b parallel=4 available=24411308032 required="10.8 GiB"
Jan 13 00:28:11 desktop ollama[1253]: time=2025-01-13T00:28:11.040+01:00 level=INFO source=server.go:104 msg="system memory" total="31.2 GiB" free="26.1 GiB" free_swap="64.0 GiB"
Jan 13 00:28:11 desktop ollama[1253]: time=2025-01-13T00:28:11.041+01:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[22.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.8 GiB" memory.required.partial="10.8 GiB" memory.required.kv="1.5 GiB" memory.required.allocations="[10.8 GiB]" memory.weights.total="8.9 GiB" memory.weights.repeating="8.3 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="676.0 MiB" memory.graph.partial="916.1 MiB"
Jan 13 00:28:11 desktop ollama[1253]: time=2025-01-13T00:28:11.041+01:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/nix/store/y7jkic6a3ybzh2n2cxhwas2f1hhmcvsb-ollama-0.5.4/bin/.ollama-wrapped runner --model /var/lib/ollama/models/blobs/sha256-ac9bc7a69dab38da1c790838955f1293420b55ab555ef6b4615efa1c1507b1ed --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 8 --parallel 4 --port 33817"
Jan 13 00:28:11 desktop ollama[1253]: time=2025-01-13T00:28:11.041+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
Jan 13 00:28:11 desktop ollama[1253]: time=2025-01-13T00:28:11.041+01:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
Jan 13 00:28:11 desktop ollama[1253]: time=2025-01-13T00:28:11.042+01:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
Jan 13 00:28:11 desktop ollama[1253]: time=2025-01-13T00:28:11.049+01:00 level=INFO source=runner.go:945 msg="starting go runner"
Jan 13 00:28:11 desktop ollama[1253]: time=2025-01-13T00:28:11.049+01:00 level=INFO source=runner.go:946 msg=system info="CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=8
Jan 13 00:28:11 desktop ollama[1253]: time=2025-01-13T00:28:11.049+01:00 level=INFO source=runner.go:1004 msg="Server listening on 127.0.0.1:33817"
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from /var/lib/ollama/models/blobs/sha256-ac9bc7a69dab38da1c790838955f1293420b55ab555ef6b4615efa1c1507b1ed (version GGUF V3 (latest))
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv   0:                       general.architecture str              = qwen2
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv   1:                               general.type str              = model
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 Coder 14B Instruct
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv   3:                           general.finetune str              = Instruct
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5-Coder
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv   5:                         general.size_label str              = 14B
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv   6:                            general.license str              = apache-2.0
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-C...
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 Coder 14B
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-C...
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  12:                               general.tags arr[str,6]       = ["code", "codeqwen", "chat", "qwen", ...
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  14:                          qwen2.block_count u32              = 48
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 5120
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 13824
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 40
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 8
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  22:                          general.file_type u32              = 15
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - kv  33:               general.quantization_version u32              = 2
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - type  f32:  241 tensors
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - type q4_K:  289 tensors
Jan 13 00:28:11 desktop ollama[1253]: llama_model_loader: - type q6_K:   49 tensors
Jan 13 00:28:11 desktop ollama[1253]: llm_load_vocab: special tokens cache size = 22
Jan 13 00:28:11 desktop ollama[1253]: llm_load_vocab: token to piece cache size = 0.9310 MB
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: format           = GGUF V3 (latest)
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: arch             = qwen2
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: vocab type       = BPE
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_vocab          = 152064
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_merges         = 151387
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: vocab_only       = 0
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_ctx_train      = 32768
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_embd           = 5120
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_layer          = 48
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_head           = 40
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_head_kv        = 8
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_rot            = 128
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_swa            = 0
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_embd_head_k    = 128
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_embd_head_v    = 128
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_gqa            = 5
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_embd_k_gqa     = 1024
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_embd_v_gqa     = 1024
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_ff             = 13824
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_expert         = 0
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_expert_used    = 0
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: causal attn      = 1
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: pooling type     = 0
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: rope type        = 2
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: rope scaling     = linear
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: freq_base_train  = 1000000.0
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: freq_scale_train = 1
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: n_ctx_orig_yarn  = 32768
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: rope_finetuned   = unknown
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: ssm_d_conv       = 0
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: ssm_d_inner      = 0
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: ssm_d_state      = 0
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: ssm_dt_rank      = 0
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: model type       = 14B
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: model ftype      = Q4_K - Medium
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: model params     = 14.77 B
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: model size       = 8.37 GiB (4.87 BPW)
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: general.name     = Qwen2.5 Coder 14B Instruct
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: LF token         = 148848 'ÄĬ'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: EOG token        = 151643 '<|endoftext|>'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: EOG token        = 151645 '<|im_end|>'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
Jan 13 00:28:11 desktop ollama[1253]: llm_load_print_meta: max token length = 256
Jan 13 00:28:11 desktop ollama[1253]: time=2025-01-13T00:28:11.293+01:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
Jan 13 00:28:15 desktop ollama[1253]: llm_load_tensors:   CPU_Mapped model buffer size =  8566.04 MiB
Jan 13 00:28:15 desktop ollama[1253]: llama_new_context_with_model: n_seq_max     = 4
Jan 13 00:28:15 desktop ollama[1253]: llama_new_context_with_model: n_ctx         = 8192
Jan 13 00:28:15 desktop ollama[1253]: llama_new_context_with_model: n_ctx_per_seq = 2048
Jan 13 00:28:15 desktop ollama[1253]: llama_new_context_with_model: n_batch       = 2048
Jan 13 00:28:15 desktop ollama[1253]: llama_new_context_with_model: n_ubatch      = 512
Jan 13 00:28:15 desktop ollama[1253]: llama_new_context_with_model: flash_attn    = 0
Jan 13 00:28:15 desktop ollama[1253]: llama_new_context_with_model: freq_base     = 1000000.0
Jan 13 00:28:15 desktop ollama[1253]: llama_new_context_with_model: freq_scale    = 1
Jan 13 00:28:15 desktop ollama[1253]: llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
Jan 13 00:28:15 desktop ollama[1253]: llama_kv_cache_init:        CPU KV buffer size =  1536.00 MiB
Jan 13 00:28:15 desktop ollama[1253]: llama_new_context_with_model: KV self size  = 1536.00 MiB, K (f16):  768.00 MiB, V (f16):  768.00 MiB
Jan 13 00:28:15 desktop ollama[1253]: llama_new_context_with_model:        CPU  output buffer size =     2.40 MiB
Jan 13 00:28:15 desktop ollama[1253]: llama_new_context_with_model:        CPU compute buffer size =   696.01 MiB
Jan 13 00:28:15 desktop ollama[1253]: llama_new_context_with_model: graph nodes  = 1686
Jan 13 00:28:15 desktop ollama[1253]: llama_new_context_with_model: graph splits = 1
Jan 13 00:28:15 desktop ollama[1253]: time=2025-01-13T00:28:15.807+01:00 level=INFO source=server.go:594 msg="llama runner started in 4.77 seconds"
Jan 13 00:28:36 desktop ollama[1253]: [GIN] 2025/01/13 - 00:28:36 | 200 | 26.099276862s |       127.0.0.1 | POST     "/api/generate"
```

```
$ ollama --version
ollama version is 0.5.4
```

@rick-github commented on GitHub (Jan 12, 2025):

It looks like the runner is not CUDA enabled (the startup log above only lists runners=[cpu]). You'll have to take it up with the package maintainer.
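
For anyone else landing here: the /nix/store path in the log above suggests a NixOS install, where GPU support is opt-in. A sketch, assuming the nixpkgs services.ollama module (option names may differ across nixpkgs versions):

```
# NixOS configuration (hypothetical snippet, not verified on this system):
#   services.ollama.enable = true;
#   services.ollama.acceleration = "cuda";
# After rebuilding, the startup log should list a cuda runner instead of
# runners=[cpu]:
journalctl -u ollama | grep "Dynamic LLM libraries"
```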


@hyongaa commented on GitHub (Jan 13, 2025):

> [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

Yes, sure. Here is my server log. Could you please take a look?

```
2025/01/13 09:17:39 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\31696\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-01-13T09:17:39.506+08:00 level=INFO source=images.go:757 msg="total blobs: 11"
time=2025-01-13T09:17:39.506+08:00 level=INFO source=images.go:764 msg="total unused blobs removed: 0"
time=2025-01-13T09:17:39.507+08:00 level=INFO source=routes.go:1310 msg="Listening on 127.0.0.1:11434 (version 0.5.4)"
time=2025-01-13T09:17:39.508+08:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]"
time=2025-01-13T09:17:39.508+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-13T09:17:39.508+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-01-13T09:17:39.508+08:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-01-13T09:17:39.508+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=16 efficiency=8 threads=24
time=2025-01-13T09:17:39.684+08:00 level=INFO source=gpu.go:334 msg="detected OS VRAM overhead" id=GPU-46754647-c44c-78c4-b10d-feca1dfe7261 library=cuda compute=8.9 driver=0.0 name="" overhead="731.5 MiB"
time=2025-01-13T09:17:39.698+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-46754647-c44c-78c4-b10d-feca1dfe7261 library=cuda variant=v11 compute=8.9 driver=0.0 name="" total="6.0 GiB" available="5.0 GiB"
[GIN] 2025/01/13 - 09:17:39 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/01/13 - 09:17:39 | 200 | 27.9475ms | 127.0.0.1 | POST "/api/show"
time=2025-01-13T09:17:39.900+08:00 level=INFO source=server.go:104 msg="system memory" total="31.7 GiB" free="22.4 GiB" free_swap="54.7 GiB"
time=2025-01-13T09:17:39.900+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=31 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.5 GiB" memory.required.partial="5.0 GiB" memory.required.kv="256.0 MiB" memory.required.allocations="[5.0 GiB]" memory.weights.total="3.9 GiB" memory.weights.repeating="3.5 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="677.5 MiB"
time=2025-01-13T09:17:39.908+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="C:\Users\31696\AppData\Local\Programs\Ollama\lib\ollama\runners\cuda_v11_avx\ollama_llama_server.exe runner --model C:\Users\31696\.ollama\models\blobs\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 2048 --batch-size 512 --n-gpu-layers 31 --threads 8 --no-mmap --parallel 1 --port 50974"
time=2025-01-13T09:17:39.950+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-01-13T09:17:39.955+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-01-13T09:17:39.955+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-01-13T09:17:40.118+08:00 level=INFO source=runner.go:945 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4050 Laptop GPU, compute capability 8.9, VMM: yes
time=2025-01-13T09:17:40.147+08:00 level=INFO source=runner.go:946 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(clang)" threads=8
time=2025-01-13T09:17:40.148+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:50974"
time=2025-01-13T09:17:40.207+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 4050 Laptop GPU) - 5005 MiB free
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from C:\Users\31696\.ollama\models\blobs\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 20: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 21: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 31 repeating layers to GPU
llm_load_tensors: offloaded 31/33 layers to GPU
llm_load_tensors: CUDA_Host model buffer size = 809.84 MiB
llm_load_tensors: CUDA0 model buffer size = 3627.97 MiB
llama_new_context_with_model: n_seq_max = 1
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
llama_kv_cache_init: CPU KV buffer size = 8.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 248.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.50 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 677.48 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 12.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 15 (with bs=512), 3 (with bs=1)
time=2025-01-13T09:17:44.715+08:00 level=INFO source=server.go:594 msg="llama runner started in 4.76 seconds"
[GIN] 2025/01/13 - 09:17:44 | 200 | 4.8678476s | 127.0.0.1 | POST "/api/generate"
```

@rick-github commented on GitHub (Jan 13, 2025):

llm_load_tensors: offloading 31 repeating layers to GPU
llm_load_tensors: offloaded 31/33 layers to GPU
llm_load_tensors: CUDA_Host model buffer size = 809.84 MiB
llm_load_tensors: CUDA0 model buffer size = 3627.97 MiB

According to this, ollama is using the GPU. What's the output of `ollama ps` and `nvidia-smi`?

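For readers hitting the same symptom, both diagnostics asked for above are standard commands and can be run from any terminal; a minimal sketch:

```
# Show which models are loaded and the CPU/GPU split ollama chose
ollama ps

# Show VRAM usage and the processes currently using the NVIDIA GPU
nvidia-smi
```
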
@hyongaa commented on GitHub (Jan 13, 2025):

Thanks! The output for `ollama ps` is:

![Screenshot 2025-01-13 102828](https://github.com/user-attachments/assets/0a2e12f1-d952-4747-9083-98a347fcee59)

The output for `nvidia-smi` is:

![Screenshot 2025-01-13 102926](https://github.com/user-attachments/assets/20646690-d44f-4bc1-87fe-3c676bdd6813)

BTW, when running ollama models, I notice in Task Manager that the GPU is nearly idle while the CPU is fully utilized. I'm wondering why that is happening.

![Screenshot 2025-01-13 101956](https://github.com/user-attachments/assets/bac90087-76e9-4b3d-96ac-9aadaeaa11b3)
![Screenshot 2025-01-13 102114](https://github.com/user-attachments/assets/8d53b149-9c89-48bf-8604-97ece867b83b)

Thanks for your time.

@rick-github commented on GitHub (Jan 13, 2025):

The GPU is much faster than the CPU. What happens is that the GPU completes its portion of the inference, then waits for the CPU to complete. The utilization of the GPU is low because it's got nothing to do.

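One way to observe those short bursts of GPU activity, rather than Task Manager's misleading average, is NVIDIA's device monitor. A sketch (`dmon` is a standard `nvidia-smi` subcommand; exact columns can vary by driver version):

```
# Sample GPU utilization once per second while a prompt is generating;
# the "sm" column is compute utilization, "mem" is memory bandwidth
nvidia-smi dmon -s u
```
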
@hyongaa commented on GitHub (Jan 13, 2025):

> The GPU is much faster than the CPU. What happens is that the GPU completes its portion of the inference, then waits for the CPU to complete. The utilization of the GPU is low because it's got nothing to do.

Thanks! Is there any way I can make the GPU take on more of the work and improve utilization? I have to wait several minutes for llama3.3 to give a response.

@rick-github commented on GitHub (Jan 13, 2025):

ollama has offloaded 31 of 33 layers to the GPU (that's why `ollama ps` shows 9%/91%) and has loaded the remaining layers into system RAM because it thinks that's the best use of resources. The GPU processes the layers in VRAM and the CPU processes the layers in system RAM. There is a mode where you can have the GPU process all layers, whether they sit in VRAM or RAM. However, doing this for more than a small subset of layers can carry a significant [performance penalty](https://github.com/ollama/ollama/issues/7584#issuecomment-2466715900) because there's a bottleneck between the GPU and system RAM. You can enable this mode by overriding the layer count that ollama has chosen for GPU offload; see [here](https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650) for details: just set `num_gpu` to 33.

If this doesn't help, the only remaining option is to reduce the amount of VRAM the model uses, either by choosing a smaller model or by using a different quantization of the Meta-Llama-3-8B-Instruct model. You are currently using Q4_0; you could try [q3_K_S](https://ollama.com/library/llama3:8b-instruct-q3_K_S), which is 3.7G compared to 4.7G for Q4_0. Otherwise, smaller models: [llama3.2:3b-instruct-q4_K_M](https://ollama.com/library/llama3.2:3b-instruct-q4_K_M) is 2G and [qwen2.5:3b-instruct-q4_K_M](https://ollama.com/library/qwen2.5:3b-instruct-q4_K_M) is 1.9G.

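A minimal sketch of the `num_gpu` override described above, using ollama's standard parameter mechanisms (the model and the `llama3-fullgpu` name are illustrative; substitute whatever `ollama ps` reports):

```
# 1. Interactively, inside a chat session:
#      ollama run llama3
#      >>> /set parameter num_gpu 33

# 2. Per-request, through the REST API:
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "options": { "num_gpu": 33 }
}'

# 3. Persistently, by baking it into a derived model with a Modelfile:
#      FROM llama3
#      PARAMETER num_gpu 33
#    then build it (hypothetical name):
#      ollama create llama3-fullgpu -f Modelfile
```
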
@hyongaa commented on GitHub (Jan 13, 2025):

> ollama has offloaded 31 of 33 layers to the GPU (that's why `ollama ps` shows 9%/91%) and has loaded the remaining layers into system RAM because it thinks that's the best use of resources. The GPU processes the layers in VRAM and the CPU processes the layers in system RAM. There is a mode where you can have the GPU process all layers, whether they sit in VRAM or RAM. However, doing this for more than a small subset of layers can carry a significant [performance penalty](https://github.com/ollama/ollama/issues/7584#issuecomment-2466715900) because there's a bottleneck between the GPU and system RAM. You can enable this mode by overriding the layer count that ollama has chosen for GPU offload; see [here](https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650) for details: just set `num_gpu` to 33.
>
> If this doesn't help, the only remaining option is to reduce the amount of VRAM the model uses, either by choosing a smaller model or by using a different quantization of the Meta-Llama-3-8B-Instruct model. You are currently using Q4_0; you could try [q3_K_S](https://ollama.com/library/llama3:8b-instruct-q3_K_S), which is 3.7G compared to 4.7G for Q4_0. Otherwise, smaller models: [llama3.2:3b-instruct-q4_K_M](https://ollama.com/library/llama3.2:3b-instruct-q4_K_M) is 2G and [qwen2.5:3b-instruct-q4_K_M](https://ollama.com/library/qwen2.5:3b-instruct-q4_K_M) is 1.9G.

Thanks for your explanation!
However, after I changed `num_gpu` to 33, `ollama ps` tells me this:

![Screenshot 2025-01-13 114608](https://github.com/user-attachments/assets/1973a552-f8a3-4875-9269-db50df300e5b)

And no matter what value I set for `num_gpu`, the highest percentage it reaches is 20%/80%. I am wondering why that is happening.

@rick-github commented on GitHub (Jan 13, 2025):

Since you are overriding ollama's choice, the output of `ollama ps` will be incorrect. Look in the logs for `offloaded xx/33 layers to GPU`.

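To locate that line: ollama's troubleshooting docs put the Windows server log under `%LOCALAPPDATA%\Ollama`; a sketch for both platforms (file name assumed to be `server.log` per those docs):

```
# Windows (PowerShell): search the server log for the offload report
Select-String -Path "$env:LOCALAPPDATA\Ollama\server.log" -Pattern "offloaded"

# Linux (systemd install): the same line appears in the service journal
journalctl -u ollama | grep "offloaded"
```
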
@hyongaa commented on GitHub (Jan 13, 2025):

> Since you are overriding ollama's choice, the output of `ollama ps` will be incorrect. Look in the logs for `offloaded xx/33 layers to GPU`.

Oh, thanks. Yes, I see this in the log:

`llm_load_tensors: offloaded 33/33 layers to GPU`

So does that mean ollama is actually running fully on the GPU now, and the `ollama ps` figure is just a display error? If not, how many layers should I set to make it run fully on the GPU?
Sorry for so many newbie questions. I am completely new to this area.

@rick-github commented on GitHub (Jan 13, 2025):

The GPU will be processing the whole model (unless there's a bug), but some of the model layers are in system RAM rather than VRAM. This means that while the GPU is doing all of the work, for some of the layers it has to talk across the PCIe bus to system RAM, and that part will be slower. Note that even though the GPU is doing all the inference, the CPU is still in charge of telling the GPU what to do, so you will still see high CPU usage. But this should come in short bursts, only while the GPU is doing inference.

@hyongaa commented on GitHub (Jan 13, 2025):

Got it. Thanks a lot. It does solve my problem.
One last question: how can I disable this override if I want to go back to ollama's automatic CPU/GPU offload?

@rick-github commented on GitHub (Jan 13, 2025):

Don't set `num_gpu`.

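Concretely, undoing the override depends on where it was set; a short sketch under the same assumptions as the earlier example (the `llama3-fullgpu` name is hypothetical):

```
# Interactive session: parameters set with /set only last for that session,
# so simply start a fresh `ollama run`.

# REST API: omit "num_gpu" from the options object entirely.

# Modelfile: delete the PARAMETER line and rebuild with `ollama create`,
# or remove the derived model altogether:
ollama rm llama3-fullgpu
```
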
@hyongaa commented on GitHub (Jan 13, 2025):

Thanks a lot
