[GH-ISSUE #8908] Unable to run with GPU #52287

Closed
opened 2026-04-28 22:53:48 -05:00 by GiteaMirror · 22 comments
Owner

Originally created by @arkerwu on GitHub (Feb 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8908

What is the issue?

A freshly installed Ollama 0.5.7 cannot use the GPU, even though CUDA is installed correctly. After uninstalling it and installing version 0.4.7, the problem disappears and the GPU works. However, reinstalling a 0.5.8-rc build brings the issue back: the GPU goes unused, even though ollama ps reports 100% GPU.

Image: https://github.com/user-attachments/assets/d028e349-28eb-4084-aca1-cce7e5e32040

Image: https://github.com/user-attachments/assets/41c2dafd-9f56-4ce4-afdb-48a57e49fb42

Image: https://github.com/user-attachments/assets/fdc2e75e-1d9e-49a2-a651-bb3fd8d0277b
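A quick way to tell whether inference is actually hitting the GPU (beyond ollama ps) is to watch nvidia-smi while a generation runs. A minimal sketch, assuming the 1.5B model seen in the logs below; the model tag is only an example:

# start a generation in one terminal
ollama run deepseek-r1:1.5b "hello"
# in another terminal, poll GPU utilization and memory once per second
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1
# and compare with what the scheduler reports
ollama ps

If utilization.gpu stays near 0% while tokens are being produced, the runner is on the CPU regardless of what ollama ps claims.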

Relevant log output

2025/02/07 10:31:52 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/autodl-tmp/ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-07T10:31:52.881+08:00 level=INFO source=images.go:432 msg="total blobs: 4"
time=2025-02-07T10:31:52.882+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-07T10:31:52.884+08:00 level=INFO source=routes.go:1237 msg="Listening on [::]:11434 (version 0.5.8-rc7)"
time=2025-02-07T10:31:52.885+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-07T10:31:53.286+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-a4ca977d-1779-02be-6e3f-f523dcc15658 library=cuda variant=v12 compute=8.9 driver=12.2 name="NVIDIA GeForce RTX 4090" total="23.6 GiB" available="23.3 GiB"
[GIN] 2025/02/07 - 10:31:56 | 200 |     113.157µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/07 - 10:31:56 | 404 |     956.851µs |       127.0.0.1 | POST     "/api/show"
time=2025-02-07T10:31:58.629+08:00 level=INFO source=download.go:176 msg="downloading aabd4debf0c8 in 12 100 MB part(s)"
time=2025-02-07T10:33:06.300+08:00 level=INFO source=download.go:176 msg="downloading 369ca498f347 in 1 387 B part(s)"
time=2025-02-07T10:33:08.007+08:00 level=INFO source=download.go:176 msg="downloading 6e4c38e1172f in 1 1.1 KB part(s)"
time=2025-02-07T10:33:09.691+08:00 level=INFO source=download.go:176 msg="downloading f4d24e9138dd in 1 148 B part(s)"
time=2025-02-07T10:33:11.386+08:00 level=INFO source=download.go:176 msg="downloading a85fe2a2e58e in 1 487 B part(s)"
[GIN] 2025/02/07 - 10:33:14 | 200 |         1m17s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/02/07 - 10:33:14 | 200 |   34.480541ms |       127.0.0.1 | POST     "/api/show"
time=2025-02-07T10:33:14.776+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/autodl-tmp/ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc gpu=GPU-a4ca977d-1779-02be-6e3f-f523dcc15658 parallel=4 available=24981340160 required="1.9 GiB"
time=2025-02-07T10:33:14.958+08:00 level=INFO source=server.go:100 msg="system memory" total="1007.5 GiB" free="904.7 GiB" free_swap="0 B"
time=2025-02-07T10:33:14.959+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[23.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.9 GiB" memory.required.partial="1.9 GiB" memory.required.kv="224.0 MiB" memory.required.allocations="[1.9 GiB]" memory.weights.total="976.1 MiB" memory.weights.repeating="793.5 MiB" memory.weights.nonrepeating="182.6 MiB" memory.graph.full="299.8 MiB" memory.graph.partial="482.3 MiB"
time=2025-02-07T10:33:14.959+08:00 level=INFO source=server.go:381 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/autodl-tmp/ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 64 --parallel 4 --port 39517"
time=2025-02-07T10:33:14.960+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T10:33:14.960+08:00 level=INFO source=server.go:558 msg="waiting for llama runner to start responding"
time=2025-02-07T10:33:14.960+08:00 level=INFO source=server.go:592 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T10:33:15.023+08:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-02-07T10:33:15.023+08:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=64
time=2025-02-07T10:33:15.024+08:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:39517"
llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /root/autodl-tmp/ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 1.5B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 1.5B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 28
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 1536
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 8960
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 12
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 2
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2
time=2025-02-07T10:33:15.213+08:00 level=INFO source=server.go:592 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  141 tensors
llama_model_loader: - type q4_K:  169 tensors
llama_model_loader: - type q6_K:   29 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 151936
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 1536
llm_load_print_meta: n_layer          = 28
llm_load_print_meta: n_head           = 12
llm_load_print_meta: n_head_kv        = 2
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 6
llm_load_print_meta: n_embd_k_gqa     = 256
llm_load_print_meta: n_embd_v_gqa     = 256
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 8960
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 1.5B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 1.78 B
llm_load_print_meta: model size       = 1.04 GiB (5.00 BPW) 
llm_load_print_meta: general.name     = DeepSeek R1 Distill Qwen 1.5B
llm_load_print_meta: BOS token        = 151646 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
llm_load_tensors:   CPU_Mapped model buffer size =  1059.89 MiB
llama_new_context_with_model: n_seq_max     = 4
llama_new_context_with_model: n_ctx         = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 2048
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 10000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1
llama_kv_cache_init:        CPU KV buffer size =   224.00 MiB
llama_new_context_with_model: KV self size  =  224.00 MiB, K (f16):  112.00 MiB, V (f16):  112.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     2.34 MiB
llama_new_context_with_model:        CPU compute buffer size =   302.75 MiB
llama_new_context_with_model: graph nodes  = 986
llama_new_context_with_model: graph splits = 1
time=2025-02-07T10:33:16.469+08:00 level=INFO source=server.go:597 msg="llama runner started in 1.51 seconds"
[GIN] 2025/02/07 - 10:33:16 | 200 |  1.972311597s |       127.0.0.1 | POST     "/api/generate"
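Note the contradiction in the log above: the scheduler plans a full CUDA offload (layers.offload=29, "offload to cuda"), but the runner's system info line lists only CPU backends ("CPU : LLAMAFILE = 1") and every buffer is allocated on the CPU (CPU_Mapped model buffer size, CPU KV buffer size, CPU compute buffer size). A hedged way to spot this pattern on a Linux install managed by systemd (unit name may differ):

# pull the relevant lines out of the server log
journalctl -u ollama --no-pager | grep -E 'system info|offload to cuda|buffer size'
# a healthy GPU load shows CUDA-backed buffers (e.g. a "CUDA0 buffer size" line)
# rather than "CPU_Mapped model buffer size"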

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-28 22:53:48 -05:00

@arkerwu commented on GitHub (Feb 7, 2025):

Tested on two other computers (Linux + 2080 Ti and Windows + 3060); the issue persists.

@YonTracks commented on GitHub (Feb 7, 2025):

Same issue for me after a rebuild; I think even 0.5.7 is using the new GPU build settings? I'm testing now with the changes.

https://github.com/ollama/ollama/blob/main/docs/development.md

The official 0.5.7 OllamaSetup.exe should be good; I need to wait for CMake to finish.

@YonTracks commented on GitHub (Feb 7, 2025):

Yep, I could not wait for CMake (compiling for 1+ hours now), so I shut it down and tested the official 0.5.7 and 0.5.8 OllamaSetup.exe.

Both work great, but 0.5.8-rc10 seems better and quicker; tested the deepseek 32b model. Maybe it's the model; need to test more. Awesome.

edit: started the CUDA compile again using the new build-from-source instructions. Slow, taking way longer than before with the CUDA toolkit, but I'm using the VS Code CMake extension and not PowerShell, so I'm not sure; more testing, I'll keep testing lol.
I see the make folder is now gone, replaced by a build folder.

@gyfprivate commented on GitHub (Feb 7, 2025):

I have the same problem: #8906

@YonTracks commented on GitHub (Feb 7, 2025):

> I have the same problem: #8906

After the new build from source (link above), and now powershell -ExecutionPolicy Bypass -File .\scripts\build_windows.ps1, it seems fixed (waiting again though, compiling again... yay. lol).

The official release below works great:
https://github.com/ollama/ollama/releases/tag/v0.5.7
The OllamaSetup.exe is good, and also for 0.5.8-rc10.

And after my local new build, it truly does seem faster. Epic, cheers ollama.

@YonTracks commented on GitHub (Feb 7, 2025):

update: after compiling with powershell -ExecutionPolicy Bypass -File .\scripts\build_windows.ps1 to test the installer and running the compiled OllamaSetup.exe, the issue returns.

recap: so far, the local ollama serve only works when using the new instructions at https://github.com/ollama/ollama/blob/main/docs/development.md and not the PowerShell build script powershell -ExecutionPolicy Bypass -File .\scripts\build_windows.ps1.

@YonTracks commented on GitHub (Feb 7, 2025):

Far out, compiling again fresh. I also missed go run . serve; no way? Really, I think it's the same as ollama serve, but anyway, I'll have to test after the rebuild with the VS Code CMake extension. So far I'm using the extension, and it auto-configures the build folder; but if I try to use just the terminal, it says the build folder already exists, and if I delete the build folder and run cmake -B build followed by cmake --build build --config Release, it does not find a compiler:

-- Building for: Visual Studio 17 2022
-- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.26100.
-- The C compiler identification is unknown
-- The CXX compiler identification is unknown
CMake Error at CMakeLists.txt:3 (project):
  No CMAKE_C_COMPILER could be found.

CMake Error at CMakeLists.txt:3 (project):
  No CMAKE_CXX_COMPILER could be found.

-- Configuring incomplete, errors occurred!

It's really slow with the VS Code CMake extension, but it worked, so I will do that again.
I will test afterwards, check the PATH, and try deleting the extension, etc.
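For reference, a sketch of the build sequence under the assumption that it is run from a Visual Studio 2022 Developer Command Prompt, which is what the Feb 8 update further down says finally worked; outside that prompt, CMake cannot find the MSVC compilers, which matches the error above:

# from an "x64 Native Tools Command Prompt for VS 2022", in the repo root
cmake -B build
cmake --build build --config Release
# then run the dev server straight from the repo
go run . serve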

@arkerwu commented on GitHub (Feb 7, 2025):

I tried compiling from the source code, and it does work. However, I still hope the official team can identify the issue and fix it.

@gyfprivate commented on GitHub (Feb 7, 2025):

> I have the same problem: #8906
>
> After the new build from source (link above), and now powershell -ExecutionPolicy Bypass -File .\scripts\build_windows.ps1, it seems fixed (waiting again though, compiling again... yay. lol).
>
> The official release works great: https://github.com/ollama/ollama/releases/tag/v0.5.7. The OllamaSetup.exe is good, and also for 0.5.8-rc10.
>
> And after my local new build, it truly does seem faster. Epic, cheers ollama.

I'm very new; please help with all the details (how to compile, and what to do after compiling). I tried compiling and it did not work. Help!

@arkerwu commented on GitHub (Feb 7, 2025):

> I'm very new; please help with all the details (how to compile, and what to do after compiling). I tried compiling and it did not work. Help!

Try version 0.4.7.

@YonTracks commented on GitHub (Feb 7, 2025):

> > I'm very new; please help with all the details (how to compile, and what to do after compiling). I tried compiling and it did not work. Help!
>
> Try version 0.4.7.

Yep. On Windows? For now it's best not to compile from source; just use the OllamaSetup.exe from the tagged releases and it will work great (also, don't run ollama serve; rather, use the icon or the installed app like a normal app).

But if you want to compile, you need to configure CMake and the build correctly. I'm still compiling lmao. I need to configure it better, I think.
Good luck.

@gs80140 commented on GitHub (Feb 7, 2025):

Does it auto-select the GPU or not?
My setting is like this:

Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=hadoop
Group=hadoop
Restart=always
RestartSec=3
Environment="PATH=/home/hadoop/.pyenv/bin:/home/hadoop/spark-2.4.8-bin-hadoop2.7/bin:/home/hadoop/apache-hive-2.3.3-bin/bin:/home/hadoop/anaconda3/bin:/home/hadoop/anaconda3/condabin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/skyformai/bin:/bin:/opt/pbs/bin:/share/apps/anaconda3/bin:/home/hadoop/hadoop-2.7.3/bin:/home/hadoop/.local/bin:/home/hadoop/bin"
Environment="OLLAMA_MODELS=/home/hadoop/ollama"
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_DEVICE=gpu"
Environment="CUDA_VISIBLE_DEVICES =0,1"

[Install]
WantedBy=default.target

But it does not work; it always uses the CPU.

Ollama version is 0.5.7.
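Two things stand out in that unit, judging by the server-config log at the top of this issue: OLLAMA_DEVICE does not appear in the env map that routes.go prints, so Ollama almost certainly ignores it, and the stray space in "CUDA_VISIBLE_DEVICES =0,1" likely means the intended variable is never set. A hedged sketch of a corrected drop-in (user, paths, and values kept from the comment above; the drop-in file path is hypothetical):

# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_MODELS=/home/hadoop/ollama"
Environment="OLLAMA_HOST=0.0.0.0:11434"
# no space before '=' inside the quoted assignment
Environment="CUDA_VISIBLE_DEVICES=0,1"

followed by systemctl daemon-reload and systemctl restart ollama.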

@YonTracks commented on GitHub (Feb 7, 2025):

If it's not Windows, I'm not sure; try 0.4.7.

I'm not good enough to explain it, and it's hard to troubleshoot; I don't have the skills, just looking. If installed via the official OllamaSetup.exe, it works great.

If installed via the official OllamaSetup.exe with Ollama running, and I then quit the app and use ollama serve, go run . serve, ./ollama serve, or even the start icon, it is great.

But if I run the PowerShell compiler for the installer, the issue returns. I think it is the env setup. Testing.

@YonTracks commented on GitHub (Feb 8, 2025):

update: for me (Windows 11, Ryzen 5 5600, 32 GB RAM, RTX 3060 12 GB), my issues were caused by the CUDA toolkit and PATH env details with v12.6 and v12.8, and then Windows, CMake, and env details again; in the end I had to compile from a Visual Studio 2022 Developer Command Prompt.

@arkerwu commented on GitHub (Feb 8, 2025):

Regarding performance, there is no improvement when comparing the source-compiled 0.5.8-rc10 with 0.4.7. Does anyone know why that is?
Test environment: Linux + 4090.

@liuliaocheng commented on GitHub (Feb 8, 2025):

It only uses the CPU, not the GPU. Why? Thanks.
deepseek-r1:70b / Ollama 0.5.7 / A100 GPU / Ubuntu 20.04 / CUDA 11.4

@YonTracks commented on GitHub (Feb 8, 2025):

> It only uses the CPU, not the GPU. Why? Thanks. deepseek-r1:70b / Ollama 0.5.7 / A100 GPU / Ubuntu 20.04 / CUDA 11.4

Check the new build instructions for GPUs:

https://github.com/ollama/ollama/blob/main/docs/development.md
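At the time of this thread, the Linux flow in development.md is, roughly, a CMake configure-and-build with the CUDA toolkit installed, then running the dev server from the repo. A minimal sketch, assuming nvcc and a working driver are already on the machine:

# CMake picks up CUDA automatically when the toolkit is present
git clone https://github.com/ollama/ollama.git && cd ollama
cmake -B build
cmake --build build
go run . serve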

@YonTracks commented on GitHub (Feb 8, 2025):

I learned that ollama serve starts the actually-installed process on the machine, while go run . serve or ./ollama serve starts the dev-build process from the repo: two separate processes. Then, for me on Windows, there's also the icon (similar to ollama serve, but with the app exe process also running). One way to check which of them is answering is sketched below.
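Since both processes serve the same API, a hedged way to confirm which build is answering on the default port is to ask it for its version string (api/version is a standard Ollama endpoint):

# ask whatever is listening on the default port for its version
curl http://localhost:11434/api/version
# an installed release answers e.g. {"version":"0.5.7"},
# while a source build typically reports a dev/rc version string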

@xgocn commented on GitHub (Feb 11, 2025):

The same problem.

Windows 11 with a 3060 laptop GPU: the GPU is barely used, but the CPU is 100% occupied.

@arkerwu commented on GitHub (Feb 11, 2025):

> The same problem.
>
> Windows 11 with a 3060 laptop GPU: the GPU is barely used, but the CPU is 100% occupied.

Try 0.4.7: https://github.com/ollama/ollama/releases/tag/v0.4.7

@esselesse commented on GitHub (Feb 12, 2025):

> the GPU is barely used, but the CPU is 100% occupied

Ha. 0% GPU, 100% CPU + 100% RAM here.

@arkerwu commented on GitHub (Feb 14, 2025):

Update, worth celebrating: the issue was resolved after updating to 0.5.10.
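For later readers on Linux, rerunning the official install script is the usual way to move to a fixed release such as 0.5.10; a sketch, assuming the standard install path:

curl -fsSL https://ollama.com/install.sh | sh
ollama -v   # should now report 0.5.10 or newer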
