[GH-ISSUE #4117] 0.1.33 on Windows not using GPU #28317

Closed
opened 2026-04-22 06:22:43 -05:00 by GiteaMirror · 3 comments

Originally created by @Eisaichen on GitHub (May 3, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4117

What is the issue?

After upgrading to v0.1.33, Ollama no longer uses my GPU; it falls back to the CPU instead.

On the same PC, I ran 0.1.33 and the older 0.1.32 side by side: 0.1.32 runs on the GPU just fine, while 0.1.33 does not.
After investigating the logs, it seems 0.1.33 is not determining the GPU's CUDA compute capability correctly, so the GPU is ignored:
time=2024-05-02T19:41:48.667-07:00 level=INFO source=gpu.go:148 msg="[0] CUDA GPU is too old. Compute Capability detected: 1.0"

OS: Windows 11
GPU: RTX 3090
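
As a quick check of which cudart copies are on the search path (a hedged aside: where.exe lists PATH matches in resolution order, so the first hit is likely the one Ollama's discovery picks up first):

```powershell
# List every cudart64_*.dll reachable via PATH, in resolution order.
# On this machine, the PhysX copy from the logs below shows up first.
where.exe cudart64_*.dll
```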

0.1.33 logs
❯ .\ollama.exe serve
time=2024-05-02T20:39:24.533-07:00 level=INFO source=images.go:828 msg="total blobs: 5"
time=2024-05-02T20:39:24.533-07:00 level=INFO source=images.go:835 msg="total unused blobs removed: 0"
time=2024-05-02T20:39:24.534-07:00 level=INFO source=routes.go:1071 msg="Listening on [::]:11434 (version 0.1.33)"
time=2024-05-02T20:39:24.534-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11.3 rocm_v5.7 cpu cpu_avx cpu_avx2]"
time=2024-05-02T20:39:24.534-07:00 level=INFO source=gpu.go:96 msg="Detecting GPUs"
time=2024-05-02T20:39:24.550-07:00 level=INFO source=gpu.go:101 msg="detected GPUs" library="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_65.dll" count=1
time=2024-05-02T20:39:24.550-07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-02T20:39:24.607-07:00 level=INFO source=gpu.go:148 msg="[0] CUDA GPU is too old. Compute Capability detected: 1.0"
time=2024-05-02T20:39:24.614-07:00 level=INFO source=amd_windows.go:39 msg="AMD Driver: 50731541"
time=2024-05-02T20:39:24.616-07:00 level=INFO source=amd_windows.go:68 msg="detected hip devices" count=1
time=2024-05-02T20:39:24.616-07:00 level=INFO source=amd_windows.go:88 msg="hip device" id=0 name="AMD Radeon(TM) Graphics" gfx=gfx1036
time=2024-05-02T20:39:24.616-07:00 level=INFO source=amd_windows.go:99 msg="iGPU detected skipping" id=0
[GIN] 2024/05/02 - 20:39:32 | 200 |       399.5µs |  192.168.10.201 | GET      "/api/version"
[GIN] 2024/05/02 - 20:39:32 | 200 |      1.0279ms |  192.168.10.201 | GET      "/api/tags"
[GIN] 2024/05/02 - 20:39:33 | 200 |       696.2µs |  192.168.10.201 | GET      "/api/tags"
time=2024-05-02T20:39:45.674-07:00 level=INFO source=gpu.go:96 msg="Detecting GPUs"
time=2024-05-02T20:39:45.677-07:00 level=INFO source=gpu.go:101 msg="detected GPUs" library="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_65.dll" count=1
time=2024-05-02T20:39:45.677-07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-02T20:39:45.677-07:00 level=INFO source=gpu.go:148 msg="[0] CUDA GPU is too old. Compute Capability detected: 1.0"
time=2024-05-02T20:39:45.684-07:00 level=INFO source=amd_windows.go:39 msg="AMD Driver: 50731541"
time=2024-05-02T20:39:45.686-07:00 level=INFO source=amd_windows.go:68 msg="detected hip devices" count=1
time=2024-05-02T20:39:45.686-07:00 level=INFO source=amd_windows.go:88 msg="hip device" id=0 name="AMD Radeon(TM) Graphics" gfx=gfx1036
time=2024-05-02T20:39:45.686-07:00 level=INFO source=amd_windows.go:99 msg="iGPU detected skipping" id=0
time=2024-05-02T20:39:46.113-07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-02T20:39:46.122-07:00 level=INFO source=server.go:289 msg="starting llama server" cmd="C:\\Users\\local\\Desktop\\ollama-windows-amd64\\ollama_runners\\cpu_avx2\\ollama_llama_server.exe --model C:\\Users\\local\\.ollama\\models\\blobs\\sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 --ctx-size 2048 --batch-size 512 --embedding --log-disable --parallel 1 --port 61324"
time=2024-05-02T20:39:46.125-07:00 level=INFO source=sched.go:340 msg="loaded runners" count=1
time=2024-05-02T20:39:46.125-07:00 level=INFO source=server.go:432 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2606,"msg":"logging to file is disabled.","tid":"100504","timestamp":1714707586}
{"build":2770,"commit":"952d03d","function":"wmain","level":"INFO","line":2823,"msg":"build info","tid":"100504","timestamp":1714707586}
{"function":"wmain","level":"INFO","line":2830,"msg":"system info","n_threads":16,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | ","tid":"100504","timestamp":1714707586,"total_threads":32}
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from C:\Users\local\.ollama\models\blobs\sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = mistralai
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 7.24 B
llm_load_print_meta: model size       = 3.83 GiB (4.54 BPW)
llm_load_print_meta: general.name     = mistralai
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.15 MiB
llm_load_tensors:        CPU buffer size =  3917.87 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.14 MiB
llama_new_context_with_model:        CPU compute buffer size =   164.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
{"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"100504","timestamp":1714707586}
{"function":"initialize","level":"INFO","line":460,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"100504","timestamp":1714707586}
{"function":"wmain","level":"INFO","line":3067,"msg":"model loaded","tid":"100504","timestamp":1714707586}
{"function":"wmain","hostname":"127.0.0.1","level":"INFO","line":3270,"msg":"HTTP server listening","n_threads_http":"31","port":"61324","tid":"100504","timestamp":1714707586}
{"function":"update_slots","level":"INFO","line":1581,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"100504","timestamp":1714707586}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1,"tid":"100504","timestamp":1714707586}
...
0.1.32 logs
❯ .\ollama.exe serve
time=2024-05-02T20:29:14.601-07:00 level=INFO source=images.go:817 msg="total blobs: 5"
time=2024-05-02T20:29:14.602-07:00 level=INFO source=images.go:824 msg="total unused blobs removed: 0"
time=2024-05-02T20:29:14.602-07:00 level=INFO source=routes.go:1143 msg="Listening on [::]:11435 (version 0.1.32)"
time=2024-05-02T20:29:14.603-07:00 level=INFO source=payload.go:28 msg="extracting embedded files" dir=C:\Users\local\AppData\Local\Temp\ollama603992022\runners
time=2024-05-02T20:29:14.760-07:00 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [rocm_v5.7 cpu cpu_avx cpu_avx2 cuda_v11.3]"
[GIN] 2024/05/02 - 20:29:21 | 200 |            0s |  192.168.10.201 | GET      "/api/version"
[GIN] 2024/05/02 - 20:29:22 | 200 |       1.029ms |  192.168.10.201 | GET      "/api/tags"
[GIN] 2024/05/02 - 20:29:24 | 200 |         504µs |  192.168.10.201 | GET      "/api/tags"
[GIN] 2024/05/02 - 20:29:25 | 200 |       502.9µs |  192.168.10.201 | GET      "/api/tags"
[GIN] 2024/05/02 - 20:29:25 | 200 |            0s |  192.168.10.201 | GET      "/api/version"
time=2024-05-02T20:29:40.960-07:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-05-02T20:29:40.960-07:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library cudart64_*.dll"
time=2024-05-02T20:29:40.963-07:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_65.dll]"
time=2024-05-02T20:29:40.988-07:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-05-02T20:29:40.989-07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-02T20:29:41.088-07:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
time=2024-05-02T20:29:41.089-07:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-05-02T20:29:41.089-07:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library cudart64_*.dll"
time=2024-05-02T20:29:41.091-07:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_65.dll]"
time=2024-05-02T20:29:41.092-07:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-05-02T20:29:41.092-07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-02T20:29:41.092-07:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
time=2024-05-02T20:29:41.092-07:00 level=INFO source=server.go:127 msg="offload to gpu" reallayers=33 layers=33 required="4724.5 MiB" used="4724.5 MiB" available="23306.0 MiB" kv="256.0 MiB" fulloffload="164.0 MiB" partialoffload="181.0 MiB"
time=2024-05-02T20:29:41.092-07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-02T20:29:41.102-07:00 level=INFO source=server.go:264 msg="starting llama server" cmd="C:\\Users\\local\\AppData\\Local\\Temp\\ollama603992022\\runners\\cuda_v11.3\\ollama_llama_server.exe --model C:\\Users\\local\\.ollama\\models\\blobs\\sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --port 60854"
time=2024-05-02T20:29:41.123-07:00 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"80428","timestamp":1714706981}
{"build":2679,"commit":"7593639","function":"wmain","level":"INFO","line":2820,"msg":"build info","tid":"80428","timestamp":1714706981}
{"function":"wmain","level":"INFO","line":2827,"msg":"system info","n_threads":16,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"80428","timestamp":1714706981,"total_threads":32}
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from C:\Users\local\.ollama\models\blobs\sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = mistralai
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 7.24 B
llm_load_print_meta: model size       = 3.83 GiB (4.54 BPW)
llm_load_print_meta: general.name     = mistralai
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size =    0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =    70.31 MiB
llm_load_tensors:      CUDA0 buffer size =  3847.55 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.14 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   164.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    12.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 2
{"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"80428","timestamp":1714706983}
{"function":"initialize","level":"INFO","line":460,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"80428","timestamp":1714706983}
{"function":"wmain","level":"INFO","line":3064,"msg":"model loaded","tid":"80428","timestamp":1714706983}
{"function":"wmain","hostname":"127.0.0.1","level":"INFO","line":3267,"msg":"HTTP server listening","n_threads_http":"31","port":"60854","tid":"80428","timestamp":1714706983}
{"function":"update_slots","level":"INFO","line":1578,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"80428","timestamp":1714706983}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":2,"tid":"80428","timestamp":1714706983}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":0,"tid":"80428","timestamp":1714706983}
...
[Screenshot] Left: 0.1.33, Right: 0.1.32

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.1.33

GiteaMirror added the gpu and bug labels 2026-04-22 06:22:44 -05:00

@dhiltgen commented on GitHub (May 3, 2024):

Dup of #4008.

Unfortunately the PhysX copy of the CUDA runtime library seems to be returning incorrect information. To work around this until we figure out exactly what's going wrong, you can adjust your PATH so that a different cudart library is found before this PhysX directory.
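
Something like the following should work in PowerShell (a sketch: the CUDA Toolkit path below is only an example; point it at any directory holding a known-good cudart64_*.dll):

```powershell
# Prepend a directory with a known-good cudart64_*.dll (example CUDA Toolkit
# path; adjust to your install) so it is discovered before the PhysX copy,
# then start the server in the same session.
$env:PATH = 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\bin;' + $env:PATH
.\ollama.exe serve
```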


@Eisaichen commented on GitHub (May 3, 2024):

@dhiltgen Sorry about that, I searched existing issues for "GPU" but didn't find that one.
Curious, though: aren't 0.1.33 and 0.1.32 using the exact same cudart DLL when I run them side by side? You can verify that in the logs.


@dhiltgen commented on GitHub (May 4, 2024):

@Eisaichen we changed how we process the PATH in the latest release and now try to favor a cudart found on the host instead of our bundled version, which is why this is cropping up now. We'll get this resolved in the next release, but until then, adjust your PATH to avoid PhysX and it should work.
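
Alternatively (an untested sketch; the wildcard assumes the default PhysX install location), you can strip the PhysX entry out of PATH for the current session:

```powershell
# Filter the PhysX directory out of PATH for this PowerShell session only,
# so GPU discovery skips the misreporting cudart64_65.dll.
$env:PATH = ($env:PATH -split ';' | Where-Object { $_ -notlike '*NVIDIA Corporation\PhysX*' }) -join ';'
.\ollama.exe serve
```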

Reference: github-starred/ollama#28317