[GH-ISSUE #2524] "CPU does not have AVX or AVX2, disabling GPU support" #47990

Closed
opened 2026-04-28 06:20:31 -05:00 by GiteaMirror · 4 comments

Originally created by @khromov on GitHub (Feb 15, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2524

Originally assigned to: @dhiltgen on GitHub.

👋 Just downloaded the latest Windows preview. Ollama does work, but the GPU is not being used at all, as the message in the title says. Using Windows 11, an RTX 2070, and the latest Nvidia Game Ready drivers.

Command:

```
ollama run llama2
>>> Hello!
...
```

Log:

```
time=2024-02-15T22:13:55.132+01:00 level=INFO source=images.go:706 msg="total blobs: 6"
time=2024-02-15T22:13:55.133+01:00 level=INFO source=images.go:713 msg="total unused blobs removed: 0"
time=2024-02-15T22:13:55.135+01:00 level=INFO source=routes.go:1014 msg="Listening on 127.0.0.1:11434 (version 0.1.25)"
time=2024-02-15T22:13:55.135+01:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-02-15T22:13:55.403+01:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cpu cuda_v11.3]"
time=2024-02-15T22:13:55.403+01:00 level=DEBUG source=payload_common.go:147 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
[GIN] 2024/02/15 - 22:13:55 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/02/15 - 22:13:55 | 200 |      1.0454ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/02/15 - 22:13:55 | 200 |      1.0465ms |       127.0.0.1 | POST     "/api/show"
time=2024-02-15T22:13:56.808+01:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-15T22:13:56.808+01:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library nvml.dll"
time=2024-02-15T22:13:56.808+01:00 level=DEBUG source=gpu.go:280 msg="gpu management search paths: [c:\\Windows\\System32\\nvml.dll C:\\WINDOWS\\system32\\nvml.dll* C:\\WINDOWS\\nvml.dll* C:\\WINDOWS\\System32\\Wbem\\nvml.dll* C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\nvml.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll* C:\\Program Files\\Git\\cmd\\nvml.dll* C:\\Users\\k\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\nvml.dll* C:\\Users\\k\\AppData\\Local\\Programs\\Python\\Python310\\nvml.dll* C:\\Users\\k\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll* C:\\Users\\k\\AppData\\Local\\GitHubDesktop\\bin\\nvml.dll* C:\\Users\\k\\AppData\\Local\\Programs\\Ollama\\nvml.dll*]"
time=2024-02-15T22:13:56.813+01:00 level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [c:\\Windows\\System32\\nvml.dll C:\\WINDOWS\\system32\\nvml.dll]"
time=2024-02-15T22:13:56.833+01:00 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
time=2024-02-15T22:13:56.833+01:00 level=INFO source=cpu_common.go:18 msg="CPU does not have vector extensions"
time=2024-02-15T22:13:56.833+01:00 level=WARN source=gpu.go:128 msg="CPU does not have AVX or AVX2, disabling GPU support."
time=2024-02-15T22:13:56.833+01:00 level=INFO source=cpu_common.go:18 msg="CPU does not have vector extensions"
time=2024-02-15T22:13:56.833+01:00 level=WARN source=gpu.go:128 msg="CPU does not have AVX or AVX2, disabling GPU support."
time=2024-02-15T22:13:56.833+01:00 level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"
time=2024-02-15T22:13:56.833+01:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [C:\\Users\\k\\AppData\\Local\\Temp\\ollama451307992\\cpu\\ext_server.dll]"
time=2024-02-15T22:13:56.833+01:00 level=INFO source=dyn_ext_server.go:380 msg="Updating PATH to C:\\Users\\k\\AppData\\Local\\Temp\\ollama451307992\\cpu;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Git\\cmd;C:\\Users\\k\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\k\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Users\\k\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\k\\AppData\\Local\\GitHubDesktop\\bin;C:\\Users\\k\\AppData\\Local\\Programs\\Ollama"
time=2024-02-15T22:13:56.844+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\k\\AppData\\Local\\Temp\\ollama451307992\\cpu\\ext_server.dll"
time=2024-02-15T22:13:56.844+01:00 level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
[1708031636] system info: AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | 
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from C:\Users\k\.ollama\models\blobs\sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr[str,61249]   = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  19:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  20:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  21:                    tokenizer.chat_template str              = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW) 
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.11 MiB
llm_load_tensors:        CPU buffer size =  3647.87 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =  1024.00 MiB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_new_context_with_model:        CPU input buffer size   =    13.01 MiB
llama_new_context_with_model:        CPU compute buffer size =   160.00 MiB
llama_new_context_with_model: graph splits (measure): 1
[1708031637] warming up the model with an empty run
[1708031643] Available slots:
[1708031643]  -> Slot 0 - max context: 2048
time=2024-02-15T22:14:03.225+01:00 level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
time=2024-02-15T22:14:03.225+01:00 level=DEBUG source=prompt.go:175 msg="prompt now fits in context window" required=1 window=2048
[GIN] 2024/02/15 - 22:14:03 | 200 |    7.6937168s |       127.0.0.1 | POST     "/api/chat"
[1708031643] llama server main loop starting
[1708031643] all slots are idle and system prompt is empty, clear the KV cache
```
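
For reference, the DEBUG line above notes that library selection can be overridden by setting OLLAMA_LLM_LIBRARY, and the extracted payloads include cuda_v11.3. A hedged, untested sketch of forcing that library on Windows is below; on a CPU without AVX it may still fail, since (per the maintainer's reply) the GPU libraries themselves are compiled with those extensions.

```
:: Untested sketch: force the cuda_v11.3 payload named in the log above,
:: then restart the server. May fail on a CPU without AVX (see reply below).
set OLLAMA_LLM_LIBRARY=cuda_v11.3
ollama serve
```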


@dhiltgen commented on GitHub (Feb 15, 2024):

What kind of CPU does your system have? Are you running under any emulation/virtualization layer?

Intel and AMD CPUs have had AVX since roughly 2013, and our GPU LLM native code is compiled with those extensions, since they provide a significant performance benefit if some of the model has to run on the CPU.
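
For anyone curious what this check amounts to: the cpu_common.go lines in the log boil down to reading the CPU's feature flags. A minimal, illustrative sketch in Go (not Ollama's actual implementation; it uses the golang.org/x/sys/cpu package to read the AVX/AVX2 flags):

```go
// Illustrative sketch only — not Ollama's cpu_common.go.
// Reports whether the current CPU advertises AVX/AVX2, the condition
// the "disabling GPU support" warning above is keyed on.
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

func main() {
	fmt.Println("AVX: ", cpu.X86.HasAVX)
	fmt.Println("AVX2:", cpu.X86.HasAVX2)

	if !cpu.X86.HasAVX && !cpu.X86.HasAVX2 {
		// The situation in this issue: an older CPU (e.g. a ~2010 Xeon X3480)
		// reports neither flag, so the AVX-compiled GPU libraries are skipped.
		fmt.Println("CPU does not have AVX or AVX2")
	}
}
```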


@khromov commented on GitHub (Feb 15, 2024):

Hey @dhiltgen

This particular system is running an old Xeon CPU (X3480) from ~2010. However, as mentioned in https://github.com/ollama/ollama/issues/1279#issuecomment-1892888048, there are CPUs released as recently as 2021 that do not have AVX, such as the Intel® Pentium® Silver N6005.


@dhiltgen commented on GitHub (Feb 16, 2024):

Interesting. These CPUs are most likely poorly suited to LLM tasks.

Let's track this in #2187.


@HeyItsDaddy commented on GitHub (Feb 12, 2025):

I'd just like to chime in that after several hours of trying to understand why my 3060 Ti was never being used, even though ollama showed "100% GPU" for all of my models, this is what it boiled down to. Like @khromov, I am using a system with good old Xeon processors (X5690s in my case) that are still quite capable. But unless there is a build compiled without the AVX requirement (or, better, a parameter option to not use AVX), we're stuck having the CPU pull the weight.

So, looking forward to seeing what comes of #2187. Thanks @dhiltgen!


Reference: github-starred/ollama#47990