[GH-ISSUE #6655] Windows binaries are built without GPU support and ignore available SIMD support #66228

Closed
opened 2026-05-04 01:01:06 -05:00 by GiteaMirror · 1 comment

Originally created by @mlgitter on GitHub (Sep 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6655

I've set up Ollama from the Windows installer, and the logs say that it was NOT built with GPU support:
WARN [server_params_parse] Not compiled with GPU offload support, --n-gpu-layers option will be ignored.

and that it ignores the SSE3 and SSSE3 capabilities of my two 5645 CPUs:
INFO [wmain] system info | n_threads=12 n_threads_batch=-1 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="16916" timestamp=1725530008 total_threads=24
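
Note that these flags report what the selected runner binary was compiled with, not what the host CPU supports; per the server log below, the scheduler fell back to the baseline variant="no vector extensions" runner. As an illustration only (this is not Ollama's code), a minimal Go sketch for checking what the host CPU itself reports, using the golang.org/x/sys/cpu package:

```go
// Hypothetical diagnostic, not part of Ollama: print which x86 vector
// extensions the host CPU reports via CPUID.
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

func main() {
	fmt.Println("SSE3: ", cpu.X86.HasSSE3)
	fmt.Println("SSSE3:", cpu.X86.HasSSSE3)
	fmt.Println("SSE41:", cpu.X86.HasSSE41)
	fmt.Println("SSE42:", cpu.X86.HasSSE42)
	fmt.Println("AVX:  ", cpu.X86.HasAVX)
	fmt.Println("AVX2: ", cpu.X86.HasAVX2)
}
```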

So my questions are:

1. Is there a fork that actually builds with NVIDIA GPU support for compute capability 6+, as stated in the releases and the wiki?

2. Why are SIMD features such as SSE3/SSE4, which are on a par with AVX/AVX2, ignored, when enabling them is mostly a matter of compile flags?

3. Where is the information on enabling GPU BLAS support? It was said to be in the main README.md, but it is absent as of the latest release (0.3.9), and Google can't find it anywhere.

My CUDA is in place:

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2013 NVIDIA Corporation
Built on Fri_Mar_14_19:30:01_PDT_2014
Cuda compilation tools, release 6.0, V6.0.1

The server log is as follows:

2024/09/05 12:40:01 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\xxxxxxx\.ollama\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\Users\xxxxxxx\AppData\Local\Programs\Ollama\lib\ollama\runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-09-05T12:40:01.258+03:00 level=INFO source=images.go:753 msg="total blobs: 5"
time=2024-09-05T12:40:01.259+03:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-05T12:40:01.264+03:00 level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.9)"
time=2024-09-05T12:40:01.267+03:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12 rocm_v6.1]"
time=2024-09-05T12:40:01.268+03:00 level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2024-09-05T12:40:01.268+03:00 level=WARN source=gpu.go:222 msg="CPU does not have minimum vector extensions, GPU inference disabled" required=avx detected="no vector extensions"
time=2024-09-05T12:40:01.270+03:00 level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant="no vector extensions" compute="" driver=0.0 name="" total="128.0 GiB" available="36.9 GiB"
[GIN] 2024/09/05 - 12:53:23 | 200 | 4.3759ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/09/05 - 12:53:23 | 200 | 1.6545ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/09/05 - 12:53:23 | 200 | 509.1µs | 127.0.0.1 | GET "/"
time=2024-09-05T12:53:28.881+03:00 level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=50 layers.model=19 layers.offload=0 layers.split="" memory.available="[37.5 GiB]" memory.required.full="2.3 GiB" memory.required.partial="0 B" memory.required.kv="144.0 MiB" memory.required.allocations="[2.3 GiB]" memory.weights.total="1.2 GiB" memory.weights.repeating="675.9 MiB" memory.weights.nonrepeating="531.5 MiB" memory.graph.full="504.2 MiB" memory.graph.partial="914.6 MiB"
time=2024-09-05T12:53:28.898+03:00 level=INFO source=server.go:391 msg="starting llama server" cmd="C:\Users\xxxxxxx\AppData\Local\Programs\Ollama\lib\ollama\runners\cpu\ollama_llama_server.exe --model C:\Users\xxxxxxx\.ollama\models\blobs\sha256-c1864a5eb19305c40519da12cc543519e48a0697ecd30e15d5ac228644957d12 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 50 --no-mmap --parallel 4 --port 50237"
time=2024-09-05T12:53:28.939+03:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-09-05T12:53:28.940+03:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2024-09-05T12:53:28.948+03:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
WARN [server_params_parse] Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support | n_gpu_layers=-1 tid="16916" timestamp=1725530008
INFO [wmain] build info | build=3535 commit="1e6f6554" tid="16916" timestamp=1725530008
INFO [wmain] system info | n_threads=12 n_threads_batch=-1 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="16916" timestamp=1725530008 total_threads=24
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="23" port="50237" tid="16916" timestamp=1725530008
llama_model_loader: loaded meta data with 21 key-value pairs and 164 tensors from C:\Users\xxxxxxx\.ollama\models\blobs\sha256-c1864a5eb19305c40519da12cc543519e48a0697ecd30e15d5ac228644957d12 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = gemma
llama_model_loader: - kv 1: general.name str = gemma-2b-it
llama_model_loader: - kv 2: gemma.context_length u32 = 8192
llama_model_loader: - kv 3: gemma.block_count u32 = 18
llama_model_loader: - kv 4: gemma.embedding_length u32 = 2048
llama_model_loader: - kv 5: gemma.feed_forward_length u32 = 16384
llama_model_loader: - kv 6: gemma.attention.head_count u32 = 8
llama_model_loader: - kv 7: gemma.attention.head_count_kv u32 = 1
llama_model_loader: - kv 8: gemma.attention.key_length u32 = 256
llama_model_loader: - kv 9: gemma.attention.value_length u32 = 256
llama_model_loader: - kv 10: gemma.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.bos_token_id u32 = 2
llama_model_loader: - kv 13: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 14: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 15: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,256128] = ["<pad>", "<eos>", "<bos>", "<unk>", ...
time=2024-09-05T12:53:29.207+03:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,256128] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,256128] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: general.quantization_version u32 = 2
llama_model_loader: - kv 20: general.file_type u32 = 2
llama_model_loader: - type f32: 37 tensors
llama_model_loader: - type q4_0: 126 tensors
llama_model_loader: - type q8_0: 1 tensors
llm_load_vocab: special tokens cache size = 4
llm_load_vocab: token to piece cache size = 1.6014 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = gemma
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 256128
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 2048
llm_load_print_meta: n_layer = 18
llm_load_print_meta: n_head = 8
llm_load_print_meta: n_head_kv = 1
llm_load_print_meta: n_rot = 256
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 256
llm_load_print_meta: n_embd_head_v = 256
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 256
llm_load_print_meta: n_embd_v_gqa = 256
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 16384
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 2B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 2.51 B
llm_load_print_meta: model size = 1.56 GiB (5.34 BPW)
llm_load_print_meta: general.name = gemma-2b-it
llm_load_print_meta: BOS token = 2 '<bos>'
llm_load_print_meta: EOS token = 1 '<eos>'
llm_load_print_meta: UNK token = 3 '<unk>'
llm_load_print_meta: PAD token = 0 '<pad>'
llm_load_print_meta: LF token = 227 '<0x0A>'
llm_load_print_meta: EOT token = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 93
llm_load_tensors: ggml ctx size = 0.08 MiB
llm_load_tensors: CPU buffer size = 2126.45 MiB
[GIN] 2024/09/05 - 12:53:39 | 200 | 1.6532ms | 127.0.0.1 | GET "/api/tags"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 144.00 MiB
llama_new_context_with_model: KV self size = 144.00 MiB, K (f16): 72.00 MiB, V (f16): 72.00 MiB
llama_new_context_with_model: CPU output buffer size = 3.94 MiB
llama_new_context_with_model: CPU compute buffer size = 508.25 MiB
llama_new_context_with_model: graph nodes = 601
llama_new_context_with_model: graph splits = 1
INFO [wmain] model loaded | tid="16916" timestamp=1725530022
time=2024-09-05T12:53:42.108+03:00 level=INFO source=server.go:630 msg="llama runner started in 13.17 seconds"
[GIN] 2024/09/05 - 12:53:54 | 200 | 2.2921ms | 127.0.0.1 | GET "/api/tags"

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

version 0.3.9

GiteaMirror added the bug label 2026-05-04 01:01:08 -05:00

@dhiltgen commented on GitHub (Sep 5, 2024):

The problem is your CPU. Our GPU runners are compiled with AVX enabled, which will result in an immediate crash with "illegal instruction" if run on a CPU without AVX support. We detect this missing support and report it in the logs:

time=2024-09-05T12:40:01.268+03:00 level=WARN source=gpu.go:222 msg="CPU does not have minimum vector extensions, GPU inference disabled" required=avx detected="no vector extensions"

The reason for this is performance: when we have to split a model between GPU and CPU because it doesn't completely fit in VRAM, AVX improves the CPU portion of inference. AVX is ~4x faster than no vector extensions.
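
A minimal sketch of the gate described here, using Go's golang.org/x/sys/cpu package (an illustration of the logged behavior, not Ollama's actual gpu.go):

```go
// Hypothetical illustration of the AVX gate described above, not Ollama's gpu.go.
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

// gpuRunnersUsable mirrors the described behavior: the GPU runners are
// compiled with AVX enabled, so loading one on a CPU without AVX would die
// with an illegal-instruction fault. Gate on AVX before probing for GPUs.
func gpuRunnersUsable() bool {
	return cpu.X86.HasAVX
}

func main() {
	if !gpuRunnersUsable() {
		fmt.Println("CPU does not have minimum vector extensions, GPU inference disabled")
		return
	}
	fmt.Println("AVX present; GPU runners are safe to load")
}
```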

We're working towards making it easier for users to do custom local builds with different CPU compilation flags. Your scenario (no AVX) is tracked in #2187.
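
As a rough illustration of such a local build: the repository's docs/development.md of this era described an OLLAMA_CUSTOM_CPU_DEFS hook for overriding the CPU CMake defines, and the vendored llama.cpp around build 3535 used GGML_*-prefixed flags. Both names vary between releases, so the following PowerShell sketch is an assumption-laden illustration, not an authoritative recipe; check the generate scripts in your checkout before relying on it.

```powershell
# Hypothetical local build with AVX disabled (variable and flag names vary by
# release; consult docs/development.md in your checkout before running).
git clone https://github.com/ollama/ollama.git
cd ollama
$env:OLLAMA_CUSTOM_CPU_DEFS = "-DGGML_AVX=off -DGGML_AVX2=off -DGGML_FMA=off -DGGML_F16C=off"
go generate ./...   # regenerates the llama.cpp runners with the custom defines
go build .
```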

Reference: github-starred/ollama#66228