[GH-ISSUE #7585] why Ollama runs on CPU by default #51347

Closed
opened 2026-04-28 19:37:27 -05:00 by GiteaMirror · 6 comments

Originally created by @Twilight-1p67e-27 on GitHub (Nov 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7585

What is the issue?

My device:
NVIDIA RTX 4070 (12 GB), with about 10 GB of video memory free.
I am running a 7B model and have enough video memory for it,
but Ollama is forced to run on the CPU no matter what, even though CPU performance is much lower than the GPU's.

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

latest

GiteaMirror added the bug label 2026-04-28 19:37:27 -05:00

@AncientMystic commented on GitHub (Nov 9, 2024):

It should run on the GPU first by default; this is abnormal behavior.

Check the logs (right-click the tray app icon > View logs, or C:\Users\<user>\AppData\Local\Ollama).

The server.log file should show what is happening, likely near the end of the file, where it reports whether the CUDA device was detected, how many layers are being offloaded, and so on.
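For anyone who prefers to filter the file programmatically, here is a minimal Go sketch that prints just those lines. It assumes the default Windows log path mentioned above (adjust the username) and the log phrases visible later in this thread; it is an illustration, not an official tool.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Default Windows log location mentioned above; adjust the username.
	f, err := os.Open(`C:\Users\user\AppData\Local\Ollama\server.log`)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // server config lines are long
	for sc.Scan() {
		line := sc.Text()
		// Keep only the lines that explain compute selection.
		if strings.Contains(line, "compatible GPUs") ||
			strings.Contains(line, "inference compute") ||
			strings.Contains(line, "offload") {
			fmt.Println(line)
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```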


@Twilight-1p67e-27 commented on GitHub (Nov 9, 2024):

logs:

2024/11/09 18:44:42 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\Admin\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-11-09T18:44:42.970+08:00 level=INFO source=images.go:755 msg="total blobs: 0"
time=2024-11-09T18:44:42.971+08:00 level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-09T18:44:42.972+08:00 level=INFO source=routes.go:1240 msg="Listening on 127.0.0.1:11434 (version 0.4.0)"
time=2024-11-09T18:44:42.975+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[rocm cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
time=2024-11-09T18:44:42.975+08:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-09T18:44:42.976+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2024-11-09T18:44:42.976+08:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2024-11-09T18:44:42.976+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=10 efficiency=4 threads=16
time=2024-11-09T18:44:42.976+08:00 level=WARN source=gpu.go:252 msg="CPU does not have minimum vector extensions, GPU inference disabled. Required:avx Detected:no vector extensions"
time=2024-11-09T18:44:42.976+08:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant="no vector extensions" compute="" driver=0.0 name="" total="31.8 GiB" available="25.4 GiB"
[GIN] 2024/11/09 - 18:44:53 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/09 - 18:45:44 | 200 | 521.8µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/09 - 18:45:44 | 200 | 1.6375ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/09 - 18:46:28 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/09 - 18:46:28 | 200 | 1.5715ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/09 - 18:48:45 | 404 | 521.7µs | 127.0.0.1 | POST "/v1/chat/completions"
[GIN] 2024/11/09 - 18:48:48 | 404 | 88.9µs | 127.0.0.1 | POST "/v1/chat/completions"
[GIN] 2024/11/09 - 18:49:55 | 404 | 0s | 127.0.0.1 | POST "/v1/chat/completions"
time=2024-11-09T18:51:22.779+08:00 level=INFO source=server.go:105 msg="system memory" total="31.8 GiB" free="20.4 GiB" free_swap="30.9 GiB"
time=2024-11-09T18:51:22.780+08:00 level=INFO source=memory.go:343 msg="offload to cpu" layers.requested=-1 layers.model=29 layers.offload=0 layers.split="" memory.available="[20.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.1 GiB" memory.required.partial="0 B" memory.required.kv="448.0 MiB" memory.required.allocations="[5.1 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="478.0 MiB" memory.graph.partial="730.4 MiB"
time=2024-11-09T18:51:22.790+08:00 level=INFO source=server.go:388 msg="starting llama server" cmd="C:\Users\Admin\AppData\Local\Programs\Ollama\lib\ollama\runners\cpu\ollama_llama_server.exe --model C:\Users\Admin\.ollama\models\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 --ctx-size 8192 --batch-size 512 --embedding --threads 6 --no-mmap --parallel 4 --port 5834"
time=2024-11-09T18:51:22.794+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-09T18:51:22.794+08:00 level=INFO source=server.go:567 msg="waiting for llama runner to start responding"
time=2024-11-09T18:51:22.794+08:00 level=INFO source=server.go:601 msg="waiting for server to become available" status="llm server error"
time=2024-11-09T18:51:22.822+08:00 level=INFO source=runner.go:869 msg="starting go runner"
time=2024-11-09T18:51:22.830+08:00 level=INFO source=runner.go:870 msg=system info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(clang)" threads=6
time=2024-11-09T18:51:22.831+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:5834"
llama_model_loader: loaded meta data with 34 key-value pairs and 339 tensors from C:\Users\Admin\.ollama\models\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 7B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5
llama_model_loader: - kv 5: general.size_label str = 7B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-7...
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 7B
llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-7B
llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: qwen2.block_count u32 = 28
llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
llama_model_loader: - kv 16: qwen2.embedding_length u32 = 3584
llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 18944
llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 28
llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 4
llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 22: general.file_type u32 = 15
llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 33: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_K: 169 tensors
llama_model_loader: - type q6_K: 29 tensors
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 3584
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 28
llm_load_print_meta: n_head_kv = 4
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 7
llm_load_print_meta: n_embd_k_gqa = 512
llm_load_print_meta: n_embd_v_gqa = 512
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 18944
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 7.62 B
llm_load_print_meta: model size = 4.36 GiB (4.91 BPW)
llm_load_print_meta: general.name = Qwen2.5 7B Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: EOG token = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size = 0.15 MiB
llm_load_tensors: CPU buffer size = 4460.45 MiB
time=2024-11-09T18:51:23.058+08:00 level=INFO source=server.go:601 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 448.00 MiB
llama_new_context_with_model: KV self size = 448.00 MiB, K (f16): 224.00 MiB, V (f16): 224.00 MiB
llama_new_context_with_model: CPU output buffer size = 2.38 MiB
llama_new_context_with_model: CPU compute buffer size = 492.01 MiB
llama_new_context_with_model: graph nodes = 986
llama_new_context_with_model: graph splits = 1
time=2024-11-09T18:51:25.687+08:00 level=INFO source=server.go:606 msg="llama runner started in 2.89 seconds"
llama_model_loader: loaded meta data with 34 key-value pairs and 339 tensors from C:\Users\Admin\.ollama\models\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 7B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5
llama_model_loader: - kv 5: general.size_label str = 7B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-7...
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 7B
llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-7B
llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: qwen2.block_count u32 = 28
llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
llama_model_loader: - kv 16: qwen2.embedding_length u32 = 3584
llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 18944
llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 28
llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 4
llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 22: general.file_type u32 = 15
llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 33: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_K: 169 tensors
llama_model_loader: - type q6_K: 29 tensors
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 7.62 B
llm_load_print_meta: model size = 4.36 GiB (4.91 BPW)
llm_load_print_meta: general.name = Qwen2.5 7B Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: EOG token = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2024/11/09 - 18:52:17 | 200 | 54.7176227s | 127.0.0.1 | POST "/v1/chat/completions"

Please help, this is important to me. Why does it run on the CPU?


@AncientMystic commented on GitHub (Nov 9, 2024):

It appears Ollama is simply not detecting your GPU.

What is the output of nvidia-smi on the command line or in PowerShell?
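If it helps to capture that output programmatically, here is a small Go sketch wrapping the same command (assuming nvidia-smi is on the PATH, which the NVIDIA driver installer normally arranges):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Run the same diagnostic asked for above and print its output.
	// If this errors, the NVIDIA driver itself is likely missing or not
	// on PATH, which would also explain Ollama not seeing the GPU.
	out, err := exec.Command("nvidia-smi").CombinedOutput()
	if err != nil {
		log.Fatalf("nvidia-smi failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}
```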


@rick-github commented on GitHub (Nov 9, 2024):

time=2024-11-09T18:44:42.976+08:00 level=WARN source=gpu.go:252 msg="CPU does not have minimum vector extensions, GPU inference disabled. Required:avx Detected:no vector extensions"
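That WARN line is the decisive one: in this version, the CPU vector-extension check gates GPU inference entirely, so whether a CUDA device is present never matters. As an illustration only, here is a hypothetical Go simplification of that gating; it is not Ollama's actual source, and the runner names are taken from the "Dynamic LLM libraries" line in the log above.

```go
package main

import "fmt"

// selectRunner is a hypothetical simplification of the gating reflected in
// the WARN line above; it is not Ollama's actual code. Runner names come
// from the log's list: [rocm cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12].
func selectRunner(cpuHasAVX, cudaDetected bool) string {
	if !cpuHasAVX {
		// No AVX on the host CPU: GPU inference is disabled outright,
		// so a detected CUDA device is never even considered.
		return "cpu"
	}
	if cudaDetected {
		return "cuda_v12"
	}
	return "cpu_avx"
}

func main() {
	// Mirrors this machine: CUDA hardware present, but no AVX reported.
	fmt.Println(selectRunner(false, true)) // prints "cpu"
}
```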

@AncientMystic commented on GitHub (Nov 9, 2024):


time=2024-11-09T18:44:42.976+08:00 level=WARN source=gpu.go:252 msg="CPU does not have minimum vector extensions, GPU inference disabled. Required:avx Detected:no vector extensions"

Thank you for pointing that out; I missed that bit while skimming over the log.

It appears to be a fairly modern CPU with both P and E cores, so why wouldn't AVX be detected?
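One quick way to see what the machine actually reports is a Go sketch using the golang.org/x/sys/cpu package (requires `go get golang.org/x/sys`). This is not necessarily the same probe Ollama performs, but it reads the same CPUID feature bits:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

func main() {
	// Print the CPUID feature flags the Go runtime sees. On a modern
	// P/E-core Intel CPU these should normally all be true; if AVX is
	// false here too, the masking is happening below Ollama (BIOS
	// settings, a VM/hypervisor layer, or similar).
	fmt.Println("AVX: ", cpu.X86.HasAVX)
	fmt.Println("AVX2:", cpu.X86.HasAVX2)
	fmt.Println("FMA: ", cpu.X86.HasFMA)
}
```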


@Twilight-1p67e-27 commented on GitHub (Nov 9, 2024):

OK, thank you.
