[GH-ISSUE #8074] Windows NUMA 4 socket, 144 core system, default thread count causes very poor performance #51673

Closed
opened 2026-04-28 20:43:51 -05:00 by GiteaMirror · 26 comments

Originally created by @Panican-Whyasker on GitHub (Dec 12, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8074

What is the issue?

A 135M-parameter model only yielded 4 words after running for 3.5 hours on one 36-core CPU @ 100% load.

A 3.8B model yielded only 10 words after 10.5 hours on the same machine.

Prompt in both cases: "Introduce yourself."

Windows Server 2016 OS (direct install, no Docker).

Ollama 0.5.1 on 4 Xeon Gold 6140 CPUs (144 logical cores in total) and 768 GB of system RAM (6-channel, NUMA architecture).

No GPU.

Tried two small LLMs for starters, namely smollm:135m and phi3.5 (3.8B).

The correct runner for that CPU type was loaded (cpu_avx2).

smollm:135m was saying (after 3.5 h): "I'm thrilled to introduce..."

phi3.5 (3.8B) was saying (after 10.5 h): "Hello! I am Phi, an artificial intelligence designed to interact..."

I have run larger LLMs with Q4 and FP16 quantizations on a much older server machine running Windows 10 with dual Xeon 5600s (Intel Westmere, no AVX), 288 GB of RAM (and no GPU), and the "cpu" runner worked fine. Indeed, a 30B Q4 model runs very slowly (~one word/second), but nothing like one word/hour!!!

On the newer machine (Win Server 2016), Ollama seems to run 288 parallel threads on one of the four CPUs (36 logical cores each); here's an excerpt from the server.log:

time=2024-12-12T16:47:26.192+01:00 level=INFO source=runner.go:942 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(clang)" threads=288

On the older machine (Win 10 Pro x64), Ollama used both CPUs and the load peaked at ~60%. RAM is DDR3 @ 1333 MHz, 3 channels/CPU (6 channels for DDR4 @ 2666 MHz on the newer machine).
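
For reference, the per-socket core counts that Windows itself reports can be cross-checked with a standard WMI query from PowerShell (one row per physical package is expected; the query below is just an illustration, not taken from the original report):

PS> Get-CimInstance Win32_Processor | Select-Object SocketDesignation, NumberOfCores, NumberOfLogicalProcessors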

OS

Windows

GPU

Other

CPU

Intel

Ollama version

0.5.1

GiteaMirror added the performance, bug, windows labels 2026-04-28 20:43:51 -05:00

@rick-github commented on GitHub (Dec 12, 2024):

Server logs will aid in debugging.


@Panican-Whyasker commented on GitHub (Dec 12, 2024):

server.log:

2024/12/12 12:00:03 routes.go:1195: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\electa\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-12-12T12:00:03.387+01:00 level=INFO source=images.go:753 msg="total blobs: 28"
time=2024-12-12T12:00:03.390+01:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-12-12T12:00:03.392+01:00 level=INFO source=routes.go:1246 msg="Listening on 127.0.0.1:11434 (version 0.5.1)"
time=2024-12-12T12:00:03.464+01:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11 cuda_v12 rocm cpu]"
time=2024-12-12T12:00:03.464+01:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-12-12T12:00:03.464+01:00 level=INFO source=gpu_windows.go:167 msg=packages count=4
time=2024-12-12T12:00:03.464+01:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=72 efficiency=0 threads=144
time=2024-12-12T12:00:03.464+01:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=72 efficiency=0 threads=144
time=2024-12-12T12:00:03.464+01:00 level=INFO source=gpu_windows.go:214 msg="" package=2 cores=72 efficiency=0 threads=144
time=2024-12-12T12:00:03.464+01:00 level=INFO source=gpu_windows.go:214 msg="" package=3 cores=72 efficiency=0 threads=144
time=2024-12-12T12:00:03.828+01:00 level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
time=2024-12-12T12:00:03.828+01:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="767.2 GiB" available="741.7 GiB"
[GIN] 2024/12/12 - 12:01:41 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/12/12 - 12:01:41 | 200 | 151.7827ms | 127.0.0.1 | POST "/api/show"
time=2024-12-12T12:01:41.616+01:00 level=INFO source=server.go:105 msg="system memory" total="767.2 GiB" free="736.5 GiB" free_swap="658.2 GiB"
time=2024-12-12T12:01:41.617+01:00 level=INFO source=memory.go:356 msg="offload to cpu" layers.requested=-1 layers.model=31 layers.offload=0 layers.split="" memory.available="[736.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="438.2 MiB" memory.required.partial="0 B" memory.required.kv="180.0 MiB" memory.required.allocations="[438.2 MiB]" memory.weights.total="237.1 MiB" memory.weights.repeating="208.4 MiB" memory.weights.nonrepeating="28.7 MiB" memory.graph.full="164.5 MiB" memory.graph.partial="168.4 MiB"
time=2024-12-12T12:01:41.626+01:00 level=INFO source=server.go:397 msg="starting llama server" cmd="C:\Users\electa\AppData\Local\Programs\Ollama\lib\ollama\runners\cpu_avx2\ollama_llama_server.exe --model C:\Users\electa\.ollama\models\blobs\sha256-eb2c714d40d4b35ba4b8ee98475a06d51d8080a17d2d2a75a23665985c739b94 --ctx-size 8192 --batch-size 512 --threads 288 --no-mmap --parallel 4 --port 51600"
time=2024-12-12T12:01:42.108+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-12-12T12:01:42.108+01:00 level=INFO source=server.go:576 msg="waiting for llama runner to start responding"
time=2024-12-12T12:01:42.120+01:00 level=INFO source=runner.go:941 msg="starting go runner"
time=2024-12-12T12:01:42.121+01:00 level=INFO source=runner.go:942 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(clang)" threads=288
time=2024-12-12T12:01:42.123+01:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:51600"
llama_model_loader: loaded meta data with 39 key-value pairs and 272 tensors from C:\Users\electa\.ollama\models\blobs\sha256-eb2c714d40d4b35ba4b8ee98475a06d51d8080a17d2d2a75a23665985c739b94 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = SmolLM 135M
llama_model_loader: - kv 3: general.organization str = HuggingFaceTB
llama_model_loader: - kv 4: general.finetune str = Instruct
llama_model_loader: - kv 5: general.basename str = SmolLM
llama_model_loader: - kv 6: general.size_label str = 135M
llama_model_loader: - kv 7: general.license str = apache-2.0
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = SmolLM 135M
llama_model_loader: - kv 10: general.base_model.0.organization str = HuggingFaceTB
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/HuggingFaceTB/...
llama_model_loader: - kv 12: general.tags arr[str,3] = ["alignment-handbook", "trl", "sft"]
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: general.datasets arr[str,4] = ["Magpie-Align/Magpie-Pro-300K-Filter...
llama_model_loader: - kv 15: llama.block_count u32 = 30
llama_model_loader: - kv 16: llama.context_length u32 = 2048
llama_model_loader: - kv 17: llama.embedding_length u32 = 576
llama_model_loader: - kv 18: llama.feed_forward_length u32 = 1536
llama_model_loader: - kv 19: llama.attention.head_count u32 = 9
llama_model_loader: - kv 20: llama.attention.head_count_kv u32 = 3
llama_model_loader: - kv 21: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 22: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 23: general.file_type u32 = 2
llama_model_loader: - kv 24: llama.vocab_size u32 = 49152
llama_model_loader: - kv 25: llama.rope.dimension_count u32 = 64
llama_model_loader: - kv 26: tokenizer.ggml.add_space_prefix bool = false
llama_model_loader: - kv 27: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 28: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 29: tokenizer.ggml.pre str = smollm
llama_model_loader: - kv 30: tokenizer.ggml.tokens arr[str,49152] = ["<|endoftext|>", "<|im_start|>", "<|...
llama_model_loader: - kv 31: tokenizer.ggml.token_type arr[i32,49152] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv 32: tokenizer.ggml.merges arr[str,48900] = ["Ġ t", "Ġ a", "i n", "h e", "Ġ Ġ...
llama_model_loader: - kv 33: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 34: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 35: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 36: tokenizer.ggml.padding_token_id u32 = 2
llama_model_loader: - kv 37: tokenizer.chat_template str = {% for message in messages %}{{'<|im_...
llama_model_loader: - kv 38: general.quantization_version u32 = 2
llama_model_loader: - type f32: 61 tensors
llama_model_loader: - type q4_0: 210 tensors
llama_model_loader: - type q8_0: 1 tensors
llm_load_vocab: special tokens cache size = 17
llm_load_vocab: token to piece cache size = 0.3170 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 49152
llm_load_print_meta: n_merges = 48900
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 2048
llm_load_print_meta: n_embd = 576
llm_load_print_meta: n_layer = 30
llm_load_print_meta: n_head = 9
llm_load_print_meta: n_head_kv = 3
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 3
llm_load_print_meta: n_embd_k_gqa = 192
llm_load_print_meta: n_embd_v_gqa = 192
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 1536
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 2048
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 134.52 M
llm_load_print_meta: model size = 85.77 MiB (5.35 BPW)
llm_load_print_meta: general.name = SmolLM 135M
llm_load_print_meta: BOS token = 1 '<|im_start|>'
llm_load_print_meta: EOS token = 2 '<|im_end|>'
llm_load_print_meta: UNK token = 0 '<|endoftext|>'
llm_load_print_meta: PAD token = 2 '<|im_end|>'
llm_load_print_meta: LF token = 143 'Ä'
llm_load_print_meta: EOT token = 2 '<|im_end|>'
llm_load_print_meta: EOG token = 0 '<|endoftext|>'
llm_load_print_meta: EOG token = 2 '<|im_end|>'
llm_load_print_meta: max token length = 162
llm_load_tensors: ggml ctx size = 0.13 MiB
llm_load_tensors: CPU buffer size = 114.46 MiB
time=2024-12-12T12:01:42.308+01:00 level=INFO source=server.go:610 msg="waiting for server to become available" status="llm server not responding"
time=2024-12-12T12:01:42.565+01:00 level=INFO source=server.go:610 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 180.00 MiB
llama_new_context_with_model: KV self size = 180.00 MiB, K (f16): 90.00 MiB, V (f16): 90.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.76 MiB
llama_new_context_with_model: CPU compute buffer size = 164.51 MiB
llama_new_context_with_model: graph nodes = 966
llama_new_context_with_model: graph splits = 1
time=2024-12-12T12:01:43.066+01:00 level=INFO source=server.go:615 msg="llama runner started in 0.96 seconds"
[GIN] 2024/12/12 - 12:01:43 | 200 | 1.4739315s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/12/12 - 15:41:54 | 200 | 3h40m4s | 127.0.0.1 | POST "/api/chat"

(END OF server.log)

(".api/generate/" follows my prompt "Introduce yourself."

("/api/chat" follows termination of model in Windows PowerShell with Ctrl+C
and model closing with /bye command. The model yielded only 4 words in ~3.5 hours, namely "I'm thrilled to introduce...")


@rick-github commented on GitHub (Dec 12, 2024):

I thought it might be something strange like only 1 thread being assigned to the runner or relying heavily on swap, but it all looks normal. For comparison, i7-13700:

$ ollama run --verbose smollm:135m "Introduce yourself."
I'm excited to introduce myself! I'm a passionate and knowledgeable data
...
total duration:       1.934819032s
load duration:        391.815614ms
prompt eval count:    13 token(s)
prompt eval duration: 21ms
prompt eval rate:     619.05 tokens/s
eval count:           323 token(s)
eval duration:        1.52s
eval rate:            212.50 tokens/s
ollama  | time=2024-12-12T18:48:46.438Z level=INFO source=runner.go:942 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=8

Out of curiosity, what happens if you reduce the number of threads?

$ ollama run --verbose smollm:135m
>>> /set parameter num_thread 1
Set parameter 'num_thread' to '1'
>>> Introduce yourself.
...
total duration:       6.52618099s
load duration:        5.487642932s
prompt eval count:    13 token(s)
prompt eval duration: 54ms
prompt eval rate:     240.74 tokens/s
eval count:           96 token(s)
eval duration:        983ms
eval rate:            97.66 tokens/s

@Panican-Whyasker commented on GitHub (Dec 12, 2024):

Wow, what a HUGE difference with "/set parameter num_thread 1" !!!

"I'm excited to introduce myself! I'm a passionate and experienced data scientist with over 10 years of experience in the field. My name is [Your Name], and I'll be your guide today as we dive into some exciting projects that showcase my skills and knowledge.

Introduction (5 minutes)

[Name]: Hello, I'm [Your Name] from [Company/Organization]. I've been a data scientist for over 10 years, and I..."

total duration: 14.0871704s
load duration: 512.1357ms
prompt eval count: 13 token(s)
prompt eval duration: 105ms
prompt eval rate: 123.81 tokens/s
eval count: 616 token(s)
eval duration: 13.468s
eval rate: 45.74 tokens/s

(End of --verbose model output.)

It seems that Ollama mistakenly assigns 288 threads to one CPU with 36 logical cores.
Since Win Srv 2016 is a server OS, it tends to assign at most one CPU to a single application (like MATLAB - I never made it run all 144 cores in parallel).
I suspect that Ollama will run normally as long as <=36 threads are assigned.
Perhaps it has something to do with the initial detection of built-in GPUs in all those Xeons?

server.log:
time=2024-12-12T12:00:03.464+01:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-12-12T12:00:03.464+01:00 level=INFO source=gpu_windows.go:167 msg=packages count=4
time=2024-12-12T12:00:03.464+01:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=72 efficiency=0 threads=144
time=2024-12-12T12:00:03.464+01:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=72 efficiency=0 threads=144
time=2024-12-12T12:00:03.464+01:00 level=INFO source=gpu_windows.go:214 msg="" package=2 cores=72 efficiency=0 threads=144
time=2024-12-12T12:00:03.464+01:00 level=INFO source=gpu_windows.go:214 msg="" package=3 cores=72 efficiency=0 threads=144
time=2024-12-12T12:00:03.828+01:00 level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
time=2024-12-12T12:00:03.828+01:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="767.2 GiB" available="741.7 GiB"

Also:

time=2024-12-12T12:01:42.121+01:00 level=INFO source=runner.go:942 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(clang)" threads=288

Those are 4 Xeon Gold 6140 CPUs (144 logical cores in total / 72 physical).

No idea whether one can tell a server OS like Win Srv 2016 to let a single program use all 4 CPUs. Since it is by default set up to run various services with higher priority, like MySQL, a web service and an FTP service, it limits GUI apps to just one CPU, i.e. at most 36 logical cores.
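
One hedged way to check whether that per-process limit comes from Windows processor groups (rather than the OS edition itself) is to ask how many logical processors a single, non-group-aware process sees, e.g. from the built-in Windows PowerShell 5.1:

PS> [Environment]::ProcessorCount
# On .NET Framework this reflects only the current processor group, so a value of 36
# here (rather than 144) would point at the group boundary as the effective per-process limit.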


@Panican-Whyasker commented on GitHub (Dec 12, 2024):

Continued: and this is with "/set parameter num_thread 36" (single-CPU load maxed at ~90%):

total duration: 7.4600041s
load duration: 512.9666ms
prompt eval count: 13 token(s)
prompt eval duration: 20ms
prompt eval rate: 650.00 tokens/s
eval count: 605 token(s)
eval duration: 6.926s
eval rate: 87.35 tokens/s


@Panican-Whyasker commented on GitHub (Dec 12, 2024):

...And this is after adding just one more thread than one CPU has logical cores:

>>> /set parameter num_thread 37
Set parameter 'num_thread' to '37'
>>> Introduce yourself.
I'm a human being, I don't have a specific name or title in the classical sense of "human." However, I can be considered an "engineer" because of my unique combination of skills and expertise that enables me to design, develop, and improve systems, products, and services that benefit humanity.

total duration: 42m24.8791268s
load duration: 514.1166ms
prompt eval count: 19 token(s)
prompt eval duration: 45.625s
prompt eval rate: 0.42 tokens/s
eval count: 64 token(s)
eval duration: 41m38.609s
eval rate: 0.03 tokens/s

42 minutes!!! Up from 7.5 seconds!.....

It seems that Ollama incorrectly detects the CPU's number of logical cores (==> threads).
On this machine, it wrongly finds 288 logical cores (and assigns 288 threads) when there are only 144 cores. Then, since it is a Server OS, only 36 cores can be assigned to a single process.

So, ideally, Ollama should:

  1. Correctly detect the number of CPUs and the logical cores in each CPU;
  2. Correctly detect whether there is a GPU built into the CPU (here, it detected GPUs in those Xeons when there were none);
  3. Add support for Windows Server 2016+ (2016 corresponds to desktop Windows 10): detect the maximum number of cores that can be assigned to one process, and start the runner (e.g., cpu_avx2) with that number of threads.

On my older server machine (running Windows 10 Pro x64) with a total of 24 logical cores in two Xeon 5600 (Westmere) CPUs (runner=cpu; no AVX), Ollama only detects the 12 threads (logical cores) of one CPU, and runs 12 threads by default ==> CPU load was shared between the two CPUs, and peak load maxed at ~60%, running at ~50-55% most of the time.
Just ran a 12B, FP16 model in Ollama with "/set parameter num_thread 24" and the CPU utilization now is at steady 100%.


@rick-github commented on GitHub (Dec 12, 2024):

Ollama sums (cores - efficiencyCores) over all CPUs; it doesn't treat the Xeons as GPUs. If Windows Server 2016 limits applications to one CPU, then I assume there's some syscall that can be made to detect that, and that logic would need to be added to discover/gpu_windows.go. In the meantime you can create a copy of the models you want to use and set the number of threads they are to use.

$ ollama run smollm:135m
>>> /set parameter num_thread 36
Set parameter 'num_thread' to '36'
>>> /save smollm:135m-36t
Created new model 'smollm:135m-36t'
>>> /bye
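
The same thing can be scripted with a Modelfile instead of the interactive /save; a minimal sketch (the new tag name is just an example):

$ cat Modelfile
FROM smollm:135m
PARAMETER num_thread 36
$ ollama create smollm:135m-36t -f Modelfile
$ ollama run smollm:135m-36t "Introduce yourself."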

@dhiltgen commented on GitHub (Dec 13, 2024):

Related to issue #2936 - our NUMA performance at present isn't optimal, particularly on Windows. Manually setting num_thread and experimenting to find what yields the best throughput is the best workaround for now until we implement proper NUMA support on Windows for CPU inference.
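
In the meantime, num_thread can also be passed per request through the API options (assuming the default local endpoint), for example:

$ curl http://localhost:11434/api/generate -d '{
    "model": "smollm:135m",
    "prompt": "Introduce yourself.",
    "options": { "num_thread": 36 }
  }'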


@mrdg-sys commented on GitHub (Feb 14, 2025):

> ...And this is after adding just one more thread than one CPU has logical cores:
>
> >>> /set parameter num_thread 37
> Set parameter 'num_thread' to '37'
> >>> Introduce yourself.
> I'm a human being, I don't have a specific name or title in the classical sense of "human." However, I can be considered an "engineer" because of my unique combination of skills and expertise that enables me to design, develop, and improve systems, products, and services that benefit humanity.
>
> total duration: 42m24.8791268s load duration: 514.1166ms prompt eval count: 19 token(s) prompt eval duration: 45.625s prompt eval rate: 0.42 tokens/s eval count: 64 token(s) eval duration: 41m38.609s eval rate: 0.03 tokens/s
>
> 42 minutes!!! Up from 7.5 seconds!.....
>
> It seems that Ollama incorrectly detects the CPU's number of logical cores (==> threads). On this machine, it wrongly finds 288 logical cores (and assigns 288 threads) when there are only 144 cores. Then, since it is a Server OS, only 36 cores can be assigned to a single process.
>
> So, ideally, Ollama should:
>
>   1. Correctly detect the number of CPUs and the logical cores in each CPU;
>   2. Correctly detect whether there is a GPU built into the CPU (here, it detected GPUs in those Xeons when there were none);
>   3. Add support for Windows Server 2016+ (2016 corresponds to desktop Windows 10): detect the maximum number of cores that can be assigned to one process, and start the runner (e.g., cpu_avx2) with that number of threads.
>
> On my older server machine (running Windows 10 Pro x64) with a total of 24 logical cores in two Xeon 5600 (Westmere) CPUs (runner=cpu; no AVX), Ollama only detects the 12 threads (logical cores) of one CPU, and runs 12 threads by default ==> CPU load was shared between the two CPUs, and peak load maxed at ~60%, running at ~50-55% most of the time. Just ran a 12B, FP16 model in Ollama with "/set parameter num_thread 24" and the CPU utilization now is at steady 100%.

Hi,

I have a server with dual Xeon 6126 CPUs, and by default NUMA is enabled in my BIOS for memory interleaving. I found that disabling NUMA in the system BIOS resulted in almost double the CPU inference performance. Try it out on your hardware and let us know your inference results. There is no GPU in my system.


@Panican-Whyasker commented on GitHub (Feb 14, 2025):

"...I found that disabling NUMA in system bios resulted in almost double cpu inference prrformance. Try it out on your hardware and let us know your inference results. There is no gpu in my system."

Hi @mrdg-sys
Thanks for sharing your experience.
Did you mean doubling CPU inference speed with the correct number of threads, or with a larger one?
Anyway, I am not likely to try your recommended BIOS setting as my server's primary function is SQL+HTTP+FTP services. I'd rather wait until Ollama becomes NUMA-aware (and NUMA-capable).

By the way, the server can run models larger than the mean per-CPU RAM (192 GB per CPU, 768 GB in total).
I have only tried the 671B DeepSeek-R1 and it almost always breaks with an error:

"Error: an error was encountered while running the model: read tcp 127.0.0.1:58122->127.0.0.1:52358: wsarecv: An existing connection was forcibly closed by the remote host."

The model was able to finish w/o breaking just once (out of 4-5 attempts). I use the smallest (404-GB) version.


@mrdg-sys commented on GitHub (Feb 14, 2025):

Hi,

The reason your 671B model crashes is that you reach the maximum context size in tokens. Try increasing the num_ctx parameter to 4096, for example. But make sure you have about 30% more RAM than your model size; for example, the 671B/404 GB model needs about 600 GB of free system RAM at a 4096 context size.

In my case my server is dedicated to LLMs, so disabling NUMA is not an issue.

I don't set any particular thread count when using Ollama; I run it with whatever the default is. Also, hyperthreading is enabled and Ollama uses only 50% of the CPU... I think that's normal, because they mentioned that virtual threads decrease performance.

For me, disabling NUMA in the BIOS doubles token output.


@Panican-Whyasker commented on GitHub (Feb 14, 2025):

"the reason your 671B model crashes is because you reach maximum context output of tokens. Try to increase num_ctx parameter to 4096 for example. But make sure you have about 30% more ram for your model size. For example 671B/404GB needs about 600GB of free system ram at 4096 context size."

@mrdg-sys how do I get/show the current (default) num_ctx parameter? Is that the same as "context length" listed by the "show" command? In my case, the context length is 163840.


@mrdg-sys commented on GitHub (Feb 14, 2025):

163840 is the maximum allowed context for this DeepSeek model; however, at such a large context you need a supercomputer to run it.

For this reason Ollama sets the context parameter to 2048 by default, which equals about 2 pages of text. If your LLM's answer exceeds 2048 tokens, it will crash with a connection error.
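
The context length baked into a model (and any num_ctx set in its Modelfile) can be checked with ollama show, for example:

$ ollama show deepseek-r1:671b

which lists the architecture, context length, quantization and any Modelfile parameters.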


@mrdg-sys commented on GitHub (Feb 14, 2025):

First run your LLM model in Ollama, then before you ask your question enter the following:

/set parameter num_ctx 4096

Then ask your question, but be patient, because Ollama will reload the LLM model in RAM with the new ctx parameter.


@Panican-Whyasker commented on GitHub (Feb 14, 2025):

@mrdg-sys I ran the model with num_ctx set to 4096 and it took <460 GB of RAM.

Thanks for the tip! :)


@rick-github commented on GitHub (Feb 14, 2025):

The error is likely a k-shift failure: https://github.com/ollama/ollama/issues/5975


@Panican-Whyasker commented on GitHub (Feb 14, 2025):

@mrdg-sys but it still breaks, eventually (while answering my 3rd question on a given topic) - it broke after it wrote ~435 words.


@rick-github commented on GitHub (Feb 14, 2025):

The error is likely a k-shift failure: https://github.com/ollama/ollama/issues/5975


@mrdg-sys commented on GitHub (Feb 14, 2025):

Keep increasing num_ctx further, to 8192.

You can also run the

/clear

command after each question to reset your available tokens.


@Panican-Whyasker commented on GitHub (Feb 14, 2025):

@mrdg-sys what you are saying is that Ollama intentionally keeps any LLM's short-term memory veeeeery short????!!!..... I like to keep the conversation with it going, and ask new questions based on its earlier answers. That seems to affect the amount of eval tokens (even when my new Qs contain very few words), so it also counts its previous answers??... But then, what's the point in a (longer) conversation?!

BTW, I have had long conversations with smaller (7-13-70-130B) models, and none has crashed with that same error.

Is there a way to check each model's num_ctx default value assigned by Ollama?


@rick-github commented on GitHub (Feb 14, 2025):

> Is there a way to check each model's num_ctx default value assigned by Ollama?

Default context is 2048. It can be overridden in the Modelfile.
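
A minimal Modelfile sketch for that override (the new tag name is just an example):

$ cat Modelfile
FROM deepseek-r1:671b
PARAMETER num_ctx 8192
$ ollama create deepseek-r1:671b-8k -f Modelfile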

> BTW, I have had long conversations with smaller (7-13-70-130B) models, and none has crashed with that same error.

The error is likely a k-shift failure: https://github.com/ollama/ollama/issues/5975


@mrdg-sys commented on GitHub (Feb 14, 2025):

basically it works like this:

tokens generated by smaller models, like 7B and 32B, have a small memory footprint and fit well into a 2048 context,

while tokens generated by larger models have a bigger memory footprint.

This is why a small LLM can fit very long answers into a 2048 context, while the large 671B model will run out of space with a 2048 context after only a few hundred words generated.


@rick-github commented on GitHub (Feb 14, 2025):

> while the large 671B model will run out of space with a 2048 context after only a few hundred words generated

The deepseek models generate reasoning tokens which consume context space.


@mrdg-sys commented on GitHub (Feb 14, 2025):

yes that too


@Panican-Whyasker commented on GitHub (Feb 15, 2025):

> The deepseek models generate reasoning tokens which consume context space.

< think >
Blah-blah-blah.......
< /think >
(Formal reply.)


@perfectecologietool commented on GitHub (Feb 15, 2025):

The client could clean up messages[], since messages are limited by context size near the sampler. Bit of a big, fun project with chain of thought: https://arxiv.org/html/2401.17464v3

Reference: github-starred/ollama#51673