[GH-ISSUE #9087] Ollama 0.5.9 update makes my CPU inference slower #31674

Closed
opened 2026-04-22 12:21:42 -05:00 by GiteaMirror · 13 comments

Originally created by @mrdg-sys on GitHub (Feb 14, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9087

What is the issue?

Hi,

Just updated Ollama from 0.5.7 to 0.5.9, ran my favorite LLM, and noticed a major performance drop on my dual Xeon 6126 setup. It went from ~3 t/s down to ~2 t/s. This is not great for me... Just to be sure this is correct, I downgraded Ollama back to 0.5.7 and performance was restored!

Both of my CPUs have AVX512 instructions, however it seems that using those instructions can in fact slow down inference performance?? I'm confused on this one... can someone explain this to me :)

My system is a Fujitsu RX2530 M4 1U server, dual Xeon 6126 with 384GB ram, no GPU and NUMA disabled.

Ollama 0.5.7 CPU only inference results:

total duration:       6m14.6106603s
load duration:        45.356ms
prompt eval count:    13 token(s)
prompt eval duration: 3.047s
prompt eval rate:     4.27 tokens/s
eval count:           1208 token(s)
eval duration:        6m11.51s
eval rate:            3.25 tokens/s

Ollama 0.5.9 CPU only inference results:

total duration:       14m48.8803918s
load duration:        49.9412ms
prompt eval count:    13 token(s)
prompt eval duration: 4.337s
prompt eval rate:     3.00 tokens/s
eval count:           1688 token(s)
eval duration:        14m44.491s
eval rate:            1.91 tokens/s

Relevant log output


OS

Windows

GPU

No response

CPU

Intel

Ollama version

0.5.9

GiteaMirror added the performance, bug labels 2026-04-22 12:21:43 -05:00

@jmorganca commented on GitHub (Feb 14, 2025):

@mrdg-sys sorry this happened! Will be looking into it.


@veratu commented on GitHub (Feb 14, 2025):

Same issue here: going from 0.5.7 (not using AVX-512) to 0.5.9 with a single Sapphire Rapids CPU, TPS went down.

Did adding AVX-512 support also include VNNI?


@jmorganca commented on GitHub (Feb 14, 2025):

@veratu thanks for the note. Yes it did!

@mrdg-sys @veratu would it be possible to share the logs? On Linux: journalctl -u ollama --no-pager. Can you see if it is loading the CPU libraries? There should be lines like this:

Feb 13 18:42:39 tater16 ollama[288776]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-alderlake.so

If so, does removing /usr/local/lib/ollama/libggml-cpu-sapphirerapids.so speed things up? Essentially, the different CPU libraries are now dynamically loaded from /usr/local/lib/ollama/, so you can remove ones that might slow things down (although we'll obviously work on fixing this).


@chli1 commented on GitHub (Feb 14, 2025):

I also encountered a massive performance drop since 0.5.8. The inference time of an example query increased from 53 seconds to 13 min 28 s for an 8B model (15 times slower!); a 70B model did not even finish after hours (before, it took 10 minutes or so).

Unfortunately, removing the CPU libraries did not help.

Initially it chooses libggml-cpu-alderlake.so, which seems to be the right fit for an i5-12400. If I remove that one, and also the replacements it then picks up (libggml-cpu-haswell.so, libggml-cpu-sandybridge.so), there is no more log output like:

load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-XXX.so

But performance is always the same, regardless of the chosen CPU driver / no CPU driver.


@lashnarz commented on GitHub (Feb 14, 2025):

I have also encountered the slow response performance issue from version 0.5.8 through pre-release 0.5.11. The response tokens/second on all models drops about 10-20% compared with 0.5.7. Removing the CPU libggml_XXX.dll files did not bring the performance back.
Hardware: AMD Ryzen 2700X + GTX 1660 6G
OS: Windows 11
Ollama version: 0.5.8 to 0.5.11


@veratu commented on GitHub (Feb 14, 2025):

@jmorganca Yes, it is loading the proper CPU backend. I removed it, then it went to icelake; I removed that and it went down to another, and I continued removing them until I got to just the base. Performance declined fairly linearly as I removed CPU backends with fewer CPU extensions built in, and the base of course performed the worst. That said, for comparison: in 0.5.7 with just the default CPU loader, which did NOT use Sapphire Rapids, I was getting over 3 tps; in 0.5.9 and 0.5.10 it's down to 2.1 tps, and the base is 0.5 tps. My expectation would be that enabling these extensions would show gains, not losses, but right now the 0.5.7 default outperforms everything in the latest build.


@mrdg-sys commented on GitHub (Feb 14, 2025):

Below is my log output from Ollama version 0.5.7 (my system is dual Xeon 6126 with AVX512):

2025/02/14 09:27:33 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\user\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-02-14T09:27:33.061-08:00 level=INFO source=images.go:432 msg="total blobs: 16"
time=2025-02-14T09:27:33.064-08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-14T09:27:33.066-08:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-02-14T09:27:33.068-08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[rocm_avx cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx]"
time=2025-02-14T09:27:33.068-08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-14T09:27:33.069-08:00 level=INFO source=gpu_windows.go:167 msg=packages count=2
time=2025-02-14T09:27:33.069-08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=12 efficiency=0 threads=24
time=2025-02-14T09:27:33.069-08:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=12 efficiency=0 threads=24
time=2025-02-14T09:27:33.088-08:00 level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
time=2025-02-14T09:27:33.088-08:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="383.1 GiB" available="375.6 GiB"
[GIN] 2025/02/14 - 09:28:01 | 200 | 519.2µs | 127.0.0.1 | GET "/api/version"
[GIN] 2025/02/14 - 09:29:49 | 200 | 1.0928ms | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/14 - 09:29:49 | 200 | 100.5085ms | 127.0.0.1 | POST "/api/show"
time=2025-02-14T09:29:49.947-08:00 level=INFO source=server.go:104 msg="system memory" total="383.1 GiB" free="373.1 GiB" free_swap="405.2 GiB"
time=2025-02-14T09:29:49.948-08:00 level=INFO source=memory.go:356 msg="offload to cpu" layers.requested=-1 layers.model=62 layers.offload=0 layers.split="" memory.available="[373.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="251.8 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[251.8 GiB]" memory.weights.total="248.0 GiB" memory.weights.repeating="247.3 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="2.2 GiB" memory.graph.partial="3.0 GiB"
time=2025-02-14T09:29:49.972-08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="C:\Users\user\AppData\Local\Programs\Ollama\lib\ollama\runners\cpu_avx2\ollama_llama_server.exe runner --model C:\Users\user\.ollama\models\blobs\sha256-f4212639f8b6e105df9c2feebc2f8ebe6c1bb5cac3e721051b097a6bca76c183 --ctx-size 8192 --batch-size 512 --threads 24 --no-mmap --parallel 4 --port 49792"
time=2025-02-14T09:29:50.015-08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-14T09:29:50.023-08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-14T09:29:50.025-08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-14T09:29:50.193-08:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-02-14T09:29:50.197-08:00 level=INFO source=runner.go:937 msg=system info="CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(clang)" threads=24
time=2025-02-14T09:29:50.199-08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:49792"
time=2025-02-14T09:29:50.279-08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading


@mrdg-sys commented on GitHub (Feb 14, 2025):

Below is my log output from Ollama version 0.5.11 (my system is dual Xeon 6126 with AVX512):

2025/02/14 09:41:14 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\user\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-02-14T09:41:14.077-08:00 level=INFO source=images.go:432 msg="total blobs: 16"
time=2025-02-14T09:41:14.078-08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-14T09:41:14.079-08:00 level=INFO source=routes.go:1237 msg="Listening on 127.0.0.1:11434 (version 0.5.11)"
time=2025-02-14T09:41:14.079-08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-14T09:41:14.081-08:00 level=INFO source=gpu_windows.go:167 msg=packages count=2
time=2025-02-14T09:41:14.081-08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=12 efficiency=0 threads=24
time=2025-02-14T09:41:14.081-08:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=12 efficiency=0 threads=24
time=2025-02-14T09:41:15.406-08:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-02-14T09:41:15.406-08:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="383.1 GiB" available="374.0 GiB"
[GIN] 2025/02/14 - 09:43:41 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/02/14 - 09:43:59 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/14 - 09:43:59 | 200 | 59.9338ms | 127.0.0.1 | POST "/api/show"
time=2025-02-14T09:43:59.989-08:00 level=INFO source=server.go:100 msg="system memory" total="383.1 GiB" free="374.8 GiB" free_swap="406.8 GiB"
time=2025-02-14T09:43:59.998-08:00 level=INFO source=memory.go:356 msg="offload to cpu" layers.requested=-1 layers.model=62 layers.offload=0 layers.split="" memory.available="[374.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="251.8 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[251.8 GiB]" memory.weights.total="248.0 GiB" memory.weights.repeating="247.3 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="2.2 GiB" memory.graph.partial="3.0 GiB"
time=2025-02-14T09:44:00.009-08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\Users\user\AppData\Local\Programs\Ollama\ollama.exe runner --model C:\Users\user\.ollama\models\blobs\sha256-f4212639f8b6e105df9c2feebc2f8ebe6c1bb5cac3e721051b097a6bca76c183 --ctx-size 8192 --batch-size 512 --threads 24 --no-mmap --parallel 4 --port 49876"
time=2025-02-14T09:44:00.015-08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-14T09:44:00.015-08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-02-14T09:44:00.016-08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-02-14T09:44:00.060-08:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-02-14T09:44:00.061-08:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(clang)" threads=24
time=2025-02-14T09:44:00.061-08:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:49876"
time=2025-02-14T09:44:00.269-08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
load_backend: loaded CPU backend from C:\Users\user\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-skylakex.dll
llama_model_loader: loaded meta data with 48 key-value pairs and 1025 tensors from C:\Users\user\.ollama\models\blobs\sha256-f4212639f8b6e105df9c2feebc2f8ebe6c1bb5cac3e721051b097a6bca76c183 (version GGUF V3 (latest))


@jmorganca commented on GitHub (Feb 14, 2025):

Hi folks, sorry for the performance issues – looking into this now.


@slaren commented on GitHub (Feb 14, 2025):

time=2025-02-14T09:44:00.061-08:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(clang)" threads=24

I am not exactly sure how this message is generated in ollama, but it seems to indicate that it is using a CPU backend built without any architecture flags enabled, so it is reverting to the basic C implementations, which could explain the dramatic decrease in performance.


@veratu commented on GitHub (Feb 14, 2025):

Look further down, @slaren, and you should see a load_backend line like this:
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-alderlake.so

That's where it's doing the CPU loading. If you don't see an entry like that, then it would default to the base.


@slaren commented on GitHub (Feb 14, 2025):

Right, but that message indicates that there was a CPU backend already loaded before. I am now realizing that ollama has made changes to this ggml code, and it may actually be the normal behavior to have several versions of the CPU backend loaded at the same time; I am not sure. What I can say is that the code in llama.cpp was written assuming that there is only one CPU backend loaded.


@vt-alt commented on GitHub (Feb 18, 2025):

I noticed on 0.5.11 that the sapphirerapids backend runs slower than other backends on an Intel(R) Xeon(R) Gold 5420+. The test command is ollama run --verbose deepseek-r1:32b-qwen-distill-q4_K_M "Hello, introduce yourself." Test results for all backends (two runs in a row):

# load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-sapphirerapids.so
eval rate:            2.09 tokens/s
eval rate:            2.05 tokens/s
# load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
eval rate:            2.39 tokens/s
eval rate:            2.37 tokens/s
# load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-skylakex.so
eval rate:            2.21 tokens/s
eval rate:            2.31 tokens/s
# load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-alderlake.so
eval rate:            2.25 tokens/s
eval rate:            2.16 tokens/s
# load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
eval rate:            2.36 tokens/s
eval rate:            2.33 tokens/s
# load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-sandybridge.so
eval rate:            2.11 tokens/s
eval rate:            2.17 tokens/s
# no more backends.
eval rate:            0.41 tokens/s
eval rate:            0.41 tokens/s

I rm'ed the used backend between runs so it loaded the next one; for the last run all backends were removed (a sketch of this procedure is shown after the CPU flags below). The CPU has these flags:

fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp
lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology cpuid
tsc_known_freq pni pclmulqdq dtes64 vmx ssse3 fma cx16 pdcm pcid
sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c
rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb
stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase
tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx
smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt
xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat vnmi avx512vbmi
umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni
avx512_bitalg avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri
movdir64b fsrm md_clear serialize tsxldtrk amx_bf16 avx512_fp16 amx_tile
amx_int8 flush_l1d arch_capabilities