[GH-ISSUE #8025] Ollama runs very slowly on ARM CPU (Kunpeng 920) #5135

Closed
opened 2026-04-12 16:14:14 -05:00 by GiteaMirror · 5 comments

Originally created by @feikiss on GitHub (Dec 10, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8025

What is the issue?

Ollama is extremely slow on my ARM server (Kunpeng-920 series), even when I use 8 cores. I am using the "qwen-2.5-0.5b_q4" model.
Server details:

Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov  6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.0-136.23.0.99.u37.fos23.aarch64-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    aarch64
CPU op-mode(s):                  64-bit
Byte Order:                      Little Endian
CPU(s):                          256
On-line CPU(s) list:             0-255
Vendor ID:                       HiSilicon
Model name:                      Kunpeng-920
Model:                           0
Thread(s) per core:              1
Core(s) per cluster:             64
Socket(s):                       -
Cluster(s):                      4
Stepping:                        0x1
Frequency boost:                 disabled
CPU max MHz:                     2600.0000
CPU min MHz:                     200.0000
BogoMIPS:                        200.00
Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm ssbs
L1d cache:                       16 MiB (256 instances)
L1i cache:                       16 MiB (256 instances)
L2 cache:                        128 MiB (256 instances)
L3 cache:                        256 MiB (8 instances)
NUMA node(s):                    8
NUMA node0 CPU(s):               0-31
NUMA node1 CPU(s):               32-63
NUMA node2 CPU(s):               64-95
NUMA node3 CPU(s):               96-127
NUMA node4 CPU(s):               128-159
NUMA node5 CPU(s):               160-191
NUMA node6 CPU(s):               192-223
NUMA node7 CPU(s):               224-255
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Retbleed:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
Vulnerability Spectre v2:        Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pyzmq==26.2.0
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[pip3] transformers==4.46.3
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.4.post2.dev152+g1f6584ee.d20241127
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect

VLLM_CPU_KVCACHE_SPACE=1
LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/cv2/../../lib64:

OS

Linux

GPU

No response

CPU

Other

Ollama version

0.4.2
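
For reference, one quick way to attach a concrete number to "extremely slow" is the `--verbose` flag, which makes `ollama run` print prompt and generation rates after the response. A minimal sketch; the model tag is an assumption, substitute whatever tag the "qwen-2.5-0.5b_q4" model was pulled as:

```bash
# Print timing statistics (including eval rate in tokens/s) after the reply.
# The model tag is assumed; replace it with the locally pulled qwen-2.5-0.5b q4 tag.
ollama run qwen2.5:0.5b "Why is the sky blue?" --verbose
```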

GiteaMirror added the bug and needs more info labels 2026-04-12 16:14:14 -05:00

@rick-github commented on GitHub (Dec 10, 2024):

You have no GPU accelerator and the 920 apparently doesn't implement the ARM matrix extensions (SME) so you are relying on brute force CPU. For LLM inference workloads, it's just going to be slow.
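
For reference, the SIMD/matrix features the kernel exposes can be checked directly on the Kunpeng host; the flag list above shows `asimddp` but no `sve`, `sme`, or `i8mm`. A minimal sketch:

```bash
# List CPU feature flags and filter for the extensions most relevant to
# quantized matmul kernels (dot product, int8 matmul, SVE, SME).
lscpu | grep -i '^Flags'
grep -o -E 'asimddp|i8mm|sve2?|sme' /proc/cpuinfo | sort -u
```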


@feikiss commented on GitHub (Dec 11, 2024):

> You have no GPU accelerator and the 920 apparently doesn't implement the ARM matrix extensions (SME) so you are relying on brute force CPU. For LLM inference workloads, it's just going to be slow.

Thanks @rick-github. SME was introduced with the Armv9 generation, and the Kunpeng 920 is based on Armv8. I rented a 2C4G (2 vCPUs, 4 GB RAM) Arm VM on Alibaba Cloud; its CPU info is below:

Architecture:             aarch64
  CPU op-mode(s):         32-bit, 64-bit
  Byte Order:             Little Endian
CPU(s):                   2
  On-line CPU(s) list:    0,1
Vendor ID:                ARM
  BIOS Vendor ID:         Alibaba Cloud
  Model name:             Neoverse-N1
    BIOS Model name:      virt-rhel7.6.0  CPU @ 2.0GHz
    BIOS CPU family:      1
    Model:                1
    Thread(s) per core:   1
    Core(s) per cluster:  2
    Socket(s):            1
    Cluster(s):           1
    Stepping:             r3p1
    BogoMIPS:             50.00
    Flags:                fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs
NUMA:
  NUMA node(s):           1
  NUMA node0 CPU(s):      0,1
Vulnerabilities:
  Gather data sampling:   Not affected
  Itlb multihit:          Not affected
....

This CPU also appears to be based on the Armv8.2-A architecture, yet the Ollama token generation speed is 18 tokens/s with the same model. Do you have any suggestions?
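
For an apples-to-apples comparison between the two machines, the generate API returns `eval_count` and `eval_duration` (nanoseconds), from which generation speed can be computed, and `num_thread` in `options` controls how many cores the runner uses. A minimal sketch, assuming the default port 11434 and that the same model tag is pulled on both hosts:

```bash
# Request a non-streamed completion and compute generation tokens/s from the response.
curl -s http://localhost:11434/api/generate -d '{
  "model": "qwen2.5:0.5b",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "options": {"num_thread": 8}
}' | python3 -c 'import json,sys; r=json.load(sys.stdin); print(round(r["eval_count"] / r["eval_duration"] * 1e9, 1), "tokens/s")'
```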


@rick-github commented on GitHub (Dec 11, 2024):

Suggestions for how to increase token generation? Run with GPU acceleration or faster hardware. You can try running other inference engines on the KunPeng and see if they perform better, and even run them on different hardware platforms as a comparison. That might give an insight into performance issues.
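
As a concrete way to run that cross-check, llama.cpp ships a `llama-bench` tool that reports prompt-processing and generation rates for a GGUF file. A sketch; the GGUF path is an assumption and would need to point at the same quantized Qwen model:

```bash
# Build llama.cpp and benchmark the same quantized model on 8 threads.
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp
cmake -B build && cmake --build build --config Release -j
./build/bin/llama-bench -m /path/to/qwen2.5-0.5b-q4_0.gguf -t 8
```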


@feikiss commented on GitHub (Dec 12, 2024):

> Suggestions for how to increase token generation? Run with GPU acceleration or faster hardware. You can try running other inference engines on the KunPeng and see if they perform better, and even run them on different hardware platforms as a comparison. That might give an insight into performance issues.

I mean that the Arm VM in Alibaba Cloud is also based on v8.2 and should not support SME either, yet its token generation speed is 18 tokens/s. I suspect SME is not the root cause.


@rick-github commented on GitHub (Dec 12, 2024):

You can try running other inference engines on the KunPeng and see if they perform better, and even run them on different hardware platforms as a comparison. That might give an insight into performance issues. Specifically, if a different engine performs better on the Kunpeng server than ollama, then it would appear to be an ollama issue. If a different engine performs better on an Alibaba Cloud VM than on the Kunpeng server, then it would appear to be a platform issue. If you make these comparisons and ollama's performance is poorer, then ollama's performance needs to be investigated. For that, [server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) with `OLLAMA_DEBUG=1` would help with debugging.
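
A minimal way to capture those logs when the server is started by hand (if ollama runs under systemd, set `OLLAMA_DEBUG=1` in the service unit instead and read the logs with `journalctl -u ollama`):

```bash
# Restart the server with debug logging and keep a copy for the issue report.
OLLAMA_DEBUG=1 ollama serve 2>&1 | tee ollama-debug.log
```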
