[GH-ISSUE #6475] The issue of high CPU utilization in Ollama #29835

Closed
opened 2026-04-22 09:06:11 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @fenggaobj on GitHub (Aug 23, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6475

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

The `ollama run qwen2` command hangs while loading until it times out.

Seeking help:
How can I resolve this high CPU utilization issue with Ollama?
Is it possible to configure JIT compilation to support multithreading?

Please review the following analysis process.

(1) Environment and version information:
Device: Nvidia Jetson AGX Orin
CPU: 12 cores at 2.2 GHz
Memory: 64 GB
GPU: 1.3 GHz

Ubuntu 22.04.4 LTS
Ollama 0.3.6

(2) Problem phenomenon:
When running Ollama on the Orin to load the llama3.1 model, the command never returns. Checking CPU usage shows that one core is consistently pinned at 100%.
![image](https://github.com/user-attachments/assets/8e6ea7d5-c515-4044-9df2-bdc2f3263ff3)

(3) Problem analysis:
Examining the problematic process shows:

```
ollama 17202 98.1 0.6 66818556 394996 ? Rl 19:15 0:11 /tmp/ollama3440494107/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --numa numactl --parallel 4 --port 36539
```

Inspecting with `top -H -p 17202` shows that the main thread has very high CPU utilization:

```
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
  17202 ollama    20   0   63.8g 393652  63108 R  99.9   0.6   0:57.73 ollama_llama_se
  17203 ollama    20   0   63.8g 393652  63108 S   0.0   0.6   0:00.00 ollama_llama_se
  17204 ollama    20   0   63.8g 393652  63108 S   0.0   0.6   0:00.00 ollama_llama_se
  17205 ollama    20   0   63.8g 393652  63108 S   0.0   0.6   0:00.00 ollama_llama_se
  17206 ollama    20   0   63.8g 393652  63108 S   0.0   0.6   0:00.00 ollama_llama_se
  17207 ollama    20   0   63.8g 393652  63108 S   0.0   0.6   0:00.00 ollama_llama_se
  17208 ollama    20   0   63.8g 393652  63108 S   0.0   0.6   0:00.00 ollama_llama_se
  17209 ollama    20   0   63.8g 393652  63108 S   0.0   0.6   0:00.00 ollama_llama_se
  17210 ollama    20   0   63.8g 393652  63108 S   0.0   0.6   0:00.00 ollama_llama_se
  17211 ollama    20   0   63.8g 393652  63108 S   0.0   0.6   0:00.00 ollama_llama_se
  17212 ollama    20   0   63.8g 393652  63108 S   0.0   0.6   0:00.00 ollama_llama_se
  17213 ollama    20   0   63.8g 393652  63108 S   0.0   0.6   0:00.00 ollama_llama_se
  17214 ollama    20   0   63.8g 393652  63108 S   0.0   0.6   0:00.00 ollama_llama_se
  17221 ollama    20   0   63.8g 393652  63108 S   0.0   0.6   0:00.00 cuda-EvtHandlr
```
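The `top -H` view above can also be reproduced programmatically. Below is a minimal sketch (not part of the original report) that prints per-thread accumulated CPU time for a PID by reading `/proc/<pid>/task/<tid>/stat`:

```shell
# thread_cpu: print "<tid> <cpu_ticks>" for every thread of a process,
# where cpu_ticks = utime + stime (fields 14 and 15 of /proc/.../stat).
thread_cpu() {
  pid="$1"
  for t in /proc/"$pid"/task/*; do
    tid=${t##*/}
    # Strip the "pid (comm) " prefix first, since comm may contain spaces.
    rest=$(sed 's/^[^)]*) //' "$t/stat")
    utime=$(echo "$rest" | awk '{print $12}')  # overall field 14
    stime=$(echo "$rest" | awk '{print $13}')  # overall field 15
    echo "$tid $((utime + stime))"
  done
}

# Busiest threads first, e.g. for the runner in this report:
#   thread_cpu 17202 | sort -k2 -rn | head
thread_cpu $$
```

Sorting the output descending by the second column identifies the hot thread (here, the main thread 17202) without an interactive `top` session.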

libnvidia-ptxjitcompiler is a Just-In-Time (JIT) compiler library shipped with the NVIDIA driver/CUDA stack; its main function is to compile PTX (Parallel Thread Execution) code into machine code the GPU can execute. The high CPU utilization is caused by this JIT compilation.

Inspecting the problematic thread with gdb locates the issue in the following stack:

```
Thread 1 (Thread 0xffff8f6f3840 (LWP 17202) "ollama_llama_se"):
#0  0x0000ffff68ed3c28 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#1  0x0000ffff68f8633c in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#2  0x0000ffff68f879dc in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#3  0x0000ffff68f06a10 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#4  0x0000ffff68eec014 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#5  0x0000ffff6988ec84 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#6  0x0000ffff6988ecfc in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#7  0x0000ffff68dd4354 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#8  0x0000ffff68ddddd8 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#9  0x0000ffff68de22dc in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#10 0x0000ffff68de32b4 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#11 0x0000ffff68dd5cc8 in __cuda_CallJitEntryPoint () from /usr/lib/aarch64-linux-gnu/nvidia/libnvidia-ptxjitcompiler.so.1
#12 0x0000ffff76dffeec in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1
#13 0x0000ffff76e008e8 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1
#14 0x0000ffff76c02270 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1
#15 0x0000ffff76c2bfd4 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1
#16 0x0000ffff76b9c724 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1
#17 0x0000ffff76b9d5f0 in ?? () from /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1
#18 0x0000ffff88cde99c in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#19 0x0000ffff88ccf5d8 in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#20 0x0000ffff88ce500c in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#21 0x0000ffff88ce65dc in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#22 0x0000ffff88ce6b3c in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#23 0x0000ffff88cdc56c in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#24 0x0000ffff88cc1514 in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#25 0x0000ffff88cf8f8c in ?? () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#26 0x0000ffff87f75098 in cublasCreate_v2 () from /tmp/ollama3440494107/runners/cuda_v11/libcublas.so.11
#27 0x0000000000538518 in ggml_cuda_mul_mat_batched_cublas(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*) ()
#28 0x00000000005412a4 in ggml_backend_cuda_graph_compute(ggml_backend*, ggml_cgraph*) ()
#29 0x00000000005117b8 in ggml_backend_sched_graph_compute_async ()
#30 0x000000000068d270 in llama_decode ()
#31 0x0000000000747040 in llama_init_from_gpt_params(gpt_params&) ()
#32 0x000000000047dfa0 in llama_server_context::load_model(gpt_params const&) ()
#33 0x000000000040f3ac in main ()
[Inferior 1 (process 17202) detached]
```
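For context on the stack above: the driver falls back to the PTX JIT when the loaded CUDA binaries (here, the cuBLAS build for CUDA 11) contain no precompiled machine code for the GPU's compute capability, so kernels are compiled from PTX on first use, single-threaded on the CPU. The compiled result normally lands in the driver's JIT cache, making later loads fast. One mitigation to try (a possible workaround, not something verified in this report) is to ensure the cache is enabled and large enough that compiled kernels are not evicted, via the documented CUDA driver environment variables:

```shell
# CUDA driver JIT cache controls (documented CUDA environment variables).
# The values below are illustrative, not recommendations from this report.
export CUDA_CACHE_PATH="$HOME/.nv/ComputeCache"  # default cache location
export CUDA_CACHE_MAXSIZE=4294967296             # cache size limit in bytes
# export CUDA_CACHE_DISABLE=1                    # would force a re-JIT every run
```

This only amortizes the cost across runs; it does not speed up the first JIT compilation itself.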

OS

Linux

GPU

Nvidia

CPU

Other

Ollama version

0.3.6

GiteaMirror added the nvidia, bug labels 2026-04-22 09:06:11 -05:00
Author
Owner

@fenggaobj commented on GitHub (Aug 26, 2024):

Could an expert familiar with this issue help answer the questions above? After looking through other issues today, I believe my problem is similar to the one below. Is it caused by a CUDA library version mismatch on Jetson? Would the following PR solve the problem I encountered?
https://github.com/ollama/ollama/pull/6400

Author
Owner

@dhiltgen commented on GitHub (Aug 27, 2024):

Dup of #2408 which will be resolved via #6400


Reference: github-starred/ollama#29835