[GH-ISSUE #8023] Ollama is very slow after running for a while #67190

Closed
opened 2026-05-04 09:35:33 -05:00 by GiteaMirror · 19 comments

Originally created by @minakami443 on GitHub (Dec 10, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8023

What is the issue?

CPU: Intel(R) Xeon(R) Silver 4410Y
GPU: NVIDIA L40S-24Q 24GB
DRAM: 32GB

OS: Ubuntu 24.04.1
GPU Driver: vWS 550.127.05 / 550.90.07
CUDA: 12.4

Ollama version: v0.5.1/v0.4.7/v0.3.14
Model: llama3.1:8b/Gemma2:2b/Qwen2.5:7b

Hello, I'm running Ollama on a VM with a GRID GPU, and both locally and in Docker I'm experiencing the following problem:
For the first 20 minutes after starting, the results are very good; both model loading and inference speed are satisfactory.
However, when running again after some time, model loading and inference become abnormally slow.
How can this be resolved?

Any help would be greatly appreciated.
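
One way to quantify the slowdown is to sample tokens/sec from the timing fields that Ollama's /api/generate endpoint returns (eval_count and eval_duration, the latter in nanoseconds). A minimal sketch, assuming jq is available and using one of the models listed above:

# Poll generation speed once a minute; the ~20-minute drop should show up here.
while : ; do
  curl -s localhost:11434/api/generate \
    -d '{"model":"llama3.1:8b","prompt":"why is the sky blue?","stream":false}' |
    jq -r '"\(now | todate)  \(.eval_count / (.eval_duration / 1e9)) tok/s"'
  sleep 60
done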

Here's the log and GPU operation status:
ollama.log: https://github.com/user-attachments/files/18073070/ollama.log

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.05             Driver Version: 550.127.05     CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA L40S-24Q                On  |   00000000:03:00.0 Off |                  N/A |
| N/A   N/A    P0             N/A /  N/A  |    4603MiB /  24576MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A    208396      C   ...unners/cuda_v12/ollama_llama_server       4594MiB |
+-----------------------------------------------------------------------------------------+

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

v0.5.1

GiteaMirror added the bug label 2026-05-04 09:35:33 -05:00

@rick-github commented on GitHub (Dec 10, 2024):

There's another report of an L40S running slow in #7919. There's nothing in your log or their log to indicate why the GPU slows down. Are there management tools in the VM to see the state of the GPU? Is it being over-subscribed? Throttled? What's the output of nvidia-smi -q?
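
If there are no host-side tools, a low-overhead way to watch for throttling from inside the VM is to poll clocks and utilization while it happens (a sketch; on vGPU some of these fields may report N/A):

# Log utilization, SM clock and active throttle reasons every 5 seconds.
nvidia-smi --query-gpu=timestamp,utilization.gpu,clocks.sm,clocks_throttle_reasons.active --format=csv -l 5 | tee gpu-trace.csv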


@minakami443 commented on GitHub (Dec 10, 2024):

I've confirmed that no other environments use GPU resources during the run except this one.
Also, restarting the environment allows Ollama to run normally for a period that is very consistent, around 20 minutes, after which it runs very poorly.

Here is the output of nvidia-smi -q during the slow inference period. GPU utilization is always at 0%, whereas this number changes under normal execution.

==============NVSMI LOG==============

Timestamp                                 : Tue Dec 10 07:28:57 2024
Driver Version                            : 550.127.05
CUDA Version                              : 12.4

Attached GPUs                             : 1
GPU 00000000:03:00.0
    Product Name                          : NVIDIA L40S-24Q
    Product Brand                         : NVIDIA RTX Virtual Workstation
    Product Architecture                  : Ada Lovelace
    Display Mode                          : Enabled
    Display Active                        : Disabled
    Persistence Mode                      : Enabled
    Addressing Mode                       : None
    MIG Mode
        Current                           : N/A
        Pending                           : N/A
    Accounting Mode                       : Disabled
    Accounting Mode Buffer Size           : 4000
    Driver Model
        Current                           : N/A
        Pending                           : N/A
    Serial Number                         : N/A
    GPU UUID                              : GPU-c233ee9f-2348-11b2-a1e9-155a1814ae0b
    Minor Number                          : 0
    VBIOS Version                         : 00.00.00.00.00
    MultiGPU Board                        : No
    Board ID                              : 0x300
    Board Part Number                     : N/A
    GPU Part Number                       : 26B9-896-A1
    FRU Part Number                       : N/A
    Module ID                             : N/A
    Inforom Version
        Image Version                     : N/A
        OEM Object                        : N/A
        ECC Object                        : N/A
        Power Management Object           : N/A
    Inforom BBX Object Flush
        Latest Timestamp                  : N/A
        Latest Duration                   : N/A
    GPU Operation Mode
        Current                           : N/A
        Pending                           : N/A
    GPU C2C Mode                          : N/A
    GPU Virtualization Mode
        Virtualization Mode               : VGPU
        Host VGPU Mode                    : N/A
        vGPU Heterogeneous Mode           : N/A
    vGPU Software Licensed Product
        Product Name                      : NVIDIA RTX Virtual Workstation
        License Status                    : Unlicensed (Restricted)
    GPU Reset Status
        Reset Required                    : N/A
        Drain and Reset Recommended       : N/A
    GSP Firmware Version                  : N/A
    IBMNPU
        Relaxed Ordering Mode             : N/A
    PCI
        Bus                               : 0x03
        Device                            : 0x00
        Domain                            : 0x0000
        Base Classcode                    : 0x3
        Sub Classcode                     : 0x0
        Device Id                         : 0x26B910DE
        Bus Id                            : 00000000:03:00.0
        Sub System Id                     : 0x189310DE
        GPU Link Info
            PCIe Generation
                Max                       : N/A
                Current                   : N/A
                Device Current            : N/A
                Device Max                : N/A
                Host Max                  : N/A
            Link Width
                Max                       : N/A
                Current                   : N/A
        Bridge Chip
            Type                          : N/A
            Firmware                      : N/A
        Replays Since Reset               : N/A
        Replay Number Rollovers           : N/A
        Tx Throughput                     : N/A
        Rx Throughput                     : N/A
        Atomic Caps Inbound               : N/A
        Atomic Caps Outbound              : N/A
    Fan Speed                             : N/A
    Performance State                     : P0
    Clocks Event Reasons                  : N/A
    Sparse Operation Mode                 : N/A
    FB Memory Usage
        Total                             : 24576 MiB
        Reserved                          : 1275 MiB
        Used                              : 5609 MiB
        Free                              : 17694 MiB
    BAR1 Memory Usage
        Total                             : 256 MiB
        Used                              : 2 MiB
        Free                              : 254 MiB
    Conf Compute Protected Memory Usage
        Total                             : 0 MiB
        Used                              : 0 MiB
        Free                              : 0 MiB
    Compute Mode                          : Default
    Utilization
        Gpu                               : 0 %
        Memory                            : 0 %
        Encoder                           : 0 %
        Decoder                           : 0 %
        JPEG                              : N/A
        OFA                               : N/A
    Encoder Stats
        Active Sessions                   : 0
        Average FPS                       : 0
        Average Latency                   : 0
    FBC Stats
        Active Sessions                   : 0
        Average FPS                       : 0
        Average Latency                   : 0
    ECC Mode
        Current                           : N/A
        Pending                           : N/A
    ECC Errors
        Volatile
            SRAM Correctable              : N/A
            SRAM Uncorrectable Parity     : N/A
            SRAM Uncorrectable SEC-DED    : N/A
            DRAM Correctable              : N/A
            DRAM Uncorrectable            : N/A
        Aggregate
            SRAM Correctable              : N/A
            SRAM Uncorrectable Parity     : N/A
            SRAM Uncorrectable SEC-DED    : N/A
            DRAM Correctable              : N/A
            DRAM Uncorrectable            : N/A
            SRAM Threshold Exceeded       : N/A
        Aggregate Uncorrectable SRAM Sources
            SRAM L2                       : N/A
            SRAM SM                       : N/A
            SRAM Microcontroller          : N/A
            SRAM PCIE                     : N/A
            SRAM Other                    : N/A
    Retired Pages
        Single Bit ECC                    : N/A
        Double Bit ECC                    : N/A
        Pending Page Blacklist            : N/A
    Remapped Rows                         : N/A
    Temperature
        GPU Current Temp                  : N/A
        GPU T.Limit Temp                  : N/A
        GPU Shutdown Temp                 : N/A
        GPU Slowdown Temp                 : N/A
        GPU Max Operating Temp            : N/A
        GPU Target Temperature            : N/A
        Memory Current Temp               : N/A
        Memory Max Operating Temp         : N/A
    GPU Power Readings
        Power Draw                        : N/A
        Current Power Limit               : N/A
        Requested Power Limit             : N/A
        Default Power Limit               : N/A
        Min Power Limit                   : N/A
        Max Power Limit                   : N/A
    GPU Memory Power Readings
        Power Draw                        : N/A
    Module Power Readings
        Power Draw                        : N/A
        Current Power Limit               : N/A
        Requested Power Limit             : N/A
        Default Power Limit               : N/A
        Min Power Limit                   : N/A
        Max Power Limit                   : N/A
    Clocks
        Graphics                          : 2520 MHz
        SM                                : 2520 MHz
        Memory                            : 9000 MHz
        Video                             : 1965 MHz
    Applications Clocks
        Graphics                          : N/A
        Memory                            : N/A
    Default Applications Clocks
        Graphics                          : N/A
        Memory                            : N/A
    Deferred Clocks
        Memory                            : N/A
    Max Clocks
        Graphics                          : N/A
        SM                                : N/A
        Memory                            : N/A
        Video                             : N/A
    Max Customer Boost Clocks
        Graphics                          : N/A
    Clock Policy
        Auto Boost                        : N/A
        Auto Boost Default                : N/A
    Voltage
        Graphics                          : N/A
    Fabric
        State                             : N/A
        Status                            : N/A
        CliqueId                          : N/A
        ClusterUUID                       : N/A
        Health
            Bandwidth                     : N/A
    Processes
        GPU instance ID                   : N/A
        Compute instance ID               : N/A
        Process ID                        : 296262
            Type                          : C
            Name                          : /tmp/ollama114791310/runners/cuda_v12/ollama_llama_server
            Used GPU Memory               : 5600 MiB


@rick-github commented on GitHub (Dec 10, 2024):

I assume you are restarting the whole VM. Does the inference speed return to normal if you just restart ollama (systemctl restart ollama)?


@minakami443 commented on GitHub (Dec 10, 2024):

> I assume you are restarting the whole VM. Does the inference speed return to normal if you just restart ollama (systemctl restart ollama)?

On Docker, systemctl restart docker followed by restarting the Ollama container restores normal speed.
Locally, I have to restart the VM; restarting only Ollama doesn't work.
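
A sketch of that sequence on the Docker side (the container name is an example):

sudo systemctl restart docker
docker restart ollama   # container name is an assumption; use yours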


@minakami443 commented on GitHub (Dec 10, 2024):

Additional information:
In October, this problem could be solved by building from source on the same hardware, but with the current version, even though the GPU is detected after building, it will not try to load the model into VRAM or use any GPU resources.

The following attachments are the build and run logs from the version built from source:
0_5_1_build.log: https://github.com/user-attachments/files/18075777/0_5_1_build.log
0_5_1_run.log: https://github.com/user-attachments/files/18075780/0_5_1_run.log


@rick-github commented on GitHub (Dec 10, 2024):

There are no CUDA runners in your self-built ollama. Does your cuda installation live in /usr/local/cuda or somewhere else?


@minakami443 commented on GitHub (Dec 10, 2024):

It's installed in /usr/local/cuda-12.4/, with /usr/local/cuda/ pointing to it through a symlink.


@rick-github commented on GitHub (Dec 10, 2024):

Try

make -j 5 CUDA_12=/usr/local/cuda-12.4/
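
If the CUDA toolkit is picked up, a CUDA runner should appear next to the CPU runners after the build (directory layout taken from the build output further down in this thread; the exact runner name is an assumption):

ls llama/build/linux-amd64/runners/
# expect something like: cpu_avx  cpu_avx2  cuda_v12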

@CultVonnegut commented on GitHub (Dec 10, 2024):

Hi I have a very similar setup with the same problem.
In my case it is a virtualized NVIDIA A16 with an 8GB profile:

| NVIDIA-SMI 550.127.05             Driver Version: 550.127.05     CUDA Version: 12.4     |
|   0  NVIDIA A16-8Q                   On  |   00000000:02:00.0 Off |                 N/A |
| N/A   N/A    P8             N/A /  N/A  |       1MiB /   8192MiB |      0%      Default |
|                                         |                        |                 N/A |
+-----------------------------------------+------------------------+----------------------+

Intel(R) Xeon(R) Gold 5320 CPU @ 2.20GHz
32GB RAM
Ubuntu 24.04 64bit, Linux 6.8.0-49-generic
ollama version is 0.4.7

I have the same problem: after a while, or if I unload one model and load another, it becomes extremely slow and painful to use, while at the beginning it is quite fast.
I have stopped the ollama service and tried unloading and reloading the nvidia_uvm module; the only thing that seems to fix it is restarting the whole machine. I have reviewed the logs on the system, in Ollama, and even on the ESXi host, but I can't find any warning or error to explain the extreme slowness.
When I load a model, it is 100% loaded in the GPU.
Let me know if I can try anything to help.
Thanks


@rick-github commented on GitHub (Dec 10, 2024):

@bodypheo Server logs: https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues


@CultVonnegut commented on GitHub (Dec 11, 2024):

> @bodypheo Server logs: https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues

ollama.log: https://github.com/user-attachments/files/18092396/ollama.log
There you go. During this trace, I restarted the service, and yet execution was still painfully slow.

EDIT: BTW, I have updated to the latest version, 0.5.1, with the same results.


@rick-github commented on GitHub (Dec 12, 2024):

Unfortunately there's nothing in the logs (yours and the others reporting this issue) that's a smoking gun. Would it be possible for you to run another inference engine and see if it suffers a similar performance degradation?

LM Studio (https://lmstudio.ai/) is simplest to install as an AppImage, but it requires a working display server, so it may not be suitable for a server. If your desktop machine is Linux, you can work around this by forwarding the X server.

$ ssh -X ubuntu-testing
bodypheo@ubuntu-testing ~ $ wget  https://releases.lmstudio.ai/linux/x86/0.3.5/2/LM_Studio-0.3.5.AppImage
bodypheo@ubuntu-testing ~ $ chmod +x LM_Studio-0.3.5.AppImage
bodypheo@ubuntu-testing ~ $ ./LM_Studio-0.3.5.AppImage

When the UI starts, go to the Developer settings (green terminal icon on the left hand side) and start the server, then go to Discover (purple magnifying glass), type llama3.2-3b-instruct in the search bar, and then download the Q4_K_M quant. Test with

$ curl -s localhost:1234/v1/chat/completions -H 'Content-Type: application/json' -d '{"model":"llama-3.2-3b-instruct","messages":[{"role":"user","content":"why is the sky blue?"}]}'

mistral.rs (https://github.com/EricLBuehler/mistral.rs) works standalone and has a Docker image, so it's easy to run if you have Docker installed. It doesn't use llama.cpp as a backend, so it would be an interesting data point.

$ docker run --init --rm --gpus all -p 8000:80 -v /tmp/hf:/root/.cache/huggingface ghcr.io/ericlbuehler/mistral.rs:cuda-80-latest --pa-ctxt-len 2048 gguf -m bartowski/Llama-3.2-3B-Instruct-GGUF -f Llama-3.2-3B-Instruct-Q4_K_M.gguf
$ curl -s localhost:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{"model":"bartowski/Llama-3.2-3B-Instruct-GGUF","messages":[{"role":"user","content":"why is the sky blue?"}]}'

vllm (https://github.com/vllm-project/vllm) also has a Docker image; the command below reuses the GGUF downloaded by mistral.rs.

$ docker run --init --rm --gpus all -p 8000:8000 -v /tmp/hf:/root/.cache/huggingface vllm/vllm-openai --model /root/.cache/huggingface/hub/models--bartowski--Llama-3.2-3B-Instruct-GGUF/blobs/6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff --max_model_len 2048 
$ curl -s localhost:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{"model":"/root/.cache/huggingface/hub/models--bartowski--Llama-3.2-3B-Instruct-GGUF/blobs/6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff","messages":[{"role":"user","content":"why is the sky blue?"}]}'

If you put the test command in a loop (replace curl_command with one of the ones above) you can keep track of how long it's taking to execute the test and when it slows down:

$ while : ; do echo $(date) $(/usr/bin/time --format %E curl_command 2>&1 > /dev/null) ; sleep 60 ; done

@CultVonnegut commented on GitHub (Dec 12, 2024):

@rick-github Thank you for your thorough explanation, I will try to make that test next weekend.


@minakami443 commented on GitHub (Dec 12, 2024):

Going back to v0.3.11 for the build, I can successfully run Ollama in my environment without any delays.
The following is my process; maybe it will be useful to other people who use vGPUs:

sudo apt install -y gcc-12 
sudo apt install -y linux-headers-$(uname -r) 
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-12 12 
sudo update-alternatives --config gcc 
sudo apt install -y g++-12 
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-12 12 
sudo update-alternatives --config g++ 

cd ollama
git checkout v0.3.11

CUDA_12=/usr/local/cuda-12.4/ go generate ./...
go build .

However, with the latest v0.5.2, I still can't successfully build a GPU-capable version, and there doesn't seem to be any CUDA-related information in the build log.

make -j 5 CUDA_12=/usr/local/cuda-12.4/
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.2-rc3-4-g18f6a98\"  " -trimpath -tags "avx" -o llama/build/linux-amd64/runners/cpu_avx/ollama_llama_server ./cmd/runner
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.2-rc3-4-g18f6a98\"  " -trimpath -tags "avx,avx2" -o llama/build/linux-amd64/runners/cpu_avx2/ollama_llama_server ./cmd/runner
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.2-rc3-4-g18f6a98\"  " -trimpath  -o ollama .
# github.com/ollama/ollama/llama
ggml-cpu.c: In function ‘ggml_vec_mad_f16’:
ggml-cpu.c:1667:45: warning: passing argument 1 of ‘__avx_f32cx8_load’ discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
 1667 |             ax[j] = GGML_F16_VEC_LOAD(x + i + j*GGML_F16_EPR, j);
      |                                             ^
ggml-cpu.c:802:51: note: in definition of macro ‘GGML_F32Cx8_LOAD’
  802 | #define GGML_F32Cx8_LOAD(x)     __avx_f32cx8_load(x)
      |                                                   ^
ggml-cpu.c:1667:21: note: in expansion of macro ‘GGML_F16_VEC_LOAD’
 1667 |             ax[j] = GGML_F16_VEC_LOAD(x + i + j*GGML_F16_EPR, j);
      |                     ^~~~~~~~~~~~~~~~~
ggml-cpu.c:785:53: note: expected ‘ggml_fp16_t *’ {aka ‘short unsigned int *’} but argument is of type ‘const ggml_fp16_t *’ {aka ‘const short unsigned int *’}
  785 | static inline __m256 __avx_f32cx8_load(ggml_fp16_t *x) {
      |                                        ~~~~~~~~~~~~~^
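
Since the make output above only builds the cpu_avx and cpu_avx2 runners, one thing worth ruling out (assuming the build locates CUDA through nvcc) is whether the compiler is reachable at all:

/usr/local/cuda-12.4/bin/nvcc --version   # should report release 12.4
which nvcc   # if empty, add /usr/local/cuda-12.4/bin to PATH and re-run make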


@CultVonnegut commented on GitHub (Dec 12, 2024):

I have just realized that the VM didn't have the license configured. I have applied the NVIDIA license and now everything seems to work even faster than before; prompts are answered faster than without the license configured, and there is no loss of performance when changing LLMs or after running for a while.
@minakami443: I can see in the nvidia-smi -q output in your second post that you don't have the license either:

vGPU Software Licensed Product
    Product Name                      : NVIDIA RTX Virtual Workstation
    License Status                    : Unlicensed (Restricted)

Try applying the license and check again with the latest version.
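
A quick way to check the current state (the grep pattern matches the nvidia-smi -q section quoted above):

nvidia-smi -q | grep -A 2 'vGPU Software Licensed Product'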


@CultVonnegut commented on GitHub (Dec 13, 2024):

I can confirm that since I configured the license I haven't had any problem whatsoever.


@rick-github commented on GitHub (Dec 13, 2024):

Thanks @bodypheo. For the sake of documentation, what's the process for setting the nvidia license information?


@CultVonnegut commented on GitHub (Dec 13, 2024):

> Thanks @bodypheo. For the sake of documentation, what's the process for setting the nvidia license information?

We have an on-premises NVIDIA DLS server that is registered in the NVIDIA cloud. Through the DLS server, a client token file is generated, which must be uploaded to the client folder /etc/nvidia/ClientConfigToken.
The file is named dls_instance_token_mm-dd-yyyy-hh-mm-ss.tok.
Then, after restarting the nvidia-gridd service, the license should be recognized:
(screenshot: https://github.com/user-attachments/assets/4d8b6c8d-b7db-4aee-a5ee-fe02fa2a7a33)
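
A minimal sketch of the client-side steps (the token file name is an example):

# Install the client configuration token generated by the DLS server,
# then restart the licensing daemon and confirm the state has changed.
sudo cp dls_instance_token_*.tok /etc/nvidia/ClientConfigToken/
sudo systemctl restart nvidia-gridd
nvidia-smi -q | grep 'License Status'   # should no longer read "Unlicensed (Restricted)"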

I don't really know whether the NVIDIA cloud service alone would be enough; I should check.

Sources:
https://docs.nvidia.com/license-system/latest/nvidia-license-system-user-guide/index.html
https://docs.nvidia.com/license-system/latest/nvidia-license-system-user-guide/index.html#generating-client-configuration-token
https://docs.nvidia.com/license-system/latest/nvidia-license-system-quick-start-guide/index.html#configuring-nls-licensed-client-on-linux


@davro commented on GitHub (May 14, 2025):

When Ollama models start to run slowly, or Ollama falls back to CPU mode (especially after Ubuntu is suspended and woken),
I use this bash script to fix things up.

#!/usr/bin/env bash
# Restart Ollama and reload the NVIDIA UVM kernel module, but only
# when nothing is currently connected to the Ollama API.

# Check for active connections to Ollama (port 11434)
ACTIVE_CONNS=$(ss -Htn sport = :11434 | wc -l)

if [ "$ACTIVE_CONNS" -eq 0 ]; then
    echo "Stopping Ollama..."
    sudo systemctl stop ollama

    echo "Reloading NVIDIA UVM..."
    sudo rmmod nvidia_uvm
    sleep 1
    sudo modprobe nvidia_uvm

    echo "Starting Ollama..."
    sudo systemctl start ollama
else
    echo "Ollama is currently in use — skipping restart."
fi
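
Since the slowdown tends to follow suspend/resume, one way to run this automatically is a systemd system-sleep hook (the path and script name below are examples; systemd calls these scripts with "pre" or "post" as the first argument):

#!/usr/bin/env bash
# /usr/lib/systemd/system-sleep/ollama-uvm-reset
if [ "$1" = "post" ]; then
    /usr/local/bin/ollama-uvm-reset.sh   # the script above, installed on PATH; name is an example
fi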