[GH-ISSUE #1557] Increasing slow response - CPU only on Linux Azure #62888

Closed
opened 2026-05-03 10:38:32 -05:00 by GiteaMirror · 11 comments

Originally created by @benmarinic on GitHub (Dec 16, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1557

Originally assigned to: @BruceMacD on GitHub.

I'm using the following VM in Azure:
Standard D8s v3 (8 vCPUs, 32 GiB RAM)

I have tried Mistral 7B and Orca-mini, including 4-bit versions.

Ollama is responding increasingly slowly. After the 4th simple query ("hi" or "what's the capital of ...") I'm waiting in excess of 60 seconds for it to begin responding, and the delay grows with each repeated simple question. Once it starts responding, tokens stream in reasonably well. I've tried both Ubuntu and SUSE.

Is the VM just not suitable? I'm trying to see how far I can get without a GPU.
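
To put numbers on the symptom, here is a minimal sketch that sends the same short prompt several times through ollama's HTTP API and records the time to first streamed token. It assumes a default install listening on localhost:11434 with the `mistral` model already pulled (both are assumptions); if the slowdown is CLI-specific, these API timings should stay flat while `ollama run` degrades.

```python
import json
import time
import urllib.request

# Assumptions: default ollama install on localhost:11434, `mistral` pulled.
URL = "http://localhost:11434/api/generate"

def time_to_first_token(prompt):
    """Return seconds from sending the request until the first streamed token."""
    body = json.dumps({"model": "mistral", "prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(URL, data=body,
                                 headers={"Content-Type": "application/json"})
    start = time.monotonic()
    with urllib.request.urlopen(req) as resp:
        # /api/generate streams newline-delimited JSON; the first line
        # arrives with the first generated token.
        resp.readline()
    return time.monotonic() - start

for i in range(1, 6):
    print(f"query {i}: {time_to_first_token('hi'):.2f}s to first token")
```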

GiteaMirror added the bug label 2026-05-03 10:38:33 -05:00

@cwegener commented on GitHub (Dec 16, 2023):

> Is the VM just not suitable? I'm trying to see how far I can get without a GPU.

I don't think it has anything to do with the VM.

I noticed the same problem with the latest 0.1.16 release as well.

Also, this seems to be a duplicate of #1556


@anujva commented on GitHub (Dec 16, 2023):

Yep, I am facing the same issue with the 0.1.16 release. I will see if I can downgrade and get better inference times.


@benmarinic commented on GitHub (Dec 16, 2023):

> > Is the VM just not suitable? I'm trying to see how far I can get without a GPU.
>
> I don't think it has anything to do with the VM.
>
> I noticed the same problem with the latest 0.1.16 release as well.
>
> Also, this seems to be a duplicate of #1556

Yes, it looks like the same issue. I should have said I was doing this in the command prompt; I have now tried API requests on the same instance, and these respond much better and without increasing slowness.


@phalexo commented on GitHub (Dec 17, 2023):

Using it directly with llama.cpp does NOT appear to suffer from the same latency issue.

The problem appears to be ollama specific.


@benmarinic commented on GitHub (Dec 17, 2023):

> Using it directly with llama.cpp does NOT appear to suffer from the same latency issue.
>
> The problem appears to be ollama specific.

Have you compared the speed of the ollama "generate" API endpoint against llama.cpp, by any chance? It isn't suffering from increasing latency, but I'm interested to know whether it's still slower than llama.cpp.
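
One way to make that comparison concrete: the final object of an /api/generate response carries timing counters (eval_count and eval_duration, the latter in nanoseconds), which give a tokens-per-second figure directly comparable to the timings llama.cpp prints after each run. A sketch under the same local-setup assumptions as above:

```python
import json
import urllib.request

# Same assumptions as above: ollama on localhost:11434, `mistral` pulled.
URL = "http://localhost:11434/api/generate"

body = json.dumps({
    "model": "mistral",
    "prompt": "What is the capital of France?",
    "stream": False,  # return one JSON object instead of a stream
}).encode("utf-8")
req = urllib.request.Request(URL, data=body,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# eval_count is generated tokens; eval_duration is in nanoseconds.
tokens = result["eval_count"]
seconds = result["eval_duration"] / 1e9
print(f"{tokens} tokens in {seconds:.2f}s -> {tokens / seconds:.1f} tokens/s")
```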


@BruceMacD commented on GitHub (Dec 19, 2023):

Over how much time is it getting slower? It sounds like the growing conversation history is increasing the time to generate, which I partially expect, but it shouldn't cause major degradation.
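
For reference, the mechanism Bruce describes can be reproduced through the API: a bare /api/generate call is stateless, while feeding the returned context field back in (roughly what the interactive CLI does to keep a conversation going) makes each turn's effective prompt longer, so prompt evaluation cost grows. A sketch, again assuming the same local `mistral` setup; the context field was how /api/generate carried conversation state at the time:

```python
import json
import urllib.request

URL = "http://localhost:11434/api/generate"  # assumed default local endpoint

def generate(prompt, context=None):
    """One non-streaming generate call; pass `context` to carry history forward."""
    payload = {"model": "mistral", "prompt": prompt, "stream": False}
    if context is not None:
        payload["context"] = context  # token ids returned by the previous call
    req = urllib.request.Request(URL, data=json.dumps(payload).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Stateless call: starts fresh every time, so latency should stay flat.
print("stateless prompt_eval_count:", generate("hi").get("prompt_eval_count"))

# Stateful calls: the carried context grows the prompt on every turn,
# which is where a per-turn slowdown would come from.
first = generate("hi")
second = generate("what's the capital of France?", context=first["context"])
print("turn 2 prompt_eval_count:", second.get("prompt_eval_count"))
```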


@AndreiSva commented on GitHub (Dec 20, 2023):

I am also facing this issue.


@dhiltgen commented on GitHub (Mar 12, 2024):

Are folks still seeing this behavior on the latest release?


@AndreiSva commented on GitHub (Mar 13, 2024):

It's been fixed for me!


@dhiltgen commented on GitHub (Mar 13, 2024):

That's great!

I'll go ahead and close this then. @benmarinic if you're still having troubles, please let us know and I'll re-open the issue.


@lewismunene020 commented on GitHub (Apr 15, 2024):

Still having the same issue. I tried it out today and the latest Docker image seems to have the same issue on my 16 GB, 8-core machine. Please check what might be the issue.
