[GH-ISSUE #4929] Never-ending loading whether using the OpenAI API or Ollama Python #65153

Closed
opened 2026-05-03 19:53:20 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @Wannabeasmartguy on GitHub (Jun 8, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4929

What is the issue?

Hi, I'm having a problem:
Whether I'm using the OpenAI API or the Ollama Python library, model inference gets stuck in a never-ending loading state.
![image](https://github.com/ollama/ollama/assets/107250451/4b97cb64-e8ed-4c71-a1b0-b3d2d04c397c)

When I check the logs, I see no record of any POST request.
![image](https://github.com/ollama/ollama/assets/107250451/07f13b43-98be-4a08-be89-0a86b0bdb2d5)

I tried reinstalling and rebooting, but that didn't fix it.

What puzzles me most is that on my other laptop, with a similar working environment (Windows), both run fine.

I didn't find a similar problem in the existing issues, so any suggestions on how to solve this would be greatly appreciated.
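One way to tell a true server hang apart from a client that waits forever is to send the request with an explicit timeout. The sketch below uses only the standard library; the endpoint `http://localhost:11434/api/generate` is Ollama's default, but the model name `llama3` is an assumption — substitute whatever model you have pulled.

```python
import json
import urllib.request

def probe_ollama(base_url="http://localhost:11434", model="llama3", timeout=10):
    """Send one minimal generate request; fail fast instead of loading forever."""
    payload = json.dumps({"model": model, "prompt": "hi", "stream": False}).encode()
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        # The timeout makes an unresponsive server raise instead of hanging.
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except OSError as e:  # covers URLError, ConnectionRefusedError, timeouts
        return f"request failed: {e}"
```

If this returns `request failed: ...` immediately, the client cannot reach the server at all (and no POST will appear in the logs); if it times out, the server accepted the connection but never answered.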

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.1.42

GiteaMirror added the bug label 2026-05-03 19:53:20 -05:00
Author
Owner

@mkesper commented on GitHub (Jun 10, 2024):

Try using the recommended prompt template for your model. I've had the same result (nothing happens) with llama3-based models when not explicitly using all the expected markers, like <|start_text|> etc.
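For illustration, building a prompt with the full set of markers might look like the sketch below. The marker names follow Meta's published Llama 3 chat template and are an assumption here; the commenter wrote `<|start_text|>`, so verify the exact tags for your model with `ollama show --modelfile <model>`.

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a Llama-3-style prompt with all expected special markers.

    Marker names taken from Meta's Llama 3 template; check your model's
    Modelfile, since a mismatched template can make generation stall.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

Note that when you call Ollama through its chat endpoints, the server normally applies the model's template for you; hand-built prompts like this matter mainly when using raw/completion-style requests.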


Reference: github-starred/ollama#65153