[GH-ISSUE #12769] OCT 24 2025 | ollama version is 0.12.6 | Not running | Windows 10 #12766 | Reference to this case: #70529

Closed
opened 2026-05-04 21:52:15 -05:00 by GiteaMirror · 7 comments

Originally created by @Dayal-star on GitHub (Oct 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12769

What is the issue?

Hi Rick,

I closed the previous case by mistake and created this new one referencing it. I am attaching the log files.

Thank you,
Best regards,
Dayal

[app.log](https://github.com/user-attachments/files/23118609/app.log)
[server.log](https://github.com/user-attachments/files/23118608/server.log)

Relevant log output


OS

Windows

GPU

No response

CPU

Intel

Ollama version

ollama version is 0.12.6

GiteaMirror added the bug and needs more info labels 2026-05-04 21:52:15 -05:00

@rick-github commented on GitHub (Oct 24, 2025):

This might be #12699. From the logs it looks like you don't have a GPU, currently have no models, and you downloaded gemma3:1b-it-q4_K_M. I'm assuming that was `ollama run gemma3:1b-it-q4_K_M` and ollama got stuck at the spinner. If that's the case, you can install an [older version](https://github.com/ollama/ollama/releases/tag/v0.12.3) of ollama or wait for the next release, which should fix this bug.
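For anyone trying to reproduce this, a minimal sketch of the presumed repro on 0.12.6, assuming (as inferred from the logs) a CPU-only machine with no models installed:

```shell
REM Presumed repro of the spinner hang on 0.12.6 (sketch; CPU-only, empty model store).
ollama run gemma3:1b-it-q4_K_M
REM Expected: the model downloads, then an interactive prompt appears.
REM Reported: the loading spinner never completes (tracked as #12699).
```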


@Chikkis commented on GitHub (Oct 24, 2025):

Current version: 0.12.6
Getting error: "Error: 500 Internal Server Error: llama runner process has terminated: exit status 2" (see the log-location sketch after this list)

Steps followed

  1. Downloaded tinyllama --- this worked. Downloaded nomic-embed-text:latest and got the error "Error: 400 Bad Request: "nomic-embed-text:latest" does not support generate"; any other model also failed.
  2. Downgraded to the older version 0.12.3.
  3. Did NOT uninstall 0.12.6. Installed OLDER version 0.12.3 >>> ISSUE FIXED <<<<<
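When reporting crashes like the "exit status 2" above, the server log is the most useful artifact. A sketch of where to find it, assuming the default Windows install locations described in Ollama's troubleshooting docs:

```shell
REM Ollama's logs on a default Windows install (per the troubleshooting docs).
explorer %LOCALAPPDATA%\Ollama
REM server.log and app.log in this folder are the files attached elsewhere in this thread.
```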

@rick-github commented on GitHub (Oct 24, 2025):

"Error: 500 Internal Server Error: llama runner process has terminated: exit status 2"

This is not #12699, something else failed.

downloades -nomic-embed-text:latest got error "Error: 400 Bad Request: "nomic-embed-text:latest" does not support generate"

Embedding models can't be started with ollama run, again this is not #12699.

Installed OLDER version 0.12.3 >>> ISSUE FIXED <<<<<

This sounds like #12699. But rolling back won't fix the other two issues.
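As a follow-up to the point about embedding models above: they are queried through the embeddings endpoint rather than generate. A minimal sketch against a local server, using the model named in the error:

```shell
REM Embedding models are queried via /api/embed, not `ollama run` or /api/generate (sketch).
curl -sS -H "Content-Type: application/json" -d "{\"model\":\"nomic-embed-text\",\"input\":\"ping\"}" http://127.0.0.1:11434/api/embed
REM The response carries an "embeddings" array of floats instead of generated text.
```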


@mrousse83 commented on GitHub (Oct 29, 2025):

Hello,

Same problem for me.

0.12.3 works, but 0.12.4 through 0.12.6 do not.

🖥️ Environment

| Component | Specification |
|-----------|---------------|
| OS | Windows 10 Enterprise 22H2 (x64) |
| CPU | Intel Core i3-7350K @ 4.20 GHz |
| RAM | 32 GB |
| GPU | NVIDIA GeForce RTX 2080 (8 GB VRAM) |
| CUDA Driver | 13.0 |
| Ollama Versions Tested | 0.12.3 to 0.12.6 |
| Install Path | `C:\Users\<username>\AppData\Local\Programs\Ollama\ollama.exe` |

/api/version is working
/api/pull is working
/api/generate => freezes the server (CLI + GUI tested, same problem; see the bounded-probe sketch below)
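When probing an endpoint that may hang like this, bounding the request keeps the terminal usable. A sketch using curl's timeout flag (the 30-second limit is an arbitrary choice):

```shell
REM Probe /api/generate without letting the terminal hang forever (sketch).
REM --max-time 30 aborts the request after 30 seconds; the limit is arbitrary.
curl -sS --max-time 30 -H "Content-Type: application/json" -d "{\"model\":\"tinyllama\",\"prompt\":\"ping\",\"stream\":false}" http://127.0.0.1:11434/api/generate
REM curl exits with code 28 when the request times out instead of completing.
```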

[server.log](https://github.com/user-attachments/files/23209172/server.log)

[app.log](https://github.com/user-attachments/files/23209184/app.log)

Thanks,
Mathieu


@rick-github commented on GitHub (Oct 29, 2025):

Try [0.12.7-rc0](https://github.com/ollama/ollama/releases/tag/v0.12.7-rc0).


@mrousse83 commented on GitHub (Oct 29, 2025):

Hello,

I have just tested 0.12.7-rc0.
Same problem.

[app.log](https://github.com/user-attachments/files/23209562/app.log)

[server.log](https://github.com/user-attachments/files/23209565/server.log)

Installed from scratch.
I use the CLI to run Ollama: `ollama serve`
In another CLI, I run: `ollama --version`
It's OK.
I use: `curl -sS http://127.0.0.1:11434/api/version`
It's OK.
I use: `ollama pull tinyllama`
It's OK.
I use: `curl -sS -H "Content-Type: application/json" -d "{\"model\":\"tinyllama\",\"prompt\":\"ping\",\"stream\":false}" http://127.0.0.1:11434/api/generate`
It freezes (see the quoting note below).
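One note on the command above, since Windows cmd quoting of inline JSON is easy to get wrong: the inner quotes must be backslash-escaped as shown, and reading the body from a file sidesteps the issue entirely. A sketch, where payload.json is a hypothetical file holding the same JSON:

```shell
REM Alternative to inline escaping: pass the JSON body from a file (sketch).
REM payload.json is a hypothetical file containing:
REM   {"model":"tinyllama","prompt":"ping","stream":false}
curl -sS -H "Content-Type: application/json" -d @payload.json http://127.0.0.1:11434/api/generate
```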

With Ollama GUI, same problem, stuck on the loading dots indefinitely.

Thanks,
Mathieu


@dhiltgen commented on GitHub (Nov 5, 2025):

The hang on Windows should be fixed in 0.12.10. If you have any additional problems after updating to that release, let us know, share an updated server log, and I'll reopen.
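After updating, the running server's version can be confirmed the same way /api/version was checked earlier in the thread (a sketch; the JSON shape is the standard version response):

```shell
REM Confirm the running server picked up the new release (sketch).
curl -sS http://127.0.0.1:11434/api/version
REM Expect a JSON body like {"version":"0.12.10"} once the update is applied.
```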


Reference: github-starred/ollama#70529