[GH-ISSUE #13547] REGRESSION: NVIDIA-Nemotron-Nano-9B-v2 not working. #55436

Open
opened 2026-04-29 09:11:50 -05:00 by GiteaMirror · 24 comments
Owner

Originally created by @mirage335 on GitHub (Dec 23, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13547

This is an EXTREMELY USEFUL model. Please, please get this working again!

What is the issue?

This model used to work normally. Now there is an error running the model.

ollama run mirage335/NVIDIA-Nemotron-Nano-9B-v2-virtuoso --verbose Please tell me a very short story.

Relevant log output

ollama run mirage335/NVIDIA-Nemotron-Nano-9B-v2-virtuoso --verbose Please tell me a very short story.
pulling manifest
pulling 95ee1b5339df: 100% ▕██████████████████████████████████████████████████████████▏ 9.1 GB
pulling 28826775b49e: 100% ▕██████████████████████████████████████████████████████████▏  128 B
pulling 9d70f331a9e3: 100% ▕██████████████████████████████████████████████████████████▏    6 B
pulling 09dbb5e0f24a: 100% ▕██████████████████████████████████████████████████████████▏  20 KB
pulling 42536301739d: 100% ▕██████████████████████████████████████████████████████████▏  216 B
pulling 9461827b6792: 100% ▕██████████████████████████████████████████████████████████▏  569 B
verifying sha256 digest
writing manifest
success
Error: 500 Internal Server Error: llama runner process has terminated: exit status 2

OS

Windows, Docker, WSL2

GPU

Nvidia

CPU

Intel

Ollama version

0.13.5

GiteaMirror added the bug label 2026-04-29 09:11:50 -05:00

@mirage335 commented on GitHub (Dec 23, 2025):

Very strangely:

A reboot did not solve this issue.

LM Studio is now having the same issue.

ONLY this model is having this issue. About a dozen other models work fine with ollama, and a few also work with LM Studio.

I am wondering if an MS Windows update could have broken some subtle feature of the NVIDIA drivers or something.

EDIT: Updating the NVIDIA drivers did not do any good.


@rick-github commented on GitHub (Dec 23, 2025):

This got broken by 7e3ea813c1d8a9714c6927f75656d5ff6eaf5acc, which merged llama.cpp support for nemotron-nano. There's an open but seemingly unrelated bug for nemotron-nano: https://github.com/ggml-org/llama.cpp/issues/18099


@mirage335 commented on GitHub (Dec 23, 2025):

To clarify: this is a Nemotron Nano model, and this did work before regressing to a broken state. Maybe support for a different model broke this one; just making sure it's clear that whatever broke this was not necessary for it to work.

Can this bug be fixed? Like I said, this is an extremely useful model, capable of agentic and command-line automation work, from a model small enough to run on CPUs and very small GPUs...


@rick-github commented on GitHub (Dec 24, 2025):

It's not a different model. Presumably it will be fixed, either in a vendor sync or with a patch to the llama.cpp code. In the meantime, use 0.13.3 to run the model.


@mirage335 commented on GitHub (Dec 24, 2025):

@rick-github

Apparently there is a fix upstream already.

https://github.com/ggml-org/llama.cpp/issues/18344
https://github.com/ggml-org/llama.cpp/issues/18304
https://github.com/ggml-org/llama.cpp/pull/18309

Please, can we get this merged into upstream ollama?


@rick-github commented on GitHub (Dec 24, 2025):

Confirmed that patching https://github.com/ggml-org/llama.cpp/pull/18309 into ollama head results in a successful model load.


@quidscio commented on GitHub (Jan 2, 2026):

Issue: Nemotron v2 MoE parameter handling fixed upstream in llama.cpp

Fix confirmed. Convenience PR also provided:
#13607: https://github.com/ollama/ollama/pull/13607

Summary

An upstream fix in llama.cpp resolves incorrect parameter handling for Nemotron v2 MoE models, which currently affects Ollama builds that have not yet synced.

Upstream PR:
https://github.com/ggml-org/llama.cpp/pull/18309

This issue provides:

  • A minimal reproduction of the failure before the fix
  • Proof of correct behavior after the fix using an Ollama build with the updated llama.cpp
  • Confirmation that other models remain unaffected

Environment

OS: Windows 10 x64
Ollama version: fix/nemotron-v2-moe-params
llama.cpp: includes ggml-org/llama.cpp#18309 

Before (current Ollama main)

Model: mirage335/NVIDIA-Nemotron-Nano-9B-v2-virtuoso
This model is an unmodified version of NVIDIA's: https://ollama.com/mirage335/NVIDIA-Nemotron-Nano-9B-v2-virtuoso

Symptom: Incorrect behavior due to MoE parameter mismatch

> ollama -v
ollama version is 0.13.5

> ollama run mirage335/NVIDIA-Nemotron-Nano-9B-v2-virtuoso --verbose Give me three words starting with the letter F
Error: 500 Internal Server Error: llama runner process has terminated: exit status 2

Observed behavior:

  • Model fails to load with a 500 error, exit status 2
  • This matches the failure mode described in #13547

(Full logs available if needed)


After (with llama.cpp PR 18309)

Same prompt, same model, same runtime flags.

> ollama --version
ollama version is fix/nemotron-v2-moe-params

> ollama run mirage335/NVIDIA-Nemotron-Nano-9B-v2-virtuoso --verbose Give me three words starting with the letter F

Result (correct):

1. Flower
2. Fire
3. Fish

Performance and token accounting appear normal:

eval rate: 34.92 tokens/s
no warnings or parameter mismatches

Regression Check (other models)

Verified unaffected models continue to behave correctly with the same build:

Model              Status
gpt-oss:20b        OK
deepseek-r1:14b    OK
devstral:latest    OK

Example:

ollama run gpt-oss:20b --verbose Give me three words starting with the letter F
→ Frost, Falcon, Fortune

No behavioral or performance regressions observed.


Why this should be pulled now

  • Fix is isolated and targeted to MoE parameter handling
  • No observable impact to non-MoE models
  • Unblocks Nemotron v2 usage today without downstream workarounds
  • This is a low-risk early pull. See llama.cpp author comments.

Request

Please pull or cherry-pick:
https://github.com/ggml-org/llama.cpp/pull/18309

Alternatively, accept the PR which contains these changes:
#13607: https://github.com/ollama/ollama/pull/13607

Happy to provide additional logs, diff testing, or run validation on other Nemotron variants if helpful.


@mirage335 commented on GitHub (Jan 5, 2026):

@rick-github Confirmed thoroughly, with a convenient pull request. Can we please get this merged? Keeping capable and widely compatible LLM models working consistently is very important.


@mirage335 commented on GitHub (Jan 10, 2026):

@rick-github We confirmed it, can we please get this merged? This Nano model is very useful on widely available ~16GB VRAM/RAM systems.


@purificant commented on GitHub (Jan 17, 2026):

Still an issue on ollama 0.14.2 with nemotron-3-nano:latest


@quidscio commented on GitHub (Jan 17, 2026):

Confirmed. This issue remains with ollama 0.14.2

> ollama --version
ollama version is 0.14.2

> ollama run mirage335/NVIDIA-Nemotron-Nano-9B-v2-virtuoso --verbose Give me three words starting with the letter F
Error: 500 Internal Server Error: llama runner process has terminated: exit status 2

@rick-github , do you see any merge issues or additional testing needed? The PR is straightforward: it just moves the MoE calculations inside the MoE guard, per the llama.cpp dev.

(screenshot attached)
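For readers unfamiliar with the fix, the "moves MoE calcs to within the MoE guard" description can be illustrated with a minimal sketch. All names here (HParams, load_before_fix, load_after_fix) are hypothetical and not actual llama.cpp symbols; this only shows the shape of the change as described in this thread, under the assumption that dense checkpoints carry a zero expert count:

```cpp
// Hypothetical hyperparameters struct, loosely modeled on the kind of
// metadata a GGUF loader reads; names are illustrative only.
struct HParams {
    int n_expert      = 0;  // number of MoE experts (0 for dense models)
    int n_expert_used = 0;  // experts activated per token
};

// Before the fix (sketch): MoE-only validation ran unconditionally, so a
// dense checkpoint (n_expert == 0) was rejected at load time, surfacing
// as "llama runner process has terminated: exit status 2".
bool load_before_fix(const HParams& hp) {
    if (hp.n_expert_used <= 0 || hp.n_expert_used > hp.n_expert) {
        return false;  // load aborts even for dense models
    }
    return true;
}

// After the fix (sketch): the same validation sits inside an
// n_expert > 0 guard, so dense models skip the MoE checks entirely
// while invalid MoE configs are still rejected.
bool load_after_fix(const HParams& hp) {
    if (hp.n_expert > 0) {  // MoE guard
        if (hp.n_expert_used <= 0 || hp.n_expert_used > hp.n_expert) {
            return false;
        }
    }
    return true;
}
```

A dense model (both counts zero) fails the first version but loads under the second, while a malformed MoE config (experts declared but none used) is still rejected by both.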

@quidscio commented on GitHub (Jan 22, 2026):

Confirming this issue persists with ollama 0.14.3.

> ollama pull mirage335/NVIDIA-Nemotron-Nano-9B-v2-virtuoso:latest
pulling manifest
pulling 95ee1b5339df: 100% ▕██████████████████████████████████████████████████████████▏ 9.1 GB
pulling 26df535d3a92: 100% ▕██████████████████████████████████████████████████████████▏  133 B
pulling 9d70f331a9e3: 100% ▕██████████████████████████████████████████████████████████▏    6 B
pulling f77bdb07874e: 100% ▕██████████████████████████████████████████████████████████▏  11 KB
pulling 42536301739d: 100% ▕██████████████████████████████████████████████████████████▏  216 B
pulling bde3a4e8ec43: 100% ▕██████████████████████████████████████████████████████████▏  569 B
verifying sha256 digest
writing manifest
removing unused layers
success

> ollama run mirage335/NVIDIA-Nemotron-Nano-9B-v2-virtuoso:latest
⠦ Error: 500 Internal Server Error: llama runner process has terminated: exit status 2

> ollama --version
ollama version is 0.14.3

There's a really easy PR available to address this :-)


@purificant commented on GitHub (Jan 22, 2026):

$ ollama --version
ollama version is 0.14.3

$ ollama pull nemotron-3-nano:latest
pulling manifest 
pulling a70437c41b3b: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████▏  24 GB                         
pulling bca58c750377: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████▏  10 KB                         
pulling 12e88b2a8727: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████▏   28 B                         
pulling 12bee8c08a36: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████▏  488 B                         
verifying sha256 digest 
writing manifest 
success

$ ollama run nemotron-3-nano:latest
Error: 500 Internal Server Error: llama runner process has terminated: CUDA error: the resource allocation failed
  current device: 0, in function cublas_handle at //ml/backend/ggml/ggml/src/ggml-cuda/common.cuh:1260
  cublasCreate_v2(&cublas_handles[device])
//ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:94: CUDA error

@mirage335 commented on GitHub (Jan 23, 2026):

Can also confirm: still does not work with ollama version 0.14.3.

@rick-github
Can we please get the convenient pull request merged?

https://github.com/ollama/ollama/pull/13607


@purificant commented on GitHub (Jan 26, 2026):

Still an issue in 0.15.1:

$ ollama --version
ollama version is 0.15.1

$ ollama pull nemotron-3-nano:latest
pulling manifest 
pulling a70437c41b3b: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████▏  24 GB                         
pulling bca58c750377: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████▏  10 KB                         
pulling 12e88b2a8727: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████▏   28 B                         
pulling 12bee8c08a36: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████▏  488 B                         
verifying sha256 digest 
writing manifest 
success

$ ollama run nemotron-3-nano:latest
Error: 500 Internal Server Error: llama runner process has terminated: exit status 2


@quidscio commented on GitHub (Jan 26, 2026):

@purificant , if you are running Windows (don't think you are), try running the ollama version at:
https://github.com/quidscio/ollama/releases/tag/untagged-944f1cd8a9acc8fb286e

This ollama run works for me on a Windows RTX 4090 16GB system. Your issue may differ.

> ollama --version
ollama version is 0.15.1

> ollama run nemotron-3-nano:latest "Name three words starting with F"
Thinking...
The user asks: "Name three words starting with F". Simple request. Provide three words that start with 'F'. So
answer can be e.g., "Fox, Freedom, Flourish." Could also add some context but likely just list three words.

Make sure to comply.
...done thinking.

Sure! Here are three words that begin with **F**:

1. Fox
2. Freedom
3. Flourish

@quidscio commented on GitHub (Jan 26, 2026):

Confirming this MoE issue remains in v0.15.1.

> ollama --version
ollama version is 0.15.1

> ollama run mirage335/NVIDIA-Nemotron-Nano-9B-v2-virtuoso:latest "Name three words starting with F"
Error: 500 Internal Server Error: llama runner process has terminated: exit status 2

@rick-github , what further information can be provided so that PR https://github.com/ollama/ollama/pull/13607 can be included? The PR and related issue document the before-regression, broken, and after-fix states for the subject model and some others. And the PR duplicates the original developer's fix. Please let us know of any additional steps.


@jeremyanugrah commented on GitHub (Feb 6, 2026):

Are there any updates regarding a fix or workaround?


@quidscio commented on GitHub (Feb 7, 2026):

Confirming 0.15.5 fails

> ollama --version
ollama version is 0.15.5

> ollama run mirage335/NVIDIA-Nemotron-Nano-9B-v2-virtuoso:latest "Name three words starting with F"
Error: 500 Internal Server Error: llama runner process has terminated: exit status 2

Quick PR available at:
https://github.com/ollama/ollama/pull/13607


@ryanmon1 commented on GitHub (Feb 8, 2026):

Are there any updates regarding a fix or workaround?

Version 13.3 works


@quidscio commented on GitHub (Feb 8, 2026):

@ryanmon1 , yes, agreed that 0.13.3 works. Thanks for pointing that out. As demonstrated above, according to the Ollama developer, a regression introduced in 0.13.5 broke things.

There is a PR with a simple fix that duplicates exactly the regression fix he proved on the llama.cpp side. We just need to pick up that change, or the PR, and push it forward here.

Quick PR available at:
https://github.com/ollama/ollama/pull/13607


@quidscio commented on GitHub (Feb 13, 2026):

Confirming, 0.16.1 still fails.

> ollama --version
ollama version is 0.15.6

C:\Users\rmhin
> ollama run mirage335/NVIDIA-Nemotron-Nano-9B-v2-virtuoso:latest "Name three words starting with F"
Error: 500 Internal Server Error: llama runner process has terminated: exit status 2

C:\Users\rmhin
> ollama --version
ollama version is 0.16.1

> ollama run mirage335/NVIDIA-Nemotron-Nano-9B-v2-virtuoso:latest "Name three words starting with F"
Error: 500 Internal Server Error: llama runner process has terminated: exit status 2

Hope retry queued…


@purificant commented on GitHub (Mar 2, 2026):

ollama run nemotron-3-nano:latest now works on 0.17.5


@quidscio commented on GitHub (Mar 2, 2026):

Yay, mirage335/NVIDIA-Nemotron-Nano-9B-v2-virtuoso:latest also works!

> ollama --version
ollama version is 0.17.5

> ollama run mirage335/NVIDIA-Nemotron-Nano-9B-v2-virtuoso:latest "Name three words starting with F"

1. Flower
2. Fruit
3. Forest

>
Reference: github-starred/ollama#55436