[GH-ISSUE #8481] Model "Not Found" intermittently for certain models, reproducible. #31221

Closed
opened 2026-04-22 11:27:51 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @tarbard on GitHub (Jan 18, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8481

What is the issue?

I've noticed that some models give an intermittent model "not found" error: most requests fail, but then one will work. In this example the model is "vanilj/supernova-medius:q4_k_m"; trying with another model such as llama3.2 works fine every time.

curl to reproduce:

```sh
curl -X POST http://127.0.0.1:11434/api/generate -H 'Content-Type: application/json' -d '{"model": "vanilj/supernova-medius:q4_k_m", "prompt": "Hi", "stream": false, "options": {"temperature": 0.7}, "keep_alive": -1}'
```

result:

```json
{"error":"model 'vanilj/supernova-medius:q4_k_m' not found"}
```

I try, for example, 8 times and get the error each time, but then it will work (the curl command is exactly the same as above, but this time it succeeds):

```sh
curl -X POST http://127.0.0.1:11434/api/generate -H 'Content-Type: application/json' -d '{"model": "vanilj/supernova-medius:q4_k_m", "prompt": "Hi", "stream": false, "options": {"temperature": 0.7}, "keep_alive": -1}'
```

result:

```json
{"model":"vanilj/supernova-medius:q4_k_m","created_at":"2025-01-18T09:50:54.623448106Z","response":"Hello! How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything.","done":true,"done_reason":"stop","context":[151644,8948,198,2610,525,1207,16948,11,3465,553,54364,14817,13,1446,525,264,10950,17847,13,151645,198,151644,872,198,13048,151645,198,151644,77091,198,9707,0,2585,646,358,7789,498,3351,30,31733,1910,311,2548,752,894,4755,476,1077,752,1414,421,498,1184,1492,448,4113,13],"total_duration":481528116,"load_duration":28074635,"prompt_eval_count":30,"prompt_eval_duration":23000000,"eval_count":28,"eval_duration":429000000}
```

Trying exactly the same curl command with other models works fine every time. I'm on the latest version of Ollama, 0.5.7.
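
A small shell loop makes the failure rate easy to quantify; this is just a sketch, using the same endpoint and model name as above:

```sh
# Fire the same request 20 times and report which attempts hit the
# "not found" error; assumes the default Ollama endpoint on 127.0.0.1:11434.
for i in $(seq 1 20); do
  if curl -s http://127.0.0.1:11434/api/generate \
       -H 'Content-Type: application/json' \
       -d '{"model": "vanilj/supernova-medius:q4_k_m", "prompt": "Hi", "stream": false}' \
     | grep -q '"error"'; then
    echo "attempt $i: not found"
  else
    echo "attempt $i: ok"
  fi
done
```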

The model is in the ollama list:

```sh
$ ollama ls | grep vanilj/supernova-medius:q4_k_m
vanilj/supernova-medius:q4_k_m    b0e8d34a985a    9.0 GB    2 months ago
```
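
The server's own view can be checked over the API as well: /api/tags lists the locally available models. A quick sketch, assuming jq is installed:

```sh
# Ask the server which models it sees locally and filter for this one.
curl -s http://127.0.0.1:11434/api/tags | jq -r '.models[].name' | grep supernova-medius
```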

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-22 11:27:51 -05:00
Author
Owner

@tarbard commented on GitHub (Jan 18, 2025):

I re-pulled the model itself and now it seems fine, so I'm closing this.
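
For anyone hitting the same symptom, the re-pull is a single standard CLI command:

```sh
# Re-download the model; this resolved the intermittent "not found"
# error reported above, even though the model already appeared in `ollama ls`.
ollama pull vanilj/supernova-medius:q4_k_m
```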
