[GH-ISSUE #1432] StableLM-Zephyr incompatible with Ollama version #763

Closed
opened 2026-04-12 10:26:48 -05:00 by GiteaMirror · 7 comments

Originally created by @horiacristescu on GitHub (Dec 8, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1432

When I run: `ollama run stablelm-zephyr:3b-q6_K`

The result is:

```
Error: llama runner: failed to load model '/home/horia/.ollama/models/blobs/sha256:6d9189f9d9e9c7763daeb08052a07e3a7ed42db66296f1972098fd7f945529b8': this model may be incompatible with your version of Ollama. If you previously pulled this model, try updating it by running `ollama pull stablelm-zephyr:3b-q6_K`
```

I reinstalled Ollama fresh and tried deleting and redownloading the model, as well as a different quant. My system is Ubuntu 20.04 with CUDA 11.7. Other models work.

BTW, is there a place to give model-related feedback? It would be great to have a tab for it on the models page on ollama.ai.


@technovangelist commented on GitHub (Dec 8, 2023):

How did you install ollama?


@pdevine commented on GitHub (Dec 8, 2023):

Thanks for reporting this, @horiacristescu. What type of GPU are you using? I'm assuming if you reinstalled you got the newest version? Do the other quantized versions work?

I just pulled `stablelm-zephyr:3b-q6_K` and it seems to be working.


@igorschlum commented on GitHub (Dec 10, 2023):

Works on a Mac with 32GB running 0.1.14.


@Pulkit077 commented on GitHub (Dec 12, 2023):

Can I use `stablelm-zephyr` in Ollama from `langchain.llms`?


@technovangelist commented on GitHub (Jan 3, 2024):

Hi @Pulkit077, I don't think LangChain places any restrictions on which models can be used with Ollama, so yes, you should be good there.


@technovangelist commented on GitHub (Jan 3, 2024):

@horiacristescu I think you may have had an older version of Ollama installed. As @igorschlum noted, it seemed to be working on 0.1.14, and we are on 0.1.17 today, so I think we can probably close this. Can you let us know if you are still experiencing this issue? Otherwise, I'll close it in the next couple of days. Thanks so much for being a great part of this community.


@jmorganca commented on GitHub (Feb 20, 2024):

Yes, please upgrade to the latest version of Ollama, and let me know if this is still happening 😊
