[GH-ISSUE #564] Will ollama support Deci/DeciLM-6b-instruct series models in the future? #46767

Closed
opened 2026-04-27 23:56:23 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @ParseDark on GitHub (Sep 21, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/564

What is my purpose?

To get faster responses and reduce GPU cost.

Model detail: https://deci.ai/blog/decilm-15-times-faster-than-llama2-nas-generated-llm-with-variable-gqa/

live demo: https://huggingface.co/spaces/Deci/DeciLM-6b-instruct

The Deci model builds on the Llama architecture, so I think ollama could support it and provide a better experience.
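For reference (not part of the original request), here is a minimal sketch of how DeciLM-6b-instruct is loaded with Hugging Face transformers today; the dtype and generation settings are assumptions. The `trust_remote_code=True` flag is required because the variable-GQA layers ship as custom model code rather than a stock Llama implementation, which is also why the checkpoint is not a drop-in Llama model for ollama/llama.cpp:

```python
# Sketch only: loading DeciLM-6b-instruct via transformers, outside ollama.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Deci/DeciLM-6b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # assumption: bf16 to keep GPU memory down
    trust_remote_code=True,       # needed for the custom variable-GQA architecture
)

prompt = "Explain what variable grouped-query attention is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```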

Author
Owner

@maxkrieger commented on GitHub (Dec 25, 2023):

This was never added, right? Any plans to?


Reference: github-starred/ollama#46767