[GH-ISSUE #3100] C4AI Command #63942

Closed
opened 2026-05-03 15:31:43 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @AdaptiveStep on GitHub (Mar 13, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3100

Please add the c4ai-command model.
It's really good at translation and can handle 100 languages.

https://huggingface.co/CohereForAI/c4ai-command-r-v01

GiteaMirror added the model label 2026-05-03 15:31:43 -05:00

@salah55s commented on GitHub (Mar 13, 2024):

Please also support the quantized versions, since the full model is 50+ GB.


@simsi-andy commented on GitHub (Mar 16, 2024):

The GGUF conversion has already been created, and support for the model has been merged into llama.cpp. How can it be used here?

https://huggingface.co/andrewcanis/c4ai-command-r-v01-GGUF

https://github.com/ggerganov/llama.cpp/pull/6033
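For reference, Ollama's usual route for a local GGUF like the one linked above is a Modelfile whose `FROM` line points at the file. A minimal sketch — the file name and Q4_K_M quantization are hypothetical examples, and the `ollama` commands are shown as comments since they require a local Ollama install:

```shell
# Write a minimal Modelfile pointing at a locally downloaded GGUF.
# (File name and quantization level are illustrative, not exact.)
cat > Modelfile <<'EOF'
FROM ./c4ai-command-r-v01-Q4_K_M.gguf
EOF

# With Ollama installed, the model is then imported and run with:
#   ollama create command-r-local -f Modelfile
#   ollama run command-r-local "Translate 'good morning' into French."
```

A Modelfile can also set a chat template and parameters, but a bare `FROM` is enough to import the weights.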


@Noeda commented on GitHub (Mar 16, 2024):

Hey folks. I believe there is a slight issue with tokenization for Command-R on llama.cpp (just opened https://github.com/ggerganov/llama.cpp/issues/6104). I don't think it impacts output quality in a material way, but if there are people here invested in the Command-R model, you may want to subscribe to that issue's notifications. I plan to investigate it in more detail some time next week.

The tokenization divergence between llama.cpp and the Command-R Hugging Face implementation sometimes slightly re-orders the top logits, depending on how much the tokenization diverges for a given prompt, which can affect what the model outputs. But I haven't empirically seen any degradation in quality.
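To make the divergence concrete, here is a toy illustration (not the actual Command-R tokenizer; the greedy matcher and both vocabularies are made up): the same text can segment into different token sequences under two slightly different vocabularies, which is the kind of mismatch that can shuffle downstream logits even when the decoded text is identical:

```python
def tokenize(text, vocab):
    """Greedy longest-match segmentation over a toy vocabulary."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest match first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown: fall back to one character
            i += 1
    return tokens

# Two made-up vocabularies that both cover the same text.
vocab_a = {"hel", "lo", " wor", "ld"}
vocab_b = {"hell", "o", " ", "world"}

print(tokenize("hello world", vocab_a))  # ['hel', 'lo', ' wor', 'ld']
print(tokenize("hello world", vocab_b))  # ['hell', 'o', ' ', 'world']
```

Both segmentations decode back to the same string, but a model sees different token IDs, so its next-token distribution can differ between the two implementations.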


@mchiang0610 commented on GitHub (Apr 15, 2024):

Hi! Really sorry for the slow reply to this issue. We've added the Command-R model from Cohere to the model library.

https://ollama.com/library/command-r

We're working on the Command R Plus model right now, and it is available in the pre-release build of Ollama.

Please let us know if you run into any problems! Again, sorry for the slow reply.


Reference: github-starred/ollama#63942