[GH-ISSUE #4703] Could you please support deepseek v2 ? #28723

Closed
opened 2026-04-22 07:14:33 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @netspym on GitHub (May 29, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4703

It is really a great model with excellent RAG support, and it is well suited to server CPU inference on machines with large memory, since it has only 21B active parameters. It looks like llama.cpp is working on it; could you please support it as well?

A lot of people are waiting for support for this model.

Many thanks,
Yuming

GiteaMirror added the model label 2026-04-22 07:14:33 -05:00
Author
Owner

@transcendence-x commented on GitHub (Jun 3, 2024):

Supported in the latest version.

<!-- gh-comment-id:2144518165 -->
Author
Owner

@jmorganca commented on GitHub (Jun 11, 2024):

https://ollama.com/library/deepseek-v2
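
With the model published in the library (per the link above), a minimal sketch of trying it locally with the standard Ollama CLI; the prompt text is illustrative:

```shell
# Pull the DeepSeek-V2 weights from the Ollama library
ollama pull deepseek-v2

# Run an interactive session, or pass a one-off prompt (prompt is illustrative)
ollama run deepseek-v2 "Summarize the benefits of MoE models in one sentence."
```

Note this requires a recent Ollama release (per the comment above, DeepSeek-V2 support landed around June 2024) and enough RAM to hold the quantized weights.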

<!-- gh-comment-id:2161688900 -->

Reference: github-starred/ollama#28723