[GH-ISSUE #11382] kimi k2 support #33273

Closed
opened 2026-04-22 15:47:22 -05:00 by GiteaMirror · 43 comments

Originally created by @olumolu on GitHub (Jul 11, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11382

https://huggingface.co/moonshotai/Kimi-K2-Instruct
Best open-source model available yet.

GiteaMirror added the model label 2026-04-22 15:47:22 -05:00

@Notbici commented on GitHub (Jul 11, 2025):

I've been itching to try new models.

@wwjCMP commented on GitHub (Jul 12, 2025):

me too

@SPOOKEXE commented on GitHub (Jul 12, 2025):

yes yes yes

@rahulkarajgikar commented on GitHub (Jul 12, 2025):

need this

@sole-sohail commented on GitHub (Jul 12, 2025):

Eager to see this gem listed in the Ollama library.

@rick-github commented on GitHub (Jul 12, 2025):

https://github.com/ggml-org/llama.cpp/issues/14642

@olumolu commented on GitHub (Jul 12, 2025):

> ggml-org/llama.cpp#14642

Does ollama still use the llama.cpp backend?

@rick-github commented on GitHub (Jul 12, 2025):

ollama has some backward compatibility with the llama.cpp backend. After the next vendor sync there might be some more work needed to fully integrate the new model.

@gabrielxcom commented on GitHub (Jul 12, 2025):

BRING KIMI HOME! :p

@pdrago97 commented on GitHub (Jul 12, 2025):

Will it even be added to Ollama?

@rick-github commented on GitHub (Jul 12, 2025):

Large models have previously been added (https://ollama.com/library/deepseek-r1:671b-fp16) but Kimi-K2 would set a new record. It's not really suitable for consumer grade hardware so it's not certain if it will be added to the library. If not, it would still be importable.

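For context, "importable" here refers to the standard Modelfile flow for a local GGUF. A minimal sketch (file and tag names are illustrative; a sharded GGUF would first need to be merged into one file, and this only works once ollama supports the architecture):

```console
$ cat Modelfile
FROM ./Kimi-K2-Instruct-Q4_K_M.gguf
$ ollama create kimi-k2:1t-instruct-q4_K_M -f Modelfile
$ ollama run kimi-k2:1t-instruct-q4_K_M
```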

@olumolu commented on GitHub (Jul 13, 2025):

> Large models have previously been added (https://ollama.com/library/deepseek-r1:671b-fp16) but Kimi-K2 would set a new record. It's not really suitable for consumer grade hardware so it's not certain if it will be added to the library. If not, it would still be importable.

Many are running them on new Macs with 512 GB of RAM, and because of the low active parameter count a decent performance can be obtained...
I don't think anyone will run fp16, but running q4 or q8 is certainly possible.

@Notbici commented on GitHub (Jul 13, 2025):

My 2c: I see enthusiasts enjoying large-tier models. If Ollama offered them, you'd attract the people who rent or own insanely specced hardware like 512 GB Macs or large AI servers.

I speak from experience, as a personal fan of ollama but also at work, where my tech team researches LLM integrations with Ollama as the first choice for quickly firing up and testing new models. We have the compute, but we'd like a catalog of high-end models.

Then again, this would appeal to maybe the top 2% of non-paying ollama users, so diverting developer resources to architecture that suits this group probably isn't a top priority.

@oamazonasgabriel commented on GitHub (Jul 14, 2025):

Commenting for visibility

@Antonytm commented on GitHub (Jul 14, 2025):

+1

@ndgayan commented on GitHub (Jul 15, 2025):

+1

@enryteam commented on GitHub (Jul 15, 2025):

+1

@SommerEngineering commented on GitHub (Jul 15, 2025):

We at DLR (German NASA) use ollama on a 1.1 million dollar Dell server with 8 Nvidia H200 GPUs (approx. 1 TB VRAM). I'm sure that other large organizations do this as well. Therefore, we are very interested in large models and would be pleased about Kimi K2 support in ollama.

@chengcheng84 commented on GitHub (Jul 15, 2025):

+1

@sk15er commented on GitHub (Jul 15, 2025):

Should I publish this? I can do it, but you'd have to download and use it as

ollama run <my-username>/kimi-k2

I'll do it if you guys want.

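For reference, publishing a local model under a username on ollama.com normally comes down to a copy and a push (a sketch; `<my-username>` is a placeholder and this assumes an ollama.com account with the machine's public key registered):

```console
$ ollama cp kimi-k2 <my-username>/kimi-k2
$ ollama push <my-username>/kimi-k2
```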

@Eneswunbeaten commented on GitHub (Jul 15, 2025):

+1

@Daniele-Felicetta commented on GitHub (Jul 15, 2025):

+1

@kayzidev commented on GitHub (Jul 15, 2025):

+1

@gkvoelkl commented on GitHub (Jul 15, 2025):

+1

@JoshuaDietz commented on GitHub (Jul 15, 2025):

+1

@benfaerber commented on GitHub (Jul 15, 2025):

Same here! Let me see if I can figure out how to add it...

@rick-github commented on GitHub (Jul 15, 2025):

Uploading the model is not going to be useful until ollama has been updated to support more than 256 experts.

@natarajaya commented on GitHub (Jul 15, 2025):

+100500

@rick-github commented on GitHub (Jul 16, 2025):

Custom `ollama` and https://huggingface.co/unsloth/Kimi-K2-Instruct-GGUF

```console
$ ollama run kimi-k2:1t-instruct-q4_K_M --verbose why is the sky blue?
The sky looks blue because of how sunlight interacts with Earth’s atmosphere.

Sunlight might look white, but it actually contains all colors. When this light enters our atmosphere, it collides with molecules and small particles (mostly nitrogen and oxygen). These collisions scatter shorter-wavelength light—blue and violet—much more strongly than longer-wavelength red or yellow light. This process is called Rayleigh scattering.

Although violet light is scattered even more than blue, two factors make the sky appear predominantly blue to human eyes:

1. The sun’s spectrum has slightly less energy in the violet range.
2. Human vision is less sensitive to violet and some violet is absorbed by the upper atmosphere (ozone).

The combined effect is that we perceive a mix of scattered blue and a little green as “sky blue.”

total duration:       11m43.484653758s
load duration:        2m52.3152854s
prompt eval count:    22 token(s)
prompt eval duration: 1m3.300024837s
prompt eval rate:     0.35 tokens/s
eval count:           161 token(s)
eval duration:        7m47.823862013s
eval rate:            0.34 tokens/s
$ ollama ps
NAME                          ID              SIZE      PROCESSOR    UNTIL
kimi-k2:1t-instruct-q4_K_M    20fff32648d8    621 GB    100% CPU     Forever
```

@benfaerber commented on GitHub (Jul 16, 2025):

@rick-github What does a custom version of ollama mean? Is there a certain branch I can use?

@mike-marcacci commented on GitHub (Jul 16, 2025):

Is this simply a matter of changing this from 256 to 384 and building? https://github.com/ollama/ollama/blob/d73f8aa8c3979b33f5ea19b80406c20e88ee3b1b/llama/llama.cpp/src/llama-hparams.h#L9

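If the cap really is the `LLAMA_MAX_EXPERTS` define that the link points at, the edit being asked about would look roughly like this (an untested sketch against the vendored llama.cpp; Kimi-K2 uses 384 routed experts). Whether bumping the constant alone is enough, or converter and tokenizer changes are also needed, isn't confirmed in this thread.

```cpp
// llama/llama.cpp/src/llama-hparams.h (vendored copy)
#define LLAMA_MAX_LAYERS  512
#define LLAMA_MAX_EXPERTS 384  // raised from 256 to cover Kimi-K2's 384 experts
```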

@xiyuecangxin commented on GitHub (Jul 17, 2025):

+114514

@joeblew999 commented on GitHub (Jul 17, 2025):

Can't wait for this one!!

@lmendes86 commented on GitHub (Jul 18, 2025):

+1

@mintisan commented on GitHub (Jul 18, 2025):

+1

@mintisan commented on GitHub (Jul 18, 2025):

1.8 bits better...

@Peuqui commented on GitHub (Jul 18, 2025):

+1

@GoZippy commented on GitHub (Jul 21, 2025):

Any update? There are quants that would work fine on my system...

@pinghe commented on GitHub (Jul 23, 2025):

+1

@kripper commented on GitHub (Jul 24, 2025):

https://github.com/ggml-org/llama.cpp/issues/14642 was closed and the llama.cpp support was merged.
But qwen3-coder is now the new leader :-)

EDIT: I still prefer Kimi K2 for the agentic use case (OpenHands). Love you Kimi 😘

@vlad-nikityuk commented on GitHub (Aug 6, 2025):

+1

@kripper commented on GitHub (Aug 31, 2025):

What about https://docs.unsloth.ai/basics/kimi-k2-how-to-run-locally#run-kimi-k2-tutorials ?

@rick-github commented on GitHub (Aug 31, 2025):

Ollama runs kimi-k2 if you have the resources; just download and merge the shards.

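For completeness, one way to do the download-and-merge step is llama.cpp's gguf-split tool, then import the merged file with a Modelfile (a sketch; the shard file names are illustrative):

```console
$ llama-gguf-split --merge Kimi-K2-Instruct-Q4_K_M-00001-of-00012.gguf Kimi-K2-Instruct-Q4_K_M.gguf
$ echo 'FROM ./Kimi-K2-Instruct-Q4_K_M.gguf' > Modelfile
$ ollama create kimi-k2:1t-instruct-q4_K_M -f Modelfile
```
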
Reference: github-starred/ollama#33273