[GH-ISSUE #3222] Support Grok #64022

Open
opened 2026-05-03 15:53:24 -05:00 by GiteaMirror · 24 comments
Owner

Originally created by @FloLecoeuche on GitHub (Mar 18, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3222

What model would you like?

Please add the [xai-org/grok-1](https://github.com/xai-org/grok-1) model to ollama.

GiteaMirror added the model label 2026-05-03 15:53:24 -05:00

@robmurrer commented on GitHub (Mar 18, 2024):

is this possible? does it limit the hardware that it can run on? JAX is weird :)


@3Samourai commented on GitHub (Mar 20, 2024):

It can be a very good addition to Ollama; just put a warning before downloading


@easp commented on GitHub (Mar 21, 2024):

It’s a mediocre model with huge resource requirements. How is it a very good addition?


@not-nullptr commented on GitHub (Mar 21, 2024):

agreed. i tried grok a while ago, it really gave no better results (and was a lot, lot more censored) than dolphin-mixtral (which ollama lists as 47B)


@3Samourai commented on GitHub (Mar 21, 2024):

> It’s a mediocre model with huge resource requirements. How is it a very good addition?

More choices are always better, and since it’s a V1, it’s natural if the performance is not the best


@dib258 commented on GitHub (Mar 22, 2024):

There's a smaller (quantized) version of Grok that got posted few hours ago. Could this be a better fit to run on smaller machine? and to be added to Ollama?

https://huggingface.co/Arki05/Grok-1-GGUF/tree/main


@codearranger commented on GitHub (Mar 24, 2024):

> There's a smaller (quantized) version of Grok that got posted few hours ago. Could this be a better fit to run on smaller machine? and to be added to Ollama?
>
> https://huggingface.co/Arki05/Grok-1-GGUF/tree/main

I'm slowly pushing this to the repo here:

https://ollama.com/joefamous/grok-1


@codearranger commented on GitHub (Mar 25, 2024):

I was able to run the q6_K version with CPU on my 36 core Xeon with llama.cpp. I got a little less than 1 token per second.


@codearranger commented on GitHub (Mar 26, 2024):

This might do it. https://github.com/ollama/ollama/pull/3348


@AWKohler commented on GitHub (Mar 27, 2024):

> mic library /tmp/ollama1052153078/runners/cpu_avx2/libext_server.so exception error loading model architectu

getting same issue


@Zhongyi-Lu commented on GitHub (Mar 27, 2024):

> > There's a smaller (quantized) version of Grok that got posted few hours ago. Could this be a better fit to run on smaller machine? and to be added to Ollama?
> >
> > https://huggingface.co/Arki05/Grok-1-GGUF/tree/main
>
> I'm slowly pushing this to the repo here:
>
> https://ollama.com/joefamous/grok-1

@joecryptotoo Hi. Thanks for uploading models! I got an error saying

`Error: exception error loading model architecture: unknown model architecture: 'grok'`

Is it expected?


@taozhiyuai commented on GitHub (Mar 28, 2024):

try to import gguf, but fail

![Screenshot 2024-03-29 07 44 46](https://github.com/ollama/ollama/assets/146583103/0f037fac-f70c-47e0-9936-fffbe722b1fd)
![Screenshot 2024-03-29 07 45 24](https://github.com/ollama/ollama/assets/146583103/b61350a2-dab9-4dd5-9232-fe46f4efd101)
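For anyone stuck at this step, the usual Ollama GGUF import flow is a Modelfile whose `FROM` line points at the local file (a sketch; the filename and model name below are placeholders, and the import can only succeed once Ollama's bundled llama.cpp recognizes the `grok` architecture — otherwise it fails with the "unknown model architecture" error reported earlier in this thread):

```
# Modelfile — FROM points at a local GGUF file (placeholder path)
FROM ./grok-1-Q4_0.gguf

# optional sampling default
PARAMETER temperature 0.7
```

Then `ollama create grok-1 -f Modelfile` registers the model and `ollama run grok-1` starts a session.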

@taozhiyuai commented on GitHub (Mar 28, 2024):

@joecryptotoo can you upload this Q1 version? https://hf-mirror.com/mradermacher/grok-1-i1-GGUF/tree/main

@Zhongyi-Lu


@olumolu commented on GitHub (May 18, 2024):

https://github.com/xai-org/grok-1?tab=readme-ov-file
When will we get grok support in ollama?


@olumolu commented on GitHub (May 21, 2024):

What is the support status of grok?


@3Samourai commented on GitHub (May 21, 2024):

> What is the support status of grok?

NaN


@olumolu commented on GitHub (Jun 6, 2024):

What is the support status.


@olumolu commented on GitHub (Aug 1, 2024):

When will we get grok support in ollama?


@TraceRecursion commented on GitHub (Sep 5, 2024):

When will we get grok support in ollama?


@Maltz42 commented on GitHub (Mar 13, 2025):

Adding my request for this as well. Seems weird that one of the biggest AI players is missing support for a year. It requires a lot of resources, but Llama 3.1 405B is there, and there are plenty of people with 256GB who could probably run a q4 in RAM.
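The 256GB figure is easy to sanity-check with back-of-envelope arithmetic (a sketch; the ~314B parameter count comes from the xai-org/grok-1 release, and the bits-per-weight values are rough approximations of llama.cpp quant formats):

```python
# Rough estimate: does a quantized Grok-1 fit in 256 GiB of RAM?
PARAMS = 314e9  # Grok-1 parameter count (from the xai-org release)

def model_size_gib(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate in-memory size of a quantized model, in GiB."""
    return params * bits_per_weight / 8 / 2**30

q4_gib = model_size_gib(4.5)   # roughly a Q4_0-style quant
q6_gib = model_size_gib(6.56)  # roughly a Q6_K-style quant

print(f"Q4 ~ {q4_gib:.0f} GiB, Q6_K ~ {q6_gib:.0f} GiB")
```

A Q4-class quant lands in the 160–170 GiB range, comfortably under 256 GiB with room left for the KV cache, while a Q6_K quant at roughly 240 GiB would be a much tighter fit.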


@olumolu commented on GitHub (Mar 13, 2025):

Does not make any sense to have support for this one now.


@mat926 commented on GitHub (Jun 21, 2025):

Can we have grok3 support?


@olumolu commented on GitHub (Jun 22, 2025):

> Can we have grok3 support?

Not an open-weight model.


@FearL0rd commented on GitHub (Aug 25, 2025):

Grok2.1 is out 😉

Reference: github-starred/ollama#64022