[GH-ISSUE #3370] databricks-dbrx #48585

Closed
opened 2026-04-28 08:54:27 -05:00 by GiteaMirror · 21 comments

Originally created by @Sparkenstein on GitHub (Mar 27, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3370

What model would you like?

Databricks just released a new model that is supposed to perform better than Mistral. IMO it would be a good addition.

https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm

https://huggingface.co/databricks/dbrx-instruct

GiteaMirror added the model label 2026-04-28 08:54:27 -05:00

@triple-threat-dan commented on GitHub (Mar 28, 2024):

Would be great to have the base model as well!

https://huggingface.co/databricks/dbrx-base

@Kavan72 commented on GitHub (Mar 28, 2024):

+1

@Kavan72 commented on GitHub (Mar 28, 2024):

I think llama.cpp needs to add support for this first. https://github.com/ggerganov/llama.cpp/issues/6344

@ipsmile commented on GitHub (Mar 29, 2024):

+1

@rlogank commented on GitHub (Mar 31, 2024):

That would be a good idea if it didn't require 264GB of RAM

@corani commented on GitHub (Apr 1, 2024):

> That would be a good idea if it didn't require 264GB of RAM

That is the original 16-bit. Let's see what it'll look like once they quantize it.
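(Back of the envelope: 264 GB ÷ 2 bytes per weight ≈ 132B parameters, which matches DBRX's total size; at roughly 4.5 bits per weight for Q4_0 that should come out to something on the order of 70–75 GB.)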

@gorlitzer commented on GitHub (Apr 1, 2024):

+1

@saivarunk commented on GitHub (Apr 2, 2024):

+1

@crmne commented on GitHub (Apr 2, 2024):

+1

@hemangjoshi37a commented on GitHub (Apr 2, 2024):

I hope this gets added soon. I already searched for it in the library and did not find the model. However, I tried the starling-lm model and it works for me for now.

@OPDEV001 commented on GitHub (Apr 2, 2024):

+1

@razvanab commented on GitHub (Apr 4, 2024):

+1

@nkeilar commented on GitHub (Apr 9, 2024):

I just wanted to chime in and let you know it is possible to run it on dual 3090s with reduced context length using exl2. It's a bit annoying, as I have to stop Ollama and start tabbyAPI, and I don't get the nice model swapping, which I really want for crewai.

So theoretically it is possible. I also got command-r-plus running at the same time, also with reduced context length. So both should theoretically be possible in ollama.

@jukofyork commented on GitHub (Apr 13, 2024):

**Removed as https://github.com/ollama/ollama/pull/3627 (as used in `v0.1.32`) now allows the import of DBRX models**

---

Download the model:

https://huggingface.co/collections/phymbert/dbrx-16x12b-instruct-gguf-6619a7a4b7c50831dd33c7c8
https://huggingface.co/dranger003/dbrx-instruct-iMat.GGUF

For example, to get the `Q4_0`:

```
wget https://huggingface.co/phymbert/dbrx-16x12b-instruct-q4_0-gguf/resolve/main/dbrx-16x12b-instruct-q4_0-{00001..00010}-of-00010.gguf
./gguf-split --merge ./dbrx-16x12b-instruct-q4_0-00001-of-00010.gguf ./dbrx-16x12b-instruct-q4_0.gguf
```

If you don't have `gguf-split` installed already, you can go into the `ollama/llm/llama.cpp` folder and build it there. I'm not sure if Ollama supports split-GGUF files, but I prefer having one big file I can symlink back from the `/usr/share/ollama/...` folder anyway.
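Building it from that checkout is quick; something along these lines should work (a sketch only, the exact make target may differ between llama.cpp revisions):

```
cd ollama/llm/llama.cpp
# build just the gguf-split tool (target name assumed from the llama.cpp Makefile of that era)
make gguf-split
```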

Create the modelfile:

```
FROM ./dbrx-16x12b-instruct-q4_0.gguf
TEMPLATE """{{if .System}}<|im_start|>system
{{.System}}<|im_end|>
{{end}}<|im_start|>user
{{.Prompt}}<|im_end|>
<|im_start|>assistant
{{.Response}}"""
PARAMETER num_ctx 32768
```

You might have to reduce the `num_ctx` parameter to fit in VRAM, and add any other options like `num_gpu 1000` and a system message, etc.
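For instance, with those extras added, the modelfile might look like this (illustrative values only; tune them to your hardware):

```
FROM ./dbrx-16x12b-instruct-q4_0.gguf
TEMPLATE """..."""            # same template as above, omitted here
PARAMETER num_ctx 8192        # reduced context to fit in VRAM
PARAMETER num_gpu 1000        # offload as many layers as possible
SYSTEM """You are a helpful assistant."""
```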

Then finally, add the model to Ollama:

```
ollama create dbrx-16x12b-instruct-q4_0 -f dbrx-16x12b-instruct-q4_0.modelfile
```

@jukofyork commented on GitHub (Apr 15, 2024):

If you use this model then make sure to reduce the repetition penalty right down to 1.0 (or just above 1.0 if absolutely necessary).

The default of 1.1 used in Ollama will kill the model's ability, and for coding tasks it does all sorts of strange stuff and becomes really "lazy"... I even saw it start a for loop at `176` just to avoid having to write `for (int i = 0...` a second time!
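If you're using a modelfile like the one above, that's a one-line change (or it can be set per session from the `ollama run` prompt; syntax sketched from memory):

```
# in the modelfile
PARAMETER repeat_penalty 1.0

# or interactively inside `ollama run`
/set parameter repeat_penalty 1.0
```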

@Sparkenstein commented on GitHub (Apr 17, 2024):

DBRX is released: https://ollama.com/library/dbrx 🎉

Closing this now.

@luan-cestari-ppro commented on GitHub (Apr 17, 2024):

I pulled the image and got this error when I try to run it:

```
Error: exception error loading model architecture: unknown model architecture: 'dbrx'
```

@nicodemus26 commented on GitHub (Apr 17, 2024):

It's in Ollama 0.1.32: https://github.com/ollama/ollama/releases/tag/v0.1.32
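You can check which version you're running with:

```
ollama --version
```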

@calcitem commented on GitHub (Apr 17, 2024):

Mine is ok after upgrading to the latest version.

@luan-cestari-ppro commented on GitHub (Apr 18, 2024):

Worked for me too

@luan-cestari-ppro commented on GitHub (Apr 18, 2024):

Quick question: how many tokens per minute does your machine produce, and what is your setup? Mine is an M3 and it takes minutes to produce a token.
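For comparing numbers, the eval rate can be read from Ollama's verbose output (the exact wording of the stats line may vary by version):

```
ollama run dbrx --verbose
# prints timing stats after each response, e.g. "eval rate: N tokens/s"
```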
