[GH-ISSUE #3563] Cannot import command-r-plus gguf #27959

Closed
opened 2026-04-22 05:37:28 -05:00 by GiteaMirror · 15 comments

Originally created by @jason-c-kwan on GitHub (Apr 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3563

What is the issue?

Just cloned ollama earlier today after the merging of PR#6491 in llama.cpp, so it should be able to deal with command-r-plus. There are already some quants of command-r-plus on ollama, but I wanted to import the full range for testing.

Using the GGUFs from dranger003/c4ai-command-r-plus-iMat.GGUF, `./ollama create` fails with the following:

```
transferring model data
creating model layer
Error: invalid file magic
```

Note: I tested with `ggml-c4ai-command-r-plus-104b-iq1_s.gguf` and gave up after that.

Using the GGUFs from pmysl/c4ai-command-r-plus-GGUF, `./ollama create` seems to run fine, but then `./ollama run` fails:

```
Error: exception done_getting_tensors: wrong number of tensors; expected 642, got 514
```

What did you expect to see?

For `./ollama create`, it should report `success`. For `./ollama run`, I should get an interactive prompt and, of course, intelligible output from the model.

Steps to reproduce

1. Cloned Ollama.
2. Installed Go 1.22 and CUDA 12 (I am using Linux Mint).
3. Ran `go generate ./...`
4. Ran `go build .`
5. Made a Modelfile and ran `./ollama create` and `./ollama run` (a minimal sketch follows).
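
A minimal version of that last step, with an illustrative model name and GGUF path (not the exact files from this report), would look like this:

```
# Modelfile -- the GGUF path below is illustrative
FROM ./ggml-c4ai-command-r-plus-104b-q4_k_m.gguf
```

```bash
# Build the model from the Modelfile, then start an interactive session.
./ollama create command-r-plus -f Modelfile
./ollama run command-r-plus
```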

Are there any recent changes that introduced the issue?

No response

OS

Linux

Architecture

amd64

Platform

No response

Ollama version

0.1.30

GPU

Nvidia

GPU info

2 x 3090, 1 x 3060 TI, 1 x Quadro M6000

CPU

Intel

Other software

No response

GiteaMirror added the bug label 2026-04-22 05:37:28 -05:00

@sammcj commented on GitHub (Apr 9, 2024):

I've had problems with Ollama and IQ quants before. I'm not sure Ollama works with them, but it would be nice to have these available.


@jason-c-kwan commented on GitHub (Apr 9, 2024):

> I've had problems with Ollama and IQ quants before. I'm not sure Ollama works with them, but it would be nice to have these available.

The problem is that I couldn't get the non-IQ quant to work either, although perhaps I am doing something wrong. I know you have been able to do it, so any insight would be appreciated.


@zhaopengme commented on GitHub (Apr 10, 2024):

+1


@chigkim commented on GitHub (Apr 10, 2024):

I downloaded q3_k_m part 1 and part 2 and merged them with `gguf-split --merge`.
It worked fine with the latest llama.cpp.
When I tried to import it into Ollama and run it, I got the same message about the wrong number of tensors as the OP.
Hopefully it gets sorted out soon.
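
For reference, the merge step with llama.cpp's gguf-split tool looks roughly like this (filenames are illustrative; you pass the first shard and the tool locates the remaining parts):

```bash
# Merge a split GGUF back into a single file; the first shard is the input.
./gguf-split --merge c4ai-command-r-plus-Q3_K_M-00001-of-00002.gguf c4ai-command-r-plus-Q3_K_M.gguf
```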


@sammcj commented on GitHub (Apr 10, 2024):

Make sure you built Ollama from the latest source.

If it helps, this is my personal build script; feel free to modify it, etc.: https://github.com/sammcj/scripts/blob/master/build_ollama.sh

(Note: run with `PATCH_OLLAMA=false ./build_ollama.sh` if you don't want my build tweaks.)


@taozhiyuai commented on GitHub (Apr 10, 2024):

I can import it, but I get an error when running it:

https://github.com/ollama/ollama/issues/3577


@jason-c-kwan commented on GitHub (Apr 10, 2024):

I tried a few more things without success. I confirmed that my locally built ollama cannot pull and run sammcj/cohereforai_c4ai-command-r-plus (https://ollama.com/sammcj/cohereforai_c4ai-command-r-plus). I also tried to import a non-IQ gguf from the dranger003 version of the model on Hugging Face, as well as another version of command-r-plus that seems to have been uploaded to the ollama hub (jmorgan/command-r-plus). In all cases I get the same error when running:

```
Error: exception done_getting_tensors: wrong number of tensors; expected 642, got 514
```

So the first error I wrote about above seems to be due to the IQ quant versions, as @sammcj mentioned. The other error is quite consistent. @sammcj, I did try your build script, but it seemed to have a lot of parts specific to Macs. I made sure to clone the latest llama.cpp into llm/ and built the normal way again; I got the same error. I guess users with Macs might be able to use your script successfully?


@atgreen commented on GitHub (Apr 11, 2024):

I was able to reproduce this with v0.1.31, but it works now with the ollama v0.1.32 prerelease.


@DirtyKnightForVi commented on GitHub (Apr 11, 2024):

> I downloaded q3_k_m part 1 and part 2 and merged them with `gguf-split --merge`. It worked fine with the latest llama.cpp. When I tried to import it into Ollama and run it, I got the same message about the wrong number of tensors as the OP. Hopefully it gets sorted out soon.

Hello, I'm kinda stuck on how to merge these files. Any tips you could throw my way?


@chigkim commented on GitHub (Apr 11, 2024):

I can also confirm v0.1.32-rc1 works!
@DirtyKnightForVi, you need to use gguf-split from llama.cpp:
https://github.com/ggerganov/llama.cpp
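
If you need to build the tool from source, a rough sketch (the make target name is an assumption matching llama.cpp at the time of this thread; newer releases rename the binary to llama-gguf-split):

```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# Build just the split/merge tool (target name as of early-2024 llama.cpp).
make gguf-split
# Merge shards into a single file (filenames illustrative).
./gguf-split --merge model-00001-of-00002.gguf model-merged.gguf
```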


@jason-c-kwan commented on GitHub (Apr 11, 2024):

Yes, thanks @atgreen! Can confirm it works. Will close this issue.


@ehartford commented on GitHub (Apr 11, 2024):

How do I update to v0.1.32?

```
$ ollama --version
ollama version is 0.1.31
```

@dector commented on GitHub (Apr 11, 2024):

> How do I update to v0.1.32?

I guess you can try RC1 here: https://github.com/ollama/ollama/releases/tag/v0.1.32-rc1


@DirtyKnightForVi commented on GitHub (Apr 12, 2024):

> I can also confirm v0.1.32-rc1 works! @DirtyKnightForVi, you need to use gguf-split from llama.cpp: https://github.com/ggerganov/llama.cpp

Thank you so much! Actually, I upgraded ollama to the latest preview version, but I might have messed up the way I used the downloaded model files. Instead of using `gguf-split --merge`, I just went with a simple `cat`. It seems that when ollama loads the model files (using a Modelfile locally), it expects a single merged file rather than split parts.
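
For what it's worth, the reason `cat` fails here is that each shard written by gguf-split is itself a complete GGUF file with its own header and metadata, so naive concatenation does not produce a well-formed model (filenames illustrative):

```bash
# Invalid: each shard carries its own GGUF header, so the result is not a valid file.
cat model-00001-of-00002.gguf model-00002-of-00002.gguf > model.gguf

# Valid: gguf-split --merge rewrites the shards into one coherent GGUF.
./gguf-split --merge model-00001-of-00002.gguf model.gguf
```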


@DirtyKnightForVi commented on GitHub (Apr 12, 2024):

> Yes, thanks @atgreen! Can confirm it works. Will close this issue.

Did you merge the model files before editing the Modelfile? For files that have been split, I'm not sure whether `FROM` should be followed by `0001of0002.gguf` or by the complete merged model file.
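
Going by the reports above, the approach that worked was to merge first and point `FROM` at the merged file; a sketch with an illustrative filename:

```
# Modelfile -- reference the merged GGUF, not the first shard (filename illustrative)
FROM ./c4ai-command-r-plus-Q3_K_M.gguf
```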
