[GH-ISSUE #10391] Ollama Library Upload Fails for Big-Endian Models #32587

Open
opened 2026-04-22 14:01:40 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @taronaeo on GitHub (Apr 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10391

What is the issue?

Trying to upload Big-Endian models to the Ollama Library fails with an Error: 500 status code. Support for Big-Endian models has been patched in PR #10245; is the Ollama Library server also checking the endianness of the model file?

Relevant log output

$ ./ollama push taronaeo/qwen2.5:1.5b
retrieving manifest 
pushing f491a557b46e... 100% ▕███████████████████████████▏ 986 MB                         
pushing 1d68e259ca72... 100% ▕███████████████████████████▏ 1.5 KB                         
pushing 2e71c400f140... 100% ▕███████████████████████████▏   70 B                         
pushing 53e88a01f2bb... 100% ▕███████████████████████████▏  413 B                         
pushing manifest 
Error: 500: {"errors":[{"code":"INTERNAL_ERROR","message":"internal error"}]}
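For context, one way a push like this could fail server-side is if the registry parses the uploaded GGUF blob assuming little-endian byte order. The sketch below is illustrative only (it is not Ollama's server code): the GGUF magic is the ASCII bytes "GGUF" regardless of byte order, while the uint32 version field that follows is stored in the file's native endianness, so a little-endian read of a big-endian version yields an implausibly large value.

```python
import struct

def gguf_endianness(path):
    """Heuristically detect the byte order of a GGUF file.

    Reads the 4-byte magic and the uint32 version field, then
    interprets the version both ways; the smaller (plausible)
    interpretation indicates the file's byte order.
    Illustrative sketch only -- not Ollama's implementation.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError("not a GGUF file")
        raw = f.read(4)
    version_le = struct.unpack("<I", raw)[0]
    version_be = struct.unpack(">I", raw)[0]
    return "little" if version_le <= version_be else "big"
```

A server that skips a check like this and unconditionally decodes little-endian integers would read garbage counts from a big-endian blob, which could plausibly crash and surface as the 500 above.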

OS

Linux

GPU

No response

CPU

Other

Ollama version

0.0.0

GiteaMirror added the bug, ollama.com labels 2026-04-22 14:01:41 -05:00
Author
Owner

@rick-github commented on GitHub (Apr 24, 2025):

As a guess, I'd say it's because the model is being processed to add the stats (architecture, parameter count, quantization, etc.) to the model card. This is similar to #8226, where the server code was corrected for an incorrect quantization display but the website used the old code.
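To illustrate this hypothesis: extracting those stats means walking the GGUF header's metadata key/value pairs, and a parser that hard-codes little-endian reads will misinterpret a big-endian file's counts and string lengths. The following is a minimal, hypothetical sketch (not the ollama.com code) of reading the first metadata key from a little-endian GGUF header:

```python
import struct

def first_metadata_key(path):
    """Read the first metadata key name from a GGUF file,
    assuming little-endian byte order throughout.

    GGUF header layout: 4-byte magic, uint32 version,
    uint64 tensor count, uint64 metadata kv count, then
    kv pairs where each key is a uint64 length + UTF-8 bytes.
    Hypothetical sketch for illustration only.
    """
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":
            raise ValueError("not a GGUF file")
        version, = struct.unpack("<I", f.read(4))
        n_tensors, = struct.unpack("<Q", f.read(8))
        n_kv, = struct.unpack("<Q", f.read(8))
        if n_kv == 0:
            return None
        key_len, = struct.unpack("<Q", f.read(8))
        return f.read(key_len).decode("utf-8")
```

Fed a big-endian file, the `<Q` reads above would produce a wildly wrong `key_len` and the decode would fail, which is consistent with an opaque INTERNAL_ERROR on the website side.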

Author
Owner

@taronaeo commented on GitHub (Apr 24, 2025):

Hi @rick-github, thanks for taking this issue. What about model endianness? As far as I recall, the codebase in main still can't load Big-Endian models unless PR #10245 is applied.

Does ollama.com use the same server codebase here, and could this be a contributing factor to it not reading the stats correctly?

Author
Owner

@rick-github commented on GitHub (Apr 24, 2025):

> What about model endianness? As far as I recall, the codebase in main still can't load Big-Endian models unless PR #10245 is applied.

Correct, the Ollama team has concerns about code maintainability (https://github.com/ollama/ollama/issues/4710#issuecomment-2345083220).

> Does ollama.com use the same server codebase here, and could this be a contributing factor to it not reading the stats correctly?

That would be my guess.

Author
Owner

@taronaeo commented on GitHub (Apr 28, 2025):

Hi @rick-github, I took a few days to confer with my IBM colleagues and would like to understand the requirements for getting Big-Endian systems supported in Ollama.

Both the IBM Z and AIX teams are willing to contribute open-source code to the project on an ongoing basis and to help set up CI. We would also be able to handle platform- and endianness-related problems directed to us and provide the necessary support for any such issues.

Would there be an appropriate contact within Ollama for us to set up a discussion or to understand the next steps needed to contribute? We would really appreciate any help!

Author
Owner

@rick-github commented on GitHub (Apr 28, 2025):

Jeff (@jmorganca) would be the best person to talk to, if he can't help directly he will know who to loop in.

Reference: github-starred/ollama#32587