[GH-ISSUE #15141] ollama pull in Linux results in "this model requires macOS" #56206

Closed
opened 2026-04-29 10:25:49 -05:00 by GiteaMirror · 8 comments

Originally created by @donatas-xyz on GitHub (Mar 30, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15141

Originally assigned to: @BruceMacD on GitHub.

What is the issue?

Hi there,

Since upgrading to Ollama 0.19.0 I'm no longer able to download the latest Qwen3.5 27B model on Ubuntu 25.10. The model was released a day earlier and required an upgraded Ollama.

Relevant log output

linux:~$ ollama pull qwen3.5:27b-bf16
pulling manifest 
Error: pull model manifest: 412: this model requires macOS

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.19.0

GiteaMirror added the bug label 2026-04-29 10:25:49 -05:00

@rick-github commented on GitHub (Mar 30, 2026):

qwen3.5:27b-bf16 is an MLX model, requiring the MLX backend to run, which is currently only supported on macOS.


@donatas-xyz commented on GitHub (Mar 30, 2026):

Thank you, @rick-github. So where do I go from here then? 2-3 weeks ago I was able to download this model with ollama pull qwen3.5:27b-bf16, and now all of a sudden it's available for macOS only?


@rick-github commented on GitHub (Mar 30, 2026):

I'm guessing it was inadvertently overwritten when the MLX version of the model was uploaded. I've pushed the original GGUF version to frob/qwen3.5:27b-gguf-bf16.


@donatas-xyz commented on GitHub (Mar 30, 2026):

Thank you, @rick-github - this one is downloading. I'm a bit behind the times when it comes to "custom" models and I always tend to stick with the "official" Ollama versions, so can I assume this qwen3.5:27b-gguf-bf16 model is the same as the previous non-MLX version?


@rick-github commented on GitHub (Mar 30, 2026):

Yes, it's the same as the previous non-MLX version.


@BruceMacD commented on GitHub (Mar 30, 2026):

Sorry about this, I mistakenly pushed while uploading safetensors weights for MLX. The original GGUF weights have been restored in the repo. Thanks for reporting this.


@donatas-xyz commented on GitHub (Apr 8, 2026):

Hi, @BruceMacD. Thank you for your help with the issue.

I believe qwen3.5:27b-coding-bf16 is still meant for MLX though, so if this one could be re-uploaded as well, that would be great.

Thank you!


@fieryWaters commented on GitHub (Apr 23, 2026):

This issue is happening again for qwen3.6 27b.

jacob@MacBookAir ~ % ollama run qwen3.6:27b-coding-bf16
pulling manifest
Error: pull model manifest: 412: this model requires macOS


Reference: github-starred/ollama#56206