[GH-ISSUE #9827] mistral-small v3.1 #68490

Closed
opened 2026-05-04 14:09:18 -05:00 by GiteaMirror · 19 comments

Originally created by @FelikZ on GitHub (Mar 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9827

Originally assigned to: @BruceMacD on GitHub.

Hi community, it would be great to support this one, just released:
https://mistral.ai/news/mistral-small-3-1

https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503
https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503

GiteaMirror added the model label 2026-05-04 14:09:18 -05:00

@FriedVariable commented on GitHub (Mar 19, 2025):

Yes please!


@bannert1337 commented on GitHub (Mar 19, 2025):

https://github.com/ggml-org/llama.cpp/pull/12450


@ErwanColombel92 commented on GitHub (Mar 19, 2025):

Hey! Any idea when it will be available on Ollama? :)
Thanks guys!


@bannert1337 commented on GitHub (Mar 19, 2025):

Currently, only text works; vision needs additional integration.
You can test this:
https://ollama.com/aratan/mistral-small-3.1
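(If you just want to try it, the standard pull/run command should work for that community upload; the tag below simply mirrors the linked page.)

```
ollama run aratan/mistral-small-3.1
```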


@mm2srv commented on GitHub (Mar 19, 2025):

> Currently, only text works, vision needs additional integration. You can test this: https://ollama.com/aratan/mistral-small-3.1

`system: You are a helpful AI assistant and response in spanish.`


@bannert1337 commented on GitHub (Mar 25, 2025):

https://ollama.com/search?q=mistral-small-3.1


@chigkim commented on GitHub (Apr 1, 2025):

@bannert1337 it doesn't work with images.


@Telsbat commented on GitHub (Apr 6, 2025):

Unfortunately, the new mistral3 support in the Ollama runner still has some issues for now.
I installed the v0.6.5 pre-release and ran into a few problems:

  1. It miscalculates the model size: `ollama ps` shows 26 GB for a q4 quant that is about 15 GB.
  2. It doesn't load as much as it could onto the GPU, though this can easily be fixed by manually setting `num_gpu`, e.g. to 100.
  3. Vision seems to work great, but model performance seems significantly degraded compared to the model imported from GGUF (without vision) using the llama architecture.

The model cannot properly search the internet using tools (or use any other tools reliably); it either ignores what's in context or goes into an endless-repetition state.
Additionally, mathematical performance with a CoT prompt is notably degraded with the mistral3 architecture, with a significant decline in accuracy.
Compared to the text-only version it also very often ignores instructions, based on my testing.

Maybe I imported it wrong?
I just ran `ollama run mistral-small:24b-3.1-instruct-2503-q4_K_M`;
the text-only version was built from a Modelfile with a GGUF from unsloth (a sketch of that kind of Modelfile is below).

All tests were with temp 0.15.
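(A minimal sketch of that kind of text-only GGUF import; the file name and parameter value are illustrative, not necessarily the exact ones used here, and a `TEMPLATE` directive for the Mistral chat format would normally be added as well.)

```
# Modelfile -- the GGUF path below is hypothetical
FROM ./Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M.gguf

# Default sampling to match the tests mentioned above
PARAMETER temperature 0.15
```

Then something like `ollama create mistral-small3.1-text -f Modelfile` followed by `ollama run mistral-small3.1-text` would register and run it.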


@ProjectMoon commented on GitHub (Apr 6, 2025):

I have noticed the exact same behavior. The text-only version of Mistral 3.1 on ollama.com (uploaded by another user) clocks in at 15 GB of VRAM. Using the official image loads up 20+ GB depending on context size. Some extra would be needed for the vision part, I guess, but I'm not sure it would need THAT much extra? I also noticed that the GPU was used even LESS when I lowered the context size parameter o_O.


@ProjectMoon commented on GitHub (Apr 7, 2025):

OK, so here is some interesting output from running the Q4_K_M quant, directly with `ollama run`.

```
time=2025-04-07T12:48:57.920+02:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CPU size="10.2 GiB"
time=2025-04-07T12:48:57.920+02:00 level=INFO source=ggml.go:289 msg="model weights" buffer=ROCm0 size="4.2 GiB"
```

```
# rocm-smi
======================================== ROCm System Management Interface ========================================
================================================== Concise Info ==================================================
Device  Node  IDs              Temp    Power  Partitions          SCLK     MCLK   Fan  Perf  PwrCap  VRAM%  GPU%
              (DID,     GUID)  (Edge)  (Avg)  (Mem, Compute, ID)
==================================================================================================================
0       1     0x73bf,   17819  39.0°C  17.0W  N/A, N/A, 0         2520Mhz  96Mhz  0%   auto  272.0W  32%    0%
==================================================================================================================
============================================== End of ROCm SMI Log ===============================================
```

I have 16 GB of VRAM, so it's rather odd that only 4.2 of it is being used.


@rick-github commented on GitHub (Apr 7, 2025):

https://ollama.com/library/mistral-small3.1


@Colonial-Dev commented on GitHub (Apr 7, 2025):

I can confirm the same issues w.r.t. VRAM allocation and consumption.

`ollama run mistral-small3.1:24b-instruct-2503-q4_K_M` downloads ~15 GB, which somehow expands to 26 GB in `ollama ps` with a 56%/44% CPU/GPU split and a 4096 context length - but `radeontop` says only ~2.5 GB of my 16 GB of VRAM is actually allocated, while Ollama has allocated 17 GB in regular RAM.

Mistral Small 3 doesn't have these issues. (Granted, I may be doing something wrong - I don't use LLMs often, and was only trying to run this model as part of a coding competition.)


@Telsbat commented on GitHub (Apr 7, 2025):

For now, you can easily work around those VRAM allocation issues by running
`/set parameter num_gpu 100` after starting the model with `ollama run`.
It will just offload as many layers as possible to your VRAM.

You can also do `ollama save ....` to save the model with the changed settings
(I also recommend setting the temperature to 0.15: `/set parameter temperature 0.15`).
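(A minimal sketch of that interactive-session workaround, assuming the library tag mentioned earlier in this thread; the saved model name is just a placeholder.)

```
ollama run mistral-small3.1:24b-instruct-2503-q4_K_M
>>> /set parameter num_gpu 100        # offload as many layers as possible to VRAM
>>> /set parameter temperature 0.15   # lower temperature, as recommended above
>>> /save mistral-small3.1-tuned      # keep these parameters as a new local model
```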

However, for me this model on the new Ollama engine still performs much worse than the text-only version; it seems like something is converted incorrectly. Or maybe it's the chat template handling by Ollama? Or something is just wrong with the Ollama API?

While testing in OpenWebUI the model just can't follow instructions correctly, is worse at math, and using tools is also a nightmare; it often goes into a loop of repeating itself and is generally less coherent compared to the text-only GGUF.


@rick-github commented on GitHub (Apr 7, 2025):

mistral-small v3.1 uses the new Go-based runner, which does memory allocation differently from the llama.cpp-based runner. The new runner [includes](https://github.com/ollama/ollama/issues/9791#issuecomment-2755958292) the size of the maximum computation graph, so it uses a lot more VRAM, resulting in more layers being pushed into system RAM. As mentioned above, this can be overridden by specifying `num_gpu` in an API call or Modelfile, or by using `/set` in the CLI.
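(For example, a hedged sketch of overriding the layer count through the REST API; the endpoint and the `num_gpu` option are standard Ollama API fields, while the model tag and prompt are just placeholders. The Modelfile equivalent would be a `PARAMETER num_gpu 100` line.)

```
curl http://localhost:11434/api/generate -d '{
  "model": "mistral-small3.1",
  "prompt": "Say hello.",
  "options": { "num_gpu": 100 }
}'
```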


@Colonial-Dev commented on GitHub (Apr 7, 2025):

Setting `num_gpu` works for text-only input, but if I attach an image, the runner immediately segfaults after failing to allocate an additional 4 gigabytes. I'm using the official (?) FP16 quantized to IQ4_XS, but I see similar behavior with the official quantized Q4_K_M. Lowering `num_gpu` eventually fixes it, but leads to dismally slow responses. Perhaps a separate issue?


@jessegross commented on GitHub (Apr 8, 2025):

@rick-github A little bit of a clarification:

  • Memory estimation (and therefore the number of layers offloaded) is the same between the old and new engines. Different models have different estimates, but conceptually things are the same. Memory estimates are based on the worst case, which typically includes images, max context, and max batch.
  • The old engine preallocates the worst case at startup, which means that `ollama ps` and `nvidia-smi` should match if the estimate is accurate.
  • The new runner does not currently do this (but this is [changing](https://github.com/ollama/ollama/pull/10171)). This means that `ollama ps` and `nvidia-smi` will not match until the worst case has been hit.
  • However, the above doesn't mean that the estimate is less accurate, that fewer layers are being offloaded, or that the new engine is using more VRAM. There's just a difference between worst case and currently allocated (a quick way to compare the two is sketched below).
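(A small sketch of that comparison; the `nvidia-smi` query flags are standard, and on AMD GPUs `rocm-smi` plays the same role.)

```
# Worst-case size Ollama has claimed for the loaded model
ollama ps

# Memory actually resident on the GPU right now (NVIDIA)
nvidia-smi --query-gpu=memory.used --format=csv
```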

@jessegross commented on GitHub (Apr 8, 2025):

@Colonial-Dev This is actually a sign that things are working properly. Images require significantly more memory to process, which Ollama needs to factor into its estimates. That's why, when you are not using an image, it looks like the estimate is too conservative, but if you force more memory to be used then it crashes.


@rick-github commented on GitHub (Apr 8, 2025):

@jessegross Thanks for the clarification.


@RubenMercadePrieto commented on GitHub (Apr 15, 2025):

Sorry, I'm struggling with this. I can use mistral-small3.1 fine with `num_gpu` set to, say, 41 on an RTX 4090, resulting in about 15 GB after doing image processing. The problem is that `ollama ps` still calculates 25-26 GB, which means on my RTX 4090 I cannot load anything else, not even a simple nomic embedding model, which makes it unusable. The funny part is that Gemma 3 is OK: despite being larger, `ollama ps` calculates 21 GB, so there is some memory to spare to add nomic or whatever else small fits in. Not sure if you can provide some advice on what to try. Thanks.


Reference: github-starred/ollama#68490