[GH-ISSUE #11688] Low token/s on GPT-OSS:20B MXFP4 with 4070 #33493

Closed
opened 2026-04-22 16:14:16 -05:00 by GiteaMirror · 14 comments

Originally created by @azomDev on GitHub (Aug 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11688

What is the issue?

I'm getting significantly lower throughput than expected running the GPT-OSS:20B model in MXFP4 with a 12 GB RTX 4070 on Ollama. (I'm not using any additional settings or environment variables.)

  • Model: gpt-oss:20b (MXFP4 quantization)
  • GPU: RTX 4070 12 GB (desktop)
  • Reported model size: 18 GB
  • CPU/GPU split: 47%/53% (CPU: Ryzen 7 7800X3D)
  • Context length: 8192
  • VRAM usage: ~8–9 GB
  • Token throughput: ~10 tokens/sec

Expected behavior:

  • Other users report ~86 tokens/sec on an M1 Ultra using llama-bench.
  • Another user reported ~221 tokens/sec on an RTX 5090 in LM Studio with FlashAttention enabled.
  • Edit: another user reported ~25 t/s for the 120B model on a 3090.
  • Another edit: someone else reported "using RTX 3060 12GB GPU it starts with 27 t/s with LM Studio".
  • For comparison, I'm getting ~25 tokens/sec on the qwen3:30b-a3.5 model.

Let me know what other debug info would help; I'm happy to provide logs or test alternative configs.

Thanks!
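For anyone trying to reproduce these numbers, a minimal way to capture per-request throughput (a sketch assuming a standard Ollama install) is the CLI's `--verbose` flag, which prints token counts and eval rates after each response; the prompt below is just an illustrative placeholder:

```shell
# Run the model once with timing output; the summary printed after the response
# includes "prompt eval rate" and "eval rate" in tokens/s, which is the number
# being compared throughout this thread.
ollama run gpt-oss:20b --verbose "Explain MXFP4 quantization in two sentences."
```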

Relevant log output

Aug 05 16:02:28 fedora ollama[307353]: [GIN] 2025/08/05 - 16:02:28 | 200 |   71.800321ms |       127.0.0.1 | POST     "/api/show"
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.371-04:00 level=INFO source=server.go:135 msg="system memory" total="30.5 GiB" free="22.4 GiB" free_swap="5.8 GiB"
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.372-04:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=9 layers.split="" memory.available="[9.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="16.9 GiB" memory.required.partial="9.0 GiB" memory.required.kv="300.0 MiB" memory.required.allocations="[9.0 GiB]" memory.weights.total="11.7 GiB" memory.weights.repeating="10.7 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="2.0 GiB" memory.graph.partial="4.0 GiB"
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.406-04:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 --ctx-size 8192 --batch-size 512 --n-gpu-layers 9 --threads 8 --parallel 1 --port 40799"
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.407-04:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.407-04:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.407-04:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.415-04:00 level=INFO source=runner.go:925 msg="starting ollama engine"
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.415-04:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:40799"
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.457-04:00 level=INFO source=ggml.go:92 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=315 num_key_values=30
Aug 05 16:02:28 fedora ollama[307353]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Aug 05 16:02:28 fedora ollama[307353]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Aug 05 16:02:28 fedora ollama[307353]: ggml_cuda_init: found 1 CUDA devices:
Aug 05 16:02:28 fedora ollama[307353]:   Device 0: NVIDIA GeForce RTX 4070, compute capability 8.9, VMM: yes
Aug 05 16:02:28 fedora ollama[307353]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/libggml-cuda.so
Aug 05 16:02:28 fedora ollama[307353]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.527-04:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.603-04:00 level=INFO source=ggml.go:367 msg="offloading 9 repeating layers to GPU"
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.603-04:00 level=INFO source=ggml.go:371 msg="offloading output layer to CPU"
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.603-04:00 level=INFO source=ggml.go:378 msg="offloaded 9/25 layers to GPU"
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.603-04:00 level=INFO source=ggml.go:381 msg="model weights" buffer=CPU size="8.8 GiB"
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.603-04:00 level=INFO source=ggml.go:381 msg="model weights" buffer=CUDA0 size="4.0 GiB"
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.658-04:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.668-04:00 level=INFO source=ggml.go:672 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="2.1 GiB"
Aug 05 16:02:28 fedora ollama[307353]: time=2025-08-05T16:02:28.668-04:00 level=INFO source=ggml.go:672 msg="compute graph" backend=CPU buffer_type=CPU size="2.0 GiB"
Aug 05 16:02:30 fedora ollama[307353]: time=2025-08-05T16:02:30.166-04:00 level=INFO source=server.go:637 msg="llama runner started in 1.76 seconds"
Aug 05 16:02:30 fedora ollama[307353]: [GIN] 2025/08/05 - 16:02:30 | 200 |  2.159368886s |       127.0.0.1 | POST     "/api/generate"
Aug 05 16:03:52 fedora ollama[307353]: [GIN] 2025/08/05 - 16:03:52 | 200 |         1m20s |       127.0.0.1 | POST     "/api/chat"

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.11.0

GiteaMirror added the bug label 2026-04-22 16:14:16 -05:00

@mohammedgomaa commented on GitHub (Aug 5, 2025):

It's running on CPU. It takes 24 GB for me to run, with flash attention off and ctx 8k.

@jhsmith409 commented on GitHub (Aug 6, 2025):

On an RTX 5090, it is running almost entirely on CPU. It only stores the KV cache on the GPU, uses just 1% of the GPU, and keeps all 20 CPU cores busy. That gets it about 6 tokens/second.

Diagnosed with Claude and it says: "The gpt-oss:20b model runs on CPU because it uses MXFP4 quantization, which likely lacks GPU acceleration support in Ollama. Your working models (gemma3, qwen3) use standard quantizations (Q4_0, etc.) that have full GPU support.

Solution: Look for a gpt-oss model with standard quantization (Q4_K_M, Q5_K_M, Q8_0) for GPU acceleration, or wait for MXFP4 GPU support in future Ollama updates."

Running Ollama 0.11.3, CUDA 12.9.1, Ubuntu 24.04.2 LTS
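As a cross-check on reports like this one, here is a hedged sketch of how to confirm how much of a loaded model actually sits on the GPU; it assumes a standard Linux install with the NVIDIA driver tools available:

```shell
# Show loaded models; the PROCESSOR column reports the split,
# e.g. "100% GPU" or "47%/53% CPU/GPU".
ollama ps

# Poll GPU memory use and utilization once per second while a generation runs.
nvidia-smi --query-gpu=memory.used,utilization.gpu --format=csv -l 1
```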

@markvandeven commented on GitHub (Aug 6, 2025):

Did you try running the model in LM Studio? I seem to have the same problem (on a 4070), but in my log it is loading 12/25 layers to the GPU. I'm trying to find some way to force it to load more; there is plenty of memory left.

For me, LM Studio is a lot quicker.
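One workaround sometimes used when the automatic estimate under-offloads (a sketch, not something confirmed in this thread) is to set the `num_gpu` option manually, which tells Ollama how many layers to place on the GPU. The layer count below is an arbitrary example; setting it too high will fail to allocate or spill into shared memory, so increase it gradually:

```shell
# Per-request override via the HTTP API (num_gpu = number of layers on the GPU).
curl http://localhost:11434/api/generate -d '{
  "model": "gpt-oss:20b",
  "prompt": "Hello",
  "options": { "num_gpu": 20 }
}'

# Or interactively inside `ollama run gpt-oss:20b`:
#   /set parameter num_gpu 20
```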

@azomDev commented on GitHub (Aug 6, 2025):

[Image: https://github.com/user-attachments/assets/28930741-b22a-43cf-a680-7698f2f33dac]

In LM Studio I get ~25 t/s, so about the same as qwen3:30b-a3.5.

It's using basically all of my VRAM, and GPU utilization is higher than when using Ollama (~6–8%).

@jessegross commented on GitHub (Aug 6, 2025):

There was a bug in 0.11.2 and below where the memory estimation would become too high for gpt-oss if the model needed to be split across GPU and CPU or multiple GPUs. This causes fewer layers to be offloaded (up to 100% CPU) once the model overflows a single GPU. Fewer layers on the GPU will lower performance.

This is fixed in 0.11.3.

@azomDev commented on GitHub (Aug 6, 2025):

[Image: https://github.com/user-attachments/assets/eed3c9dd-5a26-4fff-a6bb-4624d1461105]

Updated Ollama and re-downloaded the model just in case. My GPU VRAM is now full (so a bit better than before), but GPU utilization is still less than 10%. I'm still at ~10 t/s, so 0.11.3 did not fix my issue.
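To see what actually changed after the upgrade, the offload summary in the server log is the most direct signal; a sketch assuming the systemd install shown in the original report:

```shell
# Pull the layer-offload and weight-placement lines from the most recent model load,
# e.g. "offloaded 9/25 layers to GPU" and the CPU/CUDA0 "model weights" buffers.
journalctl -u ollama --since "10 minutes ago" | grep -E "offloaded|model weights"
```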

@SecureBot commented on GitHub (Aug 7, 2025):

On an A6000

If I set top_p=1 and top_k=0 (the recommended settings), I get only 24 t/s.
If I set top_p=0.9 and top_k=40, I get 58 t/s.
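For anyone comparing the same two configurations, the sampling options can be passed per request. This is a sketch; one plausible (unconfirmed) explanation for the gap is that top_k=0 disables top-k filtering, so sampling runs over the full vocabulary, while top_k=40 restricts it to a small candidate set:

```shell
# The faster configuration reported above, passed as per-request options.
curl http://localhost:11434/api/chat -d '{
  "model": "gpt-oss:20b",
  "messages": [{ "role": "user", "content": "Hello" }],
  "options": { "top_p": 0.9, "top_k": 40 }
}'
```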

@Jonseed commented on GitHub (Aug 7, 2025):

I'm seeing a similar slowdown on Ollama with my 3060 12gb, where I only get about 4 t/s, which is almost unusable. In LM Studio I'm getting up to 13+ t/s, offloading 20 layers out of 24.

@markvandeven commented on GitHub (Aug 8, 2025):

You may want to run again using a newer version. I updated this morning from 0.11.2 to 0.11.4 and now see 19/25 instead of 12/25 layers offloaded to the GPU, without any other configuration changes. In other threads I have seen some reports with 0.11.2, so this might help.

The other issue is that a 4070 has 12 GB of VRAM; I think this model needs 16 GB to be fully offloaded.

@F1shez commented on GitHub (Aug 8, 2025):

I tried 0.11.2 and 0.11.4 on a 4070 Super (latest Studio driver) and I get 10 t/s and 17 t/s respectively. But in LM Studio I get 28.57 t/s.

@azomDev commented on GitHub (Aug 8, 2025):

> the other issue still is that a 4070 has 12gb ram, i think this model needs 16 to be fully offloaded.

LM Studio performs better, so VRAM/GPU isn't the problem; the issue is only in Ollama. I've already tried every version up to 0.11.3 and it was still not fixed. I'll try 0.11.4 once I'm back later in the week (or whatever version is out by then), but looking at @F1shez's results, it seems it still isn't fixed.

@Jonseed commented on GitHub (Aug 8, 2025):

On my 3060, Ollama 0.11.4, I'm getting about 5 tokens per second. On LM Studio I'm getting almost 14 tokens per second, a nearly 3x speed boost.

@F1shez commented on GitHub (Aug 20, 2025):

In 0.11.5 I get 28 tokens/s!

@azomDev commented on GitHub (Aug 20, 2025):

I'm getting ~22.75 t/s with 0.11.5! Not exactly the ~25 t/s I was getting in LM Studio, but I'd say close enough for me. I'll close this since there are quite a few other threads about this that are still open.

Reference: github-starred/ollama#33493