[GH-ISSUE #14386] Starting from version 0.15.5, the qwen3 series models are no longer available. #55860

Closed
opened 2026-04-29 09:49:05 -05:00 by GiteaMirror · 15 comments
Owner

Originally created by @pureGavin on GitHub (Feb 24, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14386

What is the issue?

Starting from version 0.15.5, running QWEN3 series models may result in the GPU not being utilized, with both reasoning and response generation being handled entirely by the CPU.

Relevant log output

Ollama itself still functions normally, but because the GPU is not being used, response times are extremely slow.

OS

Linux, Docker

GPU

Nvidia

CPU

Intel

Ollama version

Version 0.15.5 and later
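
For anyone reproducing this, two quick checks can confirm whether inference is actually running on the GPU (standard tools, nothing specific to this report):

ollama ps      # the PROCESSOR column shows how a loaded model is split between CPU and GPU
nvidia-smi     # confirms GPU memory usage and utilization from the driver side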

GiteaMirror added the bug label 2026-04-29 09:49:05 -05:00

@pureGavin commented on GitHub (Feb 24, 2026):

According to my testing, the same issue exists on the ARM-based GB10; additionally, qwen3-next takes a very long time to start up (these appear to be two separate issues :p).

@rick-github commented on GitHub (Feb 24, 2026):

Server logs will help in debugging.

@pureGavin commented on GitHub (Feb 25, 2026):

Server logs will help in debugging.

Sorry, perhaps I didn't explain clearly enough. Ollama itself is still running normally, but the model computes directly on the CPU, and the log file doesn't show any errors.

@rick-github commented on GitHub (Feb 25, 2026):

The logs will show why CPU is used instead of GPU.

@pureGavin commented on GitHub (Feb 25, 2026):

Hitting this kind of issue is exactly where model routing becomes valuable. When a specific model ID fails or returns unexpected errors, you often end up manually trying alternatives one by one.

A router like Komilion handles this automatically — it classifies each request and picks from 400+ models across providers. If one model fails or is unavailable, it falls through to the next qualified option without you changing anything in your client config.

One URL change if you're using any OpenAI-compatible client. Might be worth a look while this gets sorted.

Your response reminds me that I never encountered GPU downtime issues when using the GPT-OSS series models.

@pureGavin commented on GitHub (Feb 25, 2026):

The logs will show why CPU is used instead of GPU.

This is the log for Ollama version 0.17.0, using the model qwen3-next:80b-a3b-thinking-q4_K_M.

time=2026-02-25T05:47:07.393Z level=INFO source=routes.go:1663 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-02-25T05:47:07.393Z level=INFO source=routes.go:1665 msg="Ollama cloud disabled: false"
time=2026-02-25T05:47:07.394Z level=INFO source=images.go:473 msg="total blobs: 22"
time=2026-02-25T05:47:07.395Z level=INFO source=images.go:480 msg="total unused blobs removed: 0"
time=2026-02-25T05:47:07.395Z level=INFO source=routes.go:1718 msg="Listening on [::]:11434 (version 0.17.0)"
time=2026-02-25T05:47:07.395Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-02-25T05:47:07.396Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 33203"
time=2026-02-25T05:47:07.948Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43149"
time=2026-02-25T05:47:08.013Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-02-25T05:47:08.014Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37317"
time=2026-02-25T05:47:08.014Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 35705"
time=2026-02-25T05:47:08.692Z level=INFO source=types.go:42 msg="inference compute" id=GPU-585db7cc-5d7a-5531-c0db-eee123dfea0c filter_id="" library=CUDA compute=8.9 name=CUDA0 description="NVIDIA RTX 5880 Ada Generation" libdirs=ollama,cuda_v12 driver=12.8 pci_id=0000:03:00.0 type=discrete total="48.0 GiB" available="47.4 GiB"
time=2026-02-25T05:47:08.692Z level=INFO source=types.go:42 msg="inference compute" id=GPU-c8f78982-69cc-46a0-b29d-dcf5e8f07da0 filter_id="" library=CUDA compute=8.9 name=CUDA1 description="NVIDIA RTX 5880 Ada Generation" libdirs=ollama,cuda_v12 driver=12.8 pci_id=0000:03:02.0 type=discrete total="48.0 GiB" available="47.0 GiB"
time=2026-02-25T05:47:08.692Z level=INFO source=routes.go:1768 msg="vram-based default context" total_vram="96.0 GiB" default_num_ctx=262144
[GIN] 2026/02/25 - 05:47:30 | 200 |     155.942µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/02/25 - 05:47:31 | 200 |  1.083694679s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2026/02/25 - 05:48:12 | 200 |       65.64µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/02/25 - 05:48:12 | 200 |     4.19371ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/25 - 05:48:21 | 200 |      34.991µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/02/25 - 05:48:21 | 200 |  121.332463ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/02/25 - 05:48:21 | 200 |  109.357147ms |       127.0.0.1 | POST     "/api/show"
time=2026-02-25T05:48:21.528Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40999"
llama_model_loader: loaded meta data with 45 key-value pairs and 807 tensors from /root/.ollama/models/blobs/sha256-8476acca2ca7dc4dd86ad2e069cb270fdbd44287d9ff3006d86e9a54cc19acd1 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3next
llama_model_loader: - kv   1:                           general.basename str              = Qwen3-Next
llama_model_loader: - kv   2:                          general.file_type u32              = 15
llama_model_loader: - kv   3:                           general.finetune str              = Thinking
llama_model_loader: - kv   4:                            general.license str              = apache-2.0
llama_model_loader: - kv   5:                       general.license.link str              = https://huggingface.co/Qwen/Qwen3-Nex...
llama_model_loader: - kv   6:                               general.name str              = Qwen3 Next 80B A3B Thinking
llama_model_loader: - kv   7:                    general.parameter_count u64              = 79674391296
llama_model_loader: - kv   8:               general.quantization_version u32              = 2
llama_model_loader: - kv   9:                      general.sampling.temp f32              = 0.600000
llama_model_loader: - kv  10:                     general.sampling.top_k i32              = 20
llama_model_loader: - kv  11:                     general.sampling.top_p f32              = 0.950000
llama_model_loader: - kv  12:                         general.size_label str              = 80B-A3B
llama_model_loader: - kv  13:                               general.tags arr[str,1]       = ["text-generation"]
llama_model_loader: - kv  14:                               general.type str              = model
llama_model_loader: - kv  15:             qwen3next.attention.head_count u32              = 16
llama_model_loader: - kv  16:          qwen3next.attention.head_count_kv u32              = 2
llama_model_loader: - kv  17:             qwen3next.attention.key_length u32              = 256
llama_model_loader: - kv  18: qwen3next.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  19:           qwen3next.attention.value_length u32              = 256
llama_model_loader: - kv  20:                      qwen3next.block_count u32              = 48
llama_model_loader: - kv  21:                   qwen3next.context_length u32              = 262144
llama_model_loader: - kv  22:                 qwen3next.embedding_length u32              = 2048
llama_model_loader: - kv  23:                     qwen3next.expert_count u32              = 512
llama_model_loader: - kv  24:       qwen3next.expert_feed_forward_length u32              = 512
llama_model_loader: - kv  25: qwen3next.expert_shared_feed_forward_length u32              = 512
llama_model_loader: - kv  26:                qwen3next.expert_used_count u32              = 10
llama_model_loader: - kv  27:              qwen3next.feed_forward_length u32              = 5120
llama_model_loader: - kv  28:             qwen3next.rope.dimension_count u32              = 64
llama_model_loader: - kv  29:                   qwen3next.rope.freq_base f32              = 10000000.000000
llama_model_loader: - kv  30:                  qwen3next.ssm.conv_kernel u32              = 4
llama_model_loader: - kv  31:                  qwen3next.ssm.group_count u32              = 16
llama_model_loader: - kv  32:                   qwen3next.ssm.inner_size u32              = 4096
llama_model_loader: - kv  33:                   qwen3next.ssm.state_size u32              = 128
llama_model_loader: - kv  34:               qwen3next.ssm.time_step_rank u32              = 32
llama_model_loader: - kv  35:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  36:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  37:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  38:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  39:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  40:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  41:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  42:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  43:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  44:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - type  f32:  313 tensors
llama_model_loader: - type q4_K:  415 tensors
llama_model_loader: - type q6_K:   79 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 46.89 GiB (5.06 BPW) 
load: printing all EOG tokens:
load:   - 151643 ('<|endoftext|>')
load:   - 151645 ('<|im_end|>')
load:   - 151662 ('<|fim_pad|>')
load:   - 151663 ('<|repo_name|>')
load:   - 151664 ('<|file_sep|>')
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3next
print_info: vocab_only       = 1
print_info: no_alloc         = 0
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_n_group      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = ?B
print_info: model params     = 79.67 B
print_info: general.name     = Qwen3 Next 80B A3B Thinking
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2026-02-25T05:48:22.352Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8476acca2ca7dc4dd86ad2e069cb270fdbd44287d9ff3006d86e9a54cc19acd1 --port 37859"
time=2026-02-25T05:48:22.353Z level=INFO source=sched.go:491 msg="system memory" total="62.4 GiB" free="59.8 GiB" free_swap="47.6 GiB"
time=2026-02-25T05:48:22.353Z level=INFO source=sched.go:498 msg="gpu memory" id=GPU-585db7cc-5d7a-5531-c0db-eee123dfea0c library=CUDA available="46.9 GiB" free="47.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-25T05:48:22.353Z level=INFO source=sched.go:498 msg="gpu memory" id=GPU-c8f78982-69cc-46a0-b29d-dcf5e8f07da0 library=CUDA available="46.6 GiB" free="47.0 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-25T05:48:22.353Z level=INFO source=server.go:498 msg="loading model" "model layers"=49 requested=-1
time=2026-02-25T05:48:22.353Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="8.7 GiB"
time=2026-02-25T05:48:22.353Z level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="7.7 GiB"
time=2026-02-25T05:48:22.353Z level=INFO source=device.go:245 msg="model weights" device=CPU size="30.3 GiB"
time=2026-02-25T05:48:22.353Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="4.5 GiB"
time=2026-02-25T05:48:22.353Z level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="4.0 GiB"
time=2026-02-25T05:48:22.353Z level=INFO source=device.go:256 msg="kv cache" device=CPU size="15.5 GiB"
time=2026-02-25T05:48:22.353Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="32.0 GiB"
time=2026-02-25T05:48:22.353Z level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="32.0 GiB"
time=2026-02-25T05:48:22.353Z level=INFO source=device.go:272 msg="total memory" size="134.7 GiB"
time=2026-02-25T05:48:22.375Z level=INFO source=runner.go:965 msg="starting go runner"
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA RTX 5880 Ada Generation, compute capability 8.9, VMM: yes, ID: GPU-585db7cc-5d7a-5531-c0db-eee123dfea0c
  Device 1: NVIDIA RTX 5880 Ada Generation, compute capability 8.9, VMM: yes, ID: GPU-c8f78982-69cc-46a0-b29d-dcf5e8f07da0
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2026-02-25T05:48:22.545Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-02-25T05:48:22.545Z level=INFO source=runner.go:1001 msg="Server listening on 127.0.0.1:37859"
time=2026-02-25T05:48:22.555Z level=INFO source=runner.go:895 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Auto KvSize:262144 KvCacheType: NumThreads:16 GPULayers:17[ID:GPU-585db7cc-5d7a-5531-c0db-eee123dfea0c Layers:9(31..39) ID:GPU-c8f78982-69cc-46a0-b29d-dcf5e8f07da0 Layers:8(40..47)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-25T05:48:22.556Z level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-02-25T05:48:22.557Z level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_cuda_device_get_memory device GPU-585db7cc-5d7a-5531-c0db-eee123dfea0c utilizing NVML memory reporting free: 50871664640 total: 51527024640
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA RTX 5880 Ada Generation) (0000:03:00.0) - 48515 MiB free
ggml_backend_cuda_device_get_memory device GPU-c8f78982-69cc-46a0-b29d-dcf5e8f07da0 utilizing NVML memory reporting free: 50515869696 total: 51527024640
llama_model_load_from_file_impl: using device CUDA1 (NVIDIA RTX 5880 Ada Generation) (0000:03:02.0) - 48175 MiB free
llama_model_loader: loaded meta data with 45 key-value pairs and 807 tensors from /root/.ollama/models/blobs/sha256-8476acca2ca7dc4dd86ad2e069cb270fdbd44287d9ff3006d86e9a54cc19acd1 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3next
llama_model_loader: - kv   1:                           general.basename str              = Qwen3-Next
llama_model_loader: - kv   2:                          general.file_type u32              = 15
llama_model_loader: - kv   3:                           general.finetune str              = Thinking
llama_model_loader: - kv   4:                            general.license str              = apache-2.0
llama_model_loader: - kv   5:                       general.license.link str              = https://huggingface.co/Qwen/Qwen3-Nex...
llama_model_loader: - kv   6:                               general.name str              = Qwen3 Next 80B A3B Thinking
llama_model_loader: - kv   7:                    general.parameter_count u64              = 79674391296
llama_model_loader: - kv   8:               general.quantization_version u32              = 2
llama_model_loader: - kv   9:                      general.sampling.temp f32              = 0.600000
llama_model_loader: - kv  10:                     general.sampling.top_k i32              = 20
llama_model_loader: - kv  11:                     general.sampling.top_p f32              = 0.950000
llama_model_loader: - kv  12:                         general.size_label str              = 80B-A3B
llama_model_loader: - kv  13:                               general.tags arr[str,1]       = ["text-generation"]
llama_model_loader: - kv  14:                               general.type str              = model
llama_model_loader: - kv  15:             qwen3next.attention.head_count u32              = 16
llama_model_loader: - kv  16:          qwen3next.attention.head_count_kv u32              = 2
llama_model_loader: - kv  17:             qwen3next.attention.key_length u32              = 256
llama_model_loader: - kv  18: qwen3next.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  19:           qwen3next.attention.value_length u32              = 256
llama_model_loader: - kv  20:                      qwen3next.block_count u32              = 48
llama_model_loader: - kv  21:                   qwen3next.context_length u32              = 262144
llama_model_loader: - kv  22:                 qwen3next.embedding_length u32              = 2048
llama_model_loader: - kv  23:                     qwen3next.expert_count u32              = 512
llama_model_loader: - kv  24:       qwen3next.expert_feed_forward_length u32              = 512
llama_model_loader: - kv  25: qwen3next.expert_shared_feed_forward_length u32              = 512
llama_model_loader: - kv  26:                qwen3next.expert_used_count u32              = 10
llama_model_loader: - kv  27:              qwen3next.feed_forward_length u32              = 5120
llama_model_loader: - kv  28:             qwen3next.rope.dimension_count u32              = 64
llama_model_loader: - kv  29:                   qwen3next.rope.freq_base f32              = 10000000.000000
llama_model_loader: - kv  30:                  qwen3next.ssm.conv_kernel u32              = 4
llama_model_loader: - kv  31:                  qwen3next.ssm.group_count u32              = 16
llama_model_loader: - kv  32:                   qwen3next.ssm.inner_size u32              = 4096
llama_model_loader: - kv  33:                   qwen3next.ssm.state_size u32              = 128
llama_model_loader: - kv  34:               qwen3next.ssm.time_step_rank u32              = 32
llama_model_loader: - kv  35:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  36:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  37:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  38:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  39:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  40:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  41:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  42:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  43:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  44:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - type  f32:  313 tensors
llama_model_loader: - type q4_K:  415 tensors
llama_model_loader: - type q6_K:   79 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 46.89 GiB (5.06 BPW) 
load: printing all EOG tokens:
load:   - 151643 ('<|endoftext|>')
load:   - 151645 ('<|im_end|>')
load:   - 151662 ('<|fim_pad|>')
load:   - 151663 ('<|repo_name|>')
load:   - 151664 ('<|file_sep|>')
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3next
print_info: vocab_only       = 0
print_info: no_alloc         = 0
print_info: n_ctx_train      = 262144
print_info: n_embd           = 2048
print_info: n_embd_inp       = 2048
print_info: n_layer          = 48
print_info: n_head           = 16
print_info: n_head_kv        = 2
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 256
print_info: n_embd_head_v    = 256
print_info: n_gqa            = 8
print_info: n_embd_k_gqa     = 512
print_info: n_embd_v_gqa     = 512
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 5120
print_info: n_expert         = 512
print_info: n_expert_used    = 10
print_info: n_expert_groups  = 0
print_info: n_group_used     = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 262144
print_info: rope_yarn_log_mul= 0.0000
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 4
print_info: ssm_d_inner      = 4096
print_info: ssm_d_state      = 128
print_info: ssm_dt_rank      = 32
print_info: ssm_n_group      = 16
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 80B.A3B
print_info: model params     = 79.67 B
print_info: general.name     = Qwen3 Next 80B A3B Thinking
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 17 repeating layers to GPU
load_tensors: offloaded 17/49 layers to GPU
load_tensors:          CPU model buffer size =   166.92 MiB
load_tensors:        CUDA0 model buffer size =  8905.68 MiB
load_tensors:        CUDA1 model buffer size =  7890.13 MiB
load_tensors:    CUDA_Host model buffer size = 31050.32 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 262144
llama_context: n_ctx_seq     = 262144
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = auto
llama_context: kv_unified    = false
llama_context: freq_base     = 10000000.0
llama_context: freq_scale    = 1
llama_context:        CPU  output buffer size =     0.59 MiB
llama_kv_cache:        CPU KV buffer size =  3584.00 MiB
llama_kv_cache:      CUDA0 KV buffer size =  1536.00 MiB
llama_kv_cache:      CUDA1 KV buffer size =  1024.00 MiB
llama_kv_cache: size = 6144.00 MiB (262144 cells,  12 layers,  1/1 seqs), K (f16): 3072.00 MiB, V (f16): 3072.00 MiB
llama_memory_recurrent:        CPU RS buffer size =    50.25 MiB
llama_memory_recurrent:      CUDA0 RS buffer size =    12.56 MiB
llama_memory_recurrent:      CUDA1 RS buffer size =    12.56 MiB
llama_memory_recurrent: size =   75.38 MiB (     1 cells,  48 layers,  1 seqs), R (f32):    3.38 MiB, S (f32):   72.00 MiB
llama_context: Flash Attention was auto, set to enabled
llama_context:      CUDA0 compute buffer size =  1317.24 MiB
llama_context:      CUDA1 compute buffer size =   403.39 MiB
llama_context:  CUDA_Host compute buffer size =   528.15 MiB
llama_context: graph nodes  = 21554 (with bs=512), 6614 (with bs=1)
llama_context: graph splits = 636 (with bs=512), 54 (with bs=1)
time=2026-02-25T05:48:52.202Z level=INFO source=server.go:1388 msg="llama runner started in 29.85 seconds"
time=2026-02-25T05:48:52.202Z level=INFO source=sched.go:566 msg="loaded runners" count=1
time=2026-02-25T05:48:52.202Z level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-02-25T05:48:52.203Z level=INFO source=server.go:1388 msg="llama runner started in 29.85 seconds"
[GIN] 2026/02/25 - 05:48:52 | 200 | 30.823565936s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/02/25 - 05:49:03 | 200 |  5.508222515s |       127.0.0.1 | POST     "/api/chat"

@rick-github commented on GitHub (Feb 25, 2026):

time=2026-02-25T05:47:08.692Z level=INFO source=routes.go:1768 msg="vram-based default context" total_vram="96.0 GiB" default_num_ctx=262144

Since OLLAMA_CONTEXT_LENGTH is not set and the machine has 96G of VRAM, the default context is set to 256k.

load_tensors: offloaded 17/49 layers to GPU

As a result, only 17 of the 49 layers fit on the GPUs; the rest are offloaded to the CPU, resulting in slower inference.

#14116
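
For anyone affected, the default can also be overridden per request by passing num_ctx in the request options (a sketch, using the model tag from the logs above):

curl http://localhost:11434/api/generate -d '{
  "model": "qwen3-next:80b-a3b-thinking-q4_K_M",
  "prompt": "hello",
  "options": { "num_ctx": 8192 }
}'

With a smaller context, the KV cache and compute buffers shrink, so more layers should fit on the GPUs.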

@pureGavin commented on GitHub (Feb 26, 2026):

time=2026-02-25T05:47:08.692Z level=INFO source=routes.go:1768 msg="vram-based default context" total_vram="96.0 GiB" default_num_ctx=262144

Since OLLAMA_CONTEXT_LENGTH is not set and the machine has 96G of VRAM, the default context is set to 256K.

load_tensors: offloaded 17/49 layers to GPU

As a result, only 17 layers fit on the GPU; the remaining layers are offloaded to the CPU, resulting in slower inference.

#14116

If the issue stems from an excessively long default context preventing full GPU offload, why doesn't this problem occur in Ollama version 0.15.4?

@rick-github commented on GitHub (Feb 26, 2026):

Because the tiered context length feature was added in 0.15.5.
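
(This is the "vram-based default context" line in the log above: with 96 GiB of total VRAM the server picks default_num_ctx=262144, which is what pushes most of the layers off the GPUs.)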

@pureGavin commented on GitHub (Feb 27, 2026):

Because the tiered context length feature was added in 0.15.5.

Is there any way I can disable this feature? Since I downloaded the qwen3 model directly from Ollama's official website, I cannot directly modify the context length by editing the modelfile. Additionally, I've observed a peculiar phenomenon: when using the same model on a GB10 compute card, the latest Ollama version doesn't exhibit any issues. Could it be that NVIDIA has implemented some special optimization?
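
(For what it's worth, a model pulled from the registry can still be wrapped in a local Modelfile that pins the context length; a sketch, where qwen3-8k is an arbitrary name:

FROM qwen3-next:80b-a3b-thinking-q4_K_M
PARAMETER num_ctx 8192

then ollama create qwen3-8k -f Modelfile.)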

@pureGavin commented on GitHub (Feb 27, 2026):

Because the tiered context length feature was added in 0.15.5.

Is there any way I can disable this feature? Since I downloaded the qwen3 model directly from Ollama's official website, I cannot directly modify the context length by editing the modelfile. Additionally, I've observed a peculiar phenomenon: when using the same model on a GB10 compute card, the latest Ollama version doesn't exhibit any issues. Could it be that NVIDIA has implemented some special optimization?

I've observed a peculiar phenomenon: with the qwen3-next:80b-a3b-thinking-q4_K_M model, everything runs perfectly fine and the GPU stays active. However, when I run qwen3-vl:32b-thinking-q8_0, only part of the model loads onto the GPU and the rest of the processing happens entirely on the CPU. Notably, the VRAM on my GB10 card isn't fully utilized.

@rick-github commented on GitHub (Feb 27, 2026):

Is there any way I can disable this feature?

Set OLLAMA_CONTEXT_LENGTH.
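
For a native (non-Docker) install, that means putting the variable in the server's environment before it starts, for example:

OLLAMA_CONTEXT_LENGTH=8192 ollama serve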

@pureGavin commented on GitHub (Feb 28, 2026):

Is there any way I can disable this feature?

Set OLLAMA_CONTEXT_LENGTH.

I tried it, but it didn't work. However, I set it with export OLLAMA_CONTEXT_LENGTH inside the Docker container. Strangely, on the GB10 compute card, the issue I described above doesn't occur at all. :(

@rick-github commented on GitHub (Mar 9, 2026):

Set OLLAMA_CONTEXT_LENGTH in the environment of the server.

docker run -e OLLAMA_CONTEXT_LENGTH=4096 ollama/ollama
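
The variable has to be in the server process's environment when the container starts; exporting it from a shell opened afterwards does not affect the already-running server, which is likely why the earlier attempt had no effect. For contrast:

docker exec -it ollama bash -c 'export OLLAMA_CONTEXT_LENGTH=4096'   # no effect: the server is already running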

@pureGavin commented on GitHub (Mar 16, 2026):

Set OLLAMA_CONTEXT_LENGTH in the environment of the server.

docker run -e OLLAMA_CONTEXT_LENGTH=4096 ollama/ollama

As I mentioned earlier, this doesn’t work, but fortunately, Qwen has released a new model, and it works just fine.
However, I’ve always wondered why the same version has this problem on x86 machines but works perfectly on ARM machines.

Reference: github-starred/ollama#55860