[GH-ISSUE #14308] Ollama not offloading to GPU #55825

Closed
opened 2026-04-29 09:46:40 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @thisIsLoading on GitHub (Feb 18, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14308

What is the issue?

It's very slow, and according to the log there is 0 GPU offloading.
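
The log below appears to be captured from the systemd journal. A minimal way to pull just the offload-related lines (a sketch, assuming the service unit is named `ollama`, which matches the log prefix):

```shell
# Filter the journal for the scheduler's offload decision and memory accounting
journalctl -u ollama | grep -E "offloaded|gpu memory|compute graph"
```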

Relevant log output

Feb 18 19:17:13 pm-quant-linux-gpu ollama[279305]: [GIN] 2026/02/18 - 19:17:13 | 200 |       78.93µs |       127.0.0.1 | HEAD     "/"
Feb 18 19:17:13 pm-quant-linux-gpu ollama[279305]: [GIN] 2026/02/18 - 19:17:13 | 200 |   104.09099ms |       127.0.0.1 | POST     "/api/show"
Feb 18 19:17:13 pm-quant-linux-gpu ollama[279305]: [GIN] 2026/02/18 - 19:17:13 | 200 |  102.815315ms |       127.0.0.1 | POST     "/api/show"
Feb 18 19:17:13 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:13.665Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 37513"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: loaded meta data with 45 key-value pairs and 807 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-8476acca2ca7dc4dd86ad2e069cb270fdbd44287d9ff3006d86e9a54cc19acd1 (version GGUF V3 (latest))
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   0:                       general.architecture str              = qwen3next
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   1:                           general.basename str              = Qwen3-Next
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   2:                          general.file_type u32              = 15
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   3:                           general.finetune str              = Thinking
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   4:                            general.license str              = apache-2.0
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   5:                       general.license.link str              = https://huggingface.co/Qwen/Qwen3-Nex...
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   6:                               general.name str              = Qwen3 Next 80B A3B Thinking
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   7:                    general.parameter_count u64              = 79674391296
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   8:               general.quantization_version u32              = 2
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   9:                      general.sampling.temp f32              = 0.600000
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  10:                     general.sampling.top_k i32              = 20
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  11:                     general.sampling.top_p f32              = 0.950000
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  12:                         general.size_label str              = 80B-A3B
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  13:                               general.tags arr[str,1]       = ["text-generation"]
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  14:                               general.type str              = model
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  15:             qwen3next.attention.head_count u32              = 16
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  16:          qwen3next.attention.head_count_kv u32              = 2
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  17:             qwen3next.attention.key_length u32              = 256
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  18: qwen3next.attention.layer_norm_rms_epsilon f32              = 0.000001
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  19:           qwen3next.attention.value_length u32              = 256
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  20:                      qwen3next.block_count u32              = 48
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  21:                   qwen3next.context_length u32              = 262144
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  22:                 qwen3next.embedding_length u32              = 2048
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  23:                     qwen3next.expert_count u32              = 512
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  24:       qwen3next.expert_feed_forward_length u32              = 512
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  25: qwen3next.expert_shared_feed_forward_length u32              = 512
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  26:                qwen3next.expert_used_count u32              = 10
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  27:              qwen3next.feed_forward_length u32              = 5120
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  28:             qwen3next.rope.dimension_count u32              = 64
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  29:                   qwen3next.rope.freq_base f32              = 10000000.000000
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  30:                  qwen3next.ssm.conv_kernel u32              = 4
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  31:                  qwen3next.ssm.group_count u32              = 16
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  32:                   qwen3next.ssm.inner_size u32              = 4096
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  33:                   qwen3next.ssm.state_size u32              = 128
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  34:               qwen3next.ssm.time_step_rank u32              = 32
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  35:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  36:               tokenizer.ggml.add_bos_token bool             = false
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  37:                tokenizer.ggml.bos_token_id u32              = 151643
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  38:                tokenizer.ggml.eos_token_id u32              = 151645
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  39:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  40:                       tokenizer.ggml.model str              = gpt2
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  41:            tokenizer.ggml.padding_token_id u32              = 151643
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  42:                         tokenizer.ggml.pre str              = qwen2
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  43:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  44:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - type  f32:  313 tensors
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - type q4_K:  415 tensors
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - type q6_K:   79 tensors
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: file format = GGUF V3 (latest)
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: file type   = Q4_K - Medium
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: file size   = 46.89 GiB (5.06 BPW)
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: load: printing all EOG tokens:
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: load:   - 151643 ('<|endoftext|>')
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: load:   - 151645 ('<|im_end|>')
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: load:   - 151662 ('<|fim_pad|>')
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: load:   - 151663 ('<|repo_name|>')
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: load:   - 151664 ('<|file_sep|>')
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: load: special tokens cache size = 26
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: load: token to piece cache size = 0.9311 MB
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: arch             = qwen3next
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: vocab_only       = 1
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: no_alloc         = 0
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: ssm_d_conv       = 0
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: ssm_d_inner      = 0
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: ssm_d_state      = 0
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: ssm_dt_rank      = 0
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: ssm_n_group      = 0
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: ssm_dt_b_c_rms   = 0
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: model type       = ?B
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: model params     = 79.67 B
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: general.name     = Qwen3 Next 80B A3B Thinking
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: vocab type       = BPE
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: n_vocab          = 151936
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: n_merges         = 151387
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: BOS token        = 151643 '<|endoftext|>'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: EOS token        = 151645 '<|im_end|>'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: EOT token        = 151645 '<|im_end|>'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: PAD token        = 151643 '<|endoftext|>'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: LF token         = 198 'Ċ'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: FIM MID token    = 151660 '<|fim_middle|>'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: FIM PAD token    = 151662 '<|fim_pad|>'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: FIM REP token    = 151663 '<|repo_name|>'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: FIM SEP token    = 151664 '<|file_sep|>'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: EOG token        = 151643 '<|endoftext|>'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: EOG token        = 151645 '<|im_end|>'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: EOG token        = 151662 '<|fim_pad|>'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: EOG token        = 151663 '<|repo_name|>'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: EOG token        = 151664 '<|file_sep|>'
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: print_info: max token length = 256
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: llama_model_load: vocab only - skipping tensors
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.968Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-8476acca2ca7dc4dd86ad2e069cb270fdbd44287d9ff3006d86e9a54cc19acd1 --port 39231"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.969Z level=INFO source=sched.go:466 msg="system memory" total="125.8 GiB" free="118.5 GiB" free_swap="8.0 GiB"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.969Z level=INFO source=sched.go:473 msg="gpu memory" id=GPU-44097d7e-c563-f880-9cf5-ce70d39afff7 library=CUDA available="22.4 GiB" free="22.8 GiB" minimum="457.0 MiB" overhead="0 B"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.969Z level=INFO source=sched.go:473 msg="gpu memory" id=GPU-5c930cf7-2c86-74de-46df-c019f9195587 library=CUDA available="21.8 GiB" free="22.3 GiB" minimum="457.0 MiB" overhead="0 B"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.969Z level=INFO source=sched.go:473 msg="gpu memory" id=GPU-875d4f24-dc2d-35c2-570c-89a3947c99a2 library=CUDA available="22.5 GiB" free="22.9 GiB" minimum="457.0 MiB" overhead="0 B"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.969Z level=INFO source=sched.go:473 msg="gpu memory" id=GPU-4bf9249d-1fe5-a57b-f409-8ff972b166c3 library=CUDA available="22.7 GiB" free="23.1 GiB" minimum="457.0 MiB" overhead="0 B"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.969Z level=INFO source=sched.go:473 msg="gpu memory" id=GPU-f1fd89c2-2b4d-d195-3726-8ddd04d71f1d library=CUDA available="22.7 GiB" free="23.1 GiB" minimum="457.0 MiB" overhead="0 B"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.969Z level=INFO source=sched.go:473 msg="gpu memory" id=GPU-4cc00a37-cdc7-b901-fc7e-29d63adf752b library=CUDA available="22.7 GiB" free="23.1 GiB" minimum="457.0 MiB" overhead="0 B"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.969Z level=INFO source=server.go:498 msg="loading model" "model layers"=49 requested=-1
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.970Z level=INFO source=device.go:245 msg="model weights" device=CPU size="46.7 GiB"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.971Z level=INFO source=device.go:256 msg="kv cache" device=CPU size="24.0 GiB"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.971Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="32.0 GiB"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.971Z level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="32.0 GiB"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.971Z level=INFO source=device.go:262 msg="compute graph" device=CUDA2 size="32.0 GiB"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.971Z level=INFO source=device.go:262 msg="compute graph" device=CUDA3 size="32.0 GiB"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.971Z level=INFO source=device.go:262 msg="compute graph" device=CUDA4 size="32.0 GiB"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.971Z level=INFO source=device.go:262 msg="compute graph" device=CUDA5 size="32.0 GiB"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.971Z level=INFO source=device.go:272 msg="total memory" size="262.7 GiB"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.984Z level=INFO source=runner.go:965 msg="starting go runner"
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: ggml_cuda_init: found 6 CUDA devices:
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]:   Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-44097d7e-c563-f880-9cf5-ce70d39afff7
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]:   Device 1: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-5c930cf7-2c86-74de-46df-c019f9195587
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]:   Device 2: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-875d4f24-dc2d-35c2-570c-89a3947c99a2
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]:   Device 3: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-4bf9249d-1fe5-a57b-f409-8ff972b166c3
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]:   Device 4: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-f1fd89c2-2b4d-d195-3726-8ddd04d71f1d
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]:   Device 5: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-4cc00a37-cdc7-b901-fc7e-29d63adf752b
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:15.107Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 CUDA.3.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.3.USE_GRAPHS=1 CUDA.3.PEER_MAX_BATCH_SIZE=128 CUDA.4.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.4.USE_GRAPHS=1 CUDA.4.PEER_MAX_BATCH_SIZE=128 CUDA.5.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.5.USE_GRAPHS=1 CUDA.5.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:15.108Z level=INFO source=runner.go:1001 msg="Server listening on 127.0.0.1:39231"
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:15.110Z level=INFO source=runner.go:895 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Auto KvSize:262144 KvCacheType: NumThreads:24 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:15.111Z level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:15.111Z level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-44097d7e-c563-f880-9cf5-ce70d39afff7 utilizing NVML memory reporting free: 24526127104 total: 25757220864
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090) (0000:01:00.0) - 23389 MiB free
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-5c930cf7-2c86-74de-46df-c019f9195587 utilizing NVML memory reporting free: 23924703232 total: 25757220864
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 4090) (0000:02:00.0) - 22816 MiB free
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-875d4f24-dc2d-35c2-570c-89a3947c99a2 utilizing NVML memory reporting free: 24613421056 total: 25757220864
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_load_from_file_impl: using device CUDA2 (NVIDIA GeForce RTX 4090) (0000:03:00.0) - 23473 MiB free
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-4bf9249d-1fe5-a57b-f409-8ff972b166c3 utilizing NVML memory reporting free: 24834473984 total: 25757220864
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_load_from_file_impl: using device CUDA3 (NVIDIA GeForce RTX 4090) (0000:04:00.0) - 23684 MiB free
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-f1fd89c2-2b4d-d195-3726-8ddd04d71f1d utilizing NVML memory reporting free: 24834473984 total: 25757220864
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_load_from_file_impl: using device CUDA4 (NVIDIA GeForce RTX 4090) (0000:05:00.0) - 23684 MiB free
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-4cc00a37-cdc7-b901-fc7e-29d63adf752b utilizing NVML memory reporting free: 24834473984 total: 25757220864
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_load_from_file_impl: using device CUDA5 (NVIDIA GeForce RTX 4090) (0000:06:00.0) - 23684 MiB free
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: loaded meta data with 45 key-value pairs and 807 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-8476acca2ca7dc4dd86ad2e069cb270fdbd44287d9ff3006d86e9a54cc19acd1 (version GGUF V3 (latest))
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   0:                       general.architecture str              = qwen3next
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   1:                           general.basename str              = Qwen3-Next
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   2:                          general.file_type u32              = 15
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   3:                           general.finetune str              = Thinking
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   4:                            general.license str              = apache-2.0
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   5:                       general.license.link str              = https://huggingface.co/Qwen/Qwen3-Nex...
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   6:                               general.name str              = Qwen3 Next 80B A3B Thinking
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   7:                    general.parameter_count u64              = 79674391296
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   8:               general.quantization_version u32              = 2
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv   9:                      general.sampling.temp f32              = 0.600000
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  10:                     general.sampling.top_k i32              = 20
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  11:                     general.sampling.top_p f32              = 0.950000
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  12:                         general.size_label str              = 80B-A3B
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  13:                               general.tags arr[str,1]       = ["text-generation"]
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  14:                               general.type str              = model
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  15:             qwen3next.attention.head_count u32              = 16
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  16:          qwen3next.attention.head_count_kv u32              = 2
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  17:             qwen3next.attention.key_length u32              = 256
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  18: qwen3next.attention.layer_norm_rms_epsilon f32              = 0.000001
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  19:           qwen3next.attention.value_length u32              = 256
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  20:                      qwen3next.block_count u32              = 48
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  21:                   qwen3next.context_length u32              = 262144
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  22:                 qwen3next.embedding_length u32              = 2048
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  23:                     qwen3next.expert_count u32              = 512
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  24:       qwen3next.expert_feed_forward_length u32              = 512
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  25: qwen3next.expert_shared_feed_forward_length u32              = 512
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  26:                qwen3next.expert_used_count u32              = 10
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  27:              qwen3next.feed_forward_length u32              = 5120
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  28:             qwen3next.rope.dimension_count u32              = 64
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  29:                   qwen3next.rope.freq_base f32              = 10000000.000000
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  30:                  qwen3next.ssm.conv_kernel u32              = 4
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  31:                  qwen3next.ssm.group_count u32              = 16
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  32:                   qwen3next.ssm.inner_size u32              = 4096
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  33:                   qwen3next.ssm.state_size u32              = 128
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  34:               qwen3next.ssm.time_step_rank u32              = 32
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  35:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  36:               tokenizer.ggml.add_bos_token bool             = false
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  37:                tokenizer.ggml.bos_token_id u32              = 151643
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  38:                tokenizer.ggml.eos_token_id u32              = 151645
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  39:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  40:                       tokenizer.ggml.model str              = gpt2
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  41:            tokenizer.ggml.padding_token_id u32              = 151643
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  42:                         tokenizer.ggml.pre str              = qwen2
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  43:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv  44:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - type  f32:  313 tensors
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - type q4_K:  415 tensors
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - type q6_K:   79 tensors
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: print_info: file format = GGUF V3 (latest)
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: print_info: file type   = Q4_K - Medium
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: print_info: file size   = 46.89 GiB (5.06 BPW)
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load: printing all EOG tokens:
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load:   - 151643 ('<|endoftext|>')
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load:   - 151645 ('<|im_end|>')
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load:   - 151662 ('<|fim_pad|>')
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load:   - 151663 ('<|repo_name|>')
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load:   - 151664 ('<|file_sep|>')
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load: special tokens cache size = 26
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load: token to piece cache size = 0.9311 MB
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: arch             = qwen3next
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: vocab_only       = 0
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: no_alloc         = 0
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_ctx_train      = 262144
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_embd           = 2048
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_embd_inp       = 2048
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_layer          = 48
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_head           = 16
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_head_kv        = 2
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_rot            = 64
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_swa            = 0
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: is_swa_any       = 0
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_embd_head_k    = 256
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_embd_head_v    = 256
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_gqa            = 8
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_embd_k_gqa     = 512
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_embd_v_gqa     = 512
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: f_norm_eps       = 0.0e+00
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: f_norm_rms_eps   = 1.0e-06
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: f_clamp_kqv      = 0.0e+00
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: f_max_alibi_bias = 0.0e+00
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: f_logit_scale    = 0.0e+00
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: f_attn_scale     = 0.0e+00
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_ff             = 5120
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_expert         = 512
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_expert_used    = 10
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_expert_groups  = 0
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_group_used     = 0
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: causal attn      = 1
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: pooling type     = 0
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: rope type        = 2
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: rope scaling     = linear
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: freq_base_train  = 10000000.0
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: freq_scale_train = 1
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_ctx_orig_yarn  = 262144
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: rope_yarn_log_mul= 0.0000
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: rope_finetuned   = unknown
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: ssm_d_conv       = 4
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: ssm_d_inner      = 4096
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: ssm_d_state      = 128
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: ssm_dt_rank      = 32
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: ssm_n_group      = 16
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: ssm_dt_b_c_rms   = 0
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: model type       = 80B.A3B
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: model params     = 79.67 B
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: general.name     = Qwen3 Next 80B A3B Thinking
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: vocab type       = BPE
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_vocab          = 151936
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_merges         = 151387
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: BOS token        = 151643 '<|endoftext|>'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: EOS token        = 151645 '<|im_end|>'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: EOT token        = 151645 '<|im_end|>'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: PAD token        = 151643 '<|endoftext|>'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: LF token         = 198 'Ċ'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: FIM MID token    = 151660 '<|fim_middle|>'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: FIM PAD token    = 151662 '<|fim_pad|>'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: FIM REP token    = 151663 '<|repo_name|>'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: FIM SEP token    = 151664 '<|file_sep|>'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: EOG token        = 151643 '<|endoftext|>'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: EOG token        = 151645 '<|im_end|>'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: EOG token        = 151662 '<|fim_pad|>'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: EOG token        = 151663 '<|repo_name|>'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: EOG token        = 151664 '<|file_sep|>'
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: max token length = 256
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load_tensors: loading model tensors, this can take a while... (mmap = false)
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-44097d7e-c563-f880-9cf5-ce70d39afff7 utilizing NVML memory reporting free: 24117903360 total: 25757220864
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-5c930cf7-2c86-74de-46df-c019f9195587 utilizing NVML memory reporting free: 23924703232 total: 25757220864
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-875d4f24-dc2d-35c2-570c-89a3947c99a2 utilizing NVML memory reporting free: 24613421056 total: 25757220864
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-4bf9249d-1fe5-a57b-f409-8ff972b166c3 utilizing NVML memory reporting free: 24834473984 total: 25757220864
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-f1fd89c2-2b4d-d195-3726-8ddd04d71f1d utilizing NVML memory reporting free: 24834473984 total: 25757220864
Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-4cc00a37-cdc7-b901-fc7e-29d63adf752b utilizing NVML memory reporting free: 24834473984 total: 25757220864
Feb 18 19:17:37 pm-quant-linux-gpu ollama[279305]: load_tensors: offloading 0 repeating layers to GPU
Feb 18 19:17:37 pm-quant-linux-gpu ollama[279305]: load_tensors: offloaded 0/49 layers to GPU
Feb 18 19:17:37 pm-quant-linux-gpu ollama[279305]: load_tensors:          CPU model buffer size =   166.92 MiB
Feb 18 19:17:37 pm-quant-linux-gpu ollama[279305]: load_tensors:    CUDA_Host model buffer size = 47846.13 MiB
Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: constructing llama_context
Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: n_seq_max     = 1
Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: n_ctx         = 262144
Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: n_ctx_seq     = 262144
Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: n_batch       = 512
Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: n_ubatch      = 512
Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: causal_attn   = 1
Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: flash_attn    = auto
Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: kv_unified    = false
Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: freq_base     = 10000000.0
Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: freq_scale    = 1
Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context:        CPU  output buffer size =     0.59 MiB
Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_kv_cache:        CPU KV buffer size =  6144.00 MiB
Feb 18 19:17:51 pm-quant-linux-gpu ollama[279305]: llama_kv_cache: size = 6144.00 MiB (262144 cells,  12 layers,  1/1 seqs), K (f16): 3072.00 MiB, V (f16): 3072.00 MiB
Feb 18 19:17:51 pm-quant-linux-gpu ollama[279305]: llama_memory_recurrent:        CPU RS buffer size =    75.38 MiB
Feb 18 19:17:51 pm-quant-linux-gpu ollama[279305]: llama_memory_recurrent: size =   75.38 MiB (     1 cells,  48 layers,  1 seqs), R (f32):    3.38 MiB, S (f32):   72.00 MiB
Feb 18 19:17:51 pm-quant-linux-gpu ollama[279305]: llama_context: Flash Attention was auto, set to enabled
Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: llama_context:      CUDA0 compute buffer size =  1317.24 MiB
Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: llama_context:  CUDA_Host compute buffer size =   528.15 MiB
Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: llama_context: graph nodes  = 21554 (with bs=512), 6614 (with bs=1)
Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: llama_context: graph splits = 975 (with bs=512), 73 (with bs=1)
Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:52.312Z level=INFO source=server.go:1388 msg="llama runner started in 37.34 seconds"
Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:52.312Z level=INFO source=sched.go:540 msg="loaded runners" count=1
Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:52.312Z level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:52.312Z level=INFO source=server.go:1388 msg="llama runner started in 37.34 seconds"
Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: [GIN] 2026/02/18 - 19:17:52 | 200 | 38.764625903s |       127.0.0.1 | POST     "/api/generate"
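
For context: the lines reporting a 32.0 GiB compute graph per CUDA device, against roughly 22 GiB available on each RTX 4090, followed by "offloaded 0/49 layers to GPU", suggest the scheduler's per-device graph estimate exceeded free VRAM on every GPU, so placement fell back entirely to CPU. A hedged sketch for inspecting and overriding this, assuming the default API port 11434 and a hypothetical model tag `qwen3-next:80b` (the log only shows the blob hash, not the tag):

```shell
# Show how the loaded model is split between CPU and GPU
ollama ps

# Hypothetical override sketch: explicitly request all 49 layers on GPU
# via the documented num_gpu option (the log shows requested=-1, i.e. automatic placement)
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "qwen3-next:80b",
  "prompt": "hello",
  "options": { "num_gpu": 49 }
}'
```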

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.16.02

llama_model_loader: - kv 21: qwen3next.context_length u32 = 262144 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 22: qwen3next.embedding_length u32 = 2048 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 23: qwen3next.expert_count u32 = 512 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 24: qwen3next.expert_feed_forward_length u32 = 512 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 25: qwen3next.expert_shared_feed_forward_length u32 = 512 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 26: qwen3next.expert_used_count u32 = 10 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 27: qwen3next.feed_forward_length u32 = 5120 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 28: qwen3next.rope.dimension_count u32 = 64 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 29: qwen3next.rope.freq_base f32 = 10000000.000000 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 30: qwen3next.ssm.conv_kernel u32 = 4 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 31: qwen3next.ssm.group_count u32 = 16 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 32: qwen3next.ssm.inner_size u32 = 4096 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 33: qwen3next.ssm.state_size u32 = 128 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 34: qwen3next.ssm.time_step_rank u32 = 32 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 35: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>... Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 36: tokenizer.ggml.add_bos_token bool = false Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 37: tokenizer.ggml.bos_token_id u32 = 151643 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 38: tokenizer.ggml.eos_token_id u32 = 151645 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 39: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 40: tokenizer.ggml.model str = gpt2 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 41: tokenizer.ggml.padding_token_id u32 = 151643 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 42: tokenizer.ggml.pre str = qwen2 Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 43: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - kv 44: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ... 
Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - type f32: 313 tensors Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - type q4_K: 415 tensors Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: llama_model_loader: - type q6_K: 79 tensors Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: print_info: file format = GGUF V3 (latest) Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: print_info: file type = Q4_K - Medium Feb 18 19:17:15 pm-quant-linux-gpu ollama[279305]: print_info: file size = 46.89 GiB (5.06 BPW) Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load: printing all EOG tokens: Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load: - 151643 ('<|endoftext|>') Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load: - 151645 ('<|im_end|>') Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load: - 151662 ('<|fim_pad|>') Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load: - 151663 ('<|repo_name|>') Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load: - 151664 ('<|file_sep|>') Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load: special tokens cache size = 26 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load: token to piece cache size = 0.9311 MB Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: arch = qwen3next Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: vocab_only = 0 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: no_alloc = 0 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_ctx_train = 262144 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_embd = 2048 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_embd_inp = 2048 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_layer = 48 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_head = 16 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_head_kv = 2 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_rot = 64 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_swa = 0 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: is_swa_any = 0 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_embd_head_k = 256 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_embd_head_v = 256 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_gqa = 8 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_embd_k_gqa = 512 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_embd_v_gqa = 512 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: f_norm_eps = 0.0e+00 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: f_norm_rms_eps = 1.0e-06 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: f_clamp_kqv = 0.0e+00 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: f_max_alibi_bias = 0.0e+00 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: f_logit_scale = 0.0e+00 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: f_attn_scale = 0.0e+00 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_ff = 5120 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_expert = 512 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_expert_used = 10 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_expert_groups = 0 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_group_used = 0 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: 
print_info: causal attn = 1 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: pooling type = 0 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: rope type = 2 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: rope scaling = linear Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: freq_base_train = 10000000.0 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: freq_scale_train = 1 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_ctx_orig_yarn = 262144 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: rope_yarn_log_mul= 0.0000 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: rope_finetuned = unknown Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: ssm_d_conv = 4 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: ssm_d_inner = 4096 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: ssm_d_state = 128 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: ssm_dt_rank = 32 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: ssm_n_group = 16 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: ssm_dt_b_c_rms = 0 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: model type = 80B.A3B Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: model params = 79.67 B Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: general.name = Qwen3 Next 80B A3B Thinking Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: vocab type = BPE Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_vocab = 151936 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: n_merges = 151387 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: BOS token = 151643 '<|endoftext|>' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: EOS token = 151645 '<|im_end|>' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: EOT token = 151645 '<|im_end|>' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: PAD token = 151643 '<|endoftext|>' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: LF token = 198 'Ċ' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: FIM PRE token = 151659 '<|fim_prefix|>' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: FIM SUF token = 151661 '<|fim_suffix|>' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: FIM MID token = 151660 '<|fim_middle|>' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: FIM PAD token = 151662 '<|fim_pad|>' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: FIM REP token = 151663 '<|repo_name|>' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: FIM SEP token = 151664 '<|file_sep|>' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: EOG token = 151643 '<|endoftext|>' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: EOG token = 151645 '<|im_end|>' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: EOG token = 151662 '<|fim_pad|>' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: EOG token = 151663 '<|repo_name|>' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: EOG token = 151664 '<|file_sep|>' Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: print_info: max token length = 256 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: load_tensors: loading model tensors, this can take a while... 
(mmap = false) Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-44097d7e-c563-f880-9cf5-ce70d39afff7 utilizing NVML memory reporting free: 24117903360 total: 25757220864 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-5c930cf7-2c86-74de-46df-c019f9195587 utilizing NVML memory reporting free: 23924703232 total: 25757220864 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-875d4f24-dc2d-35c2-570c-89a3947c99a2 utilizing NVML memory reporting free: 24613421056 total: 25757220864 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-4bf9249d-1fe5-a57b-f409-8ff972b166c3 utilizing NVML memory reporting free: 24834473984 total: 25757220864 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-f1fd89c2-2b4d-d195-3726-8ddd04d71f1d utilizing NVML memory reporting free: 24834473984 total: 25757220864 Feb 18 19:17:16 pm-quant-linux-gpu ollama[279305]: ggml_backend_cuda_device_get_memory device GPU-4cc00a37-cdc7-b901-fc7e-29d63adf752b utilizing NVML memory reporting free: 24834473984 total: 25757220864 Feb 18 19:17:37 pm-quant-linux-gpu ollama[279305]: load_tensors: offloading 0 repeating layers to GPU Feb 18 19:17:37 pm-quant-linux-gpu ollama[279305]: load_tensors: offloaded 0/49 layers to GPU Feb 18 19:17:37 pm-quant-linux-gpu ollama[279305]: load_tensors: CPU model buffer size = 166.92 MiB Feb 18 19:17:37 pm-quant-linux-gpu ollama[279305]: load_tensors: CUDA_Host model buffer size = 47846.13 MiB Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: constructing llama_context Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: n_seq_max = 1 Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: n_ctx = 262144 Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: n_ctx_seq = 262144 Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: n_batch = 512 Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: n_ubatch = 512 Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: causal_attn = 1 Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: flash_attn = auto Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: kv_unified = false Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: freq_base = 10000000.0 Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: freq_scale = 1 Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_context: CPU output buffer size = 0.59 MiB Feb 18 19:17:48 pm-quant-linux-gpu ollama[279305]: llama_kv_cache: CPU KV buffer size = 6144.00 MiB Feb 18 19:17:51 pm-quant-linux-gpu ollama[279305]: llama_kv_cache: size = 6144.00 MiB (262144 cells, 12 layers, 1/1 seqs), K (f16): 3072.00 MiB, V (f16): 3072.00 MiB Feb 18 19:17:51 pm-quant-linux-gpu ollama[279305]: llama_memory_recurrent: CPU RS buffer size = 75.38 MiB Feb 18 19:17:51 pm-quant-linux-gpu ollama[279305]: llama_memory_recurrent: size = 75.38 MiB ( 1 cells, 48 layers, 1 seqs), R (f32): 3.38 MiB, S (f32): 72.00 MiB Feb 18 19:17:51 pm-quant-linux-gpu ollama[279305]: llama_context: Flash Attention was auto, set to enabled Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: llama_context: CUDA0 compute buffer size = 1317.24 MiB Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: llama_context: CUDA_Host compute buffer size = 528.15 MiB Feb 18 19:17:52 pm-quant-linux-gpu 
ollama[279305]: llama_context: graph nodes = 21554 (with bs=512), 6614 (with bs=1) Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: llama_context: graph splits = 975 (with bs=512), 73 (with bs=1) Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:52.312Z level=INFO source=server.go:1388 msg="llama runner started in 37.34 seconds" Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:52.312Z level=INFO source=sched.go:540 msg="loaded runners" count=1 Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:52.312Z level=INFO source=server.go:1350 msg="waiting for llama runner to start responding" Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:52.312Z level=INFO source=server.go:1388 msg="llama runner started in 37.34 seconds" Feb 18 19:17:52 pm-quant-linux-gpu ollama[279305]: [GIN] 2026/02/18 - 19:17:52 | 200 | 38.764625903s | 127.0.0.1 | POST "/api/generate" ``` ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.16.02
GiteaMirror added the bug label 2026-04-29 09:46:40 -05:00

@rick-github commented on GitHub (Feb 18, 2026):

#14116


@thisIsLoading commented on GitHub (Feb 18, 2026):

@rick-github I'd assume 144GB of VRAM might be enough to run a 50GB model at full context size?

Or do you refer to something else?


@rick-github commented on GitHub (Feb 18, 2026):

You don't have 144GB of VRAM, you have 6x24GB of VRAM. The compute graph for each device at full context is 32GB:

```
Feb 18 19:17:14 pm-quant-linux-gpu ollama[279305]: time=2026-02-18T19:17:14.971Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="32.0 GiB"
```

32 is more than 24, so the model cannot be loaded onto the GPUs. Because the beginning of the log was not available, I assumed the problem was caused by the new tiered context scaling. If you have deliberately set `OLLAMA_CONTEXT_LENGTH` to 256k with a view to running the model at the maximum possible context, I'm afraid you will have to scale back. You could also try setting [`OLLAMA_KV_CACHE_TYPE`](https://github.com/ollama/ollama/blob/main/docs/faq.mdx#how-can-i-set-the-quantization-type-for-the-kv-cache), which will reduce the KV cache footprint, although I don't know by how much.
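
For anyone landing here later, a minimal sketch of how those variables could be set on a systemd-managed Linux install. The context length and cache type below are illustrative assumptions, not values from this thread:

```
# Sketch: cap the context window and quantize the KV cache so the
# per-GPU compute graph and KV buffers can fit on 24GB cards.
# 32768 and q8_0 are assumed example values, not recommendations.
sudo mkdir -p /etc/systemd/system/ollama.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_CONTEXT_LENGTH=32768"
Environment="OLLAMA_KV_CACHE_TYPE=q8_0"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama
```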

@thisIsLoading commented on GitHub (Feb 18, 2026):

Thank you, rick. I just ran `ollama run qwen3-next`.

Appreciate the explanation, I couldn't make that link (small brains 'n' such).
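
A quick way to verify the outcome after a change like this: `ollama ps` reports how a loaded model is split between CPU and GPU (a sketch; exact output columns may vary by version):

```
# Load the model, then check the PROCESSOR column:
# "100% GPU" means fully offloaded, "100% CPU" means no offload at all.
ollama run qwen3-next "hello" >/dev/null
ollama ps
```
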
Reference: github-starred/ollama#55825