[GH-ISSUE #10458] Qwen3 MoE 30b-a3b, poor performance and low GPU utilization issue #68935

Open
opened 2026-05-04 15:55:40 -05:00 by GiteaMirror · 31 comments
Owner

Originally created by @vYLQs6 on GitHub (Apr 29, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10458

What is the issue?

When running Qwen3-30b-a3b, my 4090 only draws ~120 W, which is very low utilization, and generation is slow for a model with only ~3B active parameters (MoE).

AMD 7950X3D, RTX 4090 24 GB, 64 GB RAM, Windows 11.

Test results below:


All default Ollama settings.

set OLLAMA_DEBUG=1 && ollama serve

ollama run qwen3:30b-a3b-q4_K_M --verbose
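
For reference, power draw and GPU utilization during decode can be sampled from a second terminal with standard nvidia-smi query flags (the 1-second interval here is an arbitrary choice):

nvidia-smi --query-gpu=power.draw,utilization.gpu,clocks.sm --format=csv -l 1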

PS C:\Users\***> ollama ps
NAME                    ID              SIZE     PROCESSOR    UNTIL
qwen3:30b-a3b-q4_K_M    2ee832bc15b5    21 GB    100% GPU     3 minutes from now
PS D:\> ollama run qwen3:30b-a3b-q4_K_M --verbose
>>> how far is moon
<think>
Okay, the user is asking "how far is moon." I need to figure out what they mean. The Moon's distance from Earth
varies because its orbit is elliptical. The average distance is about 384,400 kilometers, but it's not constant.
There's also the concept of perigee and apogee. Maybe they want the average or the closest and farthest points. I
should mention both. Also, maybe they're interested in how that distance is measured or some interesting facts.
Let me check if there's any other context. The user might be a student or someone curious. I should explain it
clearly, maybe mention that it's the fifth largest moon in the solar system. Also, the time it takes for light to
travel from the Moon to Earth, which is about 1.3 seconds. That could be useful. Need to make sure the units are
correct, kilometers or miles. The user didn't specify, so maybe provide both. Wait, the question is in English, so
maybe they prefer kilometers or miles. But in the US, miles are more common. But since the user didn't specify, I
should give both. Also, maybe mention that the Moon is moving away from Earth slowly, about 3.8 centimeters per
year. That's a good point. Let me structure the answer: start with the average distance, then perigee and apogee,
mention the measurement method (laser ranging), and the interesting facts like the light travel time and the
moon's recession. Make sure it's clear and concise. Avoid any jargon. Check for accuracy. The average distance is
384,400 km, which is roughly 238,855 miles. Perigee is about 363,300 km (225,700 miles) and apogee around 405,500
km (252,000 miles). Light takes 1.3 seconds. The moon is moving away at 3.8 cm/year. That's all. I think that
covers it. Let me put it all together in a friendly, informative way.
</think>

The distance from the **Earth to the Moon** varies because the Moon follows an elliptical orbit. Here's a
breakdown:

- **Average distance**: ~384,400 kilometers (238,855 miles).
- **Closest point (perigee)**: ~363,300 km (225,700 miles).
- **Farthest point (apogee)**: ~405,500 km (252,000 miles).

### Fun Facts:
- **Light travel time**: It takes about **1.3 seconds** for light (or radio signals) to travel from the Moon to
Earth.
- **Laser ranging**: Scientists measure this distance precisely using lasers bounced off retroreflectors left by
Apollo missions.
- **Slow recession**: The Moon is moving away from Earth at a rate of **3.8 centimeters (1.5 inches) per year**
due to tidal forces.

Let me know if you'd like more details! 🌕✨

total duration:       22.9020241s
load duration:        22.8488ms
prompt eval count:    12 token(s)
prompt eval duration: 551.174ms
prompt eval rate:     21.77 tokens/s
eval count:           676 token(s)
eval duration:        22.3250165s
eval rate:            30.28 tokens/s
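
As a sanity check, the reported eval rate is consistent with the raw counts: 676 tokens / 22.3250165 s ≈ 30.3 tokens/s, matching the 30.28 tokens/s above, so the slow decode is a real measurement rather than a reporting artifact.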

Relevant log output

C:\Users\***>set OLLAMA_DEBUG=1 && ollama serve
2025/04/29 10:06:49 routes.go:1232: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\LLM\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-04-29T10:06:49.753+08:00 level=INFO source=images.go:458 msg="total blobs: 494"
time=2025-04-29T10:06:49.763+08:00 level=INFO source=images.go:465 msg="total unused blobs removed: 0"
time=2025-04-29T10:06:49.774+08:00 level=INFO source=routes.go:1299 msg="Listening on 127.0.0.1:11434 (version 0.6.6)"
time=2025-04-29T10:06:49.774+08:00 level=DEBUG source=sched.go:107 msg="starting llm scheduler"
time=2025-04-29T10:06:49.774+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-04-29T10:06:49.774+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-04-29T10:06:49.774+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=16 efficiency=0 threads=32
time=2025-04-29T10:06:49.774+08:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-04-29T10:06:49.774+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvml.dll
time=2025-04-29T10:06:49.774+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\bin\\nvml.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\libnvvp\\nvml.dll C:\\Windows\\system32\\nvml.dll C:\\Windows\\nvml.dll C:\\Windows\\System32\\Wbem\\nvml.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll C:\\Windows\\System32\\OpenSSH\\nvml.dll C:\\Program Files\\dotnet\\nvml.dll C:\\Program Files\\Process Lasso\\nvml.dll C:\\Program Files\\Git\\cmd\\nvml.dll C:\\Program Files\\nodejs\\nvml.dll C:\\Program Files\\Docker\\Docker\\resources\\bin\\nvml.dll C:\\Users\\***\\miniconda3\\nvml.dll C:\\Users\\***\\miniconda3\\Library\\mingw-w64\\bin\\nvml.dll C:\\Users\\***\\miniconda3\\Library\\usr\\bin\\nvml.dll C:\\Users\\***\\miniconda3\\Library\\bin\\nvml.dll C:\\Users\\***\\miniconda3\\Scripts\\nvml.dll C:\\Users\\***\\AppData\\Local\\Programs\\Python\\Python312\\Scripts\\nvml.dll C:\\Users\\***\\AppData\\Local\\Programs\\Python\\Python312\\nvml.dll C:\\Users\\***\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll C:\\Users\\***\\AppData\\Local\\GitHubDesktop\\bin\\nvml.dll C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\bin\\nvml.dll C:\\Users\\***\\AppData\\Roaming\\npm\\nvml.dll C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-04-29T10:06:49.775+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[C:\\Windows\\system32\\nvml.dll c:\\Windows\\System32\\nvml.dll]"
time=2025-04-29T10:06:49.786+08:00 level=DEBUG source=gpu.go:111 msg="nvidia-ml loaded" library=C:\Windows\system32\nvml.dll
time=2025-04-29T10:06:49.786+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvcuda.dll
time=2025-04-29T10:06:49.786+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\bin\\nvcuda.dll C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\libnvvp\\nvcuda.dll C:\\Windows\\system32\\nvcuda.dll C:\\Windows\\nvcuda.dll C:\\Windows\\System32\\Wbem\\nvcuda.dll C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll C:\\Windows\\System32\\OpenSSH\\nvcuda.dll C:\\Program Files\\dotnet\\nvcuda.dll C:\\Program Files\\Process Lasso\\nvcuda.dll C:\\Program Files\\Git\\cmd\\nvcuda.dll C:\\Program Files\\nodejs\\nvcuda.dll C:\\Program Files\\Docker\\Docker\\resources\\bin\\nvcuda.dll C:\\Users\\***\\miniconda3\\nvcuda.dll C:\\Users\\***\\miniconda3\\Library\\mingw-w64\\bin\\nvcuda.dll C:\\Users\\***\\miniconda3\\Library\\usr\\bin\\nvcuda.dll C:\\Users\\***\\miniconda3\\Library\\bin\\nvcuda.dll C:\\Users\\***\\miniconda3\\Scripts\\nvcuda.dll C:\\Users\\***\\AppData\\Local\\Programs\\Python\\Python312\\Scripts\\nvcuda.dll C:\\Users\\***\\AppData\\Local\\Programs\\Python\\Python312\\nvcuda.dll C:\\Users\\***\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll C:\\Users\\***\\AppData\\Local\\GitHubDesktop\\bin\\nvcuda.dll C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\bin\\nvcuda.dll C:\\Users\\***\\AppData\\Roaming\\npm\\nvcuda.dll C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll c:\\windows\\system*\\nvcuda.dll]"
time=2025-04-29T10:06:49.787+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[C:\Windows\system32\nvcuda.dll]
initializing C:\Windows\system32\nvcuda.dll
dlsym: cuInit - 00007FFC98774D20
dlsym: cuDriverGetVersion - 00007FFC98774DC0
dlsym: cuDeviceGetCount - 00007FFC987755B6
dlsym: cuDeviceGet - 00007FFC987755B0
dlsym: cuDeviceGetAttribute - 00007FFC98774F10
dlsym: cuDeviceGetUuid - 00007FFC987755C2
dlsym: cuDeviceGetName - 00007FFC987755BC
dlsym: cuCtxCreate_v3 - 00007FFC98775634
dlsym: cuMemGetInfo_v2 - 00007FFC98775736
dlsym: cuCtxDestroy - 00007FFC98775646
calling cuInit
calling cuDriverGetVersion
raw version 0x2f26
CUDA driver version: 12.7
calling cuDeviceGetCount
device count 1
time=2025-04-29T10:06:49.799+08:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=C:\Windows\system32\nvcuda.dll
[GPU-f47e9117-13d8-d21e-7b80-735c8d31444d] CUDA totalMem 24563 mb
[GPU-f47e9117-13d8-d21e-7b80-735c8d31444d] CUDA freeMem 22994 mb
[GPU-f47e9117-13d8-d21e-7b80-735c8d31444d] Compute Capability 8.9
time=2025-04-29T10:06:49.897+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-f47e9117-13d8-d21e-7b80-735c8d31444d library=cuda compute=8.9 driver=12.7 name="NVIDIA GeForce RTX 4090" overhead="515.0 MiB"
time=2025-04-29T10:06:49.903+08:00 level=DEBUG source=amd_hip_windows.go:88 msg=hipDriverGetVersion version=60140252
time=2025-04-29T10:06:49.903+08:00 level=INFO source=amd_hip_windows.go:103 msg="AMD ROCm reports no devices found"
time=2025-04-29T10:06:49.903+08:00 level=INFO source=amd_windows.go:49 msg="no compatible amdgpu devices detected"
releasing cuda driver library
releasing nvml library
time=2025-04-29T10:06:49.904+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-f47e9117-13d8-d21e-7b80-735c8d31444d library=cuda variant=v12 compute=8.9 driver=12.7 name="NVIDIA GeForce RTX 4090" total="24.0 GiB" available="22.5 GiB"
[GIN] 2025/04/29 - 10:07:26 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2025-04-29T10:07:26.718+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-29T10:07:26.726+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/04/29 - 10:07:26 | 200 |     30.5995ms |       127.0.0.1 | POST     "/api/show"
time=2025-04-29T10:07:26.757+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-29T10:07:26.758+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="63.6 GiB" before.free="54.9 GiB" before.free_swap="109.4 GiB" now.total="63.6 GiB" now.free="54.9 GiB" now.free_swap="109.1 GiB"
time=2025-04-29T10:07:26.774+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-f47e9117-13d8-d21e-7b80-735c8d31444d name="NVIDIA GeForce RTX 4090" overhead="515.0 MiB" before.total="24.0 GiB" before.free="22.5 GiB" now.total="24.0 GiB" now.free="22.4 GiB" now.used="1.1 GiB"
releasing nvml library
time=2025-04-29T10:07:26.775+08:00 level=DEBUG source=sched.go:183 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-04-29T10:07:26.783+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-29T10:07:26.792+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-29T10:07:26.794+08:00 level=DEBUG source=sched.go:226 msg="loading first model" model=D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac
time=2025-04-29T10:07:26.794+08:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[22.4 GiB]"
time=2025-04-29T10:07:26.794+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen3moe.vision.block_count default=0
time=2025-04-29T10:07:26.794+08:00 level=INFO source=sched.go:722 msg="new model will fit in available VRAM in single GPU, loading" model=D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac gpu=GPU-f47e9117-13d8-d21e-7b80-735c8d31444d parallel=4 available=24074002432 required="19.8 GiB"
time=2025-04-29T10:07:26.794+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="63.6 GiB" before.free="54.9 GiB" before.free_swap="109.1 GiB" now.total="63.6 GiB" now.free="54.9 GiB" now.free_swap="109.1 GiB"
time=2025-04-29T10:07:26.805+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-f47e9117-13d8-d21e-7b80-735c8d31444d name="NVIDIA GeForce RTX 4090" overhead="515.0 MiB" before.total="24.0 GiB" before.free="22.4 GiB" now.total="24.0 GiB" now.free="22.4 GiB" now.used="1.1 GiB"
releasing nvml library
time=2025-04-29T10:07:26.805+08:00 level=INFO source=server.go:105 msg="system memory" total="63.6 GiB" free="54.9 GiB" free_swap="109.1 GiB"
time=2025-04-29T10:07:26.805+08:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[22.4 GiB]"
time=2025-04-29T10:07:26.805+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen3moe.vision.block_count default=0
time=2025-04-29T10:07:26.805+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[22.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="19.8 GiB" memory.required.partial="19.8 GiB" memory.required.kv="768.0 MiB" memory.required.allocations="[19.8 GiB]" memory.weights.total="17.2 GiB" memory.weights.repeating="16.9 GiB" memory.weights.nonrepeating="243.4 MiB" memory.graph.full="1.0 GiB" memory.graph.partial="1.0 GiB"
time=2025-04-29T10:07:26.806+08:00 level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
llama_model_loader: loaded meta data with 31 key-value pairs and 579 tensors from D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3moe
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 30B A3B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 30B-A3B
llama_model_loader: - kv   5:                            general.license str              = apache-2.0
llama_model_loader: - kv   6:                       qwen3moe.block_count u32              = 48
llama_model_loader: - kv   7:                    qwen3moe.context_length u32              = 40960
llama_model_loader: - kv   8:                  qwen3moe.embedding_length u32              = 2048
llama_model_loader: - kv   9:               qwen3moe.feed_forward_length u32              = 6144
llama_model_loader: - kv  10:              qwen3moe.attention.head_count u32              = 32
llama_model_loader: - kv  11:           qwen3moe.attention.head_count_kv u32              = 4
llama_model_loader: - kv  12:                    qwen3moe.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:  qwen3moe.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                 qwen3moe.expert_used_count u32              = 8
llama_model_loader: - kv  15:              qwen3moe.attention.key_length u32              = 128
llama_model_loader: - kv  16:            qwen3moe.attention.value_length u32              = 128
llama_model_loader: - kv  17:                      qwen3moe.expert_count u32              = 128
llama_model_loader: - kv  18:        qwen3moe.expert_feed_forward_length u32              = 768
llama_model_loader: - kv  19:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  20:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  21:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  22:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  23:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  25:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  27:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - kv  30:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type  f16:   48 tensors
llama_model_loader: - type q4_K:  265 tensors
llama_model_loader: - type q6_K:   25 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 17.34 GiB (4.88 BPW)
init_tokenizer: initializing tokenizer for type 2
load: control token: 151659 '<|fim_prefix|>' is not marked as EOG
load: control token: 151656 '<|video_pad|>' is not marked as EOG
load: control token: 151655 '<|image_pad|>' is not marked as EOG
load: control token: 151653 '<|vision_end|>' is not marked as EOG
load: control token: 151652 '<|vision_start|>' is not marked as EOG
load: control token: 151651 '<|quad_end|>' is not marked as EOG
load: control token: 151649 '<|box_end|>' is not marked as EOG
load: control token: 151648 '<|box_start|>' is not marked as EOG
load: control token: 151646 '<|object_ref_start|>' is not marked as EOG
load: control token: 151644 '<|im_start|>' is not marked as EOG
load: control token: 151661 '<|fim_suffix|>' is not marked as EOG
load: control token: 151647 '<|object_ref_end|>' is not marked as EOG
load: control token: 151660 '<|fim_middle|>' is not marked as EOG
load: control token: 151654 '<|vision_pad|>' is not marked as EOG
load: control token: 151650 '<|quad_start|>' is not marked as EOG
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3moe
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 30.53 B
print_info: general.name     = Qwen3 30B A3B
print_info: n_ff_exp         = 0
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-04-29T10:07:26.930+08:00 level=DEBUG source=server.go:335 msg="adding gpu library" path=C:\Users\***\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
time=2025-04-29T10:07:26.930+08:00 level=DEBUG source=server.go:343 msg="adding gpu dependency paths" paths=[C:\Users\***\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12]
time=2025-04-29T10:07:26.930+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model D:\\LLM\\.ollama\\models\\blobs\\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --verbose --threads 16 --no-mmap --parallel 4 --port 49584"
time=2025-04-29T10:07:26.930+08:00 level=DEBUG source=server.go:423 msg=subprocess environment="[CUDA_PATH=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5 CUDA_PATH_V12_5=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5 PATH=C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\libnvvp;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\Process Lasso\\;C:\\Program Files\\Git\\cmd;C:\\Program Files\\nodejs\\;C:\\Program Files\\Docker\\Docker\\resources\\bin;C:\\Users\\***\\miniconda3;C:\\Users\\***\\miniconda3\\Library\\mingw-w64\\bin;C:\\Users\\***\\miniconda3\\Library\\usr\\bin;C:\\Users\\***\\miniconda3\\Library\\bin;C:\\Users\\***\\miniconda3\\Scripts;C:\\Users\\***\\AppData\\Local\\Programs\\Python\\Python312\\Scripts\\;C:\\Users\\***\\AppData\\Local\\Programs\\Python\\Python312\\;C:\\Users\\***\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\***\\AppData\\Local\\GitHubDesktop\\bin;C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\bin;C:\\Users\\***\\AppData\\Roaming\\npm;C:\\Users\\***\\AppData\\Local\\Programs\\Ollama;C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\lib\\ollama CUDA_VISIBLE_DEVICES=GPU-f47e9117-13d8-d21e-7b80-735c8d31444d]"
time=2025-04-29T10:07:26.934+08:00 level=INFO source=sched.go:451 msg="loaded runners" count=1
time=2025-04-29T10:07:26.934+08:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-29T10:07:26.935+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-29T10:07:26.951+08:00 level=INFO source=runner.go:853 msg="starting go runner"
time=2025-04-29T10:07:26.955+08:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=C:\Users\***\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\***\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\bin"
time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\libnvvp"
time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Windows\system32
time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Windows
time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Windows\System32\Wbem
time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Windows\System32\WindowsPowerShell\v1.0
time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Windows\System32\OpenSSH
time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\dotnet"
time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\Process Lasso"
time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\Git\\cmd"
time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\nodejs"
time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\Docker\\Docker\\resources\\bin"
time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\miniconda3
time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\miniconda3\Library\mingw-w64\bin
time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\miniconda3\Library\usr\bin
time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\miniconda3\Library\bin
time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\miniconda3\Scripts
time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\AppData\Local\Programs\Python\Python312\Scripts
time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\AppData\Local\Programs\Python\Python312
time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\AppData\Local\Microsoft\WindowsApps
time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\AppData\Local\GitHubDesktop\bin
time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\AppData\Local\Programs\Ollama\bin
time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\AppData\Roaming\npm
time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=C:\Users\***\AppData\Local\Programs\Ollama
time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=C:\Users\***\AppData\Local\Programs\Ollama\lib\ollama
load_backend: loaded CPU backend from C:\Users\***\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
time=2025-04-29T10:07:27.024+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-04-29T10:07:27.025+08:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:49584"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090) - 22994 MiB free
llama_model_loader: loaded meta data with 31 key-value pairs and 579 tensors from D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3moe
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 30B A3B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 30B-A3B
llama_model_loader: - kv   5:                            general.license str              = apache-2.0
llama_model_loader: - kv   6:                       qwen3moe.block_count u32              = 48
llama_model_loader: - kv   7:                    qwen3moe.context_length u32              = 40960
llama_model_loader: - kv   8:                  qwen3moe.embedding_length u32              = 2048
llama_model_loader: - kv   9:               qwen3moe.feed_forward_length u32              = 6144
llama_model_loader: - kv  10:              qwen3moe.attention.head_count u32              = 32
llama_model_loader: - kv  11:           qwen3moe.attention.head_count_kv u32              = 4
llama_model_loader: - kv  12:                    qwen3moe.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:  qwen3moe.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                 qwen3moe.expert_used_count u32              = 8
llama_model_loader: - kv  15:              qwen3moe.attention.key_length u32              = 128
llama_model_loader: - kv  16:            qwen3moe.attention.value_length u32              = 128
llama_model_loader: - kv  17:                      qwen3moe.expert_count u32              = 128
llama_model_loader: - kv  18:        qwen3moe.expert_feed_forward_length u32              = 768
llama_model_loader: - kv  19:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  20:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  21:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  22:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  23:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  25:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  27:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - kv  30:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type  f16:   48 tensors
llama_model_loader: - type q4_K:  265 tensors
llama_model_loader: - type q6_K:   25 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 17.34 GiB (4.88 BPW)
init_tokenizer: initializing tokenizer for type 2
time=2025-04-29T10:07:27.188+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
load: control token: 151659 '<|fim_prefix|>' is not marked as EOG
load: control token: 151656 '<|video_pad|>' is not marked as EOG
load: control token: 151655 '<|image_pad|>' is not marked as EOG
load: control token: 151653 '<|vision_end|>' is not marked as EOG
load: control token: 151652 '<|vision_start|>' is not marked as EOG
load: control token: 151651 '<|quad_end|>' is not marked as EOG
load: control token: 151649 '<|box_end|>' is not marked as EOG
load: control token: 151648 '<|box_start|>' is not marked as EOG
load: control token: 151646 '<|object_ref_start|>' is not marked as EOG
load: control token: 151644 '<|im_start|>' is not marked as EOG
load: control token: 151661 '<|fim_suffix|>' is not marked as EOG
load: control token: 151647 '<|object_ref_end|>' is not marked as EOG
load: control token: 151660 '<|fim_middle|>' is not marked as EOG
load: control token: 151654 '<|vision_pad|>' is not marked as EOG
load: control token: 151650 '<|quad_start|>' is not marked as EOG
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3moe
print_info: vocab_only       = 0
print_info: n_ctx_train      = 40960
print_info: n_embd           = 2048
print_info: n_layer          = 48
print_info: n_head           = 32
print_info: n_head_kv        = 4
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 8
print_info: n_embd_k_gqa     = 512
print_info: n_embd_v_gqa     = 512
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 6144
print_info: n_expert         = 128
print_info: n_expert_used    = 8
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 40960
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = ?B
print_info: model params     = 30.53 B
print_info: general.name     = Qwen3 30B A3B
print_info: n_ff_exp         = 768
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: layer   0 assigned to device CUDA0, is_swa = 0
load_tensors: layer   1 assigned to device CUDA0, is_swa = 0
load_tensors: layer   2 assigned to device CUDA0, is_swa = 0
load_tensors: layer   3 assigned to device CUDA0, is_swa = 0
load_tensors: layer   4 assigned to device CUDA0, is_swa = 0
load_tensors: layer   5 assigned to device CUDA0, is_swa = 0
load_tensors: layer   6 assigned to device CUDA0, is_swa = 0
load_tensors: layer   7 assigned to device CUDA0, is_swa = 0
load_tensors: layer   8 assigned to device CUDA0, is_swa = 0
load_tensors: layer   9 assigned to device CUDA0, is_swa = 0
load_tensors: layer  10 assigned to device CUDA0, is_swa = 0
load_tensors: layer  11 assigned to device CUDA0, is_swa = 0
load_tensors: layer  12 assigned to device CUDA0, is_swa = 0
load_tensors: layer  13 assigned to device CUDA0, is_swa = 0
load_tensors: layer  14 assigned to device CUDA0, is_swa = 0
load_tensors: layer  15 assigned to device CUDA0, is_swa = 0
load_tensors: layer  16 assigned to device CUDA0, is_swa = 0
load_tensors: layer  17 assigned to device CUDA0, is_swa = 0
load_tensors: layer  18 assigned to device CUDA0, is_swa = 0
load_tensors: layer  19 assigned to device CUDA0, is_swa = 0
load_tensors: layer  20 assigned to device CUDA0, is_swa = 0
load_tensors: layer  21 assigned to device CUDA0, is_swa = 0
load_tensors: layer  22 assigned to device CUDA0, is_swa = 0
load_tensors: layer  23 assigned to device CUDA0, is_swa = 0
load_tensors: layer  24 assigned to device CUDA0, is_swa = 0
load_tensors: layer  25 assigned to device CUDA0, is_swa = 0
load_tensors: layer  26 assigned to device CUDA0, is_swa = 0
load_tensors: layer  27 assigned to device CUDA0, is_swa = 0
load_tensors: layer  28 assigned to device CUDA0, is_swa = 0
load_tensors: layer  29 assigned to device CUDA0, is_swa = 0
load_tensors: layer  30 assigned to device CUDA0, is_swa = 0
load_tensors: layer  31 assigned to device CUDA0, is_swa = 0
load_tensors: layer  32 assigned to device CUDA0, is_swa = 0
load_tensors: layer  33 assigned to device CUDA0, is_swa = 0
load_tensors: layer  34 assigned to device CUDA0, is_swa = 0
load_tensors: layer  35 assigned to device CUDA0, is_swa = 0
load_tensors: layer  36 assigned to device CUDA0, is_swa = 0
load_tensors: layer  37 assigned to device CUDA0, is_swa = 0
load_tensors: layer  38 assigned to device CUDA0, is_swa = 0
load_tensors: layer  39 assigned to device CUDA0, is_swa = 0
load_tensors: layer  40 assigned to device CUDA0, is_swa = 0
load_tensors: layer  41 assigned to device CUDA0, is_swa = 0
load_tensors: layer  42 assigned to device CUDA0, is_swa = 0
load_tensors: layer  43 assigned to device CUDA0, is_swa = 0
load_tensors: layer  44 assigned to device CUDA0, is_swa = 0
load_tensors: layer  45 assigned to device CUDA0, is_swa = 0
load_tensors: layer  46 assigned to device CUDA0, is_swa = 0
load_tensors: layer  47 assigned to device CUDA0, is_swa = 0
load_tensors: layer  48 assigned to device CUDA0, is_swa = 0
load_tensors: tensor 'token_embd.weight' (q4_K) (and 0 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead
load_tensors: offloading 48 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 49/49 layers to GPU
load_tensors:        CUDA0 model buffer size = 17587.24 MiB
load_tensors:          CPU model buffer size =   166.92 MiB
load_all_data: using async uploads for device CUDA0, buffer type CUDA0, backend CUDA0
time=2025-04-29T10:07:27.439+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.00"
time=2025-04-29T10:07:27.690+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.06"
time=2025-04-29T10:07:27.941+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.22"
time=2025-04-29T10:07:28.194+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.35"
time=2025-04-29T10:07:28.445+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.52"
time=2025-04-29T10:07:28.696+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.68"
time=2025-04-29T10:07:28.947+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.84"
load_all_data: no device found for buffer type CPU for async uploads
time=2025-04-29T10:07:29.199+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.99"
llama_context: constructing llama_context
llama_context: n_seq_max     = 4
llama_context: n_ctx         = 8192
llama_context: n_ctx_per_seq = 2048
llama_context: n_batch       = 2048
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (2048) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
set_abort_callback: call
llama_context:  CUDA_Host  output buffer size =     2.35 MiB
llama_context: n_ctx = 8192
llama_context: n_ctx = 8192 (padded)
init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
init: layer   0: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer   1: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer   2: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer   3: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer   4: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer   5: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer   6: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer   7: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer   8: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer   9: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  10: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  11: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  12: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  13: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  14: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  15: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  16: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  17: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  18: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  19: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  20: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  21: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  22: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  23: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  24: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  25: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  26: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  27: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  28: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  29: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  30: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  31: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  32: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  33: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  34: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  35: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  36: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  37: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  38: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  39: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  40: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  41: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  42: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  43: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  44: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  45: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  46: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init: layer  47: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0
init:      CUDA0 KV buffer size =   768.00 MiB
llama_context: KV self size  =  768.00 MiB, K (f16):  384.00 MiB, V (f16):  384.00 MiB
llama_context: enumerating backends
llama_context: backend_ptrs.size() = 2
llama_context: max_nodes = 65536
llama_context: worst-case: n_tokens = 512, n_seqs = 1, n_outputs = 0
llama_context: reserving graph for n_tokens = 512, n_seqs = 1
llama_context: reserving graph for n_tokens = 1, n_seqs = 1
llama_context: reserving graph for n_tokens = 512, n_seqs = 1
llama_context:      CUDA0 compute buffer size =   552.00 MiB
llama_context:  CUDA_Host compute buffer size =    20.01 MiB
llama_context: graph nodes  = 3126
llama_context: graph splits = 2
time=2025-04-29T10:07:29.450+08:00 level=INFO source=server.go:619 msg="llama runner started in 2.52 seconds"
time=2025-04-29T10:07:29.450+08:00 level=DEBUG source=sched.go:464 msg="finished setting up runner" model=D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac
[GIN] 2025/04/29 - 10:07:29 | 200 |    2.7141409s |       127.0.0.1 | POST     "/api/generate"
time=2025-04-29T10:07:29.450+08:00 level=DEBUG source=sched.go:468 msg="context for request finished"
time=2025-04-29T10:07:29.450+08:00 level=DEBUG source=sched.go:341 msg="runner with non-zero duration has gone idle, adding timer" modelPath=D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac duration=5m0s
time=2025-04-29T10:07:29.450+08:00 level=DEBUG source=sched.go:359 msg="after processing request finished event" modelPath=D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac refCount=0
time=2025-04-29T10:07:45.985+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-29T10:07:45.986+08:00 level=DEBUG source=sched.go:577 msg="evaluating already loaded" model=D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac
time=2025-04-29T10:07:45.987+08:00 level=DEBUG source=routes.go:1523 msg="chat request" images=0 prompt="<|im_start|>user\nhow far is moon<|im_end|>\n<|im_start|>assistant\n"
time=2025-04-29T10:07:45.989+08:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=12 used=0 remaining=12
[GIN] 2025/04/29 - 10:08:08 | 200 |   22.9020241s |       127.0.0.1 | POST     "/api/chat"
time=2025-04-29T10:08:08.866+08:00 level=DEBUG source=sched.go:409 msg="context for request finished"
time=2025-04-29T10:08:08.866+08:00 level=DEBUG source=sched.go:341 msg="runner with non-zero duration has gone idle, adding timer" modelPath=D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac duration=5m0s
time=2025-04-29T10:08:08.866+08:00 level=DEBUG source=sched.go:359 msg="after processing request finished event" modelPath=D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac refCount=0

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.6.6

llama_model_loader: - kv 29: general.quantization_version u32 = 2 llama_model_loader: - kv 30: general.file_type u32 = 15 llama_model_loader: - type f32: 241 tensors llama_model_loader: - type f16: 48 tensors llama_model_loader: - type q4_K: 265 tensors llama_model_loader: - type q6_K: 25 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q4_K - Medium print_info: file size = 17.34 GiB (4.88 BPW) init_tokenizer: initializing tokenizer for type 2 load: control token: 151659 '<|fim_prefix|>' is not marked as EOG load: control token: 151656 '<|video_pad|>' is not marked as EOG load: control token: 151655 '<|image_pad|>' is not marked as EOG load: control token: 151653 '<|vision_end|>' is not marked as EOG load: control token: 151652 '<|vision_start|>' is not marked as EOG load: control token: 151651 '<|quad_end|>' is not marked as EOG load: control token: 151649 '<|box_end|>' is not marked as EOG load: control token: 151648 '<|box_start|>' is not marked as EOG load: control token: 151646 '<|object_ref_start|>' is not marked as EOG load: control token: 151644 '<|im_start|>' is not marked as EOG load: control token: 151661 '<|fim_suffix|>' is not marked as EOG load: control token: 151647 '<|object_ref_end|>' is not marked as EOG load: control token: 151660 '<|fim_middle|>' is not marked as EOG load: control token: 151654 '<|vision_pad|>' is not marked as EOG load: control token: 151650 '<|quad_start|>' is not marked as EOG load: special tokens cache size = 26 load: token to piece cache size = 0.9311 MB print_info: arch = qwen3moe print_info: vocab_only = 1 print_info: model type = ?B print_info: model params = 30.53 B print_info: general.name = Qwen3 30B A3B print_info: n_ff_exp = 0 print_info: vocab type = BPE print_info: n_vocab = 151936 print_info: n_merges = 151387 print_info: BOS token = 151643 '<|endoftext|>' print_info: EOS token = 151645 '<|im_end|>' print_info: EOT token = 151645 '<|im_end|>' print_info: PAD token = 151643 '<|endoftext|>' print_info: LF token = 198 'Ċ' print_info: FIM PRE token = 151659 '<|fim_prefix|>' print_info: FIM SUF token = 151661 '<|fim_suffix|>' print_info: FIM MID token = 151660 '<|fim_middle|>' print_info: FIM PAD token = 151662 '<|fim_pad|>' print_info: FIM REP token = 151663 '<|repo_name|>' print_info: FIM SEP token = 151664 '<|file_sep|>' print_info: EOG token = 151643 '<|endoftext|>' print_info: EOG token = 151645 '<|im_end|>' print_info: EOG token = 151662 '<|fim_pad|>' print_info: EOG token = 151663 '<|repo_name|>' print_info: EOG token = 151664 '<|file_sep|>' print_info: max token length = 256 llama_model_load: vocab only - skipping tensors time=2025-04-29T10:07:26.930+08:00 level=DEBUG source=server.go:335 msg="adding gpu library" path=C:\Users\***\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12 time=2025-04-29T10:07:26.930+08:00 level=DEBUG source=server.go:343 msg="adding gpu dependency paths" paths=[C:\Users\***\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12] time=2025-04-29T10:07:26.930+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model D:\\LLM\\.ollama\\models\\blobs\\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --verbose --threads 16 --no-mmap --parallel 4 --port 49584" time=2025-04-29T10:07:26.930+08:00 level=DEBUG source=server.go:423 msg=subprocess environment="[CUDA_PATH=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5 
CUDA_PATH_V12_5=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5 PATH=C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\libnvvp;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\Process Lasso\\;C:\\Program Files\\Git\\cmd;C:\\Program Files\\nodejs\\;C:\\Program Files\\Docker\\Docker\\resources\\bin;C:\\Users\\***\\miniconda3;C:\\Users\\***\\miniconda3\\Library\\mingw-w64\\bin;C:\\Users\\***\\miniconda3\\Library\\usr\\bin;C:\\Users\\***\\miniconda3\\Library\\bin;C:\\Users\\***\\miniconda3\\Scripts;C:\\Users\\***\\AppData\\Local\\Programs\\Python\\Python312\\Scripts\\;C:\\Users\\***\\AppData\\Local\\Programs\\Python\\Python312\\;C:\\Users\\***\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\***\\AppData\\Local\\GitHubDesktop\\bin;C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\bin;C:\\Users\\***\\AppData\\Roaming\\npm;C:\\Users\\***\\AppData\\Local\\Programs\\Ollama;C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\lib\\ollama CUDA_VISIBLE_DEVICES=GPU-f47e9117-13d8-d21e-7b80-735c8d31444d]" time=2025-04-29T10:07:26.934+08:00 level=INFO source=sched.go:451 msg="loaded runners" count=1 time=2025-04-29T10:07:26.934+08:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding" time=2025-04-29T10:07:26.935+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error" time=2025-04-29T10:07:26.951+08:00 level=INFO source=runner.go:853 msg="starting go runner" time=2025-04-29T10:07:26.955+08:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=C:\Users\***\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes load_backend: loaded CUDA backend from C:\Users\***\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\bin" time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\libnvvp" time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Windows\system32 time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Windows time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Windows\System32\Wbem time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Windows\System32\WindowsPowerShell\v1.0 time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Windows\System32\OpenSSH time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\dotnet" 
time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\Process Lasso" time=2025-04-29T10:07:27.019+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\Git\\cmd" time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\nodejs" time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\Docker\\Docker\\resources\\bin" time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\miniconda3 time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\miniconda3\Library\mingw-w64\bin time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\miniconda3\Library\usr\bin time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\miniconda3\Library\bin time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\miniconda3\Scripts time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\AppData\Local\Programs\Python\Python312\Scripts time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\AppData\Local\Programs\Python\Python312 time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\AppData\Local\Microsoft\WindowsApps time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\AppData\Local\GitHubDesktop\bin time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\AppData\Local\Programs\Ollama\bin time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\***\AppData\Roaming\npm time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=C:\Users\***\AppData\Local\Programs\Ollama time=2025-04-29T10:07:27.020+08:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=C:\Users\***\AppData\Local\Programs\Ollama\lib\ollama load_backend: loaded CPU backend from C:\Users\***\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll time=2025-04-29T10:07:27.024+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang) time=2025-04-29T10:07:27.025+08:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:49584" llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090) - 22994 MiB free llama_model_loader: loaded meta data with 31 key-value pairs and 579 tensors from 
D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = qwen3moe llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Qwen3 30B A3B llama_model_loader: - kv 3: general.basename str = Qwen3 llama_model_loader: - kv 4: general.size_label str = 30B-A3B llama_model_loader: - kv 5: general.license str = apache-2.0 llama_model_loader: - kv 6: qwen3moe.block_count u32 = 48 llama_model_loader: - kv 7: qwen3moe.context_length u32 = 40960 llama_model_loader: - kv 8: qwen3moe.embedding_length u32 = 2048 llama_model_loader: - kv 9: qwen3moe.feed_forward_length u32 = 6144 llama_model_loader: - kv 10: qwen3moe.attention.head_count u32 = 32 llama_model_loader: - kv 11: qwen3moe.attention.head_count_kv u32 = 4 llama_model_loader: - kv 12: qwen3moe.rope.freq_base f32 = 1000000.000000 llama_model_loader: - kv 13: qwen3moe.attention.layer_norm_rms_epsilon f32 = 0.000001 llama_model_loader: - kv 14: qwen3moe.expert_used_count u32 = 8 llama_model_loader: - kv 15: qwen3moe.attention.key_length u32 = 128 llama_model_loader: - kv 16: qwen3moe.attention.value_length u32 = 128 llama_model_loader: - kv 17: qwen3moe.expert_count u32 = 128 llama_model_loader: - kv 18: qwen3moe.expert_feed_forward_length u32 = 768 llama_model_loader: - kv 19: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 20: tokenizer.ggml.pre str = qwen2 llama_model_loader: - kv 21: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 22: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 23: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 151645 llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 151643 llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 151643 llama_model_loader: - kv 27: tokenizer.ggml.add_bos_token bool = false llama_model_loader: - kv 28: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>... 
llama_model_loader: - kv 29: general.quantization_version u32 = 2 llama_model_loader: - kv 30: general.file_type u32 = 15 llama_model_loader: - type f32: 241 tensors llama_model_loader: - type f16: 48 tensors llama_model_loader: - type q4_K: 265 tensors llama_model_loader: - type q6_K: 25 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q4_K - Medium print_info: file size = 17.34 GiB (4.88 BPW) init_tokenizer: initializing tokenizer for type 2 time=2025-04-29T10:07:27.188+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model" load: control token: 151659 '<|fim_prefix|>' is not marked as EOG load: control token: 151656 '<|video_pad|>' is not marked as EOG load: control token: 151655 '<|image_pad|>' is not marked as EOG load: control token: 151653 '<|vision_end|>' is not marked as EOG load: control token: 151652 '<|vision_start|>' is not marked as EOG load: control token: 151651 '<|quad_end|>' is not marked as EOG load: control token: 151649 '<|box_end|>' is not marked as EOG load: control token: 151648 '<|box_start|>' is not marked as EOG load: control token: 151646 '<|object_ref_start|>' is not marked as EOG load: control token: 151644 '<|im_start|>' is not marked as EOG load: control token: 151661 '<|fim_suffix|>' is not marked as EOG load: control token: 151647 '<|object_ref_end|>' is not marked as EOG load: control token: 151660 '<|fim_middle|>' is not marked as EOG load: control token: 151654 '<|vision_pad|>' is not marked as EOG load: control token: 151650 '<|quad_start|>' is not marked as EOG load: special tokens cache size = 26 load: token to piece cache size = 0.9311 MB print_info: arch = qwen3moe print_info: vocab_only = 0 print_info: n_ctx_train = 40960 print_info: n_embd = 2048 print_info: n_layer = 48 print_info: n_head = 32 print_info: n_head_kv = 4 print_info: n_rot = 128 print_info: n_swa = 0 print_info: n_swa_pattern = 1 print_info: n_embd_head_k = 128 print_info: n_embd_head_v = 128 print_info: n_gqa = 8 print_info: n_embd_k_gqa = 512 print_info: n_embd_v_gqa = 512 print_info: f_norm_eps = 0.0e+00 print_info: f_norm_rms_eps = 1.0e-06 print_info: f_clamp_kqv = 0.0e+00 print_info: f_max_alibi_bias = 0.0e+00 print_info: f_logit_scale = 0.0e+00 print_info: f_attn_scale = 0.0e+00 print_info: n_ff = 6144 print_info: n_expert = 128 print_info: n_expert_used = 8 print_info: causal attn = 1 print_info: pooling type = 0 print_info: rope type = 2 print_info: rope scaling = linear print_info: freq_base_train = 1000000.0 print_info: freq_scale_train = 1 print_info: n_ctx_orig_yarn = 40960 print_info: rope_finetuned = unknown print_info: ssm_d_conv = 0 print_info: ssm_d_inner = 0 print_info: ssm_d_state = 0 print_info: ssm_dt_rank = 0 print_info: ssm_dt_b_c_rms = 0 print_info: model type = ?B print_info: model params = 30.53 B print_info: general.name = Qwen3 30B A3B print_info: n_ff_exp = 768 print_info: vocab type = BPE print_info: n_vocab = 151936 print_info: n_merges = 151387 print_info: BOS token = 151643 '<|endoftext|>' print_info: EOS token = 151645 '<|im_end|>' print_info: EOT token = 151645 '<|im_end|>' print_info: PAD token = 151643 '<|endoftext|>' print_info: LF token = 198 'Ċ' print_info: FIM PRE token = 151659 '<|fim_prefix|>' print_info: FIM SUF token = 151661 '<|fim_suffix|>' print_info: FIM MID token = 151660 '<|fim_middle|>' print_info: FIM PAD token = 151662 '<|fim_pad|>' print_info: FIM REP token = 151663 '<|repo_name|>' print_info: FIM SEP token = 151664 '<|file_sep|>' print_info: 
EOG token = 151643 '<|endoftext|>' print_info: EOG token = 151645 '<|im_end|>' print_info: EOG token = 151662 '<|fim_pad|>' print_info: EOG token = 151663 '<|repo_name|>' print_info: EOG token = 151664 '<|file_sep|>' print_info: max token length = 256 load_tensors: loading model tensors, this can take a while... (mmap = false) load_tensors: layer 0 assigned to device CUDA0, is_swa = 0 load_tensors: layer 1 assigned to device CUDA0, is_swa = 0 load_tensors: layer 2 assigned to device CUDA0, is_swa = 0 load_tensors: layer 3 assigned to device CUDA0, is_swa = 0 load_tensors: layer 4 assigned to device CUDA0, is_swa = 0 load_tensors: layer 5 assigned to device CUDA0, is_swa = 0 load_tensors: layer 6 assigned to device CUDA0, is_swa = 0 load_tensors: layer 7 assigned to device CUDA0, is_swa = 0 load_tensors: layer 8 assigned to device CUDA0, is_swa = 0 load_tensors: layer 9 assigned to device CUDA0, is_swa = 0 load_tensors: layer 10 assigned to device CUDA0, is_swa = 0 load_tensors: layer 11 assigned to device CUDA0, is_swa = 0 load_tensors: layer 12 assigned to device CUDA0, is_swa = 0 load_tensors: layer 13 assigned to device CUDA0, is_swa = 0 load_tensors: layer 14 assigned to device CUDA0, is_swa = 0 load_tensors: layer 15 assigned to device CUDA0, is_swa = 0 load_tensors: layer 16 assigned to device CUDA0, is_swa = 0 load_tensors: layer 17 assigned to device CUDA0, is_swa = 0 load_tensors: layer 18 assigned to device CUDA0, is_swa = 0 load_tensors: layer 19 assigned to device CUDA0, is_swa = 0 load_tensors: layer 20 assigned to device CUDA0, is_swa = 0 load_tensors: layer 21 assigned to device CUDA0, is_swa = 0 load_tensors: layer 22 assigned to device CUDA0, is_swa = 0 load_tensors: layer 23 assigned to device CUDA0, is_swa = 0 load_tensors: layer 24 assigned to device CUDA0, is_swa = 0 load_tensors: layer 25 assigned to device CUDA0, is_swa = 0 load_tensors: layer 26 assigned to device CUDA0, is_swa = 0 load_tensors: layer 27 assigned to device CUDA0, is_swa = 0 load_tensors: layer 28 assigned to device CUDA0, is_swa = 0 load_tensors: layer 29 assigned to device CUDA0, is_swa = 0 load_tensors: layer 30 assigned to device CUDA0, is_swa = 0 load_tensors: layer 31 assigned to device CUDA0, is_swa = 0 load_tensors: layer 32 assigned to device CUDA0, is_swa = 0 load_tensors: layer 33 assigned to device CUDA0, is_swa = 0 load_tensors: layer 34 assigned to device CUDA0, is_swa = 0 load_tensors: layer 35 assigned to device CUDA0, is_swa = 0 load_tensors: layer 36 assigned to device CUDA0, is_swa = 0 load_tensors: layer 37 assigned to device CUDA0, is_swa = 0 load_tensors: layer 38 assigned to device CUDA0, is_swa = 0 load_tensors: layer 39 assigned to device CUDA0, is_swa = 0 load_tensors: layer 40 assigned to device CUDA0, is_swa = 0 load_tensors: layer 41 assigned to device CUDA0, is_swa = 0 load_tensors: layer 42 assigned to device CUDA0, is_swa = 0 load_tensors: layer 43 assigned to device CUDA0, is_swa = 0 load_tensors: layer 44 assigned to device CUDA0, is_swa = 0 load_tensors: layer 45 assigned to device CUDA0, is_swa = 0 load_tensors: layer 46 assigned to device CUDA0, is_swa = 0 load_tensors: layer 47 assigned to device CUDA0, is_swa = 0 load_tensors: layer 48 assigned to device CUDA0, is_swa = 0 load_tensors: tensor 'token_embd.weight' (q4_K) (and 0 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead load_tensors: offloading 48 repeating layers to GPU load_tensors: offloading output layer to GPU load_tensors: offloaded 49/49 layers to GPU load_tensors: CUDA0 
model buffer size = 17587.24 MiB load_tensors: CPU model buffer size = 166.92 MiB load_all_data: using async uploads for device CUDA0, buffer type CUDA0, backend CUDA0 time=2025-04-29T10:07:27.439+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.00" time=2025-04-29T10:07:27.690+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.06" time=2025-04-29T10:07:27.941+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.22" time=2025-04-29T10:07:28.194+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.35" time=2025-04-29T10:07:28.445+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.52" time=2025-04-29T10:07:28.696+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.68" time=2025-04-29T10:07:28.947+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.84" load_all_data: no device found for buffer type CPU for async uploads time=2025-04-29T10:07:29.199+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.99" llama_context: constructing llama_context llama_context: n_seq_max = 4 llama_context: n_ctx = 8192 llama_context: n_ctx_per_seq = 2048 llama_context: n_batch = 2048 llama_context: n_ubatch = 512 llama_context: causal_attn = 1 llama_context: flash_attn = 0 llama_context: freq_base = 1000000.0 llama_context: freq_scale = 1 llama_context: n_ctx_per_seq (2048) < n_ctx_train (40960) -- the full capacity of the model will not be utilized set_abort_callback: call llama_context: CUDA_Host output buffer size = 2.35 MiB llama_context: n_ctx = 8192 llama_context: n_ctx = 8192 (padded) init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1 init: layer 0: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 1: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 2: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 3: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 4: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 5: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 6: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 7: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 8: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 9: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 10: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 11: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 12: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 13: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 14: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 15: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 16: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 17: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 18: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 19: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 20: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 21: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 22: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 23: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 24: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 25: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 26: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 27: 
n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 28: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 29: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 30: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 31: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 32: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 33: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 34: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 35: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 36: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 37: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 38: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 39: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 40: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 41: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 42: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 43: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 44: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 45: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 46: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: layer 47: n_embd_k_gqa = 512, n_embd_v_gqa = 512, dev = CUDA0 init: CUDA0 KV buffer size = 768.00 MiB llama_context: KV self size = 768.00 MiB, K (f16): 384.00 MiB, V (f16): 384.00 MiB llama_context: enumerating backends llama_context: backend_ptrs.size() = 2 llama_context: max_nodes = 65536 llama_context: worst-case: n_tokens = 512, n_seqs = 1, n_outputs = 0 llama_context: reserving graph for n_tokens = 512, n_seqs = 1 llama_context: reserving graph for n_tokens = 1, n_seqs = 1 llama_context: reserving graph for n_tokens = 512, n_seqs = 1 llama_context: CUDA0 compute buffer size = 552.00 MiB llama_context: CUDA_Host compute buffer size = 20.01 MiB llama_context: graph nodes = 3126 llama_context: graph splits = 2 time=2025-04-29T10:07:29.450+08:00 level=INFO source=server.go:619 msg="llama runner started in 2.52 seconds" time=2025-04-29T10:07:29.450+08:00 level=DEBUG source=sched.go:464 msg="finished setting up runner" model=D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac [GIN] 2025/04/29 - 10:07:29 | 200 | 2.7141409s | 127.0.0.1 | POST "/api/generate" time=2025-04-29T10:07:29.450+08:00 level=DEBUG source=sched.go:468 msg="context for request finished" time=2025-04-29T10:07:29.450+08:00 level=DEBUG source=sched.go:341 msg="runner with non-zero duration has gone idle, adding timer" modelPath=D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac duration=5m0s time=2025-04-29T10:07:29.450+08:00 level=DEBUG source=sched.go:359 msg="after processing request finished event" modelPath=D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac refCount=0 time=2025-04-29T10:07:45.985+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-04-29T10:07:45.986+08:00 level=DEBUG source=sched.go:577 msg="evaluating already loaded" model=D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac time=2025-04-29T10:07:45.987+08:00 level=DEBUG source=routes.go:1523 msg="chat request" images=0 prompt="<|im_start|>user\nhow far is 
moon<|im_end|>\n<|im_start|>assistant\n" time=2025-04-29T10:07:45.989+08:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=12 used=0 remaining=12 [GIN] 2025/04/29 - 10:08:08 | 200 | 22.9020241s | 127.0.0.1 | POST "/api/chat" time=2025-04-29T10:08:08.866+08:00 level=DEBUG source=sched.go:409 msg="context for request finished" time=2025-04-29T10:08:08.866+08:00 level=DEBUG source=sched.go:341 msg="runner with non-zero duration has gone idle, adding timer" modelPath=D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac duration=5m0s time=2025-04-29T10:08:08.866+08:00 level=DEBUG source=sched.go:359 msg="after processing request finished event" modelPath=D:\LLM\.ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac refCount=0 ``` ### OS Windows ### GPU Nvidia ### CPU AMD ### Ollama version 0.6.6
GiteaMirror added the bug label 2026-05-04 15:55:41 -05:00

@lwh9346 commented on GitHub (Apr 29, 2025):

Same issue on RTX 5090.

@vYLQs6 commented on GitHub (Apr 29, 2025):

For comparison, I'm getting 138.52 tok/sec when using LM Studio with the same prompt and the same Q4 model.

Since both projects are based on llama.cpp, I guess there is a bug somewhere in Ollama.

![Image](https://github.com/user-attachments/assets/a1bb5524-58f4-466b-9d39-e53929fcbd17)
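One way to sanity-check whether the gap is in Ollama's runner rather than in llama.cpp itself is to benchmark the same GGUF with llama.cpp's own `llama-bench` tool. A rough sketch, assuming a local llama.cpp build; the GGUF filename below is a placeholder:

```shell
# Point -m at the same Q4_K_M GGUF blob that Ollama downloaded (path is hypothetical).
# -ngl 99 offloads all layers to the GPU, -n 128 measures pure generation speed.
./llama-bench -m ./Qwen3-30B-A3B-Q4_K_M.gguf -ngl 99 -n 128
```

If `llama-bench` lands in the same range as LM Studio here (~138 t/s) while `ollama run qwen3:30b-a3b-q4_K_M --verbose` stays around 30 t/s on the same hardware, the regression is on the Ollama side rather than in the upstream kernels.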

@INDEX108 commented on GitHub (Apr 29, 2025):

Same issue with Qwen3:32b, poor performance on 2x 4090.

@bitcandy commented on GitHub (Apr 29, 2025):

Same poor GPU memory utilization issue here, but it is not only related to Qwen3:32b, it affects other models too... It utilizes only 1/2 or 2/3 of the memory of some GPUs.

@merc4derp commented on GitHub (Apr 29, 2025):

It's significantly slower in GPU mode on my 5070 Ti than on the 3700X in CPU mode. Completely broken atm.

@dpk-it commented on GitHub (Apr 29, 2025):

same issue with 5090

![Image](https://github.com/user-attachments/assets/3598ad32-a0ec-40f8-96c7-79f2c5ae8c4b)

<img width="208" alt="Image" src="https://github.com/user-attachments/assets/fac32154-1018-4c56-9b94-330435444573" />

@jfgonsalves commented on GitHub (Apr 29, 2025):

Yes I'm also seeing this behavior. Using Linux with 2x3090.

@tripleS-Dev commented on GitHub (Apr 29, 2025):

I'm also at around 120 W on a 5090, but LM Studio is very fast.

@Blueman2 commented on GitHub (Apr 29, 2025):

Yep, same issue here on windows with a 5090

@hsz1273327 commented on GitHub (Apr 30, 2025):

Same here, running on an 8700G under Ubuntu: poor performance on the 780M (9~10 t/s), even lower than pure CPU (16 t/s).

@bigZos commented on GitHub (Apr 30, 2025):

same issue here with 5090

@MX-Goliath commented on GitHub (May 1, 2025):

Same issue using a single 7900 XTX or 7900 XTX + 6950 XT. On CPU it works just fine.

@StailGot commented on GitHub (May 1, 2025):

7900 XTX: Ollama ~33 t/s, llama.cpp ROCm ~117 t/s.

@Lawlietr commented on GitHub (May 2, 2025):

I use 30b-a3b with a 3950X + 128 GB RAM and an RTX 3080 10 GB; it only gets 7.x tok/sec.
But on my second PC, a 4650G + 28 GB (the APU takes 4 GB), I get 15.99 tok/sec.
It's really weird.

@kshabanaa commented on GitHub (May 2, 2025):

Same issue with an RTX 3090; Ollama team, please try to fix it.

@cwiggi01 commented on GitHub (May 2, 2025):

Same issue.

@dominae commented on GitHub (May 2, 2025):

Same issues on Debian with AMD Instinct MI60 and on another Debian box with Nvidia A4000

@jmorganca commented on GitHub (May 4, 2025):

Hi all, sorry about the performance issue. It will be fixed in 0.6.8: https://github.com/ollama/ollama/releases/tag/v0.6.8-rc0. A pre-release is ready if you'd like to try it.
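For anyone who wants to try the release candidate right away: on Windows, download `OllamaSetup.exe` from the release page linked above. On Linux, the install script has supported pinning a version via the `OLLAMA_VERSION` variable; a sketch, assuming that override also applies to release candidates:

```shell
# Install a specific (pre-release) version; OLLAMA_VERSION is the documented override,
# whether it accepts this particular RC tag is an assumption.
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.6.8-rc0 sh

# Confirm which version is actually running
ollama -v
```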

@kargaranamir commented on GitHub (May 4, 2025):

> Hi all, sorry about the performance issue. It will be fixed in 0.6.8: https://github.com/ollama/ollama/releases/tag/v0.6.8-rc0. A pre-release is ready if you'd like to try it.

Hi @jmorganca, can you point out what the issue was and in which commit or pull request it was solved?

@VideoFX commented on GitHub (May 4, 2025):

Only 5-13 t/s here.

v0.6.8-rc0
A4000 (15 GB VRAM)
Ryzen 9 3650x, 64 GB DDR4

2048 context length
OLLAMA_NUM_PARALLEL=1

Very slow going. Maybe not enough VRAM for any more speed?
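That is likely the main factor here: the Q4_K_M weights alone are ~17.3 GiB (per the loader output earlier in this thread), so even with a 2048 context they cannot fully fit in ~15-16 GB of VRAM and some layers will fall back to the CPU. A quick way to check the split while a prompt is running:

```shell
# The PROCESSOR column shows how the model is placed; "100% GPU" is the fast path,
# something like "40%/60% CPU/GPU" means layers spilled to system RAM.
ollama ps
```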

@Lawlietr commented on GitHub (May 5, 2025):

In my case, on 0.6.8, the 4650G in pure CPU mode is still faster than the 3950X + RTX 3080.

Almost a two-fold performance gap.

@TungstenWolframite commented on GitHub (May 7, 2025):

> Hi all, sorry about the performance issue. It will be fixed in 0.6.8: https://github.com/ollama/ollama/releases/tag/v0.6.8-rc0. A pre-release is ready if you'd like to try it.

Thank you for fixing this issue!
My speeds were slower than LM Studio prior to the fix.
Now it's 3x faster than LM Studio :)

@chrisoutwright commented on GitHub (Jun 8, 2025):

same issue with 0.9.0

qwen3:30b-a3b-q4_K_M

about 30-40 t/sec

```

time=2025-06-08T19:48:49.166+02:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=49 layers.model=49 layers.offload=49 layers.split=25,24 memory.available="[22.8 GiB 22.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="44.6 GiB" memory.required.partial="44.6 GiB" memory.required.kv="7.0 GiB" memory.required.allocations="[22.4 GiB 22.2 GiB]" memory.weights.total="17.2 GiB" memory.weights.repeating="16.9 GiB" memory.weights.nonrepeating="243.4 MiB" memory.graph.full="9.3 GiB" memory.graph.partial="9.3 GiB"
llama_model_loader: loaded meta data with 31 key-value pairs and 579 tensors from D:\Ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3moe
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 30B A3B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 30B-A3B
llama_model_loader: - kv   5:                            general.license str              = apache-2.0
llama_model_loader: - kv   6:                       qwen3moe.block_count u32              = 48
llama_model_loader: - kv   7:                    qwen3moe.context_length u32              = 40960
llama_model_loader: - kv   8:                  qwen3moe.embedding_length u32              = 2048
llama_model_loader: - kv   9:               qwen3moe.feed_forward_length u32              = 6144
llama_model_loader: - kv  10:              qwen3moe.attention.head_count u32              = 32
llama_model_loader: - kv  11:           qwen3moe.attention.head_count_kv u32              = 4
llama_model_loader: - kv  12:                    qwen3moe.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:  qwen3moe.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                 qwen3moe.expert_used_count u32              = 8
llama_model_loader: - kv  15:              qwen3moe.attention.key_length u32              = 128
llama_model_loader: - kv  16:            qwen3moe.attention.value_length u32              = 128
llama_model_loader: - kv  17:                      qwen3moe.expert_count u32              = 128
llama_model_loader: - kv  18:        qwen3moe.expert_feed_forward_length u32              = 768
llama_model_loader: - kv  19:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  20:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  21:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  22:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  23:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  25:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  27:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - kv  30:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type  f16:   48 tensors
llama_model_loader: - type q4_K:  265 tensors
llama_model_loader: - type q6_K:   25 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 17.34 GiB (4.88 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3moe
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 30.53 B
print_info: general.name     = Qwen3 30B A3B
print_info: n_ff_exp         = 0
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-06-08T19:48:49.386+02:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\Chris\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model D:\\Ollama\\models\\blobs\\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac --ctx-size 76000 --batch-size 512 --n-gpu-layers 49 --threads 8 --no-mmap --parallel 2 --tensor-split 25,24 --port 52078"
time=2025-06-08T19:48:49.440+02:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-08T19:48:49.440+02:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-08T19:48:49.442+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-06-08T19:48:49.467+02:00 level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\Chris\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 1: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\Chris\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-06-08T19:48:49.898+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-06-08T19:48:49.898+02:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:52078"
time=2025-06-08T19:48:49.944+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23306 MiB free
llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 4090) - 22994 MiB free
llama_model_loader: loaded meta data with 31 key-value pairs and 579 tensors from D:\Ollama\models\blobs\sha256-e9183b5c18a0cf736578c1e3d1cbd4b7e98e3ad3be6176b68c20f156d54a07ac (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3moe
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 30B A3B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 30B-A3B
llama_model_loader: - kv   5:                            general.license str              = apache-2.0
llama_model_loader: - kv   6:                       qwen3moe.block_count u32              = 48
llama_model_loader: - kv   7:                    qwen3moe.context_length u32              = 40960
llama_model_loader: - kv   8:                  qwen3moe.embedding_length u32              = 2048
llama_model_loader: - kv   9:               qwen3moe.feed_forward_length u32              = 6144
llama_model_loader: - kv  10:              qwen3moe.attention.head_count u32              = 32
llama_model_loader: - kv  11:           qwen3moe.attention.head_count_kv u32              = 4
llama_model_loader: - kv  12:                    qwen3moe.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:  qwen3moe.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                 qwen3moe.expert_used_count u32              = 8
llama_model_loader: - kv  15:              qwen3moe.attention.key_length u32              = 128
llama_model_loader: - kv  16:            qwen3moe.attention.value_length u32              = 128
llama_model_loader: - kv  17:                      qwen3moe.expert_count u32              = 128
llama_model_loader: - kv  18:        qwen3moe.expert_feed_forward_length u32              = 768
llama_model_loader: - kv  19:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  20:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  21:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  22:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  23:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  25:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  27:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - kv  30:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type  f16:   48 tensors
llama_model_loader: - type q4_K:  265 tensors
llama_model_loader: - type q6_K:   25 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 17.34 GiB (4.88 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3moe
print_info: vocab_only       = 0
print_info: n_ctx_train      = 40960
print_info: n_embd           = 2048
print_info: n_layer          = 48
print_info: n_head           = 32
print_info: n_head_kv        = 4
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 8
print_info: n_embd_k_gqa     = 512
print_info: n_embd_v_gqa     = 512
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 6144
print_info: n_expert         = 128
print_info: n_expert_used    = 8
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 40960
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 30B.A3B
print_info: model params     = 30.53 B
print_info: general.name     = Qwen3 30B A3B
print_info: n_ff_exp         = 768
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 48 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 49/49 layers to GPU
load_tensors:          CPU model buffer size =   166.92 MiB
load_tensors:        CUDA0 model buffer size =  9008.48 MiB
load_tensors:        CUDA1 model buffer size =  8578.76 MiB

![Image](https://github.com/user-attachments/assets/f34327fa-db44-4bf7-82a6-840da93d7cfb)

About 170 W on the 4090 and 70 W on the 3090.

@maglat commented on GitHub (Aug 7, 2025):

Watching nvidia-smi during processing, the latest qwen3-30B-a3b only draws about 350 W of the 550 W limit on my RTX 5090. Processing feels slow.
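
For anyone else trying to capture this, a minimal way to log power draw and GPU utilization alongside a generation is the following `nvidia-smi` query (a generic sketch, not from the original report; it assumes a recent NVIDIA driver with `nvidia-smi` on the PATH):

```
# Sample power draw, GPU utilization and VRAM use once per second
nvidia-smi --query-gpu=timestamp,name,power.draw,utilization.gpu,memory.used --format=csv -l 1
```

Running `ollama run qwen3:30b-a3b-q4_K_M --verbose` in a second terminal at the same time gives the tokens/s figure to line up against the power/utilization trace.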

@lukasz-secret-account commented on GitHub (Aug 17, 2025):

Yeah, the issue is still alive.

@gengyuchao commented on GitHub (Aug 21, 2025):

Ollama version 0.11.3 has the same issue on RTX 5090.

@maxi1134 commented on GitHub (Oct 12, 2025):

The problem is still present in October.

nvidia-smi does not report anything over 80% utilization when running Qwen3 MoE 30b-a3b, but I get 100% utilization when running Qwen3 4B Q4_K_M.
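
A simple way to reproduce this comparison is to run both models non-interactively with `--verbose` and compare the reported eval rates while watching `nvidia-smi` (a sketch only; the `qwen3:4b-q4_K_M` library tag for the dense model is an assumption, the MoE tag is the one used earlier in this thread):

```
# MoE model: reportedly stays below ~80% GPU utilization
ollama run qwen3:30b-a3b-q4_K_M --verbose "Explain how tides work."

# Dense 4B model at the same quantization: reportedly reaches ~100%
ollama run qwen3:4b-q4_K_M --verbose "Explain how tides work."
```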

@YazanGhafir commented on GitHub (Oct 20, 2025):

Same issue on an Nvidia A16 cluster with the latest Ollama 0.12.6.

@SurealCereal commented on GitHub (Oct 20, 2025):

I am seeing this problem with Ollama 0.12.6 on Docker/Linux, running Qwen3-30B-A3B-Instruct-2507 (Q4_K_M and Q4_K_XL) on an RTX PRO 6000 Server card. It typically tops out at about 20% GPU utilization. There is a similar problem with GPT-OSS 120B, which reaches 25% GPU utilization. I tried the following settings:

```
OLLAMA_NEW_ENGINE=0 and 1
OLLAMA_FLASH_ATTENTION=1
OLLAMA_KV_CACHE_TYPE: q8_0
```
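
For reference, a sketch of how these variables would be passed to the Ollama container on Docker/Linux (the container name, volume, and port are the defaults from the Ollama Docker instructions, not details from this report; per the comment above, none of these settings appear to have changed the behaviour):

```
# Sketch: passing the same settings to the official Ollama container
docker run -d --gpus=all \
  -e OLLAMA_FLASH_ATTENTION=1 \
  -e OLLAMA_KV_CACHE_TYPE=q8_0 \
  -e OLLAMA_NEW_ENGINE=1 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama
```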

@emzaedu commented on GitHub (Oct 30, 2025):

Same problem with Ollama 0.12.7 on Windows 11: the RTX 5090 peaks at 40-44% with Qwen3-30B-A3B-Instruct-2507 / Qwen3-VL-30B-A3B-Instruct.
![Image](https://github.com/user-attachments/assets/b77b20fa-0cca-4ed4-84aa-62c68ad5cd86)

@SurealCereal commented on GitHub (Nov 10, 2025):

I am seeing this in 0.12.10. At first `qwen3-vl:30b` was working properly, using close to 100% of the RTX PRO 6000 GPU.

Two things consistently make it slow: switching to other models, and running several queries one after another; eventually it breaks and gets slow. Waiting many hours also seems to trigger it. Same problem with GPT-OSS 120B.

The models end up only able to use 10-25% of the GPU, mostly hovering around 15-20%. I tried restarting the Ollama container and also re-creating it, but observed no change. It's like something breaks in the Docker engine, CUDA, or the driver. Restarting the Docker service fixes it.
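
A minimal sketch of that workaround on a systemd-based Linux host (the service and container names are the usual defaults, not taken from this report):

```
# Restart the Docker daemon itself; per the report, restarting only the container did not help
sudo systemctl restart docker

# Start the Ollama container again if it has no restart policy
docker start ollama
```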

Reference: github-starred/ollama#68935