Ollama is not utilizing GPU #6830

Closed
opened 2025-11-12 13:46:23 -06:00 by GiteaMirror · 3 comments

Originally created by @oo33shan on GitHub (Apr 23, 2025).

What is the issue?
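The scheduler detects the RTX 5070 Ti and concludes the model fits entirely in VRAM (layers.offload=49), yet load_tensors then assigns all 49 layers to the CPU and inference runs without the GPU. Full debug log: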

2025/04/23 21:08:47 routes.go:1231: INFO server config env="map[CUDA_VISIBLE_DEVICES:GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\ollama\Models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-04-23T21:08:47.812+08:00 level=INFO source=images.go:458 msg="total blobs: 11"
time=2025-04-23T21:08:47.812+08:00 level=INFO source=images.go:465 msg="total unused blobs removed: 0"
time=2025-04-23T21:08:47.812+08:00 level=INFO source=routes.go:1298 msg="Listening on 127.0.0.1:11434 (version 0.6.5)"
time=2025-04-23T21:08:47.812+08:00 level=DEBUG source=sched.go:107 msg="starting llm scheduler"
time=2025-04-23T21:08:47.813+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-04-23T21:08:47.813+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-04-23T21:08:47.813+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=0 threads=16
time=2025-04-23T21:08:47.813+08:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-04-23T21:08:47.813+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvml.dll
time=2025-04-23T21:08:47.813+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\nvml.dll C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\nvml.dll C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\libnvvp\nvml.dll C:\Program Files\Python313\Scripts\nvml.dll C:\Program Files\Python313\nvml.dll C:\WINDOWS\system32\nvml.dll C:\WINDOWS\nvml.dll C:\WINDOWS\System32\Wbem\nvml.dll C:\WINDOWS\System32\WindowsPowerShell\v1.0\nvml.dll C:\WINDOWS\System32\OpenSSH\nvml.dll D:\mingw64\bin\nvml.dll C:\Program Files\NVIDIA Corporation\NVIDIA App\NvDLISR\nvml.dll C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvml.dll C:\Users\cc\AppData\Local\Microsoft\WindowsApps\python.exe\nvml.dll D:\PotPlayer\Module\Whisper\Faster-Whisper-XXL\faster-whisper-xxl.exe\nvml.dll C:\Program Files\NVIDIA Corporation\Nsight Compute 2025.1.1\nvml.dll C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\nvml.dll C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\libnvvp\nvml.dll C:\Users\cc\AppData\Local\Programs\Ollama\nvml.dll C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\nvml.dll C:\Program Files\Docker\Docker\resources\bin\nvml.dll C:\Users\cc\AppData\Local\Microsoft\WindowsApps\nvml.dll D:\Microsoft VS Code\bin\nvml.dll D:\ollama\nvml.dll D:\Bandizip\nvml.dll C:\Users\cc\AppData\Local\Microsoft\WindowsApps\python.exe\nvml.dll C:\Users\cc\AppData\Local\Programs\Ollama\nvml.dll C:\Users\cc\AppData\Local\Programs\Ollama\nvml.dll c:\Windows\System32\nvml.dll]"
time=2025-04-23T21:08:47.813+08:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvml.dll"
time=2025-04-23T21:08:47.813+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[C:\WINDOWS\system32\nvml.dll c:\Windows\System32\nvml.dll]"
time=2025-04-23T21:08:47.829+08:00 level=DEBUG source=gpu.go:111 msg="nvidia-ml loaded" library=C:\WINDOWS\system32\nvml.dll
time=2025-04-23T21:08:47.830+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=nvcuda.dll
time=2025-04-23T21:08:47.830+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\nvcuda.dll C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\nvcuda.dll C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\libnvvp\nvcuda.dll C:\Program Files\Python313\Scripts\nvcuda.dll C:\Program Files\Python313\nvcuda.dll C:\WINDOWS\system32\nvcuda.dll C:\WINDOWS\nvcuda.dll C:\WINDOWS\System32\Wbem\nvcuda.dll C:\WINDOWS\System32\WindowsPowerShell\v1.0\nvcuda.dll C:\WINDOWS\System32\OpenSSH\nvcuda.dll D:\mingw64\bin\nvcuda.dll C:\Program Files\NVIDIA Corporation\NVIDIA App\NvDLISR\nvcuda.dll C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvcuda.dll C:\Users\cc\AppData\Local\Microsoft\WindowsApps\python.exe\nvcuda.dll D:\PotPlayer\Module\Whisper\Faster-Whisper-XXL\faster-whisper-xxl.exe\nvcuda.dll C:\Program Files\NVIDIA Corporation\Nsight Compute 2025.1.1\nvcuda.dll C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\nvcuda.dll C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\libnvvp\nvcuda.dll C:\Users\cc\AppData\Local\Programs\Ollama\nvcuda.dll C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\nvcuda.dll C:\Program Files\Docker\Docker\resources\bin\nvcuda.dll C:\Users\cc\AppData\Local\Microsoft\WindowsApps\nvcuda.dll D:\Microsoft VS Code\bin\nvcuda.dll D:\ollama\nvcuda.dll D:\Bandizip\nvcuda.dll C:\Users\cc\AppData\Local\Microsoft\WindowsApps\python.exe\nvcuda.dll C:\Users\cc\AppData\Local\Programs\Ollama\nvcuda.dll C:\Users\cc\AppData\Local\Programs\Ollama\nvcuda.dll c:\windows\system*\nvcuda.dll]"
time=2025-04-23T21:08:47.830+08:00 level=DEBUG source=gpu.go:529 msg="skipping PhysX cuda library path" path="C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvcuda.dll"
time=2025-04-23T21:08:47.831+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[C:\WINDOWS\system32\nvcuda.dll]
initializing C:\WINDOWS\system32\nvcuda.dll
dlsym: cuInit - 00007FFB66FF1F80
dlsym: cuDriverGetVersion - 00007FFB66FF2020
dlsym: cuDeviceGetCount - 00007FFB66FF2816
dlsym: cuDeviceGet - 00007FFB66FF2810
dlsym: cuDeviceGetAttribute - 00007FFB66FF2170
dlsym: cuDeviceGetUuid - 00007FFB66FF2822
dlsym: cuDeviceGetName - 00007FFB66FF281C
dlsym: cuCtxCreate_v3 - 00007FFB66FF2894
dlsym: cuMemGetInfo_v2 - 00007FFB66FF2996
dlsym: cuCtxDestroy - 00007FFB66FF28A6
calling cuInit
calling cuDriverGetVersion
raw version 0x2f3a
CUDA driver version: 12.9
calling cuDeviceGetCount
device count 1
time=2025-04-23T21:08:47.875+08:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=C:\WINDOWS\system32\nvcuda.dll
[GPU-4e1c9188-638e-6afd-457b-9715a3f90b26] CUDA totalMem 16302 mb
[GPU-4e1c9188-638e-6afd-457b-9715a3f90b26] CUDA freeMem 14923 mb
[GPU-4e1c9188-638e-6afd-457b-9715a3f90b26] Compute Capability 12.0
time=2025-04-23T21:08:48.013+08:00 level=DEBUG source=amd_hip_windows.go:88 msg=hipDriverGetVersion version=60342560
time=2025-04-23T21:08:48.013+08:00 level=INFO source=amd_hip_windows.go:103 msg="AMD ROCm reports no devices found"
time=2025-04-23T21:08:48.013+08:00 level=INFO source=amd_windows.go:49 msg="no compatible amdgpu devices detected"
releasing cuda driver library
releasing nvml library
time=2025-04-23T21:08:48.015+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 library=cuda variant=v12 compute=12.0 driver=12.9 name="NVIDIA GeForce RTX 5070 Ti" total="15.9 GiB" available="14.6 GiB"
[GIN] 2025/04/23 - 21:09:01 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/04/23 - 21:09:01 | 200 | 46.5132ms | 127.0.0.1 | POST "/api/show"
time=2025-04-23T21:09:01.517+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.3 GiB" before.free_swap="40.8 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="40.7 GiB"
time=2025-04-23T21:09:01.540+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="14.6 GiB" now.total="15.9 GiB" now.free="13.1 GiB" now.used="2.9 GiB"
releasing nvml library
time=2025-04-23T21:09:01.541+08:00 level=DEBUG source=sched.go:183 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-04-23T21:09:01.560+08:00 level=DEBUG source=sched.go:226 msg="loading first model" model=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0
time=2025-04-23T21:09:01.560+08:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[13.1 GiB]"
time=2025-04-23T21:09:01.560+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.vision.block_count default=0
time=2025-04-23T21:09:01.561+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-04-23T21:09:01.561+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-04-23T21:09:01.561+08:00 level=INFO source=sched.go:716 msg="new model will fit in available VRAM in single GPU, loading" model=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 parallel=4 available=14030696448 required="10.8 GiB"
time=2025-04-23T21:09:01.561+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="40.7 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="40.7 GiB"
time=2025-04-23T21:09:01.571+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="13.1 GiB" now.total="15.9 GiB" now.free="13.1 GiB" now.used="2.9 GiB"
releasing nvml library
time=2025-04-23T21:09:01.571+08:00 level=INFO source=server.go:105 msg="system memory" total="31.1 GiB" free="22.2 GiB" free_swap="40.7 GiB"
time=2025-04-23T21:09:01.571+08:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[13.1 GiB]"
time=2025-04-23T21:09:01.571+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.vision.block_count default=0
time=2025-04-23T21:09:01.571+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-04-23T21:09:01.571+08:00 level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-04-23T21:09:01.571+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[13.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.8 GiB" memory.required.partial="10.8 GiB" memory.required.kv="1.5 GiB" memory.required.allocations="[10.8 GiB]" memory.weights.total="8.0 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="676.0 MiB" memory.graph.partial="916.1 MiB"
time=2025-04-23T21:09:01.572+08:00 level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 14B Instruct 1M Abliterated
llama_model_loader: - kv 3: general.finetune str = Instruct-1m-abliterated
llama_model_loader: - kv 4: general.basename str = Qwen2.5
llama_model_loader: - kv 5: general.size_label str = 14B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/huihui-ai/Qwen...
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 14B Instruct 1M
llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-1...
llama_model_loader: - kv 12: general.tags arr[str,4] = ["chat", "abliterated", "uncensored",...
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: qwen2.block_count u32 = 48
llama_model_loader: - kv 15: qwen2.context_length u32 = 1010000
llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 13824
llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 10000000.000000
llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 22: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 23: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 24: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 25: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 26: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 28: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 29: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 30: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 31: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 32: general.quantization_version u32 = 2
llama_model_loader: - kv 33: general.file_type u32 = 15
llama_model_loader: - type f32: 241 tensors
llama_model_loader: - type q4_K: 289 tensors
llama_model_loader: - type q6_K: 49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 8.37 GiB (4.87 BPW)
init_tokenizer: initializing tokenizer for type 2
load: control token: 151659 '<|fim_prefix|>' is not marked as EOG
load: control token: 151656 '<|video_pad|>' is not marked as EOG
load: control token: 151655 '<|image_pad|>' is not marked as EOG
load: control token: 151653 '<|vision_end|>' is not marked as EOG
load: control token: 151652 '<|vision_start|>' is not marked as EOG
load: control token: 151651 '<|quad_end|>' is not marked as EOG
load: control token: 151649 '<|box_end|>' is not marked as EOG
load: control token: 151648 '<|box_start|>' is not marked as EOG
load: control token: 151646 '<|object_ref_start|>' is not marked as EOG
load: control token: 151644 '<|im_start|>' is not marked as EOG
load: control token: 151661 '<|fim_suffix|>' is not marked as EOG
load: control token: 151647 '<|object_ref_end|>' is not marked as EOG
load: control token: 151660 '<|fim_middle|>' is not marked as EOG
load: control token: 151654 '<|vision_pad|>' is not marked as EOG
load: control token: 151650 '<|quad_start|>' is not marked as EOG
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch = qwen2
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 14.77 B
print_info: general.name = Qwen2.5 14B Instruct 1M Abliterated
print_info: vocab type = BPE
print_info: n_vocab = 152064
print_info: n_merges = 151387
print_info: BOS token = 151643 '<|endoftext|>'
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151643 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-04-23T21:09:01.699+08:00 level=DEBUG source=server.go:335 msg="adding gpu library" path=C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
time=2025-04-23T21:09:01.700+08:00 level=DEBUG source=server.go:343 msg="adding gpu dependency paths" paths=[C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12]
time=2025-04-23T21:09:01.700+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="C:\Users\cc\AppData\Local\Programs\Ollama\ollama.exe runner --model D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --verbose --threads 8 --no-mmap --parallel 4 --port 1915"
time=2025-04-23T21:09:01.700+08:00 level=DEBUG source=server.go:423 msg=subprocess environment="[CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8 CUDA_PATH_V12_8=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8 CUDA_VISIBLE_DEVICES=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 PATH=C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\libnvvp;C:\Program Files\Python313\Scripts\;C:\Program Files\Python313\;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;D:\mingw64\bin;C:\Program Files\NVIDIA Corporation\NVIDIA App\NvDLISR;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Users\cc\AppData\Local\Microsoft\WindowsApps\python.exe;D:\PotPlayer\Module\Whisper\Faster-Whisper-XXL\faster-whisper-xxl.exe;C:\Program Files\NVIDIA Corporation\Nsight Compute 2025.1.1\;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\libnvvp;C:\Users\cc\AppData\Local\Programs\Ollama;C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama;C:\Program Files\Docker\Docker\resources\bin;C:\Users\cc\AppData\Local\Microsoft\WindowsApps;D:\Microsoft VS Code\bin;D:\ollama;D:\Bandizip\;C:\Users\cc\AppData\Local\Microsoft\WindowsApps\python.exe;;C:\Users\cc\AppData\Local\Programs\Ollama;C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12;C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama]"
time=2025-04-23T21:09:01.702+08:00 level=INFO source=sched.go:451 msg="loaded runners" count=1
time=2025-04-23T21:09:01.702+08:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-23T21:09:01.702+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-23T21:09:01.721+08:00 level=INFO source=runner.go:853 msg="starting go runner"
time=2025-04-23T21:09:01.726+08:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin"
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\libnvvp"
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\Program Files\Python313\Scripts"
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\Program Files\Python313"
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\WINDOWS\system32
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\WINDOWS
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\WINDOWS\System32\Wbem
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\WINDOWS\System32\WindowsPowerShell\v1.0
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\WINDOWS\System32\OpenSSH
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=D:\mingw64\bin
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\Program Files\NVIDIA Corporation\NVIDIA App\NvDLISR"
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common"
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\cc\AppData\Local\Microsoft\WindowsApps\python.exe
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=D:\PotPlayer\Module\Whisper\Faster-Whisper-XXL\faster-whisper-xxl.exe
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\Program Files\NVIDIA Corporation\Nsight Compute 2025.1.1"
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin"
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\libnvvp"
time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=C:\Users\cc\AppData\Local\Programs\Ollama
time=2025-04-23T21:09:21.441+08:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama
ggml_backend_load_best: failed to load C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_backend_load_best: failed to load C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_backend_load_best: failed to load C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_backend_load_best: failed to load C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-sandybridge.dll
ggml_backend_load_best: failed to load C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-skylakex.dll
time=2025-04-23T21:09:21.502+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\Program Files\Docker\Docker\resources\bin"
time=2025-04-23T21:09:21.502+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\cc\AppData\Local\Microsoft\WindowsApps
time=2025-04-23T21:09:21.502+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="D:\Microsoft VS Code\bin"
time=2025-04-23T21:09:21.502+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=D:\ollama
time=2025-04-23T21:09:21.502+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=D:\Bandizip
time=2025-04-23T21:09:21.502+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\cc\AppData\Local\Microsoft\WindowsApps\python.exe
time=2025-04-23T21:09:21.502+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(clang)
time=2025-04-23T21:09:21.502+08:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:1915"
llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 14B Instruct 1M Abliterated
llama_model_loader: - kv 3: general.finetune str = Instruct-1m-abliterated
llama_model_loader: - kv 4: general.basename str = Qwen2.5
llama_model_loader: - kv 5: general.size_label str = 14B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/huihui-ai/Qwen...
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 14B Instruct 1M
llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-1...
llama_model_loader: - kv 12: general.tags arr[str,4] = ["chat", "abliterated", "uncensored",...
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: qwen2.block_count u32 = 48
llama_model_loader: - kv 15: qwen2.context_length u32 = 1010000
llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 13824
llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 10000000.000000
llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 22: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 23: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 24: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 25: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 26: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 28: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 29: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 30: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 31: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 32: general.quantization_version u32 = 2
llama_model_loader: - kv 33: general.file_type u32 = 15
llama_model_loader: - type f32: 241 tensors
llama_model_loader: - type q4_K: 289 tensors
llama_model_loader: - type q6_K: 49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 8.37 GiB (4.87 BPW)
init_tokenizer: initializing tokenizer for type 2
load: control token: 151659 '<|fim_prefix|>' is not marked as EOG
load: control token: 151656 '<|video_pad|>' is not marked as EOG
load: control token: 151655 '<|image_pad|>' is not marked as EOG
load: control token: 151653 '<|vision_end|>' is not marked as EOG
load: control token: 151652 '<|vision_start|>' is not marked as EOG
load: control token: 151651 '<|quad_end|>' is not marked as EOG
load: control token: 151649 '<|box_end|>' is not marked as EOG
load: control token: 151648 '<|box_start|>' is not marked as EOG
load: control token: 151646 '<|object_ref_start|>' is not marked as EOG
load: control token: 151644 '<|im_start|>' is not marked as EOG
load: control token: 151661 '<|fim_suffix|>' is not marked as EOG
load: control token: 151647 '<|object_ref_end|>' is not marked as EOG
load: control token: 151660 '<|fim_middle|>' is not marked as EOG
load: control token: 151654 '<|vision_pad|>' is not marked as EOG
load: control token: 151650 '<|quad_start|>' is not marked as EOG
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch = qwen2
print_info: vocab_only = 0
print_info: n_ctx_train = 1010000
print_info: n_embd = 5120
print_info: n_layer = 48
print_info: n_head = 40
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 5
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: n_ff = 13824
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 10000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 1010000
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 14B
print_info: model params = 14.77 B
print_info: general.name = Qwen2.5 14B Instruct 1M Abliterated
print_info: vocab type = BPE
print_info: n_vocab = 152064
print_info: n_merges = 151387
print_info: BOS token = 151643 '<|endoftext|>'
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151643 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: layer 0 assigned to device CPU
load_tensors: layer 1 assigned to device CPU
load_tensors: layer 2 assigned to device CPU
load_tensors: layer 3 assigned to device CPU
load_tensors: layer 4 assigned to device CPU
load_tensors: layer 5 assigned to device CPU
load_tensors: layer 6 assigned to device CPU
load_tensors: layer 7 assigned to device CPU
load_tensors: layer 8 assigned to device CPU
load_tensors: layer 9 assigned to device CPU
load_tensors: layer 10 assigned to device CPU
load_tensors: layer 11 assigned to device CPU
load_tensors: layer 12 assigned to device CPU
load_tensors: layer 13 assigned to device CPU
load_tensors: layer 14 assigned to device CPU
load_tensors: layer 15 assigned to device CPU
load_tensors: layer 16 assigned to device CPU
load_tensors: layer 17 assigned to device CPU
load_tensors: layer 18 assigned to device CPU
load_tensors: layer 19 assigned to device CPU
load_tensors: layer 20 assigned to device CPU
load_tensors: layer 21 assigned to device CPU
load_tensors: layer 22 assigned to device CPU
load_tensors: layer 23 assigned to device CPU
load_tensors: layer 24 assigned to device CPU
load_tensors: layer 25 assigned to device CPU
load_tensors: layer 26 assigned to device CPU
load_tensors: layer 27 assigned to device CPU
load_tensors: layer 28 assigned to device CPU
load_tensors: layer 29 assigned to device CPU
load_tensors: layer 30 assigned to device CPU
load_tensors: layer 31 assigned to device CPU
load_tensors: layer 32 assigned to device CPU
load_tensors: layer 33 assigned to device CPU
load_tensors: layer 34 assigned to device CPU
load_tensors: layer 35 assigned to device CPU
load_tensors: layer 36 assigned to device CPU
load_tensors: layer 37 assigned to device CPU
load_tensors: layer 38 assigned to device CPU
load_tensors: layer 39 assigned to device CPU
load_tensors: layer 40 assigned to device CPU
load_tensors: layer 41 assigned to device CPU
load_tensors: layer 42 assigned to device CPU
load_tensors: layer 43 assigned to device CPU
load_tensors: layer 44 assigned to device CPU
load_tensors: layer 45 assigned to device CPU
load_tensors: layer 46 assigned to device CPU
load_tensors: layer 47 assigned to device CPU
load_tensors: layer 48 assigned to device CPU
load_tensors: CPU model buffer size = 8566.04 MiB
load_all_data: no device found for buffer type CPU for async uploads
time=2025-04-23T21:09:21.727+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
time=2025-04-23T21:09:21.977+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.05"
time=2025-04-23T21:09:22.227+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.17"
time=2025-04-23T21:09:22.477+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.24"
time=2025-04-23T21:09:22.727+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.30"
time=2025-04-23T21:09:22.977+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.37"
time=2025-04-23T21:09:23.228+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.44"
time=2025-04-23T21:09:23.478+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.51"
time=2025-04-23T21:09:23.729+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.57"
time=2025-04-23T21:09:23.979+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.64"
time=2025-04-23T21:09:24.229+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.70"
time=2025-04-23T21:09:24.479+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.76"
time=2025-04-23T21:09:24.730+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.83"
time=2025-04-23T21:09:24.980+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.89"
time=2025-04-23T21:09:25.230+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.94"
time=2025-04-23T21:09:25.481+08:00 level=DEBUG source=server.go:625 msg="model load progress 1.00"
llama_init_from_model: n_seq_max = 4
llama_init_from_model: n_ctx = 8192
llama_init_from_model: n_ctx_per_seq = 2048
llama_init_from_model: n_batch = 2048
llama_init_from_model: n_ubatch = 512
llama_init_from_model: flash_attn = 0
llama_init_from_model: freq_base = 10000000.0
llama_init_from_model: freq_scale = 1
llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (1010000) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
llama_kv_cache_init: layer 0: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 1: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 2: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 3: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 4: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 5: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 6: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 7: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 8: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 9: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 10: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 11: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 12: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 13: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 14: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 15: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 16: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 17: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 18: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 19: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 20: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 21: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 22: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 23: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 24: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 25: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 26: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 27: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 28: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 29: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 30: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 31: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 32: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 33: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 34: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 35: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 36: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 37: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 38: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 39: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 40: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 41: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 42: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 43: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 44: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 45: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 46: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 47: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: CPU KV buffer size = 1536.00 MiB
llama_init_from_model: KV self size = 1536.00 MiB, K (f16): 768.00 MiB, V (f16): 768.00 MiB
llama_init_from_model: CPU output buffer size = 2.40 MiB
llama_init_from_model: CPU compute buffer size = 696.01 MiB
llama_init_from_model: graph nodes = 1686
llama_init_from_model: graph splits = 1
time=2025-04-23T21:09:25.731+08:00 level=INFO source=server.go:619 msg="llama runner started in 24.03 seconds"
time=2025-04-23T21:09:25.731+08:00 level=DEBUG source=sched.go:464 msg="finished setting up runner" model=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0
[GIN] 2025/04/23 - 21:09:25 | 200 | 24.2252665s | 127.0.0.1 | POST "/api/generate"
time=2025-04-23T21:09:25.731+08:00 level=DEBUG source=sched.go:468 msg="context for request finished"
time=2025-04-23T21:09:25.732+08:00 level=DEBUG source=sched.go:341 msg="runner with non-zero duration has gone idle, adding timer" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 duration=5m0s
time=2025-04-23T21:09:25.732+08:00 level=DEBUG source=sched.go:359 msg="after processing request finished event" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 refCount=0
[GIN] 2025/04/23 - 21:09:36 | 200 | 517.5µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/04/23 - 21:09:36 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
time=2025-04-23T21:09:44.166+08:00 level=DEBUG source=sched.go:577 msg="evaluating already loaded" model=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0
time=2025-04-23T21:09:44.167+08:00 level=DEBUG source=routes.go:1522 msg="chat request" images=0 prompt="<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\nca<|im_end|>\n<|im_start|>assistant\n"
time=2025-04-23T21:09:44.169+08:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=30 used=0 remaining=30
[GIN] 2025/04/23 - 21:10:03 | 200 | 19.5927388s | 127.0.0.1 | POST "/api/chat"
time=2025-04-23T21:10:03.747+08:00 level=DEBUG source=sched.go:409 msg="context for request finished"
time=2025-04-23T21:10:03.747+08:00 level=DEBUG source=sched.go:341 msg="runner with non-zero duration has gone idle, adding timer" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 duration=5m0s
time=2025-04-23T21:10:03.747+08:00 level=DEBUG source=sched.go:359 msg="after processing request finished event" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 refCount=0
time=2025-04-23T21:15:03.751+08:00 level=DEBUG source=sched.go:343 msg="timer expired, expiring to unload" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0
time=2025-04-23T21:15:03.751+08:00 level=DEBUG source=sched.go:362 msg="runner expired event received" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0
time=2025-04-23T21:15:03.751+08:00 level=DEBUG source=sched.go:377 msg="got lock to unload" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0
time=2025-04-23T21:15:03.751+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="40.7 GiB" now.total="31.1 GiB" now.free="12.2 GiB" now.free_swap="28.9 GiB"
time=2025-04-23T21:15:03.763+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="13.1 GiB" now.total="15.9 GiB" now.free="12.5 GiB" now.used="3.4 GiB"
releasing nvml library
time=2025-04-23T21:15:03.783+08:00 level=DEBUG source=server.go:1001 msg="stopping llama server"
time=2025-04-23T21:15:03.783+08:00 level=DEBUG source=server.go:1007 msg="waiting for llama server to exit"
time=2025-04-23T21:15:04.001+08:00 level=DEBUG source=server.go:1011 msg="llama server stopped"
time=2025-04-23T21:15:04.001+08:00 level=DEBUG source=sched.go:382 msg="runner released" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0
time=2025-04-23T21:15:04.013+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="12.2 GiB" before.free_swap="28.9 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB"
time=2025-04-23T21:15:04.045+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.5 GiB" now.total="15.9 GiB" now.free="12.5 GiB" now.used="3.4 GiB"
releasing nvml library
time=2025-04-23T21:15:04.263+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB"
time=2025-04-23T21:15:04.277+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.5 GiB" now.total="15.9 GiB" now.free="12.5 GiB" now.used="3.4 GiB"
releasing nvml library
time=2025-04-23T21:15:04.513+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB"
time=2025-04-23T21:15:04.526+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.5 GiB" now.total="15.9 GiB" now.free="12.5 GiB" now.used="3.4 GiB"
releasing nvml library
time=2025-04-23T21:15:04.763+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB"
time=2025-04-23T21:15:04.775+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.5 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB"
releasing nvml library
time=2025-04-23T21:15:05.013+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB"
time=2025-04-23T21:15:05.024+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB"
releasing nvml library
time=2025-04-23T21:15:05.263+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB"
time=2025-04-23T21:15:05.273+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB"
releasing nvml library
time=2025-04-23T21:15:05.513+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB"
time=2025-04-23T21:15:05.522+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB"
releasing nvml library
time=2025-04-23T21:15:05.763+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB"
time=2025-04-23T21:15:05.785+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB"
releasing nvml library
time=2025-04-23T21:15:06.013+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB"
time=2025-04-23T21:15:06.032+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB"
releasing nvml library
time=2025-04-23T21:15:06.263+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB"
time=2025-04-23T21:15:06.279+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB"
releasing nvml library
time=2025-04-23T21:15:06.513+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB"
time=2025-04-23T21:15:06.527+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB"
releasing nvml library
time=2025-04-23T21:15:06.763+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB"
time=2025-04-23T21:15:06.774+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB"
releasing nvml library
time=2025-04-23T21:15:07.013+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB"
time=2025-04-23T21:15:07.023+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.5 GiB" now.used="3.4 GiB"
releasing nvml library
time=2025-04-23T21:15:07.263+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB"
time=2025-04-23T21:15:07.287+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.5 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.4 GiB"
releasing nvml library
time=2025-04-23T21:15:07.513+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB"
time=2025-04-23T21:15:07.520+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.5 GiB" now.used="3.4 GiB"
releasing nvml library
time=2025-04-23T21:15:07.520+08:00 level=DEBUG source=sched.go:661 msg="gpu VRAM free memory converged after 3.77 seconds" model=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0
time=2025-04-23T21:15:07.520+08:00 level=DEBUG source=sched.go:386 msg="sending an unloaded event" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0
time=2025-04-23T21:15:07.521+08:00 level=DEBUG source=sched.go:310 msg="ignoring unload event with no pending requests"
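To make the symptom easy to verify while a model is loaded, here is a minimal sketch (Python, standard library only) that queries the same /api/ps endpoint that appears in the GIN lines above and reports how much of each loaded model is resident in VRAM. The size and size_vram fields follow Ollama's /api/ps response format, and the host/port are taken from OLLAMA_HOST in the log; adjust if your setup differs.

```python
# Minimal sketch: ask the running Ollama server which loaded models
# are actually resident in GPU memory, via GET /api/ps.
import json
import urllib.request

# Host/port taken from OLLAMA_HOST in the log above (assumption: default setup).
with urllib.request.urlopen("http://127.0.0.1:11434/api/ps") as resp:
    data = json.load(resp)

for m in data.get("models", []):
    size = m.get("size", 0)       # total bytes the loaded model occupies
    vram = m.get("size_vram", 0)  # bytes of that held in GPU memory
    pct = 100 * vram / size if size else 0.0
    print(f"{m['name']}: {vram / 2**30:.1f}/{size / 2**30:.1f} GiB in VRAM ({pct:.0f}%)")
```

When the bug reproduces, size_vram stays at 0 (and `ollama ps` reports 100% CPU), matching the load_tensors lines above that assign every layer to the CPU.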

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 151645 llama_model_loader: - kv 28: tokenizer.ggml.padding_token_id u32 = 151643 llama_model_loader: - kv 29: tokenizer.ggml.bos_token_id u32 = 151643 llama_model_loader: - kv 30: tokenizer.ggml.add_bos_token bool = false llama_model_loader: - kv 31: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>... llama_model_loader: - kv 32: general.quantization_version u32 = 2 llama_model_loader: - kv 33: general.file_type u32 = 15 llama_model_loader: - type f32: 241 tensors llama_model_loader: - type q4_K: 289 tensors llama_model_loader: - type q6_K: 49 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q4_K - Medium print_info: file size = 8.37 GiB (4.87 BPW) init_tokenizer: initializing tokenizer for type 2 load: control token: 151659 '<|fim_prefix|>' is not marked as EOG load: control token: 151656 '<|video_pad|>' is not marked as EOG load: control token: 151655 '<|image_pad|>' is not marked as EOG load: control token: 151653 '<|vision_end|>' is not marked as EOG load: control token: 151652 '<|vision_start|>' is not marked as EOG load: control token: 151651 '<|quad_end|>' is not marked as EOG load: control token: 151649 '<|box_end|>' is not marked as EOG load: control token: 151648 '<|box_start|>' is not marked as EOG load: control token: 151646 '<|object_ref_start|>' is not marked as EOG load: control token: 151644 '<|im_start|>' is not marked as EOG load: control token: 151661 '<|fim_suffix|>' is not marked as EOG load: control token: 151647 '<|object_ref_end|>' is not marked as EOG load: control token: 151660 '<|fim_middle|>' is not marked as EOG load: control token: 151654 '<|vision_pad|>' is not marked as EOG load: control token: 151650 '<|quad_start|>' is not marked as EOG load: special tokens cache size = 22 load: token to piece cache size = 0.9310 MB print_info: arch = qwen2 print_info: vocab_only = 1 print_info: model type = ?B print_info: model params = 14.77 B print_info: general.name = Qwen2.5 14B Instruct 1M Abliterated print_info: vocab type = BPE print_info: n_vocab = 152064 print_info: n_merges = 151387 print_info: BOS token = 151643 '<|endoftext|>' print_info: EOS token = 151645 '<|im_end|>' print_info: EOT token = 151645 '<|im_end|>' print_info: PAD token = 151643 '<|endoftext|>' print_info: LF token = 198 'Ċ' print_info: FIM PRE token = 151659 '<|fim_prefix|>' print_info: FIM SUF token = 151661 '<|fim_suffix|>' print_info: FIM MID token = 151660 '<|fim_middle|>' print_info: FIM PAD token = 151662 '<|fim_pad|>' print_info: FIM REP token = 151663 '<|repo_name|>' print_info: FIM SEP token = 151664 '<|file_sep|>' print_info: EOG token = 151643 '<|endoftext|>' print_info: EOG token = 151645 '<|im_end|>' print_info: EOG token = 151662 '<|fim_pad|>' print_info: EOG token = 151663 '<|repo_name|>' print_info: EOG token = 151664 '<|file_sep|>' print_info: max token length = 256 llama_model_load: vocab only - skipping tensors time=2025-04-23T21:09:01.699+08:00 level=DEBUG source=server.go:335 msg="adding gpu library" path=C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12 time=2025-04-23T21:09:01.700+08:00 level=DEBUG source=server.go:343 msg="adding gpu dependency paths" paths=[C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12] time=2025-04-23T21:09:01.700+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="C:\\Users\\cc\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model 
D:\\ollama\\Models\\blobs\\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --verbose --threads 8 --no-mmap --parallel 4 --port 1915" time=2025-04-23T21:09:01.700+08:00 level=DEBUG source=server.go:423 msg=subprocess environment="[CUDA_PATH=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8 CUDA_PATH_V12_8=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8 CUDA_VISIBLE_DEVICES=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 PATH=C:\\Users\\cc\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\libnvvp;C:\\Program Files\\Python313\\Scripts\\;C:\\Program Files\\Python313\\;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;D:\\mingw64\\bin;C:\\Program Files\\NVIDIA Corporation\\NVIDIA App\\NvDLISR;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Users\\cc\\AppData\\Local\\Microsoft\\WindowsApps\\python.exe;D:\\PotPlayer\\Module\\Whisper\\Faster-Whisper-XXL\\faster-whisper-xxl.exe;C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2025.1.1\\;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\libnvvp;C:\\Users\\cc\\AppData\\Local\\Programs\\Ollama;C:\\Users\\cc\\AppData\\Local\\Programs\\Ollama\\lib\\ollama;C:\\Program Files\\Docker\\Docker\\resources\\bin;C:\\Users\\cc\\AppData\\Local\\Microsoft\\WindowsApps;D:\\Microsoft VS Code\\bin;D:\\ollama;D:\\Bandizip\\;C:\\Users\\cc\\AppData\\Local\\Microsoft\\WindowsApps\\python.exe;;C:\\Users\\cc\\AppData\\Local\\Programs\\Ollama;C:\\Users\\cc\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12;C:\\Users\\cc\\AppData\\Local\\Programs\\Ollama\\lib\\ollama]" time=2025-04-23T21:09:01.702+08:00 level=INFO source=sched.go:451 msg="loaded runners" count=1 time=2025-04-23T21:09:01.702+08:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding" time=2025-04-23T21:09:01.702+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error" time=2025-04-23T21:09:01.721+08:00 level=INFO source=runner.go:853 msg="starting go runner" time=2025-04-23T21:09:01.726+08:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12 time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin" time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\libnvvp" time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\Python313\\Scripts" time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\Python313" time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\WINDOWS\system32 time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\WINDOWS time=2025-04-23T21:09:21.440+08:00 level=DEBUG 
source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\WINDOWS\System32\Wbem time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\WINDOWS\System32\WindowsPowerShell\v1.0 time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\WINDOWS\System32\OpenSSH time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=D:\mingw64\bin time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\NVIDIA Corporation\\NVIDIA App\\NvDLISR" time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common" time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\cc\AppData\Local\Microsoft\WindowsApps\python.exe time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=D:\PotPlayer\Module\Whisper\Faster-Whisper-XXL\faster-whisper-xxl.exe time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2025.1.1" time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin" time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\libnvvp" time=2025-04-23T21:09:21.440+08:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=C:\Users\cc\AppData\Local\Programs\Ollama time=2025-04-23T21:09:21.441+08:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama ggml_backend_load_best: failed to load C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll ggml_backend_load_best: failed to load C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll ggml_backend_load_best: failed to load C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll ggml_backend_load_best: failed to load C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-sandybridge.dll ggml_backend_load_best: failed to load C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-skylakex.dll time=2025-04-23T21:09:21.502+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="C:\\Program Files\\Docker\\Docker\\resources\\bin" time=2025-04-23T21:09:21.502+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=C:\Users\cc\AppData\Local\Microsoft\WindowsApps time=2025-04-23T21:09:21.502+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path="D:\\Microsoft VS Code\\bin" time=2025-04-23T21:09:21.502+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=D:\ollama time=2025-04-23T21:09:21.502+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=D:\Bandizip time=2025-04-23T21:09:21.502+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" 
path=C:\Users\cc\AppData\Local\Microsoft\WindowsApps\python.exe time=2025-04-23T21:09:21.502+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(clang) time=2025-04-23T21:09:21.502+08:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:1915" llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = qwen2 llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Qwen2.5 14B Instruct 1M Abliterated llama_model_loader: - kv 3: general.finetune str = Instruct-1m-abliterated llama_model_loader: - kv 4: general.basename str = Qwen2.5 llama_model_loader: - kv 5: general.size_label str = 14B llama_model_loader: - kv 6: general.license str = apache-2.0 llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/huihui-ai/Qwen... llama_model_loader: - kv 8: general.base_model.count u32 = 1 llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 14B Instruct 1M llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-1... llama_model_loader: - kv 12: general.tags arr[str,4] = ["chat", "abliterated", "uncensored",... llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"] llama_model_loader: - kv 14: qwen2.block_count u32 = 48 llama_model_loader: - kv 15: qwen2.context_length u32 = 1010000 llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120 llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 13824 llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40 llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8 llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 10000000.000000 llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 22: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 23: tokenizer.ggml.pre str = qwen2 llama_model_loader: - kv 24: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 25: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 26: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 151645 llama_model_loader: - kv 28: tokenizer.ggml.padding_token_id u32 = 151643 llama_model_loader: - kv 29: tokenizer.ggml.bos_token_id u32 = 151643 llama_model_loader: - kv 30: tokenizer.ggml.add_bos_token bool = false llama_model_loader: - kv 31: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>... 
llama_model_loader: - kv 32: general.quantization_version u32 = 2 llama_model_loader: - kv 33: general.file_type u32 = 15 llama_model_loader: - type f32: 241 tensors llama_model_loader: - type q4_K: 289 tensors llama_model_loader: - type q6_K: 49 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q4_K - Medium print_info: file size = 8.37 GiB (4.87 BPW) init_tokenizer: initializing tokenizer for type 2 load: control token: 151659 '<|fim_prefix|>' is not marked as EOG load: control token: 151656 '<|video_pad|>' is not marked as EOG load: control token: 151655 '<|image_pad|>' is not marked as EOG load: control token: 151653 '<|vision_end|>' is not marked as EOG load: control token: 151652 '<|vision_start|>' is not marked as EOG load: control token: 151651 '<|quad_end|>' is not marked as EOG load: control token: 151649 '<|box_end|>' is not marked as EOG load: control token: 151648 '<|box_start|>' is not marked as EOG load: control token: 151646 '<|object_ref_start|>' is not marked as EOG load: control token: 151644 '<|im_start|>' is not marked as EOG load: control token: 151661 '<|fim_suffix|>' is not marked as EOG load: control token: 151647 '<|object_ref_end|>' is not marked as EOG load: control token: 151660 '<|fim_middle|>' is not marked as EOG load: control token: 151654 '<|vision_pad|>' is not marked as EOG load: control token: 151650 '<|quad_start|>' is not marked as EOG load: special tokens cache size = 22 load: token to piece cache size = 0.9310 MB print_info: arch = qwen2 print_info: vocab_only = 0 print_info: n_ctx_train = 1010000 print_info: n_embd = 5120 print_info: n_layer = 48 print_info: n_head = 40 print_info: n_head_kv = 8 print_info: n_rot = 128 print_info: n_swa = 0 print_info: n_embd_head_k = 128 print_info: n_embd_head_v = 128 print_info: n_gqa = 5 print_info: n_embd_k_gqa = 1024 print_info: n_embd_v_gqa = 1024 print_info: f_norm_eps = 0.0e+00 print_info: f_norm_rms_eps = 1.0e-05 print_info: f_clamp_kqv = 0.0e+00 print_info: f_max_alibi_bias = 0.0e+00 print_info: f_logit_scale = 0.0e+00 print_info: n_ff = 13824 print_info: n_expert = 0 print_info: n_expert_used = 0 print_info: causal attn = 1 print_info: pooling type = 0 print_info: rope type = 2 print_info: rope scaling = linear print_info: freq_base_train = 10000000.0 print_info: freq_scale_train = 1 print_info: n_ctx_orig_yarn = 1010000 print_info: rope_finetuned = unknown print_info: ssm_d_conv = 0 print_info: ssm_d_inner = 0 print_info: ssm_d_state = 0 print_info: ssm_dt_rank = 0 print_info: ssm_dt_b_c_rms = 0 print_info: model type = 14B print_info: model params = 14.77 B print_info: general.name = Qwen2.5 14B Instruct 1M Abliterated print_info: vocab type = BPE print_info: n_vocab = 152064 print_info: n_merges = 151387 print_info: BOS token = 151643 '<|endoftext|>' print_info: EOS token = 151645 '<|im_end|>' print_info: EOT token = 151645 '<|im_end|>' print_info: PAD token = 151643 '<|endoftext|>' print_info: LF token = 198 'Ċ' print_info: FIM PRE token = 151659 '<|fim_prefix|>' print_info: FIM SUF token = 151661 '<|fim_suffix|>' print_info: FIM MID token = 151660 '<|fim_middle|>' print_info: FIM PAD token = 151662 '<|fim_pad|>' print_info: FIM REP token = 151663 '<|repo_name|>' print_info: FIM SEP token = 151664 '<|file_sep|>' print_info: EOG token = 151643 '<|endoftext|>' print_info: EOG token = 151645 '<|im_end|>' print_info: EOG token = 151662 '<|fim_pad|>' print_info: EOG token = 151663 '<|repo_name|>' print_info: EOG token = 151664 '<|file_sep|>' print_info: max token length = 256 
load_tensors: loading model tensors, this can take a while... (mmap = false) load_tensors: layer 0 assigned to device CPU load_tensors: layer 1 assigned to device CPU load_tensors: layer 2 assigned to device CPU load_tensors: layer 3 assigned to device CPU load_tensors: layer 4 assigned to device CPU load_tensors: layer 5 assigned to device CPU load_tensors: layer 6 assigned to device CPU load_tensors: layer 7 assigned to device CPU load_tensors: layer 8 assigned to device CPU load_tensors: layer 9 assigned to device CPU load_tensors: layer 10 assigned to device CPU load_tensors: layer 11 assigned to device CPU load_tensors: layer 12 assigned to device CPU load_tensors: layer 13 assigned to device CPU load_tensors: layer 14 assigned to device CPU load_tensors: layer 15 assigned to device CPU load_tensors: layer 16 assigned to device CPU load_tensors: layer 17 assigned to device CPU load_tensors: layer 18 assigned to device CPU load_tensors: layer 19 assigned to device CPU load_tensors: layer 20 assigned to device CPU load_tensors: layer 21 assigned to device CPU load_tensors: layer 22 assigned to device CPU load_tensors: layer 23 assigned to device CPU load_tensors: layer 24 assigned to device CPU load_tensors: layer 25 assigned to device CPU load_tensors: layer 26 assigned to device CPU load_tensors: layer 27 assigned to device CPU load_tensors: layer 28 assigned to device CPU load_tensors: layer 29 assigned to device CPU load_tensors: layer 30 assigned to device CPU load_tensors: layer 31 assigned to device CPU load_tensors: layer 32 assigned to device CPU load_tensors: layer 33 assigned to device CPU load_tensors: layer 34 assigned to device CPU load_tensors: layer 35 assigned to device CPU load_tensors: layer 36 assigned to device CPU load_tensors: layer 37 assigned to device CPU load_tensors: layer 38 assigned to device CPU load_tensors: layer 39 assigned to device CPU load_tensors: layer 40 assigned to device CPU load_tensors: layer 41 assigned to device CPU load_tensors: layer 42 assigned to device CPU load_tensors: layer 43 assigned to device CPU load_tensors: layer 44 assigned to device CPU load_tensors: layer 45 assigned to device CPU load_tensors: layer 46 assigned to device CPU load_tensors: layer 47 assigned to device CPU load_tensors: layer 48 assigned to device CPU load_tensors: CPU model buffer size = 8566.04 MiB load_all_data: no device found for buffer type CPU for async uploads time=2025-04-23T21:09:21.727+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model" time=2025-04-23T21:09:21.977+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.05" time=2025-04-23T21:09:22.227+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.17" time=2025-04-23T21:09:22.477+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.24" time=2025-04-23T21:09:22.727+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.30" time=2025-04-23T21:09:22.977+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.37" time=2025-04-23T21:09:23.228+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.44" time=2025-04-23T21:09:23.478+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.51" time=2025-04-23T21:09:23.729+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.57" time=2025-04-23T21:09:23.979+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.64" time=2025-04-23T21:09:24.229+08:00 level=DEBUG source=server.go:625 
msg="model load progress 0.70" time=2025-04-23T21:09:24.479+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.76" time=2025-04-23T21:09:24.730+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.83" time=2025-04-23T21:09:24.980+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.89" time=2025-04-23T21:09:25.230+08:00 level=DEBUG source=server.go:625 msg="model load progress 0.94" time=2025-04-23T21:09:25.481+08:00 level=DEBUG source=server.go:625 msg="model load progress 1.00" llama_init_from_model: n_seq_max = 4 llama_init_from_model: n_ctx = 8192 llama_init_from_model: n_ctx_per_seq = 2048 llama_init_from_model: n_batch = 2048 llama_init_from_model: n_ubatch = 512 llama_init_from_model: flash_attn = 0 llama_init_from_model: freq_base = 10000000.0 llama_init_from_model: freq_scale = 1 llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (1010000) -- the full capacity of the model will not be utilized llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1 llama_kv_cache_init: layer 0: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 1: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 2: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 3: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 4: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 5: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 6: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 7: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 8: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 9: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 10: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 11: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 12: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 13: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 14: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 15: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 16: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 17: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 18: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 19: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 20: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 21: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 22: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 23: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 24: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 25: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 26: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 27: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 28: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 29: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 30: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 31: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 32: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 33: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 34: 
n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 35: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 36: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 37: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 38: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 39: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 40: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 41: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 42: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 43: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 44: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 45: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 46: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 47: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: CPU KV buffer size = 1536.00 MiB llama_init_from_model: KV self size = 1536.00 MiB, K (f16): 768.00 MiB, V (f16): 768.00 MiB llama_init_from_model: CPU output buffer size = 2.40 MiB llama_init_from_model: CPU compute buffer size = 696.01 MiB llama_init_from_model: graph nodes = 1686 llama_init_from_model: graph splits = 1 time=2025-04-23T21:09:25.731+08:00 level=INFO source=server.go:619 msg="llama runner started in 24.03 seconds" time=2025-04-23T21:09:25.731+08:00 level=DEBUG source=sched.go:464 msg="finished setting up runner" model=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 [GIN] 2025/04/23 - 21:09:25 | 200 | 24.2252665s | 127.0.0.1 | POST "/api/generate" time=2025-04-23T21:09:25.731+08:00 level=DEBUG source=sched.go:468 msg="context for request finished" time=2025-04-23T21:09:25.732+08:00 level=DEBUG source=sched.go:341 msg="runner with non-zero duration has gone idle, adding timer" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 duration=5m0s time=2025-04-23T21:09:25.732+08:00 level=DEBUG source=sched.go:359 msg="after processing request finished event" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 refCount=0 [GIN] 2025/04/23 - 21:09:36 | 200 | 517.5µs | 127.0.0.1 | HEAD "/" [GIN] 2025/04/23 - 21:09:36 | 200 | 0s | 127.0.0.1 | GET "/api/ps" time=2025-04-23T21:09:44.166+08:00 level=DEBUG source=sched.go:577 msg="evaluating already loaded" model=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 time=2025-04-23T21:09:44.167+08:00 level=DEBUG source=routes.go:1522 msg="chat request" images=0 prompt="<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. 
You are a helpful assistant.<|im_end|>\n<|im_start|>user\nca<|im_end|>\n<|im_start|>assistant\n" time=2025-04-23T21:09:44.169+08:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=30 used=0 remaining=30 [GIN] 2025/04/23 - 21:10:03 | 200 | 19.5927388s | 127.0.0.1 | POST "/api/chat" time=2025-04-23T21:10:03.747+08:00 level=DEBUG source=sched.go:409 msg="context for request finished" time=2025-04-23T21:10:03.747+08:00 level=DEBUG source=sched.go:341 msg="runner with non-zero duration has gone idle, adding timer" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 duration=5m0s time=2025-04-23T21:10:03.747+08:00 level=DEBUG source=sched.go:359 msg="after processing request finished event" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 refCount=0 time=2025-04-23T21:15:03.751+08:00 level=DEBUG source=sched.go:343 msg="timer expired, expiring to unload" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 time=2025-04-23T21:15:03.751+08:00 level=DEBUG source=sched.go:362 msg="runner expired event received" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 time=2025-04-23T21:15:03.751+08:00 level=DEBUG source=sched.go:377 msg="got lock to unload" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 time=2025-04-23T21:15:03.751+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="40.7 GiB" now.total="31.1 GiB" now.free="12.2 GiB" now.free_swap="28.9 GiB" time=2025-04-23T21:15:03.763+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="13.1 GiB" now.total="15.9 GiB" now.free="12.5 GiB" now.used="3.4 GiB" releasing nvml library time=2025-04-23T21:15:03.783+08:00 level=DEBUG source=server.go:1001 msg="stopping llama server" time=2025-04-23T21:15:03.783+08:00 level=DEBUG source=server.go:1007 msg="waiting for llama server to exit" time=2025-04-23T21:15:04.001+08:00 level=DEBUG source=server.go:1011 msg="llama server stopped" time=2025-04-23T21:15:04.001+08:00 level=DEBUG source=sched.go:382 msg="runner released" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 time=2025-04-23T21:15:04.013+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="12.2 GiB" before.free_swap="28.9 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB" time=2025-04-23T21:15:04.045+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.5 GiB" now.total="15.9 GiB" now.free="12.5 GiB" now.used="3.4 GiB" releasing nvml library time=2025-04-23T21:15:04.263+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB" time=2025-04-23T21:15:04.277+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 
B" before.total="15.9 GiB" before.free="12.5 GiB" now.total="15.9 GiB" now.free="12.5 GiB" now.used="3.4 GiB" releasing nvml library time=2025-04-23T21:15:04.513+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB" time=2025-04-23T21:15:04.526+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.5 GiB" now.total="15.9 GiB" now.free="12.5 GiB" now.used="3.4 GiB" releasing nvml library time=2025-04-23T21:15:04.763+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB" time=2025-04-23T21:15:04.775+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.5 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB" releasing nvml library time=2025-04-23T21:15:05.013+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB" time=2025-04-23T21:15:05.024+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB" releasing nvml library time=2025-04-23T21:15:05.263+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB" time=2025-04-23T21:15:05.273+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB" releasing nvml library time=2025-04-23T21:15:05.513+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB" time=2025-04-23T21:15:05.522+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB" releasing nvml library time=2025-04-23T21:15:05.763+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB" time=2025-04-23T21:15:05.785+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB" releasing nvml library time=2025-04-23T21:15:06.013+08:00 level=DEBUG 
source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB" time=2025-04-23T21:15:06.032+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB" releasing nvml library time=2025-04-23T21:15:06.263+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB" time=2025-04-23T21:15:06.279+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB" releasing nvml library time=2025-04-23T21:15:06.513+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB" time=2025-04-23T21:15:06.527+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB" releasing nvml library time=2025-04-23T21:15:06.763+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB" time=2025-04-23T21:15:06.774+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.3 GiB" releasing nvml library time=2025-04-23T21:15:07.013+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB" time=2025-04-23T21:15:07.023+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.5 GiB" now.used="3.4 GiB" releasing nvml library time=2025-04-23T21:15:07.263+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 GiB" time=2025-04-23T21:15:07.287+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.5 GiB" now.total="15.9 GiB" now.free="12.6 GiB" now.used="3.4 GiB" releasing nvml library time=2025-04-23T21:15:07.513+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.1 GiB" before.free="22.2 GiB" before.free_swap="39.6 GiB" now.total="31.1 GiB" now.free="22.2 GiB" now.free_swap="39.6 
GiB" time=2025-04-23T21:15:07.520+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-4e1c9188-638e-6afd-457b-9715a3f90b26 name="NVIDIA GeForce RTX 5070 Ti" overhead="0 B" before.total="15.9 GiB" before.free="12.6 GiB" now.total="15.9 GiB" now.free="12.5 GiB" now.used="3.4 GiB" releasing nvml library time=2025-04-23T21:15:07.520+08:00 level=DEBUG source=sched.go:661 msg="gpu VRAM free memory converged after 3.77 seconds" model=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 time=2025-04-23T21:15:07.520+08:00 level=DEBUG source=sched.go:386 msg="sending an unloaded event" modelPath=D:\ollama\Models\blobs\sha256-8f503e18bc39900d38e1ab39509091a5f2c8251c81e11f9264c452325378ade0 time=2025-04-23T21:15:07.521+08:00 level=DEBUG source=sched.go:310 msg="ignoring unload event with no pending requests" ### Relevant log output ```shell ``` ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version _No response_
GiteaMirror added the bug label 2025-11-12 13:46:24 -06:00

@oo33shan commented on GitHub (Apr 23, 2025):

![Image](https://github.com/user-attachments/assets/ba2c6599-540e-43c9-a767-5e516977759f)

@JBGitHub11 commented on GitHub (Apr 23, 2025):

`OLLAMA_NEW_ENGINE:false`

Give `true` a shot.

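For anyone who wants to try this: the variable has to be set in the environment the server starts from, and the server restarted so it re-reads its config. A minimal PowerShell sketch (assuming a per-session override is acceptable; the persistent variant is one possible alternative):

```powershell
# Stop any running Ollama instance (tray app / background server) first, then:
$env:OLLAMA_NEW_ENGINE = "true"   # session-only override
ollama serve                      # the "server config env" dump should now show OLLAMA_NEW_ENGINE:true

# Persistent alternative (assumption: user-level scope is enough here):
# [Environment]::SetEnvironmentVariable("OLLAMA_NEW_ENGINE", "true", "User")
```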

@rick-github commented on GitHub (Apr 23, 2025):

qwen2 is not supported by the new engine yet. The problem is likely related to:

```
ggml_backend_load_best: failed to load C:\Users\cc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
```

Unfortunately it's not clear why the backend load fails. The screenshot shows that the backends are available, and that the directories for the CUDA libraries exist.
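One way to narrow this down: the backend loader presumably goes through `LoadLibrary` on Windows, so attempting the load by hand surfaces the Win32 error code that the Ollama log omits. A diagnostic sketch in PowerShell, assuming the default install path shown in the log above; the P/Invoke helper is an illustrative stand-in, not part of Ollama:

```powershell
# Hypothetical diagnostic: try loading one of the failing backend DLLs directly
# to get the underlying Win32 error code.
$dll = "$env:LOCALAPPDATA\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll"

# Confirm the file is actually present (errors here mean a broken install)
Get-Item $dll

# Minimal P/Invoke wrapper around kernel32!LoadLibrary (illustrative helper)
$k32 = Add-Type -Namespace Win32 -Name Native -PassThru -MemberDefinition @'
[DllImport("kernel32.dll", SetLastError = true)]
public static extern IntPtr LoadLibrary(string path);
'@

$handle = $k32::LoadLibrary($dll)
if ($handle -eq [IntPtr]::Zero) {
    # 126 = ERROR_MOD_NOT_FOUND (a dependency of the DLL is missing)
    # 193 = ERROR_BAD_EXE_FORMAT (architecture mismatch)
    "LoadLibrary failed, Win32 error: $([Runtime.InteropServices.Marshal]::GetLastWin32Error())"
}
```

If the file listing succeeds but `LoadLibrary` fails with error 126, a dependency of the backend DLL (for example the MSVC runtime) is missing, rather than the DLL itself.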
Reference: github-starred/ollama-ollama#6830