Lots of CUDA errors: "CUDA error: out of memory" and "SIGSEGV: segmentation violation" even though VRAM is still available #5382

Closed
opened 2025-11-12 12:54:40 -06:00 by GiteaMirror · 1 comment
Owner

Originally created by @oussemah on GitHub (Jan 10, 2025).

What is the issue?

Ollama "crashes" after a CUDA segmentation violation, and it seems to over-estimate the amount of memory it can use. This happens almost systematically when I use a 32b q4_k_m or q5_k_l model with a 32k context (it happened with qwq and qwen-coder); it also happened when I wanted to use glm4-9b with a 100K context or more.
It has happened both when VRAM was fully used and when I still had about 20% of VRAM free.
I have never seen the issue happen when part of the model is offloaded to the CPU.
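
As a workaround sketch (this is what avoids the crash for me, not a fix, and the values are illustrative rather than tuned): either make the scheduler reserve VRAM headroom per GPU via OLLAMA_GPU_OVERHEAD (in bytes; it shows up as 0 in the server config dump below), or cap the number of offloaded layers per request with the num_gpu option so a few layers stay on the CPU:

# Reserve ~1 GiB of VRAM headroom per GPU, then restart the service:
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_GPU_OVERHEAD=1073741824"
sudo systemctl restart ollama

# Or cap the offloaded layers per request (the log below offloads all 65
# layers; 60 is an illustrative cap that keeps some layers on the CPU):
curl http://localhost:11434/api/generate -d '{
  "model": "qwq",
  "prompt": "why is the sky blue?",
  "options": { "num_gpu": 60, "num_ctx": 32768 }
}'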

OS: Ubuntu 24.04.1 LTS
CPU: Intel(R) Core(TM) i5-14500
GPU: RTX 3090 on PCIe x16 + RTX 4060 Ti 16GB on PCIe x8

00:01.0 PCI bridge: Intel Corporation 12th Gen Core Processor PCI Express x16 Controller #1 (rev 02)
00:06.0 PCI bridge: Intel Corporation 12th Gen Core Processor PCI Express x4 Controller #0 (rev 02)
01:00.0 VGA compatible controller: NVIDIA Corporation GA102 [GeForce RTX 3090] (rev a1)
05:00.0 VGA compatible controller: NVIDIA Corporation AD106 [GeForce RTX 4060 Ti] (rev a1)

Server log (I tried to highlight the interesting parts):

Jan 10 16:30:57 node1 systemd[1]: Started ollama.service - Ollama Service.
Jan 10 16:30:57 node1 ollama[119046]: 2025/01/10 16:30:57 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
........
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx/ollama_llama_server
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]"
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=DEBUG source=routes.go:1340 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.585+01:00 level=DEBUG source=gpu.go:99 msg="searching for GPU discovery libraries for NVIDIA"
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.585+01:00 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=libcuda.so*

Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.585+01:00 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[/usr/local/lib/ollama/libcuda.so* /usr/local/lib/ollama/libcuda.so* /libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.592+01:00 level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths="[/usr/lib/i386-linux-gnu/libcuda.so.560.35.05 /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05]"
Jan 10 16:30:57 node1 ollama[119046]: initializing /usr/lib/i386-linux-gnu/libcuda.so.560.35.05
Jan 10 16:30:57 node1 ollama[119046]: library /usr/lib/i386-linux-gnu/libcuda.so.560.35.05 load err: /usr/lib/i386-linux-gnu/libcuda.so.560.35.05: wrong ELF class: ELFCLASS32
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.593+01:00 level=DEBUG source=gpu.go:628 msg="skipping 32bit library" library=/usr/lib/i386-linux-gnu/libcuda.so.560.35.05
Jan 10 16:30:57 node1 ollama[119046]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05

Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuInit - 0x72e616060800
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuDriverGetVersion - 0x72e616060820
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuDeviceGetCount - 0x72e616060860
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuDeviceGet - 0x72e616060840
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuDeviceGetAttribute - 0x72e616060940
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuDeviceGetUuid - 0x72e6160608a0
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuDeviceGetName - 0x72e616060880
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuCtxCreate_v3 - 0x72e61606b020
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuMemGetInfo_v2 - 0x72e6160764e0
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuCtxDestroy - 0x72e6160d11b0
Jan 10 16:30:57 node1 ollama[119046]: calling cuInit
Jan 10 16:30:57 node1 ollama[119046]: calling cuDriverGetVersion
Jan 10 16:30:57 node1 ollama[119046]: raw version 0x2f1c
Jan 10 16:30:57 node1 ollama[119046]: CUDA driver version: 12.6
Jan 10 16:30:57 node1 ollama[119046]: calling cuDeviceGetCount
Jan 10 16:30:57 node1 ollama[119046]: device count 2
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.672+01:00 level=DEBUG source=gpu.go:134 msg="detected GPUs" count=2 library=/usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05
Jan 10 16:30:57 node1 ollama[119046]: [GPU-0e3f2440-290b-0b48-4d15-8b43b8638f30] CUDA totalMem 24154 mb
Jan 10 16:30:57 node1 ollama[119046]: [GPU-0e3f2440-290b-0b48-4d15-8b43b8638f30] CUDA freeMem 23877 mb
Jan 10 16:30:57 node1 ollama[119046]: [GPU-0e3f2440-290b-0b48-4d15-8b43b8638f30] Compute Capability 8.6
Jan 10 16:30:57 node1 ollama[119046]: [GPU-08cf14dc-3d8a-c656-c643-4ba2eb37b51f] CUDA totalMem 15978 mb
Jan 10 16:30:57 node1 ollama[119046]: [GPU-08cf14dc-3d8a-c656-c643-4ba2eb37b51f] CUDA freeMem 15837 mb
Jan 10 16:30:57 node1 ollama[119046]: [GPU-08cf14dc-3d8a-c656-c643-4ba2eb37b51f] Compute Capability 8.9
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.898+01:00 level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
Jan 10 16:30:57 node1 ollama[119046]: releasing cuda driver library
.........
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.528+01:00 level=INFO source=server.go:104 msg="system memory" total="46.8 GiB" free="40.2 GiB" free_swap="62.0 GiB"
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.528+01:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=2 available="[23.3 GiB 15.5 GiB]"
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.528+01:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=65 layers.offload=65 layers.split=40,25 memory.available="[23.3 GiB 15.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="37.6 GiB" memory.required.partial="37.6 GiB" memory.required.kv="8.0 GiB" memory.required.allocations="[22.3 GiB 15.2 GiB]" memory.weights.total="28.6 GiB" memory.weights.repeating="27.8 GiB" memory.weights.nonrepeating="788.9 MiB" memory.graph.full="3.2 GiB" memory.graph.partial="3.2 GiB"
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.528+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.528+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.528+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.528+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a --ctx-size 32768 --batch-size 512 --n-gpu-layers 65 --verbose --threads 6 --parallel 1 --tensor-split 40,25 --port 46519"
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=DEBUG source=server.go:393 msg=subprocess environment="[PATH=/home/ous/.pyenv/shims:/home/ous/.pyenv/bin:/home/ous/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/ous/.local/bin:/home/ous/.local/bin:/home/ous/.local/anaconda3/bin LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama:/usr/local/lib/ollama/runners/cuda_v12_avx CUDA_VISIBLE_DEVICES=GPU-0e3f2440-290b-0b48-4d15-8b43b8638f30,GPU-08cf14dc-3d8a-c656-c643-4ba2eb37b51f]"
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.530+01:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.580+01:00 level=INFO source=runner.go:945 msg="starting go runner"
Jan 10 16:30:58 node1 ollama[119046]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Jan 10 16:30:58 node1 ollama[119046]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Jan 10 16:30:58 node1 ollama[119046]: ggml_cuda_init: found 2 CUDA devices:
Jan 10 16:30:58 node1 ollama[119046]: Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Jan 10 16:30:58 node1 ollama[119046]: Device 1: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.659+01:00 level=INFO source=runner.go:946 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=6
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.659+01:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:46519"
Jan 10 16:30:58 node1 ollama[119046]: llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23877 MiB free
Jan 10 16:30:58 node1 ollama[119046]: llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 4060 Ti) - 15837 MiB free
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: loaded meta data with 38 key-value pairs and 771 tensors from /home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a (version GGUF V3 (latest))
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 0: general.architecture str = qwen2
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 1: general.type str = model
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 2: general.name str = QwQ 32B Preview
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 3: general.finetune str = Preview
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 4: general.basename str = QwQ
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 5: general.size_label str = 32B
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 6: general.license str = apache-2.0
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/QwQ-32B-P...
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 8: general.base_model.count u32 = 1
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 32B Instruct
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-3...
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 14: qwen2.block_count u32 = 64
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 27648
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 22: general.file_type u32 = 17
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.780+01:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 33: general.quantization_version u32 = 2
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 34: quantize.imatrix.file str = /models_out/QwQ-32B-Preview-GGUF/QwQ-...
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 35: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 36: quantize.imatrix.entries_count i32 = 448
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 37: quantize.imatrix.chunks_count i32 = 128
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - type f32: 321 tensors
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - type q8_0: 2 tensors
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - type q5_K: 384 tensors
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - type q6_K: 64 tensors
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151660 '<|fim_middle|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151659 '<|fim_prefix|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151653 '<|vision_end|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151648 '<|box_start|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151646 '<|object_ref_start|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151649 '<|box_end|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151655 '<|image_pad|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151651 '<|quad_end|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151647 '<|object_ref_end|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151652 '<|vision_start|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151654 '<|vision_pad|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151656 '<|video_pad|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151644 '<|im_start|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151661 '<|fim_suffix|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151650 '<|quad_start|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: special tokens cache size = 22
Jan 10 16:30:59 node1 ollama[119046]: llm_load_vocab: token to piece cache size = 0.9310 MB
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: format = GGUF V3 (latest)
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: arch = qwen2
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: vocab type = BPE
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_vocab = 152064
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_merges = 151387
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: vocab_only = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_ctx_train = 32768
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_embd = 5120
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_layer = 64
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_head = 40
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_head_kv = 8
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_rot = 128
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_swa = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_embd_head_k = 128
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_embd_head_v = 128
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_gqa = 5
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_embd_k_gqa = 1024
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_embd_v_gqa = 1024
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: f_norm_eps = 0.0e+00
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: f_logit_scale = 0.0e+00
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_ff = 27648
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_expert = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_expert_used = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: causal attn = 1
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: pooling type = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: rope type = 2
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: rope scaling = linear
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: freq_base_train = 1000000.0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: freq_scale_train = 1
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_ctx_orig_yarn = 32768
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: rope_finetuned = unknown
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: ssm_d_conv = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: ssm_d_inner = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: ssm_d_state = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: ssm_dt_rank = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: model type = 32B
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: model ftype = Q5_K - Medium
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: model params = 32.76 B
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: model size = 22.11 GiB (5.80 BPW)
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: general.name = QwQ 32B Preview
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOS token = 151645 '<|im_end|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOT token = 151645 '<|im_end|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: LF token = 148848 'ÄĬ'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOG token = 151643 '<|endoftext|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOG token = 151645 '<|im_end|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOG token = 151662 '<|fim_pad|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOG token = 151663 '<|repo_name|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOG token = 151664 '<|file_sep|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: max token length = 256
Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: tensor 'token_embd.weight' (q8_0) (and 0 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead
Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: offloading 64 repeating layers to GPU
Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: offloading output layer to GPU
Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: offloaded 65/65 layers to GPU
Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: CPU_Mapped model buffer size = 788.91 MiB
Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: CUDA0 model buffer size = 13124.84 MiB
Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: CUDA1 model buffer size = 8723.33 MiB
Jan 10 16:30:59 node1 ollama[119046]: time=2025-01-10T16:30:59.784+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.18"
Jan 10 16:31:00 node1 ollama[119046]: time=2025-01-10T16:31:00.034+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.36"
Jan 10 16:31:00 node1 ollama[119046]: time=2025-01-10T16:31:00.285+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.54"
Jan 10 16:31:00 node1 ollama[119046]: time=2025-01-10T16:31:00.536+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.66"
Jan 10 16:31:00 node1 ollama[119046]: time=2025-01-10T16:31:00.787+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.72"
Jan 10 16:31:01 node1 ollama[119046]: time=2025-01-10T16:31:01.038+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.79"
Jan 10 16:31:01 node1 ollama[119046]: time=2025-01-10T16:31:01.288+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.86"
Jan 10 16:31:01 node1 ollama[119046]: time=2025-01-10T16:31:01.539+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.93"
Jan 10 16:31:01 node1 ollama[119046]: time=2025-01-10T16:31:01.790+01:00 level=DEBUG source=server.go:600 msg="model load progress 1.00"
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: n_seq_max = 1
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: n_ctx = 32768
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: n_ctx_per_seq = 32768
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: n_batch = 512
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: n_ubatch = 512
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: flash_attn = 0
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: freq_base = 1000000.0
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: freq_scale = 1
Jan 10 16:31:02 node1 ollama[119046]: llama_kv_cache_init: CUDA0 KV buffer size = 5120.00 MiB
Jan 10 16:31:02 node1 ollama[119046]: llama_kv_cache_init: CUDA1 KV buffer size = 3072.00 MiB
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: KV self size = 8192.00 MiB, K (f16): 4096.00 MiB, V (f16): 4096.00 MiB
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: CUDA_Host output buffer size = 0.60 MiB
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: CUDA0 compute buffer size = 2896.01 MiB
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: CUDA1 compute buffer size = 2896.02 MiB
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: CUDA_Host compute buffer size = 266.02 MiB
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: graph nodes = 2246
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: graph splits = 3
Jan 10 16:31:02 node1 ollama[119046]: time=2025-01-10T16:31:02.292+01:00 level=INFO source=server.go:594 msg="llama runner started in 3.76 seconds"
Jan 10 16:31:02 node1 ollama[119046]: time=2025-01-10T16:31:02.292+01:00 level=DEBUG source=sched.go:462 msg="finished setting up runner" model=/home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a
Jan 10 16:31:02 node1 ollama[119046]: time=2025-01-10T16:31:02.295+01:00 level=DEBUG source=server.go:967 msg="new runner detected, loading model for cgo tokenization"
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: loaded meta data with 38 key-value pairs and 771 tensors from /home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a (version GGUF V3 (latest))
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 0: general.architecture str = qwen2
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 1: general.type str = model
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 2: general.name str = QwQ 32B Preview
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 3: general.finetune str = Preview
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 4: general.basename str = QwQ
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 5: general.size_label str = 32B
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 6: general.license str = apache-2.0
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/QwQ-32B-P...
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 8: general.base_model.count u32 = 1
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 32B Instruct
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-3...
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 14: qwen2.block_count u32 = 64
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 27648
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 22: general.file_type u32 = 17
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 33: general.quantization_version u32 = 2
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 34: quantize.imatrix.file str = /models_out/QwQ-32B-Preview-GGUF/QwQ-...
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 35: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 36: quantize.imatrix.entries_count i32 = 448
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 37: quantize.imatrix.chunks_count i32 = 128
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - type f32: 321 tensors
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - type q8_0: 2 tensors
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - type q5_K: 384 tensors
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - type q6_K: 64 tensors
Jan 10 16:31:02 node1 ollama[119046]: llm_load_vocab: special tokens cache size = 22
Jan 10 16:31:02 node1 ollama[119046]: llm_load_vocab: token to piece cache size = 0.9310 MB
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: format = GGUF V3 (latest)
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: arch = qwen2
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: vocab type = BPE
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: n_vocab = 152064
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: n_merges = 151387
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: vocab_only = 1
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: model type = ?B
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: model ftype = all F32
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: model params = 32.76 B
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: model size = 22.11 GiB (5.80 BPW)
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: general.name = QwQ 32B Preview
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOS token = 151645 '<|im_end|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOT token = 151645 '<|im_end|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: LF token = 148848 'ÄĬ'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOG token = 151643 '<|endoftext|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOG token = 151645 '<|im_end|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOG token = 151662 '<|fim_pad|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOG token = 151663 '<|repo_name|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOG token = 151664 '<|file_sep|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: max token length = 256
Jan 10 16:31:02 node1 ollama[119046]: llama_model_load: vocab only - skipping tensors
Jan 10 16:31:08 node1 ollama[119046]: time=2025-01-10T16:31:08.286+01:00 level=DEBUG source=prompt.go:77 msg="truncating input messages which exceed context length" truncated=60
Jan 10 16:31:08 node1 ollama[119046]: time=2025-01-10T16:31:08.286+01:00 level=DEBUG source=routes.go:1542 msg="chat request" images=0 prompt="<|im_start|>system\nYou are Cline, .REMOVED ORIGINAL PROMPT....</environment_details><|im_end|>\n<|im_start|>assistant\n"
Jan 10 16:31:08 node1 ollama[119046]: time=2025-01-10T16:31:08.417+01:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=32752 used=0 remaining=32752
Jan 10 16:31:27 node1 ollama[119046]: [GIN] 2025/01/10 - 16:31:27 | 200 | 14.762µs | 127.0.0.1 | HEAD "/"
Jan 10 16:31:27 node1 ollama[119046]: [GIN] 2025/01/10 - 16:31:27 | 200 | 72.782µs | 127.0.0.1 | GET "/api/ps"
Jan 10 16:31:55 node1 ollama[119046]: CUDA error: out of memory
Jan 10 16:31:55 node1 ollama[119046]: current device: 1, in function alloc at llama/ggml-cuda/ggml-cuda.cu:370
Jan 10 16:31:55 node1 ollama[119046]: cuMemCreate(&handle, reserve_size, &prop, 0)
Jan 10 16:31:55 node1 ollama[119046]: llama/ggml-cuda/ggml-cuda.cu:96: CUDA error
Jan 10 16:31:55 node1 ollama[119046]: SIGSEGV: segmentation violation

Jan 10 16:31:55 node1 ollama[119046]: PC=0x7cfdb17efe57 m=0 sigcode=1 addr=0x20b203fd0
Jan 10 16:31:55 node1 ollama[119046]: signal arrived during cgo execution
Jan 10 16:31:55 node1 ollama[119046]: goroutine 19 gp=0xc0001b01c0 m=0 mp=0x5cfc8ff8e1a0 [syscall]:
Jan 10 16:31:55 node1 ollama[119046]: runtime.cgocall(0x5cfc8f9a87d0, 0xc000081b90)
Jan 10 16:31:55 node1 ollama[119046]: runtime/cgocall.go:167 +0x4b fp=0xc000081b68 sp=0xc000081b30 pc=0x5cfc8f75cb2b
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7cfd30c5b730, {0x200, 0x7cfd30e0aa50, 0x0, 0x0, 0x7cfd30e0b260, 0x7cfd30e0ba70, 0x7cfd30c7f6c0, 0x7cfd30c5f9e0})
Jan 10 16:31:55 node1 ollama[119046]: _cgo_gotypes.go:556 +0x4f fp=0xc000081b90 sp=0xc000081b68 pc=0x5cfc8f806baf
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama.(*Context).Decode.func1(0x5cfc8f9a3f0b?, 0x7cfd30c5b730?)
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/llama.go:207 +0xf5 fp=0xc000081c80 sp=0xc000081b90 pc=0x5cfc8f809475
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama.(*Context).Decode(0xc000081d70?, 0x0?)
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/llama.go:207 +0x13 fp=0xc000081cc8 sp=0xc000081c80 pc=0x5cfc8f8092f3
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.(*Server).processBatch(0xc00019a000, 0xc0000a0360, 0xc000081f20)
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:434 +0x23f fp=0xc000081ee0 sp=0xc000081cc8 pc=0x5cfc8f9a2bdf
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.(*Server).run(0xc00019a000, {0x5cfc8fda1de0, 0xc0001980a0})
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:342 +0x1d5 fp=0xc000081fb8 sp=0xc000081ee0 pc=0x5cfc8f9a2615
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.Execute.gowrap2()
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:984 +0x28 fp=0xc000081fe0 sp=0xc000081fb8 pc=0x5cfc8f9a7628
Jan 10 16:31:55 node1 ollama[119046]: runtime.goexit({})
Jan 10 16:31:55 node1 ollama[119046]: runtime/asm_amd64.s:1700 +0x1 fp=0xc000081fe8 sp=0xc000081fe0 pc=0x5cfc8f76a561
Jan 10 16:31:55 node1 ollama[119046]: created by github.com/ollama/ollama/llama/runner.Execute in goroutine 1
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:984 +0xde5
Jan 10 16:31:55 node1 ollama[119046]: goroutine 1 gp=0xc0000061c0 m=nil [IO wait]:
Jan 10 16:31:55 node1 ollama[119046]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:424 +0xce fp=0xc0000e77b0 sp=0xc0000e7790 pc=0x5cfc8f76292e
Jan 10 16:31:55 node1 ollama[119046]: runtime.netpollblock(0xc000029800?, 0x8f6fb186?, 0xfc?)
Jan 10 16:31:55 node1 ollama[119046]: runtime/netpoll.go:575 +0xf7 fp=0xc0000e77e8 sp=0xc0000e77b0 pc=0x5cfc8f727697
Jan 10 16:31:55 node1 ollama[119046]: internal/poll.runtime_pollWait(0x7cfdb0cc6650, 0x72)
Jan 10 16:31:55 node1 ollama[119046]: runtime/netpoll.go:351 +0x85 fp=0xc0000e7808 sp=0xc0000e77e8 pc=0x5cfc8f761c25
Jan 10 16:31:55 node1 ollama[119046]: internal/poll.(*pollDesc).wait(0xc000194180?, 0x900000036?, 0x0)
Jan 10 16:31:55 node1 ollama[119046]: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0000e7830 sp=0xc0000e7808 pc=0x5cfc8f7b7a67
Jan 10 16:31:55 node1 ollama[119046]: internal/poll.(*pollDesc).waitRead(...)
Jan 10 16:31:55 node1 ollama[119046]: internal/poll/fd_poll_runtime.go:89
Jan 10 16:31:55 node1 ollama[119046]: internal/poll.(*FD).Accept(0xc000194180)
Jan 10 16:31:55 node1 ollama[119046]: internal/poll/fd_unix.go:620 +0x295 fp=0xc0000e78d8 sp=0xc0000e7830 pc=0x5cfc8f7b8fd5
Jan 10 16:31:55 node1 ollama[119046]: net.(*netFD).accept(0xc000194180)
Jan 10 16:31:55 node1 ollama[119046]: net/fd_unix.go:172 +0x29 fp=0xc0000e7990 sp=0xc0000e78d8 pc=0x5cfc8f831969
Jan 10 16:31:55 node1 ollama[119046]: net.(*TCPListener).accept(0xc0001b2040)
Jan 10 16:31:55 node1 ollama[119046]: net/tcpsock_posix.go:159 +0x1e fp=0xc0000e79e0 sp=0xc0000e7990 pc=0x5cfc8f841fbe
Jan 10 16:31:55 node1 ollama[119046]: net.(*TCPListener).Accept(0xc0001b2040)
Jan 10 16:31:55 node1 ollama[119046]: net/tcpsock.go:372 +0x30 fp=0xc0000e7a10 sp=0xc0000e79e0 pc=0x5cfc8f8412f0
Jan 10 16:31:55 node1 ollama[119046]: net/http.(*onceCloseListener).Accept(0xc00019a3f0?)
Jan 10 16:31:55 node1 ollama[119046]: :1 +0x24 fp=0xc0000e7a28 sp=0xc0000e7a10 pc=0x5cfc8f97fec4
Jan 10 16:31:55 node1 ollama[119046]: net/http.(*Server).Serve(0xc0001904b0, {0x5cfc8fda17f8, 0xc0001b2040})
Jan 10 16:31:55 node1 ollama[119046]: net/http/server.go:3330 +0x30c fp=0xc0000e7b58 sp=0xc0000e7a28 pc=0x5cfc8f971c0c
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.Execute({0xc000016150?, 0x5cfc8f76a1bc?, 0x0?})
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:1005 +0x11a9 fp=0xc0000e7ef8 sp=0xc0000e7b58 pc=0x5cfc8f9a7309
Jan 10 16:31:55 node1 ollama[119046]: main.main()
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/cmd/runner/main.go:11 +0x54 fp=0xc0000e7f50 sp=0xc0000e7ef8 pc=0x5cfc8f9a8294
Jan 10 16:31:55 node1 ollama[119046]: runtime.main()
Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:272 +0x29d fp=0xc0000e7fe0 sp=0xc0000e7f50 pc=0x5cfc8f72ec7d
Jan 10 16:31:55 node1 ollama[119046]: runtime.goexit({})
Jan 10 16:31:55 node1 ollama[119046]: runtime/asm_amd64.s:1700 +0x1 fp=0xc0000e7fe8 sp=0xc0000e7fe0 pc=0x5cfc8f76a561
Jan 10 16:31:55 node1 ollama[119046]: goroutine 2 gp=0xc000006c40 m=nil [force gc (idle)]:
Jan 10 16:31:55 node1 ollama[119046]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:424 +0xce fp=0xc00006efa8 sp=0xc00006ef88 pc=0x5cfc8f76292e
Jan 10 16:31:55 node1 ollama[119046]: runtime.goparkunlock(...)
Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:430
Jan 10 16:31:55 node1 ollama[119046]: runtime.forcegchelper()
Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:337 +0xb8 fp=0xc00006efe0 sp=0xc00006efa8 pc=0x5cfc8f72efb8
Jan 10 16:31:55 node1 ollama[119046]: runtime.goexit({})
------ removed some duplicate traces -----------
Jan 10 16:31:55 node1 ollama[119046]: runtime/asm_amd64.s:1700 +0x1 fp=0xc0004a17e8 sp=0xc0004a17e0 pc=0x5cfc8f76a561
Jan 10 16:31:55 node1 ollama[119046]: created by runtime.gcBgMarkStartWorkers in goroutine 26
Jan 10 16:31:55 node1 ollama[119046]: runtime/mgc.go:1328 +0x105
Jan 10 16:31:55 node1 ollama[119046]: goroutine 67 gp=0xc000278540 m=nil [GC worker (idle)]:
Jan 10 16:31:55 node1 ollama[119046]: runtime.gopark(0x97e81a03181?, 0x1?, 0xfc?, 0x38?, 0x0?)
Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:424 +0xce fp=0xc0004a1f38 sp=0xc0004a1f18 pc=0x5cfc8f76292e
Jan 10 16:31:55 node1 ollama[119046]: runtime.gcBgMarkWorker(0xc000022700)
Jan 10 16:31:55 node1 ollama[119046]: runtime/mgc.go:1412 +0xe9 fp=0xc0004a1fc8 sp=0xc0004a1f38 pc=0x5cfc8f710209
Jan 10 16:31:55 node1 ollama[119046]: runtime.gcBgMarkStartWorkers.gowrap1()
Jan 10 16:31:55 node1 ollama[119046]: runtime/mgc.go:1328 +0x25 fp=0xc0004a1fe0 sp=0xc0004a1fc8 pc=0x5cfc8f7100e5
Jan 10 16:31:55 node1 ollama[119046]: runtime.goexit({})
Jan 10 16:31:55 node1 ollama[119046]: runtime/asm_amd64.s:1700 +0x1 fp=0xc0004a1fe8 sp=0xc0004a1fe0 pc=0x5cfc8f76a561
Jan 10 16:31:55 node1 ollama[119046]: created by runtime.gcBgMarkStartWorkers in goroutine 26
Jan 10 16:31:55 node1 ollama[119046]: runtime/mgc.go:1328 +0x105
Jan 10 16:31:55 node1 ollama[119046]: rax 0x20b203fd0
Jan 10 16:31:55 node1 ollama[119046]: rbx 0x7cfd303b4130
Jan 10 16:31:55 node1 ollama[119046]: rcx 0xff4
Jan 10 16:31:55 node1 ollama[119046]: rdx 0x7cfd30006a50
Jan 10 16:31:55 node1 ollama[119046]: rdi 0x7cfd30006a60
Jan 10 16:31:55 node1 ollama[119046]: rsi 0x0
Jan 10 16:31:55 node1 ollama[119046]: rbp 0x7fff595f5fd0
Jan 10 16:31:55 node1 ollama[119046]: rsp 0x7fff595f5fb0
Jan 10 16:31:55 node1 ollama[119046]: r8 0x0
Jan 10 16:31:55 node1 ollama[119046]: r9 0x0
Jan 10 16:31:55 node1 ollama[119046]: r10 0x0
Jan 10 16:31:55 node1 ollama[119046]: r11 0x246
Jan 10 16:31:55 node1 ollama[119046]: r12 0x7cf944007060
Jan 10 16:31:55 node1 ollama[119046]: r13 0x7cfd30006a60
Jan 10 16:31:55 node1 ollama[119046]: r14 0x0
Jan 10 16:31:55 node1 ollama[119046]: r15 0x7cfdfd063d50
Jan 10 16:31:55 node1 ollama[119046]: rip 0x7cfdb17efe57
Jan 10 16:31:55 node1 ollama[119046]: rflags 0x10297
Jan 10 16:31:55 node1 ollama[119046]: cs 0x33
Jan 10 16:31:55 node1 ollama[119046]: fs 0x0
Jan 10 16:31:55 node1 ollama[119046]: gs 0x0
Jan 10 16:31:55 node1 ollama[119046]: SIGABRT: abort
Jan 10 16:31:55 node1 ollama[119046]: PC=0x7cfd8b69eb1c m=0 sigcode=18446744073709551610
Jan 10 16:31:55 node1 ollama[119046]: signal arrived during cgo execution
Jan 10 16:31:55 node1 ollama[119046]: goroutine 19 gp=0xc0001b01c0 m=0 mp=0x5cfc8ff8e1a0 [syscall]:
Jan 10 16:31:55 node1 ollama[119046]: runtime.cgocall(0x5cfc8f9a87d0, 0xc000081b90)
Jan 10 16:31:55 node1 ollama[119046]: runtime/cgocall.go:167 +0x4b fp=0xc000081b68 sp=0xc000081b30 pc=0x5cfc8f75cb2b
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7cfd30c5b730, {0x200, 0x7cfd30e0aa50, 0x0, 0x0, 0x7cfd30e0b260, 0x7cfd30e0ba70, 0x7cfd30c7f6c0, 0x7cfd30c5f9e0})
Jan 10 16:31:55 node1 ollama[119046]: _cgo_gotypes.go:556 +0x4f fp=0xc000081b90 sp=0xc000081b68 pc=0x5cfc8f806baf
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama.(*Context).Decode.func1(0x5cfc8f9a3f0b?, 0x7cfd30c5b730?)
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/llama.go:207 +0xf5 fp=0xc000081c80 sp=0xc000081b90 pc=0x5cfc8f809475
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama.(*Context).Decode(0xc000081d70?, 0x0?)
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/llama.go:207 +0x13 fp=0xc000081cc8 sp=0xc000081c80 pc=0x5cfc8f8092f3
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.(*Server).processBatch(0xc00019a000, 0xc0000a0360, 0xc000081f20)
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:434 +0x23f fp=0xc000081ee0 sp=0xc000081cc8 pc=0x5cfc8f9a2bdf
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.(*Server).run(0xc00019a000, {0x5cfc8fda1de0, 0xc0001980a0})
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:342 +0x1d5 fp=0xc000081fb8 sp=0xc000081ee0 pc=0x5cfc8f9a2615
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.Execute.gowrap2()
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:984 +0x28 fp=0xc000081fe0 sp=0xc000081fb8 pc=0x5cfc8f9a7628
Jan 10 16:31:55 node1 ollama[119046]: runtime.goexit({})
------ removed duplicate traces ------------
Jan 10 16:31:55 node1 ollama[119046]: [GIN] 2025/01/10 - 16:31:55 | 200 | 57.936292517s | 127.0.0.1 | POST "/v1/chat/completions"
Jan 10 16:31:55 node1 ollama[119046]: time=2025-01-10T16:31:55.983+01:00 level=DEBUG source=sched.go:466 msg="context for request finished"
Jan 10 16:31:55 node1 ollama[119046]: time=2025-01-10T16:31:55.983+01:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a duration=5m0s
Jan 10 16:31:55 node1 ollama[119046]: time=2025-01-10T16:31:55.983+01:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a refCount=0
Jan 10 16:31:56 node1 ollama[119046]: time=2025-01-10T16:31:56.096+01:00 level=DEBUG source=server.go:416 msg="llama runner terminated" error="exit status 2"
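
For reference, to see how much headroom is actually left on each card at the moment the allocation fails, per-GPU VRAM usage can be polled while the request runs (standard nvidia-smi query flags):

# Print per-GPU memory usage once per second:
nvidia-smi --query-gpu=index,name,memory.used,memory.total --format=csv -l 1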

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.4

Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 8: general.base_model.count u32 = 1 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 32B Instruct Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-3... Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"] Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"] Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 14: qwen2.block_count u32 = 64 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 15: qwen2.context_length u32 = 32768 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 27648 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 22: general.file_type u32 = 17 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ... Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.780+01:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model" Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>... Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 33: general.quantization_version u32 = 2 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 34: quantize.imatrix.file str = /models_out/QwQ-32B-Preview-GGUF/QwQ-... 
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 35: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 36: quantize.imatrix.entries_count i32 = 448 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 37: quantize.imatrix.chunks_count i32 = 128 Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - type f32: 321 tensors Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - type q8_0: 2 tensors Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - type q5_K: 384 tensors Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - type q6_K: 64 tensors Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151660 '<|fim_middle|>' is not marked as EOG Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151659 '<|fim_prefix|>' is not marked as EOG Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151653 '<|vision_end|>' is not marked as EOG Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151648 '<|box_start|>' is not marked as EOG Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151646 '<|object_ref_start|>' is not marked as EOG Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151649 '<|box_end|>' is not marked as EOG Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151655 '<|image_pad|>' is not marked as EOG Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151651 '<|quad_end|>' is not marked as EOG Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151647 '<|object_ref_end|>' is not marked as EOG Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151652 '<|vision_start|>' is not marked as EOG Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151654 '<|vision_pad|>' is not marked as EOG Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151656 '<|video_pad|>' is not marked as EOG Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151644 '<|im_start|>' is not marked as EOG Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151661 '<|fim_suffix|>' is not marked as EOG Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151650 '<|quad_start|>' is not marked as EOG Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: special tokens cache size = 22 Jan 10 16:30:59 node1 ollama[119046]: llm_load_vocab: token to piece cache size = 0.9310 MB Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: format = GGUF V3 (latest) Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: arch = qwen2 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: vocab type = BPE Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_vocab = 152064 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_merges = 151387 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: vocab_only = 0 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_ctx_train = 32768 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_embd = 5120 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_layer = 64 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_head = 40 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_head_kv = 8 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_rot = 128 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_swa = 0 Jan 10 16:30:59 
node1 ollama[119046]: llm_load_print_meta: n_embd_head_k = 128 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_embd_head_v = 128 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_gqa = 5 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_embd_k_gqa = 1024 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_embd_v_gqa = 1024 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: f_norm_eps = 0.0e+00 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: f_clamp_kqv = 0.0e+00 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: f_logit_scale = 0.0e+00 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_ff = 27648 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_expert = 0 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_expert_used = 0 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: causal attn = 1 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: pooling type = 0 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: rope type = 2 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: rope scaling = linear Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: freq_base_train = 1000000.0 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: freq_scale_train = 1 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_ctx_orig_yarn = 32768 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: rope_finetuned = unknown Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: ssm_d_conv = 0 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: ssm_d_inner = 0 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: ssm_d_state = 0 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: ssm_dt_rank = 0 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: ssm_dt_b_c_rms = 0 Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: model type = 32B Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: model ftype = Q5_K - Medium Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: model params = 32.76 B Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: model size = 22.11 GiB (5.80 BPW) Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: general.name = QwQ 32B Preview Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: BOS token = 151643 '<|endoftext|>' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOS token = 151645 '<|im_end|>' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOT token = 151645 '<|im_end|>' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: PAD token = 151643 '<|endoftext|>' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: LF token = 148848 'ÄĬ' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOG 
token = 151643 '<|endoftext|>' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOG token = 151645 '<|im_end|>' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOG token = 151662 '<|fim_pad|>' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOG token = 151663 '<|repo_name|>' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOG token = 151664 '<|file_sep|>' Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: max token length = 256 Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: tensor 'token_embd.weight' (q8_0) (and 0 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: offloading 64 repeating layers to GPU Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: offloading output layer to GPU Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: offloaded 65/65 layers to GPU Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: CPU_Mapped model buffer size = 788.91 MiB Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: CUDA0 model buffer size = 13124.84 MiB Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: CUDA1 model buffer size = 8723.33 MiB Jan 10 16:30:59 node1 ollama[119046]: time=2025-01-10T16:30:59.784+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.18" Jan 10 16:31:00 node1 ollama[119046]: time=2025-01-10T16:31:00.034+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.36" Jan 10 16:31:00 node1 ollama[119046]: time=2025-01-10T16:31:00.285+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.54" Jan 10 16:31:00 node1 ollama[119046]: time=2025-01-10T16:31:00.536+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.66" Jan 10 16:31:00 node1 ollama[119046]: time=2025-01-10T16:31:00.787+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.72" Jan 10 16:31:01 node1 ollama[119046]: time=2025-01-10T16:31:01.038+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.79" Jan 10 16:31:01 node1 ollama[119046]: time=2025-01-10T16:31:01.288+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.86" Jan 10 16:31:01 node1 ollama[119046]: time=2025-01-10T16:31:01.539+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.93" Jan 10 16:31:01 node1 ollama[119046]: time=2025-01-10T16:31:01.790+01:00 level=DEBUG source=server.go:600 msg="model load progress 1.00" Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: n_seq_max = 1 Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: n_ctx = 32768 Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: n_ctx_per_seq = 32768 Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: n_batch = 512 Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: n_ubatch = 512 Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: flash_attn = 0 Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: freq_base = 1000000.0 Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: freq_scale = 1 Jan 10 16:31:02 node1 ollama[119046]: llama_kv_cache_init: CUDA0 KV buffer size = 5120.00 MiB Jan 10 16:31:02 node1 ollama[119046]: llama_kv_cache_init: CUDA1 KV buffer size = 3072.00 MiB Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: KV self size = 8192.00 MiB, K (f16): 4096.00 MiB, V (f16): 4096.00 MiB Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: CUDA_Host output buffer size = 0.60 
MiB Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4) Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: CUDA0 compute buffer size = 2896.01 MiB Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: CUDA1 compute buffer size = 2896.02 MiB Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: CUDA_Host compute buffer size = 266.02 MiB Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: graph nodes = 2246 Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: graph splits = 3 Jan 10 16:31:02 node1 ollama[119046]: time=2025-01-10T16:31:02.292+01:00 level=INFO source=server.go:594 msg="llama runner started in 3.76 seconds" Jan 10 16:31:02 node1 ollama[119046]: time=2025-01-10T16:31:02.292+01:00 level=DEBUG source=sched.go:462 msg="finished setting up runner" model=/home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a Jan 10 16:31:02 node1 ollama[119046]: time=2025-01-10T16:31:02.295+01:00 level=DEBUG source=server.go:967 msg="new runner detected, loading model for cgo tokenization" Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: loaded meta data with 38 key-value pairs and 771 tensors from /home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a (version GGUF V3 (latest)) Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 0: general.architecture str = qwen2 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 1: general.type str = model Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 2: general.name str = QwQ 32B Preview Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 3: general.finetune str = Preview Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 4: general.basename str = QwQ Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 5: general.size_label str = 32B Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 6: general.license str = apache-2.0 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/QwQ-32B-P... Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 8: general.base_model.count u32 = 1 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 32B Instruct Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-3... 
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"] Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"] Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 14: qwen2.block_count u32 = 64 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 15: qwen2.context_length u32 = 32768 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 27648 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 22: general.file_type u32 = 17 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ... Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>... Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 33: general.quantization_version u32 = 2 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 34: quantize.imatrix.file str = /models_out/QwQ-32B-Preview-GGUF/QwQ-... 
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 35: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 36: quantize.imatrix.entries_count i32 = 448 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 37: quantize.imatrix.chunks_count i32 = 128 Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - type f32: 321 tensors Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - type q8_0: 2 tensors Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - type q5_K: 384 tensors Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - type q6_K: 64 tensors Jan 10 16:31:02 node1 ollama[119046]: llm_load_vocab: special tokens cache size = 22 Jan 10 16:31:02 node1 ollama[119046]: llm_load_vocab: token to piece cache size = 0.9310 MB Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: format = GGUF V3 (latest) Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: arch = qwen2 Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: vocab type = BPE Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: n_vocab = 152064 Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: n_merges = 151387 Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: vocab_only = 1 Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: model type = ?B Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: model ftype = all F32 Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: model params = 32.76 B Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: model size = 22.11 GiB (5.80 BPW) Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: general.name = QwQ 32B Preview Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: BOS token = 151643 '<|endoftext|>' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOS token = 151645 '<|im_end|>' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOT token = 151645 '<|im_end|>' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: PAD token = 151643 '<|endoftext|>' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: LF token = 148848 'ÄĬ' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOG token = 151643 '<|endoftext|>' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOG token = 151645 '<|im_end|>' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOG token = 151662 '<|fim_pad|>' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOG token = 151663 '<|repo_name|>' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOG token = 151664 '<|file_sep|>' Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: max token length = 256 Jan 10 16:31:02 node1 ollama[119046]: llama_model_load: vocab only - skipping tensors Jan 10 16:31:08 node1 ollama[119046]: time=2025-01-10T16:31:08.286+01:00 level=DEBUG source=prompt.go:77 msg="truncating input messages which 
exceed context length" truncated=60 Jan 10 16:31:08 node1 ollama[119046]: time=2025-01-10T16:31:08.286+01:00 level=DEBUG source=routes.go:1542 msg="chat request" images=0 prompt="<|im_start|>system\nYou are Cline, .REMOVED ORIGINAL PROMPT....</environment_details><|im_end|>\n<|im_start|>assistant\n" Jan 10 16:31:08 node1 ollama[119046]: time=2025-01-10T16:31:08.417+01:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=32752 used=0 remaining=32752 Jan 10 16:31:27 node1 ollama[119046]: [GIN] 2025/01/10 - 16:31:27 | 200 | 14.762µs | 127.0.0.1 | HEAD "/" Jan 10 16:31:27 node1 ollama[119046]: [GIN] 2025/01/10 - 16:31:27 | 200 | 72.782µs | 127.0.0.1 | GET "/api/ps" **Jan 10 16:31:55 node1 ollama[119046]: CUDA error: out of memory Jan 10 16:31:55 node1 ollama[119046]: current device: 1, in function alloc at llama/ggml-cuda/ggml-cuda.cu:370 Jan 10 16:31:55 node1 ollama[119046]: cuMemCreate(&handle, reserve_size, &prop, 0) Jan 10 16:31:55 node1 ollama[119046]: llama/ggml-cuda/ggml-cuda.cu:96: CUDA error Jan 10 16:31:55 node1 ollama[119046]: SIGSEGV: segmentation violation** Jan 10 16:31:55 node1 ollama[119046]: PC=0x7cfdb17efe57 m=0 sigcode=1 addr=0x20b203fd0 Jan 10 16:31:55 node1 ollama[119046]: signal arrived during cgo execution Jan 10 16:31:55 node1 ollama[119046]: goroutine 19 gp=0xc0001b01c0 m=0 mp=0x5cfc8ff8e1a0 [syscall]: Jan 10 16:31:55 node1 ollama[119046]: runtime.cgocall(0x5cfc8f9a87d0, 0xc000081b90) Jan 10 16:31:55 node1 ollama[119046]: runtime/cgocall.go:167 +0x4b fp=0xc000081b68 sp=0xc000081b30 pc=0x5cfc8f75cb2b Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7cfd30c5b730, {0x200, 0x7cfd30e0aa50, 0x0, 0x0, 0x7cfd30e0b260, 0x7cfd30e0ba70, 0x7cfd30c7f6c0, 0x7cfd30c5f9e0}) Jan 10 16:31:55 node1 ollama[119046]: _cgo_gotypes.go:556 +0x4f fp=0xc000081b90 sp=0xc000081b68 pc=0x5cfc8f806baf Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama.(*Context).Decode.func1(0x5cfc8f9a3f0b?, 0x7cfd30c5b730?) Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/llama.go:207 +0xf5 fp=0xc000081c80 sp=0xc000081b90 pc=0x5cfc8f809475 Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama.(*Context).Decode(0xc000081d70?, 0x0?) 
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/llama.go:207 +0x13 fp=0xc000081cc8 sp=0xc000081c80 pc=0x5cfc8f8092f3 Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.(*Server).processBatch(0xc00019a000, 0xc0000a0360, 0xc000081f20) Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:434 +0x23f fp=0xc000081ee0 sp=0xc000081cc8 pc=0x5cfc8f9a2bdf Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.(*Server).run(0xc00019a000, {0x5cfc8fda1de0, 0xc0001980a0}) Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:342 +0x1d5 fp=0xc000081fb8 sp=0xc000081ee0 pc=0x5cfc8f9a2615 Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.Execute.gowrap2() Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:984 +0x28 fp=0xc000081fe0 sp=0xc000081fb8 pc=0x5cfc8f9a7628 Jan 10 16:31:55 node1 ollama[119046]: runtime.goexit({}) Jan 10 16:31:55 node1 ollama[119046]: runtime/asm_amd64.s:1700 +0x1 fp=0xc000081fe8 sp=0xc000081fe0 pc=0x5cfc8f76a561 Jan 10 16:31:55 node1 ollama[119046]: created by github.com/ollama/ollama/llama/runner.Execute in goroutine 1 Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:984 +0xde5 Jan 10 16:31:55 node1 ollama[119046]: goroutine 1 gp=0xc0000061c0 m=nil [IO wait]: Jan 10 16:31:55 node1 ollama[119046]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:424 +0xce fp=0xc0000e77b0 sp=0xc0000e7790 pc=0x5cfc8f76292e Jan 10 16:31:55 node1 ollama[119046]: runtime.netpollblock(0xc000029800?, 0x8f6fb186?, 0xfc?) Jan 10 16:31:55 node1 ollama[119046]: runtime/netpoll.go:575 +0xf7 fp=0xc0000e77e8 sp=0xc0000e77b0 pc=0x5cfc8f727697 Jan 10 16:31:55 node1 ollama[119046]: internal/poll.runtime_pollWait(0x7cfdb0cc6650, 0x72) Jan 10 16:31:55 node1 ollama[119046]: runtime/netpoll.go:351 +0x85 fp=0xc0000e7808 sp=0xc0000e77e8 pc=0x5cfc8f761c25 Jan 10 16:31:55 node1 ollama[119046]: internal/poll.(*pollDesc).wait(0xc000194180?, 0x900000036?, 0x0) Jan 10 16:31:55 node1 ollama[119046]: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0000e7830 sp=0xc0000e7808 pc=0x5cfc8f7b7a67 Jan 10 16:31:55 node1 ollama[119046]: internal/poll.(*pollDesc).waitRead(...) Jan 10 16:31:55 node1 ollama[119046]: internal/poll/fd_poll_runtime.go:89 Jan 10 16:31:55 node1 ollama[119046]: internal/poll.(*FD).Accept(0xc000194180) Jan 10 16:31:55 node1 ollama[119046]: internal/poll/fd_unix.go:620 +0x295 fp=0xc0000e78d8 sp=0xc0000e7830 pc=0x5cfc8f7b8fd5 Jan 10 16:31:55 node1 ollama[119046]: net.(*netFD).accept(0xc000194180) Jan 10 16:31:55 node1 ollama[119046]: net/fd_unix.go:172 +0x29 fp=0xc0000e7990 sp=0xc0000e78d8 pc=0x5cfc8f831969 Jan 10 16:31:55 node1 ollama[119046]: net.(*TCPListener).accept(0xc0001b2040) Jan 10 16:31:55 node1 ollama[119046]: net/tcpsock_posix.go:159 +0x1e fp=0xc0000e79e0 sp=0xc0000e7990 pc=0x5cfc8f841fbe Jan 10 16:31:55 node1 ollama[119046]: net.(*TCPListener).Accept(0xc0001b2040) Jan 10 16:31:55 node1 ollama[119046]: net/tcpsock.go:372 +0x30 fp=0xc0000e7a10 sp=0xc0000e79e0 pc=0x5cfc8f8412f0 Jan 10 16:31:55 node1 ollama[119046]: net/http.(*onceCloseListener).Accept(0xc00019a3f0?) 
Jan 10 16:31:55 node1 ollama[119046]: <autogenerated>:1 +0x24 fp=0xc0000e7a28 sp=0xc0000e7a10 pc=0x5cfc8f97fec4 Jan 10 16:31:55 node1 ollama[119046]: net/http.(*Server).Serve(0xc0001904b0, {0x5cfc8fda17f8, 0xc0001b2040}) Jan 10 16:31:55 node1 ollama[119046]: net/http/server.go:3330 +0x30c fp=0xc0000e7b58 sp=0xc0000e7a28 pc=0x5cfc8f971c0c Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.Execute({0xc000016150?, 0x5cfc8f76a1bc?, 0x0?}) Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:1005 +0x11a9 fp=0xc0000e7ef8 sp=0xc0000e7b58 pc=0x5cfc8f9a7309 Jan 10 16:31:55 node1 ollama[119046]: main.main() Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/cmd/runner/main.go:11 +0x54 fp=0xc0000e7f50 sp=0xc0000e7ef8 pc=0x5cfc8f9a8294 Jan 10 16:31:55 node1 ollama[119046]: runtime.main() Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:272 +0x29d fp=0xc0000e7fe0 sp=0xc0000e7f50 pc=0x5cfc8f72ec7d Jan 10 16:31:55 node1 ollama[119046]: runtime.goexit({}) Jan 10 16:31:55 node1 ollama[119046]: runtime/asm_amd64.s:1700 +0x1 fp=0xc0000e7fe8 sp=0xc0000e7fe0 pc=0x5cfc8f76a561 Jan 10 16:31:55 node1 ollama[119046]: goroutine 2 gp=0xc000006c40 m=nil [force gc (idle)]: Jan 10 16:31:55 node1 ollama[119046]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:424 +0xce fp=0xc00006efa8 sp=0xc00006ef88 pc=0x5cfc8f76292e Jan 10 16:31:55 node1 ollama[119046]: runtime.goparkunlock(...) Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:430 Jan 10 16:31:55 node1 ollama[119046]: runtime.forcegchelper() Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:337 +0xb8 fp=0xc00006efe0 sp=0xc00006efa8 pc=0x5cfc8f72efb8 Jan 10 16:31:55 node1 ollama[119046]: runtime.goexit({}) ------ removed some duplicate traces ----------- Jan 10 16:31:55 node1 ollama[119046]: runtime/asm_amd64.s:1700 +0x1 fp=0xc0004a17e8 sp=0xc0004a17e0 pc=0x5cfc8f76a561 Jan 10 16:31:55 node1 ollama[119046]: created by runtime.gcBgMarkStartWorkers in goroutine 26 Jan 10 16:31:55 node1 ollama[119046]: runtime/mgc.go:1328 +0x105 Jan 10 16:31:55 node1 ollama[119046]: goroutine 67 gp=0xc000278540 m=nil [GC worker (idle)]: Jan 10 16:31:55 node1 ollama[119046]: runtime.gopark(0x97e81a03181?, 0x1?, 0xfc?, 0x38?, 0x0?) 
Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:424 +0xce fp=0xc0004a1f38 sp=0xc0004a1f18 pc=0x5cfc8f76292e Jan 10 16:31:55 node1 ollama[119046]: runtime.gcBgMarkWorker(0xc000022700) Jan 10 16:31:55 node1 ollama[119046]: runtime/mgc.go:1412 +0xe9 fp=0xc0004a1fc8 sp=0xc0004a1f38 pc=0x5cfc8f710209 Jan 10 16:31:55 node1 ollama[119046]: runtime.gcBgMarkStartWorkers.gowrap1() Jan 10 16:31:55 node1 ollama[119046]: runtime/mgc.go:1328 +0x25 fp=0xc0004a1fe0 sp=0xc0004a1fc8 pc=0x5cfc8f7100e5 Jan 10 16:31:55 node1 ollama[119046]: runtime.goexit({}) Jan 10 16:31:55 node1 ollama[119046]: runtime/asm_amd64.s:1700 +0x1 fp=0xc0004a1fe8 sp=0xc0004a1fe0 pc=0x5cfc8f76a561 Jan 10 16:31:55 node1 ollama[119046]: created by runtime.gcBgMarkStartWorkers in goroutine 26 Jan 10 16:31:55 node1 ollama[119046]: runtime/mgc.go:1328 +0x105 Jan 10 16:31:55 node1 ollama[119046]: rax 0x20b203fd0 Jan 10 16:31:55 node1 ollama[119046]: rbx 0x7cfd303b4130 Jan 10 16:31:55 node1 ollama[119046]: rcx 0xff4 Jan 10 16:31:55 node1 ollama[119046]: rdx 0x7cfd30006a50 Jan 10 16:31:55 node1 ollama[119046]: rdi 0x7cfd30006a60 Jan 10 16:31:55 node1 ollama[119046]: rsi 0x0 Jan 10 16:31:55 node1 ollama[119046]: rbp 0x7fff595f5fd0 Jan 10 16:31:55 node1 ollama[119046]: rsp 0x7fff595f5fb0 Jan 10 16:31:55 node1 ollama[119046]: r8 0x0 Jan 10 16:31:55 node1 ollama[119046]: r9 0x0 Jan 10 16:31:55 node1 ollama[119046]: r10 0x0 Jan 10 16:31:55 node1 ollama[119046]: r11 0x246 Jan 10 16:31:55 node1 ollama[119046]: r12 0x7cf944007060 Jan 10 16:31:55 node1 ollama[119046]: r13 0x7cfd30006a60 Jan 10 16:31:55 node1 ollama[119046]: r14 0x0 Jan 10 16:31:55 node1 ollama[119046]: r15 0x7cfdfd063d50 Jan 10 16:31:55 node1 ollama[119046]: rip 0x7cfdb17efe57 Jan 10 16:31:55 node1 ollama[119046]: rflags 0x10297 Jan 10 16:31:55 node1 ollama[119046]: cs 0x33 Jan 10 16:31:55 node1 ollama[119046]: fs 0x0 Jan 10 16:31:55 node1 ollama[119046]: gs 0x0 Jan 10 16:31:55 node1 ollama[119046]: SIGABRT: abort Jan 10 16:31:55 node1 ollama[119046]: PC=0x7cfd8b69eb1c m=0 sigcode=18446744073709551610 Jan 10 16:31:55 node1 ollama[119046]: signal arrived during cgo execution Jan 10 16:31:55 node1 ollama[119046]: goroutine 19 gp=0xc0001b01c0 m=0 mp=0x5cfc8ff8e1a0 [syscall]: Jan 10 16:31:55 node1 ollama[119046]: runtime.cgocall(0x5cfc8f9a87d0, 0xc000081b90) Jan 10 16:31:55 node1 ollama[119046]: runtime/cgocall.go:167 +0x4b fp=0xc000081b68 sp=0xc000081b30 pc=0x5cfc8f75cb2b Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7cfd30c5b730, {0x200, 0x7cfd30e0aa50, 0x0, 0x0, 0x7cfd30e0b260, 0x7cfd30e0ba70, 0x7cfd30c7f6c0, 0x7cfd30c5f9e0}) Jan 10 16:31:55 node1 ollama[119046]: _cgo_gotypes.go:556 +0x4f fp=0xc000081b90 sp=0xc000081b68 pc=0x5cfc8f806baf Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama.(*Context).Decode.func1(0x5cfc8f9a3f0b?, 0x7cfd30c5b730?) Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/llama.go:207 +0xf5 fp=0xc000081c80 sp=0xc000081b90 pc=0x5cfc8f809475 Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama.(*Context).Decode(0xc000081d70?, 0x0?) 
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/llama.go:207 +0x13 fp=0xc000081cc8 sp=0xc000081c80 pc=0x5cfc8f8092f3 Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.(*Server).processBatch(0xc00019a000, 0xc0000a0360, 0xc000081f20) Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:434 +0x23f fp=0xc000081ee0 sp=0xc000081cc8 pc=0x5cfc8f9a2bdf Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.(*Server).run(0xc00019a000, {0x5cfc8fda1de0, 0xc0001980a0}) Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:342 +0x1d5 fp=0xc000081fb8 sp=0xc000081ee0 pc=0x5cfc8f9a2615 Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.Execute.gowrap2() Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:984 +0x28 fp=0xc000081fe0 sp=0xc000081fb8 pc=0x5cfc8f9a7628 Jan 10 16:31:55 node1 ollama[119046]: runtime.goexit({}) ------ remove duplicate traces ------------ Jan 10 16:31:55 node1 ollama[119046]: [GIN] 2025/01/10 - 16:31:55 | 200 | 57.936292517s | 127.0.0.1 | POST "/v1/chat/completions" Jan 10 16:31:55 node1 ollama[119046]: time=2025-01-10T16:31:55.983+01:00 level=DEBUG source=sched.go:466 msg="context for request finished" Jan 10 16:31:55 node1 ollama[119046]: time=2025-01-10T16:31:55.983+01:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a duration=5m0s Jan 10 16:31:55 node1 ollama[119046]: time=2025-01-10T16:31:55.983+01:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a refCount=0 Jan 10 16:31:56 node1 ollama[119046]: time=2025-01-10T16:31:56.096+01:00 level=DEBUG source=server.go:416 msg="llama runner terminated" error="exit status 2" ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.5.4
GiteaMirror added the bug label 2025-11-12 12:54:40 -06:00

@rick-github commented on GitHub (Jan 10, 2025):

```
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.528+01:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=65 layers.offload=65 layers.split=40,25 memory.available="[23.3 GiB 15.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="37.6 GiB" memory.required.partial="37.6 GiB" memory.required.kv="8.0 GiB" memory.required.allocations="[22.3 GiB 15.2 GiB]" memory.weights.total="28.6 GiB" memory.weights.repeating="27.8 GiB" memory.weights.nonrepeating="788.9 MiB" memory.graph.full="3.2 GiB" memory.graph.partial="3.2 GiB"
Jan 10 16:31:55 node1 ollama[119046]: current device: 1, in function alloc at llama/ggml-cuda/ggml-cuda.cu:370
```

Device 1 had 15.5G free and ollama wants to use 15.2G. Some temporary allocation during inference exhausted VRAM and the runner OOMed. The following mitigations are possible (a configuration sketch follows the list):

  1. Set [`OLLAMA_GPU_OVERHEAD`](https://github.com/ollama/ollama/blob/5f8051180e3b9aeafc153f6b5056e7358a939c88/envconfig/config.go#L237) to give llama.cpp a buffer to grow into (e.g., `OLLAMA_GPU_OVERHEAD=536870912` to reserve 512M).
  2. Enable flash attention by setting [`OLLAMA_FLASH_ATTENTION=1`](https://github.com/ollama/ollama/blob/5f8051180e3b9aeafc153f6b5056e7358a939c88/envconfig/config.go#L236) in the server environment. Flash attention is a more efficient use of memory and may reduce memory pressure.
  3. Reduce the number of layers that ollama thinks it can offload to the GPU, see [here](https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650). Ollama is currently offloading 65 layers; try setting `num_gpu` to 60.
  4. Set `GGML_CUDA_ENABLE_UNIFIED_MEMORY=1`. This will allow the GPU to offload to CPU memory if VRAM is exhausted. This is only useful for small amounts of memory, as there is a [performance penalty](https://github.com/ollama/ollama/issues/7584#issuecomment-2466715900). However, in the case where the goal is to reduce OOMs, the amount offloaded will be small and the impact minimal.
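For a systemd-managed install like the one in this log, mitigations 1, 2 and 4 can be applied as environment variables on the service, and `num_gpu` (mitigation 3) can be passed per request through the API `options`. A minimal sketch under those assumptions follows; the unit name `ollama.service` comes from the log above, while the model name and the exact values are illustrative, not tested recommendations:

```
# Sketch: set the server environment via a systemd drop-in override.
sudo systemctl edit ollama.service
# In the editor, add:
#   [Service]
#   Environment="OLLAMA_GPU_OVERHEAD=536870912"        # reserve 512M of VRAM headroom per GPU
#   Environment="OLLAMA_FLASH_ATTENTION=1"             # more memory-efficient attention
#   Environment="GGML_CUDA_ENABLE_UNIFIED_MEMORY=1"    # spill to CPU memory instead of OOMing
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Sketch: cap offloaded layers per request ("qwq" is an example model name).
curl http://localhost:11434/api/generate -d '{
  "model": "qwq",
  "prompt": "Why is the sky blue?",
  "options": {"num_gpu": 60}
}'
```

`num_gpu` can also be baked into a model variant with a Modelfile line such as `PARAMETER num_gpu 60`, so that every request to that model uses the reduced layer count.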
Reference: github-starred/ollama-ollama#5382