[GH-ISSUE #10483] Ollama detects 4090 GPU, but does not use it #6895

Closed
opened 2026-04-12 18:46:05 -05:00 by GiteaMirror · 11 comments
Owner

Originally created by @heyjohnlim on GitHub (Apr 29, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10483

What is the issue?

Just upgraded to Ollama 0.6.6; it was working fine previously.
I ran ollama serve,
then ran ollama run gemma3.

I noticed that it wasn't using the GPU, but in the logs I can see that it detected the GPU:

time=2025-04-30T02:46:26.042+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-a91a72c4-6733-ae33-c532-f159bfc07477 library=cuda variant=v12 compute=8.9 driver=12.5 name="NVIDIA GeForce RTX 4090" total="23.6 GiB" available="21.8 GiB"

When I run gemma3, I can see a message reporting compatible gpu libraries compatible=[] (I don't know whether it's relevant):


calling cuDriverGetVersion
raw version 0x2f12
CUDA driver version: 12.5
calling cuDeviceGetCount
device count 1
time=2025-04-30T02:46:42.390+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a91a72c4-6733-ae33-c532-f159bfc07477 name="NVIDIA GeForce RTX 4090" overhead="0 B" before.total="23.6 GiB" before.free="21.8 GiB" now.total="23.6 GiB" now.free="21.8 GiB" now.used="1.8 GiB"
releasing cuda driver library
time=2025-04-30T02:46:42.390+08:00 level=INFO source=server.go:105 msg="system memory" total="61.8 GiB" free="41.3 GiB" free_swap="28.7 GiB"
time=2025-04-30T02:46:42.390+08:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[21.8 GiB]"
time=2025-04-30T02:46:42.391+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=35 layers.offload=35 layers.split="" memory.available="[21.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.8 GiB" memory.required.partial="5.8 GiB" memory.required.kv="682.0 MiB" memory.required.allocations="[5.8 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-04-30T02:46:42.391+08:00 level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible=[]
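For anyone reproducing this, a quick way to confirm whether a loaded model actually landed on the GPU or fell back to CPU (this assumes the ollama CLI and nvidia-smi are on PATH on the affected host):

```shell
# Ask the running server which processor each loaded model is using;
# the PROCESSOR column reports e.g. "100% GPU" or "100% CPU"
ollama ps

# Poll VRAM usage and GPU utilization every 2 seconds while the model
# answers a prompt; a CPU-only load shows no change in memory.used
nvidia-smi --query-gpu=memory.used,utilization.gpu --format=csv -l 2
```

In this case ollama ps reporting CPU despite the "inference compute" line above would match the empty compatible=[] list, i.e. the server saw the 4090 during discovery but found no usable CUDA runner library at load time.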

Relevant log output

2025/04/30 02:46:25 routes.go:1232: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/jadmin/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-04-30T02:46:25.826+08:00 level=INFO source=images.go:458 msg="total blobs: 5"
time=2025-04-30T02:46:25.826+08:00 level=INFO source=images.go:465 msg="total unused blobs removed: 0"
time=2025-04-30T02:46:25.826+08:00 level=INFO source=routes.go:1299 msg="Listening on 127.0.0.1:11434 (version 0.6.6)"
time=2025-04-30T02:46:25.826+08:00 level=DEBUG source=sched.go:107 msg="starting llm scheduler"
time=2025-04-30T02:46:25.826+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-04-30T02:46:25.840+08:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-04-30T02:46:25.840+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-04-30T02:46:25.841+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/local/lib/ollama/libcuda.so* /home/jadmin/anaconda3/lib/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-04-30T02:46:25.842+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/lib64/libcuda.so.555.42.06]
initializing /usr/lib64/libcuda.so.555.42.06
dlsym: cuInit - 0x7fbe1e71e2b0
dlsym: cuDriverGetVersion - 0x7fbe1e71e2d0
dlsym: cuDeviceGetCount - 0x7fbe1e71e310
dlsym: cuDeviceGet - 0x7fbe1e71e2f0
dlsym: cuDeviceGetAttribute - 0x7fbe1e71e3f0
dlsym: cuDeviceGetUuid - 0x7fbe1e71e350
dlsym: cuDeviceGetName - 0x7fbe1e71e330
dlsym: cuCtxCreate_v3 - 0x7fbe1e71e5d0
dlsym: cuMemGetInfo_v2 - 0x7fbe1e7286d0
dlsym: cuCtxDestroy - 0x7fbe1e783000
calling cuInit
calling cuDriverGetVersion
raw version 0x2f12
CUDA driver version: 12.5
calling cuDeviceGetCount
device count 1
time=2025-04-30T02:46:25.878+08:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib64/libcuda.so.555.42.06
[GPU-a91a72c4-6733-ae33-c532-f159bfc07477] CUDA totalMem 24118 mb
[GPU-a91a72c4-6733-ae33-c532-f159bfc07477] CUDA freeMem 22286 mb
[GPU-a91a72c4-6733-ae33-c532-f159bfc07477] Compute Capability 8.9
time=2025-04-30T02:46:26.042+08:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-04-30T02:46:26.042+08:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/0/properties"
time=2025-04-30T02:46:26.042+08:00 level=DEBUG source=amd_linux.go:121 msg="detected CPU /sys/class/kfd/kfd/topology/nodes/0/properties"
time=2025-04-30T02:46:26.042+08:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/1/properties"
time=2025-04-30T02:46:26.042+08:00 level=DEBUG source=amd_linux.go:206 msg="mapping amdgpu to drm sysfs nodes" amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties vendor=4098 device=5710 unique_id=0
time=2025-04-30T02:46:26.042+08:00 level=DEBUG source=amd_linux.go:240 msg=matched amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties drm=/sys/class/drm/card0/device
time=2025-04-30T02:46:26.042+08:00 level=DEBUG source=amd_linux.go:244 msg="failed to read sysfs node" file=/sys/class/drm/card0/device/mem_info_vram_total error="open /sys/class/drm/card0/device/mem_info_vram_total: no such file or directory"
time=2025-04-30T02:46:26.042+08:00 level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=0 total="0 B"
time=2025-04-30T02:46:26.042+08:00 level=INFO source=amd_linux.go:402 msg="no compatible amdgpu devices detected"
releasing cuda driver library
time=2025-04-30T02:46:26.042+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-a91a72c4-6733-ae33-c532-f159bfc07477 library=cuda variant=v12 compute=8.9 driver=12.5 name="NVIDIA GeForce RTX 4090" total="23.6 GiB" available="21.8 GiB"
[GIN] 2025/04/30 - 02:46:41 | 200 |       33.12µs |       127.0.0.1 | HEAD     "/"
time=2025-04-30T02:46:41.934+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-30T02:46:41.961+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/04/30 - 02:46:41 | 200 |    54.18421ms |       127.0.0.1 | POST     "/api/show"
time=2025-04-30T02:46:41.990+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-30T02:46:41.991+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="61.8 GiB" before.free="41.3 GiB" before.free_swap="28.7 GiB" now.total="61.8 GiB" now.free="41.3 GiB" now.free_swap="28.7 GiB"
initializing /usr/lib64/libcuda.so.555.42.06
dlsym: cuInit - 0x7fbe1e71e2b0
dlsym: cuDriverGetVersion - 0x7fbe1e71e2d0
dlsym: cuDeviceGetCount - 0x7fbe1e71e310
dlsym: cuDeviceGet - 0x7fbe1e71e2f0
dlsym: cuDeviceGetAttribute - 0x7fbe1e71e3f0
dlsym: cuDeviceGetUuid - 0x7fbe1e71e350
dlsym: cuDeviceGetName - 0x7fbe1e71e330
dlsym: cuCtxCreate_v3 - 0x7fbe1e71e5d0
dlsym: cuMemGetInfo_v2 - 0x7fbe1e7286d0
dlsym: cuCtxDestroy - 0x7fbe1e783000
calling cuInit
calling cuDriverGetVersion
raw version 0x2f12
CUDA driver version: 12.5
calling cuDeviceGetCount
device count 1
time=2025-04-30T02:46:42.156+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a91a72c4-6733-ae33-c532-f159bfc07477 name="NVIDIA GeForce RTX 4090" overhead="0 B" before.total="23.6 GiB" before.free="21.8 GiB" now.total="23.6 GiB" now.free="21.8 GiB" now.used="1.8 GiB"
releasing cuda driver library
time=2025-04-30T02:46:42.156+08:00 level=DEBUG source=sched.go:183 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-04-30T02:46:42.200+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-30T02:46:42.227+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-30T02:46:42.227+08:00 level=DEBUG source=sched.go:226 msg="loading first model" model=/home/jadmin/.ollama/models/blobs/sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25
time=2025-04-30T02:46:42.227+08:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[21.8 GiB]"
time=2025-04-30T02:46:42.228+08:00 level=INFO source=sched.go:722 msg="new model will fit in available VRAM in single GPU, loading" model=/home/jadmin/.ollama/models/blobs/sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25 gpu=GPU-a91a72c4-6733-ae33-c532-f159bfc07477 parallel=4 available=23369220096 required="5.8 GiB"
time=2025-04-30T02:46:42.228+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="61.8 GiB" before.free="41.3 GiB" before.free_swap="28.7 GiB" now.total="61.8 GiB" now.free="41.3 GiB" now.free_swap="28.7 GiB"
initializing /usr/lib64/libcuda.so.555.42.06
dlsym: cuInit - 0x7fbe1e71e2b0
dlsym: cuDriverGetVersion - 0x7fbe1e71e2d0
dlsym: cuDeviceGetCount - 0x7fbe1e71e310
dlsym: cuDeviceGet - 0x7fbe1e71e2f0
dlsym: cuDeviceGetAttribute - 0x7fbe1e71e3f0
dlsym: cuDeviceGetUuid - 0x7fbe1e71e350
dlsym: cuDeviceGetName - 0x7fbe1e71e330
dlsym: cuCtxCreate_v3 - 0x7fbe1e71e5d0
dlsym: cuMemGetInfo_v2 - 0x7fbe1e7286d0
dlsym: cuCtxDestroy - 0x7fbe1e783000
calling cuInit
calling cuDriverGetVersion
raw version 0x2f12
CUDA driver version: 12.5
calling cuDeviceGetCount
device count 1
time=2025-04-30T02:46:42.390+08:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a91a72c4-6733-ae33-c532-f159bfc07477 name="NVIDIA GeForce RTX 4090" overhead="0 B" before.total="23.6 GiB" before.free="21.8 GiB" now.total="23.6 GiB" now.free="21.8 GiB" now.used="1.8 GiB"
releasing cuda driver library
time=2025-04-30T02:46:42.390+08:00 level=INFO source=server.go:105 msg="system memory" total="61.8 GiB" free="41.3 GiB" free_swap="28.7 GiB"
time=2025-04-30T02:46:42.390+08:00 level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[21.8 GiB]"
time=2025-04-30T02:46:42.391+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=35 layers.offload=35 layers.split="" memory.available="[21.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.8 GiB" memory.required.partial="5.8 GiB" memory.required.kv="682.0 MiB" memory.required.allocations="[5.8 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-04-30T02:46:42.391+08:00 level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible=[]
time=2025-04-30T02:46:42.459+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-30T02:46:42.461+08:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-04-30T02:46:42.461+08:00 level=DEBUG source=process_text_spm.go:21 msg=Tokens "num tokens"=262145 vals="[<pad> <eos> <bos> <unk> <mask>]" scores="[0 0 0 0 0]" types="[3 3 3 2 1]"
time=2025-04-30T02:46:42.463+08:00 level=DEBUG source=process_text_spm.go:35 msg="Token counts" normal=261882 unknown=1 control=5 "user defined"=1 unused=0 byte=256 "max token len"=93
time=2025-04-30T02:46:42.464+08:00 level=DEBUG source=process_text_spm.go:21 msg=Tokens "num tokens"=262145 vals="[<pad> <eos> <bos> <unk> <mask>]" scores="[0 0 0 0 0]" types="[3 3 3 2 1]"
time=2025-04-30T02:46:42.466+08:00 level=DEBUG source=process_text_spm.go:35 msg="Token counts" normal=261882 unknown=1 control=5 "user defined"=1 unused=0 byte=256 "max token len"=93
time=2025-04-30T02:46:42.466+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-04-30T02:46:42.466+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-04-30T02:46:42.466+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-04-30T02:46:42.466+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-04-30T02:46:42.466+08:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-04-30T02:46:42.466+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /home/jadmin/.ollama/models/blobs/sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25 --ctx-size 8192 --batch-size 512 --n-gpu-layers 35 --verbose --threads 16 --parallel 4 --port 42585"
time=2025-04-30T02:46:42.466+08:00 level=DEBUG source=server.go:423 msg=subprocess environment="[LD_LIBRARY_PATH=/home/jadmin/anaconda3/lib:/usr/local/lib/ollama PATH=/usr/local/cuda-12.5/bin:/home/jadmin/instantclient_19_19:/home/jadmin/anaconda3/bin:/home/jadmin/anaconda3/condabin:/home/jadmin/.local/bin:/home/jadmin/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin CUDA_VISIBLE_DEVICES=GPU-a91a72c4-6733-ae33-c532-f159bfc07477]"
time=2025-04-30T02:46:42.466+08:00 level=INFO source=sched.go:451 msg="loaded runners" count=1
time=2025-04-30T02:46:42.466+08:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-30T02:46:42.467+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-30T02:46:42.473+08:00 level=INFO source=runner.go:866 msg="starting ollama engine"
time=2025-04-30T02:46:42.473+08:00 level=INFO source=runner.go:929 msg="Server listening on 127.0.0.1:42585"
time=2025-04-30T02:46:42.538+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-30T02:46:42.539+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.name default=""
time=2025-04-30T02:46:42.539+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
time=2025-04-30T02:46:42.539+08:00 level=INFO source=ggml.go:72 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=36
time=2025-04-30T02:46:42.539+08:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=/home/jadmin/anaconda3/lib
time=2025-04-30T02:46:42.539+08:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/usr/local/lib/ollama
time=2025-04-30T02:46:42.539+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-04-30T02:46:42.552+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=mm.mm_input_projection.weight shape="[2560 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.552+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=mm.mm_soft_emb_norm.weight shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.552+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=output_norm.weight shape=[2560] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.552+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=token_embd.weight shape="[2560 262144]" dtype=14 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=output.weight shape="[2560 262144]" dtype=14 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU
[... identical "created tensor" DEBUG entries repeat for v.blk.14 through v.blk.20 and v.blk.2 (attn_k/attn_q/attn_v/attn_output, layer_norm1/2, mlp.fc1/fc2, same shapes and dtypes); every tensor reports buffer_type=CPU. Log truncated mid-entry here ...]
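The telltale in the log above is that every `created tensor` entry reports `buffer_type=CPU` (together with `compatible gpu libraries compatible=[]`): the model is being placed entirely in host memory even though the 4090 was detected. A quick way to confirm this from a saved server log is to tally tensor placements by buffer type. This is an illustrative sketch for triaging the log, not part of Ollama itself; the log-line format is assumed to match the DEBUG output shown above:

```python
import re

# Each "created tensor" entry records which backend buffer the tensor was
# placed in. If every tensor lands in buffer_type=CPU (and none in a CUDA
# buffer), the model was loaded fully on the CPU despite GPU detection.
TENSOR_RE = re.compile(r'msg="created tensor".*?buffer_type=(\w+)')

def buffer_type_counts(log_lines):
    """Tally tensor placements by backend buffer type."""
    counts = {}
    for line in log_lines:
        m = TENSOR_RE.search(line)
        if m:
            counts[m.group(1)] = counts.get(m.group(1), 0) + 1
    return counts

sample = [
    'time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU',
    'time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU',
]
print(buffer_type_counts(sample))  # {'CPU': 2}
```

Running this over the full server log here would show only CPU buffers, matching the empty `compatible=[]` list when the runner starts.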

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.6.6

time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.0.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_output.weight shape="[1152 1152]" 
dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.1.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_k.bias 
shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" 
name=v.blk.10.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.10.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG 
source=ggml.go:225 msg="created tensor" name=v.blk.11.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.11.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU 
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.12.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_output.weight 
shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.13.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" 
name=v.blk.14.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.14.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.14.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.14.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.14.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.14.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.14.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.14.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.14.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.14.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.14.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.14.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.14.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 
msg="created tensor" name=v.blk.14.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.14.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.14.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 
level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.15.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU 
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.16.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.attn_output.weight 
shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.17.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" 
name=v.blk.18.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.18.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.18.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.18.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.18.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.18.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.18.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.18.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.18.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.18.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.18.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.18.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.18.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 
msg="created tensor" name=v.blk.18.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.18.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.18.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 
level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.19.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CPU time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.attn_v.bias shape=[1152] dtype=0 buffer_type=CPU 
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.layer_norm1.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.layer_norm1.weight shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.layer_norm2.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.layer_norm2.weight shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.mlp.fc1.bias shape=[4304] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.mlp.fc1.weight shape="[1152 4304]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.mlp.fc2.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.2.mlp.fc2.weight shape="[4304 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.20.attn_k.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.20.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.20.attn_output.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.20.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor" name=v.blk.20.attn_q.bias shape=[1152] dtype=0 buffer_type=CPU
time=2025-04-30T02:46:42.553+08:00 level=DEBUG source=ggml.go:225 msg="created tensor"
```

### OS

Linux

### GPU

Nvidia

### CPU

AMD

### Ollama version

0.6.6
GiteaMirror added the bug label 2026-04-12 18:46:05 -05:00

@ehan1990 commented on GitHub (Apr 29, 2025):

hey @heyjohnlim, so you're saying that GPU usage % doesn't go up or down while you're running the LLM through ollama? Could you also paste the output of `nvidia-smi` as well? Thanks.


@heyjohnlim commented on GitHub (Apr 29, 2025):

```
Wed Apr 30 03:03:25 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.06              Driver Version: 555.42.06      CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4090        Off |   00000000:01:00.0 Off |                  Off |
|  0%   26C    P8              5W /  500W |    1443MiB /  24564MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A   2212949      G   /usr/libexec/Xorg                               5MiB |
|    0   N/A  N/A   2908522      C   .../anaconda3/envs/jurisrag/bin/python        664MiB |
|    0   N/A  N/A   2908523      C   .../anaconda3/envs/jurisrag/bin/python        748MiB |
+-----------------------------------------------------------------------------------------+
```

@heyjohnlim commented on GitHub (Apr 29, 2025):

The above is while running ollama with gemma3 and the prompt "count 1 to 100".
I can see CPU at 100%.


@rick-github commented on GitHub (Apr 29, 2025):

```
time=2025-04-30T02:46:42.539+08:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/usr/local/lib/ollama
time=2025-04-30T02:46:42.539+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
```

ollama tried to load backends but didn't find anything suitable. What's the output of

```
ls -lR /usr/local/lib/ollama /usr/local/bin/ollama
```

@heyjohnlim commented on GitHub (Apr 30, 2025):

```
-rwxr-xr-x 1 root root 32020384 Apr 30 01:34 /usr/local/bin/ollama

/usr/local/lib/ollama:
total 915888
lrwxrwxrwx 1 root root        15 Apr  5  2024 libcublas.so -> libcublas.so.12
lrwxrwxrwx 1 root root        23 Jun 14  2023 libcublas.so.11 -> libcublas.so.11.5.1.109
-rwxr-xr-x 1 root root 121866104 May  5  2021 libcublas.so.11.5.1.109
lrwxrwxrwx 1 root root        24 Apr  5  2024 libcublas.so.12 -> ./libcublas.so.12.4.2.65
-rwxr-xr-x 1 root root 109760416 Feb 28  2024 libcublas.so.12.4.2.65
lrwxrwxrwx 1 root root        17 Apr  5  2024 libcublasLt.so -> libcublasLt.so.12
lrwxrwxrwx 1 root root        25 Jun 14  2023 libcublasLt.so.11 -> libcublasLt.so.11.5.1.109
-rwxr-xr-x 1 root root 263770264 May  5  2021 libcublasLt.so.11.5.1.109
lrwxrwxrwx 1 root root        26 Apr  5  2024 libcublasLt.so.12 -> ./libcublasLt.so.12.4.2.65
-rwxr-xr-x 1 root root 441131728 Feb 28  2024 libcublasLt.so.12.4.2.65
lrwxrwxrwx 1 root root        15 Apr  5  2024 libcudart.so -> libcudart.so.12
lrwxrwxrwx 1 root root        21 Jun 14  2023 libcudart.so.11.0 -> libcudart.so.11.3.109
-rwxr-xr-x 1 root root    619192 May  4  2021 libcudart.so.11.3.109
lrwxrwxrwx 1 root root        20 Apr  5  2024 libcudart.so.12 -> libcudart.so.12.4.99
-rwxr-xr-x 1 root root    707904 Feb 28  2024 libcudart.so.12.4.99
```
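The listing above contains only the CUDA runtime libraries and none of the `libggml-*` backends, which is what ollama scans for in the lib directory next to the binary it runs from. A minimal sketch of the mismatch, recreated with empty files in a temp directory (on the real machine you would check the actual install dirs):

```shell
# Simulate the two layouts from this thread: the tarball put the ggml
# backends under /usr/lib/ollama, but the binary copied to /usr/local/bin
# scans /usr/local/lib/ollama, which has no libggml-* files.
tmp=$(mktemp -d)
mkdir -p "$tmp/usr/lib/ollama" "$tmp/usr/local/lib/ollama"
touch "$tmp/usr/lib/ollama/libggml-base.so" "$tmp/usr/lib/ollama/libggml-cpu-haswell.so"
# What the copied binary would find in its own lib directory:
ls "$tmp/usr/local/lib/ollama"/libggml-*.so 2>/dev/null \
  && echo "backends found" || echo "no backends found"
rm -rf "$tmp"
```

With the layout above, the check prints "no backends found", matching the `compatible libraries=[ ]` message in the original report.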

@heyjohnlim commented on GitHub (Apr 30, 2025):

one strange thing is I followed the manual install instructions:

> If you are upgrading from a prior version, you should remove the old libraries with `sudo rm -rf /usr/lib/ollama` first.
>
> Download and extract the package:

```
curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
sudo tar -C /usr -xzf ollama-linux-amd64.tgz
```

The ollama binary was installed to /usr/bin/ollama, not /usr/local/bin, so I copied the binary to /usr/local/bin because the ollama service expects it at /usr/local/bin/ollama.

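Instead of copying the binary, the service unit itself can be redirected at the tarball's install location with a systemd drop-in. This is a hedged sketch that assumes the unit is named `ollama.service` and its ExecStart runs `ollama serve` (as in the standard install script); adjust if your unit differs:

```shell
# Override only ExecStart so the unit runs /usr/bin/ollama, next to which
# /usr/lib/ollama holds the ggml backends the loader is scanning for.
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf >/dev/null <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/ollama serve
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

The empty `ExecStart=` line clears the original command before setting the new one, which is how systemd drop-ins replace (rather than append to) ExecStart.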

@heyjohnlim commented on GitHub (Apr 30, 2025):

installed here:

```
-rwxr-xr-x 1 root root 32020384 Apr 19 09:29 /usr/bin/ollama
```

I copied it here, hence the newer date:

```
-rwxr-xr-x 1 root root 32020384 Apr 30 01:34 /usr/local/bin/ollama
```

@heyjohnlim commented on GitHub (Apr 30, 2025):

This is the dir the manual said I should delete on upgrade:

`ls -lR /usr/lib/ollama`

```
/usr/lib/ollama:
total 3768
drwxr-xr-x 2 root root   4096 Apr 19 09:38 cuda_v11
drwxr-xr-x 2 root root    188 Apr 19 09:40 cuda_v12
-rwxr-xr-x 1 root root 587424 Apr 19 09:28 libggml-base.so
-rwxr-xr-x 1 root root 615184 Apr 19 09:28 libggml-cpu-alderlake.so
-rwxr-xr-x 1 root root 611088 Apr 19 09:28 libggml-cpu-haswell.so
-rwxr-xr-x 1 root root 709392 Apr 19 09:28 libggml-cpu-icelake.so
-rwxr-xr-x 1 root root 602896 Apr 19 09:28 libggml-cpu-sandybridge.so
-rwxr-xr-x 1 root root 709392 Apr 19 09:28 libggml-cpu-skylakex.so

/usr/lib/ollama/cuda_v11:
total 1057512
lrwxrwxrwx 1 root root        23 Apr 19 09:38 libcublas.so.11 -> libcublas.so.11.5.1.109
-rwxr-xr-x 1 root root 121866104 May  5  2021 libcublas.so.11.5.1.109
lrwxrwxrwx 1 root root        25 Apr 19 09:38 libcublasLt.so.11 -> libcublasLt.so.11.5.1.109
-rwxr-xr-x 1 root root 263770264 May  5  2021 libcublasLt.so.11.5.1.109
lrwxrwxrwx 1 root root        21 Apr 19 09:38 libcudart.so.11.0 -> libcudart.so.11.3.109
-rwxr-xr-x 1 root root    619192 May  4  2021 libcudart.so.11.3.109
-rwxr-xr-x 1 root root 696623776 Apr 19 09:38 libggml-cuda.so

/usr/lib/ollama/cuda_v12:
total 1945756
lrwxrwxrwx 1 root root         21 Apr 19 09:40 libcublas.so.12 -> libcublas.so.12.8.4.1
-rwxr-xr-x 1 root root  116388640 Jul  8  2015 libcublas.so.12.8.4.1
lrwxrwxrwx 1 root root         23 Apr 19 09:40 libcublasLt.so.12 -> libcublasLt.so.12.8.4.1
-rwxr-xr-x 1 root root  751771728 Jul  8  2015 libcublasLt.so.12.8.4.1
lrwxrwxrwx 1 root root         20 Apr 19 09:40 libcudart.so.12 -> libcudart.so.12.8.90
-rwxr-xr-x 1 root root     728800 Jul  8  2015 libcudart.so.12.8.90
-rwxr-xr-x 1 root root 1123555280 Apr 19 09:40 libggml-cuda.so
```
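Comparing this listing with the earlier `/usr/local/lib/ollama` one shows the `libggml-*` backends exist only under `/usr/lib/ollama`. A hedged sketch of such a comparison, with just the file names from the two listings recreated in a temp dir for illustration:

```shell
# comm -13 prints lines present only in the second (sorted) input, i.e.
# files that exist in /usr/lib/ollama but not /usr/local/lib/ollama.
tmp=$(mktemp -d)
mkdir -p "$tmp/local" "$tmp/lib"
touch "$tmp/local/libcublas.so.12" "$tmp/local/libcudart.so.12"   # /usr/local/lib/ollama
touch "$tmp/lib/libcublas.so.12" "$tmp/lib/libcudart.so.12" \
      "$tmp/lib/libggml-base.so" "$tmp/lib/libggml-cpu-haswell.so"  # /usr/lib/ollama
ls "$tmp/local" | sort > "$tmp/a"
ls "$tmp/lib"   | sort > "$tmp/b"
comm -13 "$tmp/a" "$tmp/b"   # only the ggml backends differ
rm -rf "$tmp"
```

On the real machine the same `comm` invocation against sorted listings of the two actual directories would show the missing backends directly.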

@heyjohnlim commented on GitHub (Apr 30, 2025):

Hi, from the above info, I realise that

```
curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
sudo tar -C /usr -xzf ollama-linux-amd64.tgz
```

installed to /usr/bin/ollama and not to /usr/local/bin.

So I changed my ollama.service to point to /usr/bin/ollama and deleted the copy of ollama I had created at /usr/local/bin/ollama.

@rick-github commented on GitHub (Apr 30, 2025):

https://github.com/ollama/ollama/issues/8532#issuecomment-2616281903


@tdkgo commented on GitHub (Apr 30, 2025):

I encountered the same problem. Creating a symbolic link with administrator privileges under /usr/local/bin that points to /usr/bin/ollama resolved the issue:

`sudo ln -s /usr/bin/ollama /usr/local/bin/ollama`

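A re-runnable variant of that symlink fix: `-f` replaces any stale copy already at the destination (such as the manually copied binary from earlier in this thread) and `-n` treats an existing symlink as a file rather than descending into it.

```shell
# Replace whatever sits at /usr/local/bin/ollama with a symlink to the
# tarball's binary, safe to re-run after future upgrades.
sudo ln -sfn /usr/bin/ollama /usr/local/bin/ollama
readlink /usr/local/bin/ollama   # prints the link target for verification
```

With the link in place, the binary the service runs resolves its lib directory relative to /usr/bin, where the tarball installed the backends.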

Reference: github-starred/ollama#6895