[GH-ISSUE #11555] TensileLibrary_lazy_gfx1201.dat: No such file or directory #69683

Closed
opened 2026-05-04 18:49:37 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @inpure on GitHub (Jul 28, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11555

What is the issue?

The ollama service log shows an error: `TensileLibrary_lazy_gfx1201.dat` is missing. However, ollama still works — I can ask questions and get answers normally.
system: ubuntu24
GPU: RX9070 (gfx1201)
Does this error mean the GPU is not actually being accelerated with ROCm?

Relevant log output

Jul 28 15:11:48 ubuntu24 ollama[307422]: rocblaslt error: Cannot read /usr/local/lib/ollama/rocm/hipblaslt/library/TensileLibrary_lazy_gfx1201.dat: No such file or directory
Jul 28 15:11:48 ubuntu24 ollama[307422]: rocblaslt error: Could not load /usr/local/lib/ollama/rocm/hipblaslt/library/TensileLibrary_lazy_gfx1201.dat
Jul 28 15:11:48 ubuntu24 ollama[307422]: hipBLASLt error: Heuristic Fetch Failed

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.9.6

GiteaMirror added the amd, bug, linux labels 2026-05-04 18:49:38 -05:00
Owner

@dhiltgen commented on GitHub (Jul 31, 2025):

I don't have one of these GPUs to verify, but the paths in your log are unusual. How did you install Ollama?

Here's where that file normally lives:

% tar tzf ~/Downloads/ollama-linux-amd64-rocm.tgz | grep TensileLibrary_lazy_gfx1201.dat
lib/ollama/rocm/rocblas/library/TensileLibrary_lazy_gfx1201.dat
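To confirm whether the install is intact, you could check both paths — the rocblas location from the tarball listing above and the hipblaslt location the error message is looking in (a diagnostic sketch; the paths are taken from the listing and the log, and may differ on your system):

```shell
# Expected location in the official tarball install (rocblas):
ls /usr/local/lib/ollama/rocm/rocblas/library/ 2>/dev/null | grep gfx1201

# Path the rocblaslt error is looking in (hipblaslt) -- absence here
# matches the "No such file or directory" messages in the log:
ls /usr/local/lib/ollama/rocm/hipblaslt/library/ 2>/dev/null | grep gfx1201
```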

If you installed using our recommended install flow (https://github.com/ollama/ollama/blob/main/docs/linux.md) and it still doesn't work, turning on debug and sharing your server log may help us understand why ROCm isn't loading properly.

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md
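Enabling debug might look like this on a systemd-managed install (a sketch assuming the stock ollama.service unit; OLLAMA_DEBUG is the variable described in the troubleshooting guide):

```shell
# Sketch: turn on debug logging for the systemd-managed service.
# Run `sudo systemctl edit ollama.service` and add:
#   [Service]
#   Environment="OLLAMA_DEBUG=1"
# then restart the service so the override takes effect:
sudo systemctl restart ollama

# Follow the server log while reproducing the error:
journalctl -u ollama -f
```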

Author

@inpure commented on GitHub (Aug 1, 2025):

Thanks for the reply.

Yes, I installed it with the recommended command: curl -fsSL https://ollama.com/install.sh | sh

Server log:

Aug 01 09:43:39 ubuntu24 systemd[1]: Started ollama.service - Ollama Service.
Aug 01 09:43:40 ubuntu24 ollama[1239]: time=2025-08-01T09:43:40.019+08:00 level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Aug 01 09:43:40 ubuntu24 ollama[1239]: time=2025-08-01T09:43:40.023+08:00 level=INFO source=images.go:476 msg="total blobs: 21"
Aug 01 09:43:40 ubuntu24 ollama[1239]: time=2025-08-01T09:43:40.023+08:00 level=INFO source=images.go:483 msg="total unused blobs removed: 0"
Aug 01 09:43:40 ubuntu24 ollama[1239]: time=2025-08-01T09:43:40.023+08:00 level=INFO source=routes.go:1288 msg="Listening on [::]:11434 (version 0.9.6)"
Aug 01 09:43:40 ubuntu24 ollama[1239]: time=2025-08-01T09:43:40.024+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Aug 01 09:43:40 ubuntu24 ollama[1239]: time=2025-08-01T09:43:40.033+08:00 level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=GPU-da4a13daba57996a gpu_type=gfx1201
Aug 01 09:43:40 ubuntu24 ollama[1239]: time=2025-08-01T09:43:40.033+08:00 level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=1 total="512.0 MiB"
Aug 01 09:43:40 ubuntu24 ollama[1239]: time=2025-08-01T09:43:40.034+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-da4a13daba57996a library=rocm variant="" compute=gfx1201 driver=6.12 name=1002:7550 total="15.9 GiB" available="15.4 GiB"
Aug 01 11:53:16 ubuntu24 ollama[1239]: [GIN] 2025/08/01 - 11:53:16 | 200 |     436.836µs |       127.0.0.1 | HEAD     "/"
Aug 01 11:53:16 ubuntu24 ollama[1239]: [GIN] 2025/08/01 - 11:53:16 | 200 |    2.340467ms |       127.0.0.1 | GET      "/api/tags"
Aug 01 11:56:33 ubuntu24 ollama[1239]: time=2025-08-01T11:56:33.947+08:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e gpu=GPU-da4a13daba57996a parallel=2 available=16551673856 required="11.2 GiB"
Aug 01 11:56:33 ubuntu24 ollama[1239]: time=2025-08-01T11:56:33.948+08:00 level=INFO source=server.go:135 msg="system memory" total="30.5 GiB" free="28.1 GiB" free_swap="8.0 GiB"
Aug 01 11:56:33 ubuntu24 ollama[1239]: time=2025-08-01T11:56:33.948+08:00 level=INFO source=server.go:175 msg=offload library=rocm layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[15.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.2 GiB" memory.required.partial="11.2 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="[11.2 GiB]" memory.weights.total="8.2 GiB" memory.weights.repeating="7.6 GiB" memory.weights.nonrepeating="608.6 MiB" memory.graph.full="1.0 GiB" memory.graph.partial="1.0 GiB"
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: loaded meta data with 27 key-value pairs and 443 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e (version GGUF V3 (latest))
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv   0:                       general.architecture str              = qwen3
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv   1:                               general.type str              = model
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv   2:                               general.name str              = Qwen3 14B
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv   3:                           general.basename str              = Qwen3
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv   4:                         general.size_label str              = 14B
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv   5:                          qwen3.block_count u32              = 40
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv   6:                       qwen3.context_length u32              = 40960
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv   7:                     qwen3.embedding_length u32              = 5120
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv   8:                  qwen3.feed_forward_length u32              = 17408
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv   9:                 qwen3.attention.head_count u32              = 40
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  10:              qwen3.attention.head_count_kv u32              = 8
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  11:                       qwen3.rope.freq_base f32              = 1000000.000000
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  12:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  13:                 qwen3.attention.key_length u32              = 128
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  14:               qwen3.attention.value_length u32              = 128
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  25:               general.quantization_version u32              = 2
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv  26:                          general.file_type u32              = 15
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - type  f32:  161 tensors
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - type  f16:   40 tensors
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - type q4_K:  221 tensors
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - type q6_K:   21 tensors
Aug 01 11:56:33 ubuntu24 ollama[1239]: print_info: file format = GGUF V3 (latest)
Aug 01 11:56:33 ubuntu24 ollama[1239]: print_info: file type   = Q4_K - Medium
Aug 01 11:56:33 ubuntu24 ollama[1239]: print_info: file size   = 8.63 GiB (5.02 BPW)
Aug 01 11:56:34 ubuntu24 ollama[1239]: load: special tokens cache size = 26
Aug 01 11:56:34 ubuntu24 ollama[1239]: load: token to piece cache size = 0.9311 MB
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: arch             = qwen3
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: vocab_only       = 1
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: model type       = ?B
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: model params     = 14.77 B
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: general.name     = Qwen3 14B
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: vocab type       = BPE
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_vocab          = 151936
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_merges         = 151387
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: BOS token        = 151643 '<|endoftext|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOS token        = 151645 '<|im_end|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOT token        = 151645 '<|im_end|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: PAD token        = 151643 '<|endoftext|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: LF token         = 198 'Ċ'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM MID token    = 151660 '<|fim_middle|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM PAD token    = 151662 '<|fim_pad|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM REP token    = 151663 '<|repo_name|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM SEP token    = 151664 '<|file_sep|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token        = 151643 '<|endoftext|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token        = 151645 '<|im_end|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token        = 151662 '<|fim_pad|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token        = 151663 '<|repo_name|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token        = 151664 '<|file_sep|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: max token length = 256
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_load: vocab only - skipping tensors
Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.043+08:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e --ctx-size 8192 --batch-size 512 --n-gpu-layers 41 --threads 6 --parallel 2 --port 46397"
Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.043+08:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.043+08:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.044+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.048+08:00 level=INFO source=runner.go:815 msg="starting go runner"
Aug 01 11:56:34 ubuntu24 ollama[1239]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Aug 01 11:56:34 ubuntu24 ollama[1239]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Aug 01 11:56:34 ubuntu24 ollama[1239]: ggml_cuda_init: found 1 ROCm devices:
Aug 01 11:56:34 ubuntu24 ollama[1239]:   Device 0: AMD Radeon RX 9070, gfx1201 (0x1201), VMM: no, Wave Size: 32
Aug 01 11:56:34 ubuntu24 ollama[1239]: load_backend: loaded ROCm backend from /usr/local/lib/ollama/libggml-hip.so
Aug 01 11:56:34 ubuntu24 ollama[1239]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.697+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.698+08:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:46397"
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon RX 9070) - 15776 MiB free
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: loaded meta data with 27 key-value pairs and 443 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e (version GGUF V3 (latest))
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv   0:                       general.architecture str              = qwen3
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv   1:                               general.type str              = model
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv   2:                               general.name str              = Qwen3 14B
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv   3:                           general.basename str              = Qwen3
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv   4:                         general.size_label str              = 14B
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv   5:                          qwen3.block_count u32              = 40
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv   6:                       qwen3.context_length u32              = 40960
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv   7:                     qwen3.embedding_length u32              = 5120
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv   8:                  qwen3.feed_forward_length u32              = 17408
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv   9:                 qwen3.attention.head_count u32              = 40
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  10:              qwen3.attention.head_count_kv u32              = 8
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  11:                       qwen3.rope.freq_base f32              = 1000000.000000
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  12:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  13:                 qwen3.attention.key_length u32              = 128
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  14:               qwen3.attention.value_length u32              = 128
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  25:               general.quantization_version u32              = 2
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv  26:                          general.file_type u32              = 15
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - type  f32:  161 tensors
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - type  f16:   40 tensors
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - type q4_K:  221 tensors
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - type q6_K:   21 tensors
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: file format = GGUF V3 (latest)
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: file type   = Q4_K - Medium
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: file size   = 8.63 GiB (5.02 BPW)
Aug 01 11:56:34 ubuntu24 ollama[1239]: load: special tokens cache size = 26
Aug 01 11:56:34 ubuntu24 ollama[1239]: load: token to piece cache size = 0.9311 MB
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: arch             = qwen3
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: vocab_only       = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_ctx_train      = 40960
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_embd           = 5120
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_layer          = 40
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_head           = 40
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_head_kv        = 8
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_rot            = 128
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_swa            = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_swa_pattern    = 1
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_embd_head_k    = 128
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_embd_head_v    = 128
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_gqa            = 5
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_embd_k_gqa     = 1024
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_embd_v_gqa     = 1024
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: f_norm_eps       = 0.0e+00
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: f_norm_rms_eps   = 1.0e-06
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: f_clamp_kqv      = 0.0e+00
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: f_max_alibi_bias = 0.0e+00
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: f_logit_scale    = 0.0e+00
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: f_attn_scale     = 0.0e+00
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_ff             = 17408
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_expert         = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_expert_used    = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: causal attn      = 1
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: pooling type     = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: rope type        = 2
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: rope scaling     = linear
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: freq_base_train  = 1000000.0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: freq_scale_train = 1
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_ctx_orig_yarn  = 40960
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: rope_finetuned   = unknown
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: ssm_d_conv       = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: ssm_d_inner      = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: ssm_d_state      = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: ssm_dt_rank      = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: ssm_dt_b_c_rms   = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: model type       = 14B
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: model params     = 14.77 B
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: general.name     = Qwen3 14B
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: vocab type       = BPE
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_vocab          = 151936
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_merges         = 151387
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: BOS token        = 151643 '<|endoftext|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOS token        = 151645 '<|im_end|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOT token        = 151645 '<|im_end|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: PAD token        = 151643 '<|endoftext|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: LF token         = 198 'Ċ'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM MID token    = 151660 '<|fim_middle|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM PAD token    = 151662 '<|fim_pad|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM REP token    = 151663 '<|repo_name|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM SEP token    = 151664 '<|file_sep|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token        = 151643 '<|endoftext|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token        = 151645 '<|im_end|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token        = 151662 '<|fim_pad|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token        = 151663 '<|repo_name|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token        = 151664 '<|file_sep|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: max token length = 256
Aug 01 11:56:34 ubuntu24 ollama[1239]: load_tensors: loading model tensors, this can take a while... (mmap = true)
Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.797+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
Aug 01 11:56:38 ubuntu24 ollama[1239]: load_tensors: offloading 40 repeating layers to GPU
Aug 01 11:56:38 ubuntu24 ollama[1239]: load_tensors: offloading output layer to GPU
Aug 01 11:56:38 ubuntu24 ollama[1239]: load_tensors: offloaded 41/41 layers to GPU
Aug 01 11:56:38 ubuntu24 ollama[1239]: load_tensors:        ROCm0 model buffer size =  8423.47 MiB
Aug 01 11:56:38 ubuntu24 ollama[1239]: load_tensors:   CPU_Mapped model buffer size =   417.30 MiB
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: constructing llama_context
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: n_seq_max     = 2
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: n_ctx         = 8192
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: n_ctx_per_seq = 4096
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: n_batch       = 1024
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: n_ubatch      = 512
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: causal_attn   = 1
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: flash_attn    = 0
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: freq_base     = 1000000.0
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: freq_scale    = 1
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: n_ctx_per_seq (4096) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context:  ROCm_Host  output buffer size =     1.20 MiB
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 40, can_shift = 1, padding = 32
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_kv_cache_unified:      ROCm0 KV buffer size =  1280.00 MiB
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_kv_cache_unified: KV self size  = 1280.00 MiB, K (f16):  640.00 MiB, V (f16):  640.00 MiB
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context:      ROCm0 compute buffer size =   696.00 MiB
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context:  ROCm_Host compute buffer size =    26.01 MiB
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: graph nodes  = 1526
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: graph splits = 2
Aug 01 11:56:38 ubuntu24 ollama[1239]: time=2025-08-01T11:56:38.805+08:00 level=INFO source=server.go:637 msg="llama runner started in 4.76 seconds"
Aug 01 11:56:38 ubuntu24 ollama[1239]: rocblaslt error: Cannot read /usr/local/lib/ollama/rocm/hipblaslt/library/TensileLibrary_lazy_gfx1201.dat: No such file or directory
Aug 01 11:56:38 ubuntu24 ollama[1239]: rocblaslt error: Could not load /usr/local/lib/ollama/rocm/hipblaslt/library/TensileLibrary_lazy_gfx1201.dat
Aug 01 11:56:38 ubuntu24 ollama[1239]: hipBLASLt error: Heuristic Fetch Failed!
Aug 01 11:56:38 ubuntu24 ollama[1239]: This message will be only be displayed once, unless the ROCBLAS_VERBOSE_HIPBLASLT_ERROR environment variable is set.
Aug 01 11:56:38 ubuntu24 ollama[1239]: rocBLAS warning: hipBlasLT failed, falling back to tensile.
Aug 01 11:56:38 ubuntu24 ollama[1239]: This message will be only be displayed once, unless the ROCBLAS_VERBOSE_TENSILE_ERROR environment variable is set.
Aug 01 12:02:04 ubuntu24 ollama[1239]: time=2025-08-01T12:02:04.759+08:00 level=ERROR source=server.go:464 msg="llama runner terminated" error="signal: killed"
Aug 01 12:21:29 ubuntu24 systemd[1]: Stopping ollama.service - Ollama Service...
Aug 01 12:21:29 ubuntu24 systemd[1]: ollama.service: Deactivated successfully.
Aug 01 12:21:29 ubuntu24 systemd[1]: Stopped ollama.service - Ollama Service.
Aug 01 12:21:29 ubuntu24 systemd[1]: ollama.service: Consumed 28.547s CPU time, 9.4G memory peak, 0B memory swap peak.
Aug 01 12:21:29 ubuntu24 systemd[1]: Started ollama.service - Ollama Service.
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.668+08:00 level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.670+08:00 level=INFO source=images.go:476 msg="total blobs: 21"
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.670+08:00 level=INFO source=images.go:483 msg="total unused blobs removed: 0"
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.670+08:00 level=INFO source=routes.go:1288 msg="Listening on [::]:11434 (version 0.9.6)"
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.670+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.679+08:00 level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=GPU-da4a13daba57996a gpu_type=gfx1201
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.679+08:00 level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=1 total="512.0 MiB"
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.681+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-da4a13daba57996a library=rocm variant="" compute=gfx1201 driver=6.12 name=1002:7550 total="15.9 GiB" available="15.4 GiB"
Aug 01 12:22:06 ubuntu24 systemd[1]: Stopping ollama.service - Ollama Service...
Aug 01 12:22:06 ubuntu24 systemd[1]: ollama.service: Deactivated successfully.
Aug 01 12:22:06 ubuntu24 systemd[1]: Stopped ollama.service - Ollama Service.
Aug 01 12:22:06 ubuntu24 systemd[1]: Started ollama.service - Ollama Service.
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.223+08:00 level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=INFO source=images.go:476 msg="total blobs: 21"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=INFO source=images.go:483 msg="total unused blobs removed: 0"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=INFO source=routes.go:1288 msg="Listening on [::]:11434 (version 0.9.6)"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=DEBUG source=sched.go:108 msg="starting llm scheduler"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/local/lib/ollama/libcuda.so* /libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.226+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[]
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.226+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcudart.so*
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.226+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/local/lib/ollama/libcudart.so* /libcudart.so* /usr/local/lib/ollama/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.226+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/local/lib/ollama/libcudart.so.12.8.90]
Aug 01 12:22:06 ubuntu24 ollama[29820]: cudaSetDevice err: 35
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.226+08:00 level=DEBUG source=gpu.go:574 msg="Unable to load cudart library /usr/local/lib/ollama/libcudart.so.12.8.90: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/0/properties"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:121 msg="detected CPU /sys/class/kfd/kfd/topology/nodes/0/properties"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/1/properties"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:206 msg="mapping amdgpu to drm sysfs nodes" amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties vendor=4098 device=30032 unique_id=15729406478694979946
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:219 msg="failed to read sysfs node" file=/sys/class/drm/card0-HDMI-A-2/device/vendor error="open /sys/class/drm/card0-HDMI-A-2/device/vendor: no such file or directory"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:219 msg="failed to read sysfs node" file=/sys/class/drm/card0-Writeback-2/device/vendor error="open /sys/class/drm/card0-Writeback-2/device/vendor: no such file or directory"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:240 msg=matched amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties drm=/sys/class/drm/card1/device
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:318 msg="amdgpu memory" gpu=0 total="15.9 GiB"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:319 msg="amdgpu memory" gpu=0 available="15.4 GiB"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/local/lib/ollama/rocm"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_common.go:44 msg="detected ROCM next to ollama executable /usr/local/lib/ollama/rocm"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:371 msg="rocm supported GPUs" types="[gfx1010 gfx1012 gfx1030 gfx1100 gfx1101 gfx1102 gfx1151 gfx1200 gfx1201 gfx900 gfx906 gfx908 gfx90a gfx942]"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=GPU-da4a13daba57996a gpu_type=gfx1201
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.228+08:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/2/properties"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.228+08:00 level=DEBUG source=amd_linux.go:206 msg="mapping amdgpu to drm sysfs nodes" amdgpu=/sys/class/kfd/kfd/topology/nodes/2/properties vendor=4098 device=5056 unique_id=0
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.228+08:00 level=DEBUG source=amd_linux.go:240 msg=matched amdgpu=/sys/class/kfd/kfd/topology/nodes/2/properties drm=/sys/class/drm/card0/device
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.228+08:00 level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=1 total="512.0 MiB"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.229+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-da4a13daba57996a library=rocm variant="" compute=gfx1201 driver=6.12 name=1002:7550 total="15.9 GiB" available="15.4 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.480+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="30.5 GiB" before.free="28.1 GiB" before.free_swap="8.0 GiB" now.total="30.5 GiB" now.free="28.1 GiB" now.free_swap="8.0 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.480+08:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-da4a13daba57996a name=1002:7550 before="15.4 GiB" now="15.4 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.480+08:00 level=DEBUG source=sched.go:185 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.486+08:00 level=DEBUG source=ggml.go:206 msg="key with type not found" key=general.alignment default=32
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=sched.go:228 msg="loading first model" model=/usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=memory.go:111 msg=evaluating library=rocm gpu_count=1 available="[15.4 GiB]"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=ggml.go:206 msg="key with type not found" key=qwen3.vision.block_count default=0
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e gpu=GPU-da4a13daba57996a parallel=2 available=16551673856 required="11.2 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="30.5 GiB" before.free="28.1 GiB" before.free_swap="8.0 GiB" now.total="30.5 GiB" now.free="28.1 GiB" now.free_swap="8.0 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-da4a13daba57996a name=1002:7550 before="15.4 GiB" now="15.4 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=INFO source=server.go:135 msg="system memory" total="30.5 GiB" free="28.1 GiB" free_swap="8.0 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=memory.go:111 msg=evaluating library=rocm gpu_count=1 available="[15.4 GiB]"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=ggml.go:206 msg="key with type not found" key=qwen3.vision.block_count default=0
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=INFO source=server.go:175 msg=offload library=rocm layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[15.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.2 GiB" memory.required.partial="11.2 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="[11.2 GiB]" memory.weights.total="8.2 GiB" memory.weights.repeating="7.6 GiB" memory.weights.nonrepeating="608.6 MiB" memory.graph.full="1.0 GiB" memory.graph.partial="1.0 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=server.go:291 msg="compatible gpu libraries" compatible=[rocm]
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: loaded meta data with 27 key-value pairs and 443 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e (version GGUF V3 (latest))
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv   0:                       general.architecture str              = qwen3
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv   1:                               general.type str              = model
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv   2:                               general.name str              = Qwen3 14B
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv   3:                           general.basename str              = Qwen3
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv   4:                         general.size_label str              = 14B
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv   5:                          qwen3.block_count u32              = 40
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv   6:                       qwen3.context_length u32              = 40960
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv   7:                     qwen3.embedding_length u32              = 5120
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv   8:                  qwen3.feed_forward_length u32              = 17408
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv   9:                 qwen3.attention.head_count u32              = 40
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  10:              qwen3.attention.head_count_kv u32              = 8
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  11:                       qwen3.rope.freq_base f32              = 1000000.000000
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  12:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  13:                 qwen3.attention.key_length u32              = 128
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  14:               qwen3.attention.value_length u32              = 128
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  25:               general.quantization_version u32              = 2
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv  26:                          general.file_type u32              = 15
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - type  f32:  161 tensors
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - type  f16:   40 tensors
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - type q4_K:  221 tensors
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - type q6_K:   21 tensors
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: file format = GGUF V3 (latest)
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: file type   = Q4_K - Medium
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: file size   = 8.63 GiB (5.02 BPW)
Aug 01 12:24:00 ubuntu24 ollama[29820]: init_tokenizer: initializing tokenizer for type 2
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151660 '<|fim_middle|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151659 '<|fim_prefix|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151653 '<|vision_end|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151648 '<|box_start|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151646 '<|object_ref_start|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151649 '<|box_end|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151655 '<|image_pad|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151651 '<|quad_end|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151647 '<|object_ref_end|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151652 '<|vision_start|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151654 '<|vision_pad|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151656 '<|video_pad|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151644 '<|im_start|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151661 '<|fim_suffix|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151650 '<|quad_start|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: special tokens cache size = 26
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: token to piece cache size = 0.9311 MB
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: arch             = qwen3
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: vocab_only       = 1
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: model type       = ?B
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: model params     = 14.77 B
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: general.name     = Qwen3 14B
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: vocab type       = BPE
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: n_vocab          = 151936
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: n_merges         = 151387
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: BOS token        = 151643 '<|endoftext|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: EOS token        = 151645 '<|im_end|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: EOT token        = 151645 '<|im_end|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: PAD token        = 151643 '<|endoftext|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: LF token         = 198 'Ċ'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: FIM MID token    = 151660 '<|fim_middle|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: FIM PAD token    = 151662 '<|fim_pad|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: FIM REP token    = 151663 '<|repo_name|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: FIM SEP token    = 151664 '<|file_sep|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: EOG token        = 151643 '<|endoftext|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: EOG token        = 151645 '<|im_end|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: EOG token        = 151662 '<|fim_pad|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: EOG token        = 151663 '<|repo_name|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: EOG token        = 151664 '<|file_sep|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: max token length = 256
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_load: vocab only - skipping tensors
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.594+08:00 level=DEBUG source=server.go:367 msg="adding gpu library" path=/usr/local/lib/ollama/rocm
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.594+08:00 level=DEBUG source=server.go:374 msg="adding gpu dependency paths" paths=[/usr/local/lib/ollama/rocm]
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.594+08:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e --ctx-size 8192 --batch-size 512 --n-gpu-layers 41 --threads 6 --parallel 2 --port 40421"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.594+08:00 level=DEBUG source=server.go:439 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=1 OLLAMA_MAX_LOADED_MODELS=3 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/rocm LD_LIBRARY_PATH=/usr/local/lib/ollama/rocm:/usr/local/lib/ollama/rocm:/usr/local/lib/ollama:/usr/local/lib/ollama ROCR_VISIBLE_DEVICES=GPU-da4a13daba57996a
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.594+08:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.594+08:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.594+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.599+08:00 level=INFO source=runner.go:815 msg="starting go runner"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.599+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Aug 01 12:24:01 ubuntu24 ollama[29820]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Aug 01 12:24:01 ubuntu24 ollama[29820]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Aug 01 12:24:01 ubuntu24 ollama[29820]: ggml_cuda_init: found 1 ROCm devices:
Aug 01 12:24:01 ubuntu24 ollama[29820]:   Device 0: AMD Radeon RX 9070, gfx1201 (0x1201), VMM: no, Wave Size: 32
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_backend: loaded ROCm backend from /usr/local/lib/ollama/libggml-hip.so
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.066+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/rocm
Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.066+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon RX 9070) - 15776 MiB free
Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.066+08:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:40421"
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: loaded meta data with 27 key-value pairs and 443 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e (version GGUF V3 (latest))
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv   0:                       general.architecture str              = qwen3
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv   1:                               general.type str              = model
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv   2:                               general.name str              = Qwen3 14B
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv   3:                           general.basename str              = Qwen3
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv   4:                         general.size_label str              = 14B
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv   5:                          qwen3.block_count u32              = 40
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv   6:                       qwen3.context_length u32              = 40960
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv   7:                     qwen3.embedding_length u32              = 5120
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv   8:                  qwen3.feed_forward_length u32              = 17408
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv   9:                 qwen3.attention.head_count u32              = 40
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  10:              qwen3.attention.head_count_kv u32              = 8
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  11:                       qwen3.rope.freq_base f32              = 1000000.000000
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  12:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  13:                 qwen3.attention.key_length u32              = 128
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  14:               qwen3.attention.value_length u32              = 128
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.096+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  25:               general.quantization_version u32              = 2
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv  26:                          general.file_type u32              = 15
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - type  f32:  161 tensors
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - type  f16:   40 tensors
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - type q4_K:  221 tensors
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - type q6_K:   21 tensors
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: file format = GGUF V3 (latest)
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: file type   = Q4_K - Medium
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: file size   = 8.63 GiB (5.02 BPW)
Aug 01 12:24:01 ubuntu24 ollama[29820]: init_tokenizer: initializing tokenizer for type 2
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151660 '<|fim_middle|>' is not marked as EOG
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151659 '<|fim_prefix|>' is not marked as EOG
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151653 '<|vision_end|>' is not marked as EOG
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151648 '<|box_start|>' is not marked as EOG
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151646 '<|object_ref_start|>' is not marked as EOG
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151649 '<|box_end|>' is not marked as EOG
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151655 '<|image_pad|>' is not marked as EOG
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151651 '<|quad_end|>' is not marked as EOG
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151647 '<|object_ref_end|>' is not marked as EOG
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151652 '<|vision_start|>' is not marked as EOG
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151654 '<|vision_pad|>' is not marked as EOG
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151656 '<|video_pad|>' is not marked as EOG
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151644 '<|im_start|>' is not marked as EOG
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151661 '<|fim_suffix|>' is not marked as EOG
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151650 '<|quad_start|>' is not marked as EOG
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: special tokens cache size = 26
Aug 01 12:24:01 ubuntu24 ollama[29820]: load: token to piece cache size = 0.9311 MB
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: arch             = qwen3
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: vocab_only       = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_ctx_train      = 40960
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_embd           = 5120
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_layer          = 40
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_head           = 40
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_head_kv        = 8
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_rot            = 128
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_swa            = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_swa_pattern    = 1
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_embd_head_k    = 128
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_embd_head_v    = 128
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_gqa            = 5
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_embd_k_gqa     = 1024
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_embd_v_gqa     = 1024
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: f_norm_eps       = 0.0e+00
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: f_norm_rms_eps   = 1.0e-06
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: f_clamp_kqv      = 0.0e+00
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: f_max_alibi_bias = 0.0e+00
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: f_logit_scale    = 0.0e+00
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: f_attn_scale     = 0.0e+00
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_ff             = 17408
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_expert         = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_expert_used    = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: causal attn      = 1
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: pooling type     = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: rope type        = 2
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: rope scaling     = linear
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: freq_base_train  = 1000000.0
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: freq_scale_train = 1
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_ctx_orig_yarn  = 40960
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: rope_finetuned   = unknown
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: ssm_d_conv       = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: ssm_d_inner      = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: ssm_d_state      = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: ssm_dt_rank      = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: ssm_dt_b_c_rms   = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: model type       = 14B
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: model params     = 14.77 B
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: general.name     = Qwen3 14B
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: vocab type       = BPE
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_vocab          = 151936
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_merges         = 151387
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: BOS token        = 151643 '<|endoftext|>'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: EOS token        = 151645 '<|im_end|>'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: EOT token        = 151645 '<|im_end|>'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: PAD token        = 151643 '<|endoftext|>'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: LF token         = 198 'Ċ'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: FIM MID token    = 151660 '<|fim_middle|>'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: FIM PAD token    = 151662 '<|fim_pad|>'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: FIM REP token    = 151663 '<|repo_name|>'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: FIM SEP token    = 151664 '<|file_sep|>'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: EOG token        = 151643 '<|endoftext|>'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: EOG token        = 151645 '<|im_end|>'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: EOG token        = 151662 '<|fim_pad|>'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: EOG token        = 151663 '<|repo_name|>'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: EOG token        = 151664 '<|file_sep|>'
Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: max token length = 256
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: loading model tensors, this can take a while... (mmap = true)
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer   0 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer   1 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer   2 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer   3 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer   4 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer   5 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer   6 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer   7 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer   8 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer   9 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  10 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  11 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  12 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  13 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  14 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  15 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  16 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  17 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  18 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  19 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  20 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  21 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  22 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  23 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  24 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  25 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  26 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  27 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  28 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  29 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  30 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  31 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  32 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  33 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  34 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  35 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  36 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  37 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  38 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  39 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer  40 assigned to device ROCm0, is_swa = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: tensor 'token_embd.weight' (q4_K) (and 0 others) cannot be used with preferred buffer type ROCm_Host, using CPU instead
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: offloading 40 repeating layers to GPU
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: offloading output layer to GPU
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: offloaded 41/41 layers to GPU
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors:        ROCm0 model buffer size =  8423.47 MiB
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors:   CPU_Mapped model buffer size =   417.30 MiB
Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.598+08:00 level=DEBUG source=server.go:643 msg="model load progress 0.59"
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: constructing llama_context
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: n_seq_max     = 2
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: n_ctx         = 8192
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: n_ctx_per_seq = 4096
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: n_batch       = 1024
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: n_ubatch      = 512
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: causal_attn   = 1
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: flash_attn    = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: freq_base     = 1000000.0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: freq_scale    = 1
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: n_ctx_per_seq (4096) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
Aug 01 12:24:01 ubuntu24 ollama[29820]: set_abort_callback: call
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context:  ROCm_Host  output buffer size =     1.20 MiB
Aug 01 12:24:01 ubuntu24 ollama[29820]: create_memory: n_ctx = 8192 (padded)
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 40, can_shift = 1, padding = 32
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer   0: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer   1: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer   2: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer   3: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer   4: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer   5: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer   6: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer   7: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer   8: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer   9: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  10: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  11: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  12: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  13: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  14: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  15: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  16: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  17: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  18: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  19: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  20: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  21: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  22: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  23: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  24: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  25: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  26: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  27: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  28: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  29: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  30: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  31: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  32: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  33: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  34: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  35: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  36: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  37: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  38: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer  39: dev = ROCm0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified:      ROCm0 KV buffer size =  1280.00 MiB
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: KV self size  = 1280.00 MiB, K (f16):  640.00 MiB, V (f16):  640.00 MiB
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: enumerating backends
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: backend_ptrs.size() = 2
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: max_nodes = 65536
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: worst-case: n_tokens = 512, n_seqs = 1, n_outputs = 0
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: reserving graph for n_tokens = 512, n_seqs = 1
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: reserving graph for n_tokens = 1, n_seqs = 1
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: reserving graph for n_tokens = 512, n_seqs = 1
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context:      ROCm0 compute buffer size =   696.00 MiB
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context:  ROCm_Host compute buffer size =    26.01 MiB
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: graph nodes  = 1526
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: graph splits = 2
Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.849+08:00 level=INFO source=server.go:637 msg="llama runner started in 1.25 seconds"
Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.849+08:00 level=DEBUG source=sched.go:495 msg="finished setting up" runner.name=registry.ollama.ai/library/qwen3:14b-q4_K_M runner.inference=rocm runner.devices=1 runner.size="11.2 GiB" runner.vram="11.2 GiB" runner.parallel=2 runner.pid=29845 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e runner.num_ctx=8192
Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.849+08:00 level=DEBUG source=server.go:736 msg="completion request" images=0 prompt=78 format=""
Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.851+08:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=13 used=0 remaining=13
Aug 01 12:24:01 ubuntu24 ollama[29820]: rocblaslt error: Cannot read /usr/local/lib/ollama/rocm/hipblaslt/library/TensileLibrary_lazy_gfx1201.dat: No such file or directory
Aug 01 12:24:01 ubuntu24 ollama[29820]: rocblaslt error: Could not load /usr/local/lib/ollama/rocm/hipblaslt/library/TensileLibrary_lazy_gfx1201.dat
Aug 01 12:24:01 ubuntu24 ollama[29820]: hipBLASLt error: Heuristic Fetch Failed!
Aug 01 12:24:01 ubuntu24 ollama[29820]: This message will be only be displayed once, unless the ROCBLAS_VERBOSE_HIPBLASLT_ERROR environment variable is set.
Aug 01 12:24:01 ubuntu24 ollama[29820]: rocBLAS warning: hipBlasLT failed, falling back to tensile.
Aug 01 12:24:01 ubuntu24 ollama[29820]: This message will be only be displayed once, unless the ROCBLAS_VERBOSE_TENSILE_ERROR environment variable is set.
Aug 01 12:24:31 ubuntu24 ollama[29820]: time=2025-08-01T12:24:31.610+08:00 level=DEBUG source=sched.go:503 msg="context for request finished"
Aug 01 12:24:31 ubuntu24 ollama[29820]: time=2025-08-01T12:24:31.610+08:00 level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/qwen3:14b-q4_K_M runner.inference=rocm runner.devices=1 runner.size="11.2 GiB" runner.vram="11.2 GiB" runner.parallel=2 runner.pid=29845 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e runner.num_ctx=8192 duration=5m0s
Aug 01 12:24:31 ubuntu24 ollama[29820]: time=2025-08-01T12:24:31.610+08:00 level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/qwen3:14b-q4_K_M runner.inference=rocm runner.devices=1 runner.size="11.2 GiB" runner.vram="11.2 GiB" runner.parallel=2 runner.pid=29845 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e runner.num_ctx=8192 refCount=0

<!-- gh-comment-id:3142101399 --> @inpure commented on GitHub (Aug 1, 2025): Thanks for reply, Yes I installed follw the recommeded command: curl -fsSL https://ollama.com/install.sh | sh ## server log ``` Aug 01 09:43:39 ubuntu24 systemd[1]: Started ollama.service - Ollama Service. Aug 01 09:43:40 ubuntu24 ollama[1239]: time=2025-08-01T09:43:40.019+08:00 level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" Aug 01 09:43:40 ubuntu24 ollama[1239]: time=2025-08-01T09:43:40.023+08:00 level=INFO source=images.go:476 msg="total blobs: 21" Aug 01 09:43:40 ubuntu24 ollama[1239]: time=2025-08-01T09:43:40.023+08:00 level=INFO source=images.go:483 msg="total unused blobs removed: 0" Aug 01 09:43:40 ubuntu24 ollama[1239]: time=2025-08-01T09:43:40.023+08:00 level=INFO source=routes.go:1288 msg="Listening on [::]:11434 (version 0.9.6)" Aug 01 09:43:40 ubuntu24 ollama[1239]: time=2025-08-01T09:43:40.024+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs" Aug 01 09:43:40 ubuntu24 ollama[1239]: 
time=2025-08-01T09:43:40.033+08:00 level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=GPU-da4a13daba57996a gpu_type=gfx1201 Aug 01 09:43:40 ubuntu24 ollama[1239]: time=2025-08-01T09:43:40.033+08:00 level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=1 total="512.0 MiB" Aug 01 09:43:40 ubuntu24 ollama[1239]: time=2025-08-01T09:43:40.034+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-da4a13daba57996a library=rocm variant="" compute=gfx1201 driver=6.12 name=1002:7550 total="15.9 GiB" available="15.4 GiB" Aug 01 11:53:16 ubuntu24 ollama[1239]: [GIN] 2025/08/01 - 11:53:16 | 200 | 436.836µs | 127.0.0.1 | HEAD "/" Aug 01 11:53:16 ubuntu24 ollama[1239]: [GIN] 2025/08/01 - 11:53:16 | 200 | 2.340467ms | 127.0.0.1 | GET "/api/tags" Aug 01 11:56:33 ubuntu24 ollama[1239]: time=2025-08-01T11:56:33.947+08:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e gpu=GPU-da4a13daba57996a parallel=2 available=16551673856 required="11.2 GiB" Aug 01 11:56:33 ubuntu24 ollama[1239]: time=2025-08-01T11:56:33.948+08:00 level=INFO source=server.go:135 msg="system memory" total="30.5 GiB" free="28.1 GiB" free_swap="8.0 GiB" Aug 01 11:56:33 ubuntu24 ollama[1239]: time=2025-08-01T11:56:33.948+08:00 level=INFO source=server.go:175 msg=offload library=rocm layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[15.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.2 GiB" memory.required.partial="11.2 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="[11.2 GiB]" memory.weights.total="8.2 GiB" memory.weights.repeating="7.6 GiB" memory.weights.nonrepeating="608.6 MiB" memory.graph.full="1.0 GiB" memory.graph.partial="1.0 GiB" Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: loaded meta data with 27 key-value 
pairs and 443 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e (version GGUF V3 (latest)) Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 0: general.architecture str = qwen3 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 1: general.type str = model Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 2: general.name str = Qwen3 14B Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 3: general.basename str = Qwen3 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 4: general.size_label str = 14B Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 5: qwen3.block_count u32 = 40 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 6: qwen3.context_length u32 = 40960 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 7: qwen3.embedding_length u32 = 5120 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 8: qwen3.feed_forward_length u32 = 17408 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 9: qwen3.attention.head_count u32 = 40 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 10: qwen3.attention.head_count_kv u32 = 8 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 11: qwen3.rope.freq_base f32 = 1000000.000000 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 12: qwen3.attention.layer_norm_rms_epsilon f32 = 0.000001 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 13: qwen3.attention.key_length u32 = 128 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 14: qwen3.attention.value_length u32 = 128 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 15: tokenizer.ggml.model str = gpt2 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 
16: tokenizer.ggml.pre str = qwen2 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ... Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 19: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151645 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 151643 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 23: tokenizer.ggml.add_bos_token bool = false Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 24: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>... 
Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 25: general.quantization_version u32 = 2 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - kv 26: general.file_type u32 = 15 Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - type f32: 161 tensors Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - type f16: 40 tensors Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - type q4_K: 221 tensors Aug 01 11:56:33 ubuntu24 ollama[1239]: llama_model_loader: - type q6_K: 21 tensors Aug 01 11:56:33 ubuntu24 ollama[1239]: print_info: file format = GGUF V3 (latest) Aug 01 11:56:33 ubuntu24 ollama[1239]: print_info: file type = Q4_K - Medium Aug 01 11:56:33 ubuntu24 ollama[1239]: print_info: file size = 8.63 GiB (5.02 BPW) Aug 01 11:56:34 ubuntu24 ollama[1239]: load: special tokens cache size = 26 Aug 01 11:56:34 ubuntu24 ollama[1239]: load: token to piece cache size = 0.9311 MB Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: arch = qwen3 Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: vocab_only = 1 Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: model type = ?B Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: model params = 14.77 B Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: general.name = Qwen3 14B Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: vocab type = BPE Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_vocab = 151936 Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_merges = 151387 Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: BOS token = 151643 '<|endoftext|>' Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOS token = 151645 '<|im_end|>' Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOT token = 151645 '<|im_end|>' Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: PAD token = 151643 '<|endoftext|>' Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: LF token = 198 'Ċ' Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM PRE token = 151659 '<|fim_prefix|>' Aug 01 11:56:34 
ubuntu24 ollama[1239]: print_info: FIM SUF token = 151661 '<|fim_suffix|>' Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM MID token = 151660 '<|fim_middle|>' Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM PAD token = 151662 '<|fim_pad|>' Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM REP token = 151663 '<|repo_name|>' Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM SEP token = 151664 '<|file_sep|>' Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token = 151643 '<|endoftext|>' Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token = 151645 '<|im_end|>' Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token = 151662 '<|fim_pad|>' Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token = 151663 '<|repo_name|>' Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token = 151664 '<|file_sep|>' Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: max token length = 256 Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_load: vocab only - skipping tensors Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.043+08:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e --ctx-size 8192 --batch-size 512 --n-gpu-layers 41 --threads 6 --parallel 2 --port 46397" Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.043+08:00 level=INFO source=sched.go:483 msg="loaded runners" count=1 Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.043+08:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding" Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.044+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding" Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.048+08:00 level=INFO source=runner.go:815 msg="starting go 
runner" Aug 01 11:56:34 ubuntu24 ollama[1239]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no Aug 01 11:56:34 ubuntu24 ollama[1239]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no Aug 01 11:56:34 ubuntu24 ollama[1239]: ggml_cuda_init: found 1 ROCm devices: Aug 01 11:56:34 ubuntu24 ollama[1239]: Device 0: AMD Radeon RX 9070, gfx1201 (0x1201), VMM: no, Wave Size: 32 Aug 01 11:56:34 ubuntu24 ollama[1239]: load_backend: loaded ROCm backend from /usr/local/lib/ollama/libggml-hip.so Aug 01 11:56:34 ubuntu24 ollama[1239]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.697+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.698+08:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:46397" Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon RX 9070) - 15776 MiB free Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: loaded meta data with 27 key-value pairs and 443 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e (version GGUF V3 (latest)) Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 0: general.architecture str = qwen3 Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 1: general.type str = model Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 2: general.name str = Qwen3 14B Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 3: general.basename str = Qwen3 Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 4: general.size_label str = 14B Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 5: qwen3.block_count u32 = 40 Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 6: qwen3.context_length u32 = 40960 Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 7: qwen3.embedding_length u32 = 5120 Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 8: qwen3.feed_forward_length u32 = 17408 Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 9: qwen3.attention.head_count u32 = 40 Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 10: qwen3.attention.head_count_kv u32 = 8 Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 11: qwen3.rope.freq_base f32 = 1000000.000000 Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 12: qwen3.attention.layer_norm_rms_epsilon f32 = 0.000001 Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 13: qwen3.attention.key_length u32 = 128 Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 14: qwen3.attention.value_length u32 = 128 Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 15: tokenizer.ggml.model str = gpt2 Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 16: tokenizer.ggml.pre str = qwen2 Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ... 
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 19: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151645
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 151643
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 23: tokenizer.ggml.add_bos_token bool = false
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 24: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 25: general.quantization_version u32 = 2
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - kv 26: general.file_type u32 = 15
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - type f32: 161 tensors
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - type f16: 40 tensors
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - type q4_K: 221 tensors
Aug 01 11:56:34 ubuntu24 ollama[1239]: llama_model_loader: - type q6_K: 21 tensors
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: file format = GGUF V3 (latest)
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: file type = Q4_K - Medium
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: file size = 8.63 GiB (5.02 BPW)
Aug 01 11:56:34 ubuntu24 ollama[1239]: load: special tokens cache size = 26
Aug 01 11:56:34 ubuntu24 ollama[1239]: load: token to piece cache size = 0.9311 MB
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: arch = qwen3
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: vocab_only = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_ctx_train = 40960
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_embd = 5120
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_layer = 40
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_head = 40
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_head_kv = 8
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_rot = 128
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_swa = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_swa_pattern = 1
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_embd_head_k = 128
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_embd_head_v = 128
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_gqa = 5
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_embd_k_gqa = 1024
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_embd_v_gqa = 1024
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: f_norm_eps = 0.0e+00
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: f_norm_rms_eps = 1.0e-06
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: f_clamp_kqv = 0.0e+00
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: f_max_alibi_bias = 0.0e+00
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: f_logit_scale = 0.0e+00
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: f_attn_scale = 0.0e+00
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_ff = 17408
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_expert = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_expert_used = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: causal attn = 1
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: pooling type = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: rope type = 2
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: rope scaling = linear
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: freq_base_train = 1000000.0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: freq_scale_train = 1
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_ctx_orig_yarn = 40960
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: rope_finetuned = unknown
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: ssm_d_conv = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: ssm_d_inner = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: ssm_d_state = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: ssm_dt_rank = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: ssm_dt_b_c_rms = 0
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: model type = 14B
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: model params = 14.77 B
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: general.name = Qwen3 14B
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: vocab type = BPE
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_vocab = 151936
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: n_merges = 151387
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: BOS token = 151643 '<|endoftext|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOS token = 151645 '<|im_end|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOT token = 151645 '<|im_end|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: PAD token = 151643 '<|endoftext|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: LF token = 198 'Ċ'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM PRE token = 151659 '<|fim_prefix|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM SUF token = 151661 '<|fim_suffix|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM MID token = 151660 '<|fim_middle|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM PAD token = 151662 '<|fim_pad|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM REP token = 151663 '<|repo_name|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: FIM SEP token = 151664 '<|file_sep|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token = 151643 '<|endoftext|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token = 151645 '<|im_end|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token = 151662 '<|fim_pad|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token = 151663 '<|repo_name|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: EOG token = 151664 '<|file_sep|>'
Aug 01 11:56:34 ubuntu24 ollama[1239]: print_info: max token length = 256
Aug 01 11:56:34 ubuntu24 ollama[1239]: load_tensors: loading model tensors, this can take a while... (mmap = true)
Aug 01 11:56:34 ubuntu24 ollama[1239]: time=2025-08-01T11:56:34.797+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
Aug 01 11:56:38 ubuntu24 ollama[1239]: load_tensors: offloading 40 repeating layers to GPU
Aug 01 11:56:38 ubuntu24 ollama[1239]: load_tensors: offloading output layer to GPU
Aug 01 11:56:38 ubuntu24 ollama[1239]: load_tensors: offloaded 41/41 layers to GPU
Aug 01 11:56:38 ubuntu24 ollama[1239]: load_tensors: ROCm0 model buffer size = 8423.47 MiB
Aug 01 11:56:38 ubuntu24 ollama[1239]: load_tensors: CPU_Mapped model buffer size = 417.30 MiB
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: constructing llama_context
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: n_seq_max = 2
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: n_ctx = 8192
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: n_ctx_per_seq = 4096
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: n_batch = 1024
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: n_ubatch = 512
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: causal_attn = 1
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: flash_attn = 0
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: freq_base = 1000000.0
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: freq_scale = 1
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: n_ctx_per_seq (4096) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: ROCm_Host output buffer size = 1.20 MiB
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 40, can_shift = 1, padding = 32
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_kv_cache_unified: ROCm0 KV buffer size = 1280.00 MiB
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_kv_cache_unified: KV self size = 1280.00 MiB, K (f16): 640.00 MiB, V (f16): 640.00 MiB
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: ROCm0 compute buffer size = 696.00 MiB
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: ROCm_Host compute buffer size = 26.01 MiB
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: graph nodes = 1526
Aug 01 11:56:38 ubuntu24 ollama[1239]: llama_context: graph splits = 2
Aug 01 11:56:38 ubuntu24 ollama[1239]: time=2025-08-01T11:56:38.805+08:00 level=INFO source=server.go:637 msg="llama runner started in 4.76 seconds"
Aug 01 11:56:38 ubuntu24 ollama[1239]: rocblaslt error: Cannot read /usr/local/lib/ollama/rocm/hipblaslt/library/TensileLibrary_lazy_gfx1201.dat: No such file or directory
Aug 01 11:56:38 ubuntu24 ollama[1239]: rocblaslt error: Could not load /usr/local/lib/ollama/rocm/hipblaslt/library/TensileLibrary_lazy_gfx1201.dat
Aug 01 11:56:38 ubuntu24 ollama[1239]: hipBLASLt error: Heuristic Fetch Failed!
Aug 01 11:56:38 ubuntu24 ollama[1239]: This message will be only be displayed once, unless the ROCBLAS_VERBOSE_HIPBLASLT_ERROR environment variable is set.
Aug 01 11:56:38 ubuntu24 ollama[1239]: rocBLAS warning: hipBlasLT failed, falling back to tensile.
Aug 01 11:56:38 ubuntu24 ollama[1239]: This message will be only be displayed once, unless the ROCBLAS_VERBOSE_TENSILE_ERROR environment variable is set.
Aug 01 12:02:04 ubuntu24 ollama[1239]: time=2025-08-01T12:02:04.759+08:00 level=ERROR source=server.go:464 msg="llama runner terminated" error="signal: killed"
Aug 01 12:21:29 ubuntu24 systemd[1]: Stopping ollama.service - Ollama Service...
Aug 01 12:21:29 ubuntu24 systemd[1]: ollama.service: Deactivated successfully.
Aug 01 12:21:29 ubuntu24 systemd[1]: Stopped ollama.service - Ollama Service.
Aug 01 12:21:29 ubuntu24 systemd[1]: ollama.service: Consumed 28.547s CPU time, 9.4G memory peak, 0B memory swap peak.
Aug 01 12:21:29 ubuntu24 systemd[1]: Started ollama.service - Ollama Service.
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.668+08:00 level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.670+08:00 level=INFO source=images.go:476 msg="total blobs: 21"
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.670+08:00 level=INFO source=images.go:483 msg="total unused blobs removed: 0"
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.670+08:00 level=INFO source=routes.go:1288 msg="Listening on [::]:11434 (version 0.9.6)"
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.670+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.679+08:00 level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=GPU-da4a13daba57996a gpu_type=gfx1201
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.679+08:00 level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=1 total="512.0 MiB"
Aug 01 12:21:29 ubuntu24 ollama[29733]: time=2025-08-01T12:21:29.681+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-da4a13daba57996a library=rocm variant="" compute=gfx1201 driver=6.12 name=1002:7550 total="15.9 GiB" available="15.4 GiB"
Aug 01 12:22:06 ubuntu24 systemd[1]: Stopping ollama.service - Ollama Service...
Aug 01 12:22:06 ubuntu24 systemd[1]: ollama.service: Deactivated successfully.
Aug 01 12:22:06 ubuntu24 systemd[1]: Stopped ollama.service - Ollama Service.
Aug 01 12:22:06 ubuntu24 systemd[1]: Started ollama.service - Ollama Service.
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.223+08:00 level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=INFO source=images.go:476 msg="total blobs: 21"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=INFO source=images.go:483 msg="total unused blobs removed: 0"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=INFO source=routes.go:1288 msg="Listening on [::]:11434 (version 0.9.6)"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=DEBUG source=sched.go:108 msg="starting llm scheduler"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.224+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/local/lib/ollama/libcuda.so* /libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.226+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[]
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.226+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcudart.so*
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.226+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/local/lib/ollama/libcudart.so* /libcudart.so* /usr/local/lib/ollama/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.226+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/local/lib/ollama/libcudart.so.12.8.90]
Aug 01 12:22:06 ubuntu24 ollama[29820]: cudaSetDevice err: 35
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.226+08:00 level=DEBUG source=gpu.go:574 msg="Unable to load cudart library /usr/local/lib/ollama/libcudart.so.12.8.90: your nvidia driver is too old or missing. If you have a CUDA GPU please upgrade to run ollama"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/0/properties"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:121 msg="detected CPU /sys/class/kfd/kfd/topology/nodes/0/properties"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/1/properties"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:206 msg="mapping amdgpu to drm sysfs nodes" amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties vendor=4098 device=30032 unique_id=15729406478694979946
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:219 msg="failed to read sysfs node" file=/sys/class/drm/card0-HDMI-A-2/device/vendor error="open /sys/class/drm/card0-HDMI-A-2/device/vendor: no such file or directory"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:219 msg="failed to read sysfs node" file=/sys/class/drm/card0-Writeback-2/device/vendor error="open /sys/class/drm/card0-Writeback-2/device/vendor: no such file or directory"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:240 msg=matched amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties drm=/sys/class/drm/card1/device
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:318 msg="amdgpu memory" gpu=0 total="15.9 GiB"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:319 msg="amdgpu memory" gpu=0 available="15.4 GiB"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/local/lib/ollama/rocm"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_common.go:44 msg="detected ROCM next to ollama executable /usr/local/lib/ollama/rocm"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=DEBUG source=amd_linux.go:371 msg="rocm supported GPUs" types="[gfx1010 gfx1012 gfx1030 gfx1100 gfx1101 gfx1102 gfx1151 gfx1200 gfx1201 gfx900 gfx906 gfx908 gfx90a gfx942]"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.227+08:00 level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=GPU-da4a13daba57996a gpu_type=gfx1201
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.228+08:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/2/properties"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.228+08:00 level=DEBUG source=amd_linux.go:206 msg="mapping amdgpu to drm sysfs nodes" amdgpu=/sys/class/kfd/kfd/topology/nodes/2/properties vendor=4098 device=5056 unique_id=0
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.228+08:00 level=DEBUG source=amd_linux.go:240 msg=matched amdgpu=/sys/class/kfd/kfd/topology/nodes/2/properties drm=/sys/class/drm/card0/device
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.228+08:00 level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=1 total="512.0 MiB"
Aug 01 12:22:06 ubuntu24 ollama[29820]: time=2025-08-01T12:22:06.229+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-da4a13daba57996a library=rocm variant="" compute=gfx1201 driver=6.12 name=1002:7550 total="15.9 GiB" available="15.4 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.480+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="30.5 GiB" before.free="28.1 GiB" before.free_swap="8.0 GiB" now.total="30.5 GiB" now.free="28.1 GiB" now.free_swap="8.0 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.480+08:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-da4a13daba57996a name=1002:7550 before="15.4 GiB" now="15.4 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.480+08:00 level=DEBUG source=sched.go:185 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.486+08:00 level=DEBUG source=ggml.go:206 msg="key with type not found" key=general.alignment default=32
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=sched.go:228 msg="loading first model" model=/usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=memory.go:111 msg=evaluating library=rocm gpu_count=1 available="[15.4 GiB]"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=ggml.go:206 msg="key with type not found" key=qwen3.vision.block_count default=0
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e gpu=GPU-da4a13daba57996a parallel=2 available=16551673856 required="11.2 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="30.5 GiB" before.free="28.1 GiB" before.free_swap="8.0 GiB" now.total="30.5 GiB" now.free="28.1 GiB" now.free_swap="8.0 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-da4a13daba57996a name=1002:7550 before="15.4 GiB" now="15.4 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=INFO source=server.go:135 msg="system memory" total="30.5 GiB" free="28.1 GiB" free_swap="8.0 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=memory.go:111 msg=evaluating library=rocm gpu_count=1 available="[15.4 GiB]"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=ggml.go:206 msg="key with type not found" key=qwen3.vision.block_count default=0
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=INFO source=server.go:175 msg=offload library=rocm layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[15.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.2 GiB" memory.required.partial="11.2 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="[11.2 GiB]" memory.weights.total="8.2 GiB" memory.weights.repeating="7.6 GiB" memory.weights.nonrepeating="608.6 MiB" memory.graph.full="1.0 GiB" memory.graph.partial="1.0 GiB"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.500+08:00 level=DEBUG source=server.go:291 msg="compatible gpu libraries" compatible=[rocm]
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: loaded meta data with 27 key-value pairs and 443 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e (version GGUF V3 (latest))
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 0: general.architecture str = qwen3
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 1: general.type str = model
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 2: general.name str = Qwen3 14B
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 3: general.basename str = Qwen3
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 4: general.size_label str = 14B
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 5: qwen3.block_count u32 = 40
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 6: qwen3.context_length u32 = 40960
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 7: qwen3.embedding_length u32 = 5120
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 8: qwen3.feed_forward_length u32 = 17408
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 9: qwen3.attention.head_count u32 = 40
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 10: qwen3.attention.head_count_kv u32 = 8
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 11: qwen3.rope.freq_base f32 = 1000000.000000
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 12: qwen3.attention.layer_norm_rms_epsilon f32 = 0.000001
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 13: qwen3.attention.key_length u32 = 128
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 14: qwen3.attention.value_length u32 = 128
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 15: tokenizer.ggml.model str = gpt2
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 16: tokenizer.ggml.pre str = qwen2
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 19: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151645
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 151643
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 23: tokenizer.ggml.add_bos_token bool = false
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 24: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 25: general.quantization_version u32 = 2
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - kv 26: general.file_type u32 = 15
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - type f32: 161 tensors
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - type f16: 40 tensors
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - type q4_K: 221 tensors
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_loader: - type q6_K: 21 tensors
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: file format = GGUF V3 (latest)
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: file type = Q4_K - Medium
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: file size = 8.63 GiB (5.02 BPW)
Aug 01 12:24:00 ubuntu24 ollama[29820]: init_tokenizer: initializing tokenizer for type 2
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151660 '<|fim_middle|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151659 '<|fim_prefix|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151653 '<|vision_end|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151648 '<|box_start|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151646 '<|object_ref_start|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151649 '<|box_end|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151655 '<|image_pad|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151651 '<|quad_end|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151647 '<|object_ref_end|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151652 '<|vision_start|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151654 '<|vision_pad|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151656 '<|video_pad|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151644 '<|im_start|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151661 '<|fim_suffix|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: control token: 151650 '<|quad_start|>' is not marked as EOG
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: special tokens cache size = 26
Aug 01 12:24:00 ubuntu24 ollama[29820]: load: token to piece cache size = 0.9311 MB
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: arch = qwen3
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: vocab_only = 1
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: model type = ?B
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: model params = 14.77 B
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: general.name = Qwen3 14B
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: vocab type = BPE
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: n_vocab = 151936
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: n_merges = 151387
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: BOS token = 151643 '<|endoftext|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: EOS token = 151645 '<|im_end|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: EOT token = 151645 '<|im_end|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: PAD token = 151643 '<|endoftext|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: LF token = 198 'Ċ'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: FIM PRE token = 151659 '<|fim_prefix|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: FIM SUF token = 151661 '<|fim_suffix|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: FIM MID token = 151660 '<|fim_middle|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: FIM PAD token = 151662 '<|fim_pad|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: FIM REP token = 151663 '<|repo_name|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: FIM SEP token = 151664 '<|file_sep|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: EOG token = 151643 '<|endoftext|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: EOG token = 151645 '<|im_end|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: EOG token = 151662 '<|fim_pad|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: EOG token = 151663 '<|repo_name|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: EOG token = 151664 '<|file_sep|>'
Aug 01 12:24:00 ubuntu24 ollama[29820]: print_info: max token length = 256
Aug 01 12:24:00 ubuntu24 ollama[29820]: llama_model_load: vocab only - skipping tensors
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.594+08:00 level=DEBUG source=server.go:367 msg="adding gpu library" path=/usr/local/lib/ollama/rocm
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.594+08:00 level=DEBUG source=server.go:374 msg="adding gpu dependency paths" paths=[/usr/local/lib/ollama/rocm]
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.594+08:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e --ctx-size 8192 --batch-size 512 --n-gpu-layers 41 --threads 6 --parallel 2 --port 40421"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.594+08:00 level=DEBUG source=server.go:439 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=1 OLLAMA_MAX_LOADED_MODELS=3 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/rocm LD_LIBRARY_PATH=/usr/local/lib/ollama/rocm:/usr/local/lib/ollama/rocm:/usr/local/lib/ollama:/usr/local/lib/ollama ROCR_VISIBLE_DEVICES=GPU-da4a13daba57996a
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.594+08:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.594+08:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.594+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.599+08:00 level=INFO source=runner.go:815 msg="starting go runner"
Aug 01 12:24:00 ubuntu24 ollama[29820]: time=2025-08-01T12:24:00.599+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Aug 01 12:24:01 ubuntu24 ollama[29820]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Aug 01 12:24:01 ubuntu24 ollama[29820]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Aug 01 12:24:01 ubuntu24 ollama[29820]: ggml_cuda_init: found 1 ROCm devices:
Aug 01 12:24:01 ubuntu24 ollama[29820]: Device 0: AMD Radeon RX 9070, gfx1201 (0x1201), VMM: no, Wave Size: 32
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_backend: loaded ROCm backend from /usr/local/lib/ollama/libggml-hip.so
Aug 01 12:24:01 ubuntu24 ollama[29820]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.066+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/rocm
Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.066+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon RX 9070) - 15776 MiB free
Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.066+08:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:40421"
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: loaded meta data with 27 key-value pairs and 443 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e (version GGUF V3 (latest))
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 0: general.architecture str = qwen3
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 1: general.type str = model
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 2: general.name str = Qwen3 14B
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 3: general.basename str = Qwen3
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 4: general.size_label str = 14B
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 5: qwen3.block_count u32 = 40
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 6: qwen3.context_length u32 = 40960
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 7: qwen3.embedding_length u32 = 5120
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 8: qwen3.feed_forward_length u32 = 17408
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 9: qwen3.attention.head_count u32 = 40
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 10: qwen3.attention.head_count_kv u32 = 8
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 11: qwen3.rope.freq_base f32 = 1000000.000000
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 12: qwen3.attention.layer_norm_rms_epsilon f32 = 0.000001
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 13:
qwen3.attention.key_length u32 = 128 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 14: qwen3.attention.value_length u32 = 128 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 15: tokenizer.ggml.model str = gpt2 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 16: tokenizer.ggml.pre str = qwen2 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ... Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.096+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model" Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 19: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151645 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 151643 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 23: tokenizer.ggml.add_bos_token bool = false Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 24: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>... 
Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 25: general.quantization_version u32 = 2 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - kv 26: general.file_type u32 = 15 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - type f32: 161 tensors Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - type f16: 40 tensors Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - type q4_K: 221 tensors Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_model_loader: - type q6_K: 21 tensors Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: file format = GGUF V3 (latest) Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: file type = Q4_K - Medium Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: file size = 8.63 GiB (5.02 BPW) Aug 01 12:24:01 ubuntu24 ollama[29820]: init_tokenizer: initializing tokenizer for type 2 Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151660 '<|fim_middle|>' is not marked as EOG Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151659 '<|fim_prefix|>' is not marked as EOG Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151653 '<|vision_end|>' is not marked as EOG Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151648 '<|box_start|>' is not marked as EOG Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151646 '<|object_ref_start|>' is not marked as EOG Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151649 '<|box_end|>' is not marked as EOG Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151655 '<|image_pad|>' is not marked as EOG Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151651 '<|quad_end|>' is not marked as EOG Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151647 '<|object_ref_end|>' is not marked as EOG Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151652 '<|vision_start|>' is not marked as EOG Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 
151654 '<|vision_pad|>' is not marked as EOG Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151656 '<|video_pad|>' is not marked as EOG Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151644 '<|im_start|>' is not marked as EOG Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151661 '<|fim_suffix|>' is not marked as EOG Aug 01 12:24:01 ubuntu24 ollama[29820]: load: control token: 151650 '<|quad_start|>' is not marked as EOG Aug 01 12:24:01 ubuntu24 ollama[29820]: load: special tokens cache size = 26 Aug 01 12:24:01 ubuntu24 ollama[29820]: load: token to piece cache size = 0.9311 MB Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: arch = qwen3 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: vocab_only = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_ctx_train = 40960 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_embd = 5120 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_layer = 40 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_head = 40 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_head_kv = 8 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_rot = 128 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_swa_pattern = 1 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_embd_head_k = 128 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_embd_head_v = 128 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_gqa = 5 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_embd_k_gqa = 1024 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_embd_v_gqa = 1024 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: f_norm_eps = 0.0e+00 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: f_norm_rms_eps = 1.0e-06 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: f_clamp_kqv = 0.0e+00 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: f_max_alibi_bias = 0.0e+00 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: 
f_logit_scale = 0.0e+00 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: f_attn_scale = 0.0e+00 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_ff = 17408 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_expert = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_expert_used = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: causal attn = 1 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: pooling type = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: rope type = 2 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: rope scaling = linear Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: freq_base_train = 1000000.0 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: freq_scale_train = 1 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_ctx_orig_yarn = 40960 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: rope_finetuned = unknown Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: ssm_d_conv = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: ssm_d_inner = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: ssm_d_state = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: ssm_dt_rank = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: ssm_dt_b_c_rms = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: model type = 14B Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: model params = 14.77 B Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: general.name = Qwen3 14B Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: vocab type = BPE Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_vocab = 151936 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: n_merges = 151387 Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: BOS token = 151643 '<|endoftext|>' Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: EOS token = 151645 '<|im_end|>' Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: EOT token = 151645 '<|im_end|>' Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: PAD token = 151643 '<|endoftext|>' Aug 
01 12:24:01 ubuntu24 ollama[29820]: print_info: LF token = 198 'Ċ' Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: FIM PRE token = 151659 '<|fim_prefix|>' Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: FIM SUF token = 151661 '<|fim_suffix|>' Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: FIM MID token = 151660 '<|fim_middle|>' Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: FIM PAD token = 151662 '<|fim_pad|>' Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: FIM REP token = 151663 '<|repo_name|>' Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: FIM SEP token = 151664 '<|file_sep|>' Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: EOG token = 151643 '<|endoftext|>' Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: EOG token = 151645 '<|im_end|>' Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: EOG token = 151662 '<|fim_pad|>' Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: EOG token = 151663 '<|repo_name|>' Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: EOG token = 151664 '<|file_sep|>' Aug 01 12:24:01 ubuntu24 ollama[29820]: print_info: max token length = 256 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: loading model tensors, this can take a while... 
(mmap = true) Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 0 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 1 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 2 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 3 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 4 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 5 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 6 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 7 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 8 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 9 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 10 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 11 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 12 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 13 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 14 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 15 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 16 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 17 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 18 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 19 assigned to device ROCm0, is_swa 
= 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 20 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 21 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 22 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 23 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 24 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 25 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 26 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 27 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 28 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 29 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 30 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 31 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 32 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 33 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 34 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 35 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 36 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 37 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 38 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 39 assigned to device ROCm0, is_swa 
= 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: layer 40 assigned to device ROCm0, is_swa = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: tensor 'token_embd.weight' (q4_K) (and 0 others) cannot be used with preferred buffer type ROCm_Host, using CPU instead Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: offloading 40 repeating layers to GPU Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: offloading output layer to GPU Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: offloaded 41/41 layers to GPU Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: ROCm0 model buffer size = 8423.47 MiB Aug 01 12:24:01 ubuntu24 ollama[29820]: load_tensors: CPU_Mapped model buffer size = 417.30 MiB Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.598+08:00 level=DEBUG source=server.go:643 msg="model load progress 0.59" Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: constructing llama_context Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: n_seq_max = 2 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: n_ctx = 8192 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: n_ctx_per_seq = 4096 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: n_batch = 1024 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: n_ubatch = 512 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: causal_attn = 1 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: flash_attn = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: freq_base = 1000000.0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: freq_scale = 1 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: n_ctx_per_seq (4096) < n_ctx_train (40960) -- the full capacity of the model will not be utilized Aug 01 12:24:01 ubuntu24 ollama[29820]: set_abort_callback: call Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: ROCm_Host output buffer size = 1.20 MiB Aug 01 12:24:01 ubuntu24 ollama[29820]: create_memory: n_ctx = 8192 (padded) Aug 01 12:24:01 
ubuntu24 ollama[29820]: llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 40, can_shift = 1, padding = 32 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 0: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 1: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 2: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 3: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 4: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 5: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 6: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 7: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 8: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 9: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 10: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 11: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 12: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 13: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 14: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 15: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 16: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 17: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 18: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 19: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 20: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: 
layer 21: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 22: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 23: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 24: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 25: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 26: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 27: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 28: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 29: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 30: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 31: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 32: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 33: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 34: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 35: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 36: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 37: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 38: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: layer 39: dev = ROCm0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: ROCm0 KV buffer size = 1280.00 MiB Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_kv_cache_unified: KV self size = 1280.00 MiB, K (f16): 640.00 MiB, V (f16): 640.00 MiB Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: enumerating backends Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: backend_ptrs.size() = 2 Aug 01 12:24:01 ubuntu24 ollama[29820]: 
llama_context: max_nodes = 65536 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: worst-case: n_tokens = 512, n_seqs = 1, n_outputs = 0 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: reserving graph for n_tokens = 512, n_seqs = 1 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: reserving graph for n_tokens = 1, n_seqs = 1 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: reserving graph for n_tokens = 512, n_seqs = 1 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: ROCm0 compute buffer size = 696.00 MiB Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: ROCm_Host compute buffer size = 26.01 MiB Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: graph nodes = 1526 Aug 01 12:24:01 ubuntu24 ollama[29820]: llama_context: graph splits = 2 Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.849+08:00 level=INFO source=server.go:637 msg="llama runner started in 1.25 seconds" Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.849+08:00 level=DEBUG source=sched.go:495 msg="finished setting up" runner.name=registry.ollama.ai/library/qwen3:14b-q4_K_M runner.inference=rocm runner.devices=1 runner.size="11.2 GiB" runner.vram="11.2 GiB" runner.parallel=2 runner.pid=29845 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e runner.num_ctx=8192 Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.849+08:00 level=DEBUG source=server.go:736 msg="completion request" images=0 prompt=78 format="" Aug 01 12:24:01 ubuntu24 ollama[29820]: time=2025-08-01T12:24:01.851+08:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=13 used=0 remaining=13 Aug 01 12:24:01 ubuntu24 ollama[29820]: rocblaslt error: Cannot read /usr/local/lib/ollama/rocm/hipblaslt/library/TensileLibrary_lazy_gfx1201.dat: No such file or directory Aug 01 12:24:01 ubuntu24 ollama[29820]: rocblaslt error: Could not load 
/usr/local/lib/ollama/rocm/hipblaslt/library/TensileLibrary_lazy_gfx1201.dat Aug 01 12:24:01 ubuntu24 ollama[29820]: hipBLASLt error: Heuristic Fetch Failed! Aug 01 12:24:01 ubuntu24 ollama[29820]: This message will be only be displayed once, unless the ROCBLAS_VERBOSE_HIPBLASLT_ERROR environment variable is set. Aug 01 12:24:01 ubuntu24 ollama[29820]: rocBLAS warning: hipBlasLT failed, falling back to tensile. Aug 01 12:24:01 ubuntu24 ollama[29820]: This message will be only be displayed once, unless the ROCBLAS_VERBOSE_TENSILE_ERROR environment variable is set. Aug 01 12:24:31 ubuntu24 ollama[29820]: time=2025-08-01T12:24:31.610+08:00 level=DEBUG source=sched.go:503 msg="context for request finished" Aug 01 12:24:31 ubuntu24 ollama[29820]: time=2025-08-01T12:24:31.610+08:00 level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/qwen3:14b-q4_K_M runner.inference=rocm runner.devices=1 runner.size="11.2 GiB" runner.vram="11.2 GiB" runner.parallel=2 runner.pid=29845 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e runner.num_ctx=8192 duration=5m0s Aug 01 12:24:31 ubuntu24 ollama[29820]: time=2025-08-01T12:24:31.610+08:00 level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/qwen3:14b-q4_K_M runner.inference=rocm runner.devices=1 runner.size="11.2 GiB" runner.vram="11.2 GiB" runner.parallel=2 runner.pid=29845 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e runner.num_ctx=8192 refCount=0 ```
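For reference, the warning at the end of this log only means hipBLASLt has no tuned kernels for the GPU's gfx target, so rocBLAS falls back to its Tensile backend; the model still runs on the GPU. One way to see which gfx targets the bundled hipBLASLt library actually ships kernel files for is to list the directory from the error message. This is a minimal sketch using a temporary directory with hypothetical file names as a stand-in; on a real install, list /usr/local/lib/ollama/rocm/hipblaslt/library/ directly:

```shell
# Stand-in for /usr/local/lib/ollama/rocm/hipblaslt/library/ (hypothetical
# contents for illustration; on a real install, list that directory itself).
libdir=$(mktemp -d)
touch "$libdir/TensileLibrary_lazy_gfx1100.dat" \
      "$libdir/TensileLibrary_lazy_gfx1030.dat"

# Extract the gfx targets that have a lazy Tensile library file. If your
# GPU's target (gfx1201 here) is missing from this list, hipBLASLt logs
# "Heuristic Fetch Failed" and rocBLAS falls back to Tensile.
targets=$(ls "$libdir" | grep -o 'gfx[0-9a-f]*' | sort -u)
echo "$targets"   # prints gfx1030 and gfx1100, but no gfx1201
```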
@paugh7 commented on GitHub (Aug 9, 2025):

I am seeing the same error when I use the install script; I would love to know a fix or workaround for this.

@Dwerg commented on GitHub (Sep 18, 2025):

I see the exact same errors in my log as well. I'm using an RX 9070 XT, running the latest ollama/ollama:rocm image in Docker.

I once tried manually copying the entire library from the rocblas folder to the path stated in the error inside the container; this produced a bunch of errors and did not work at all. I also tried copying the library from the installed hipblaslt folder on my host system (Arch Linux) to the path the container looks at, with the same result: a whole lot of errors and nothing working.

GPU acceleration does work out of the box when I pass my GPU to the container, though. This error makes me wonder whether it could utilize the GPU even better.

@kkcinterface-bit commented on GitHub (Oct 1, 2025):

I'm also getting the same error, but gemma3:4b still works, though not with the best performance. (By comparison, the performance is about the same as an RTX 2060 6G, a card so ancient it's practically a fossil; give it a few more years and it will be LNG or oil.)
This is a native installation with a venv, with environment variables set as below:

===
Environment=PATH=$PATH:/ai/ollama/bin:/ai/ollama/lib
Environment=ROCM_PATH=/opt/rocm-6.4.3/bin
Environment=LD_LIBRARY_PATH=/opt/rocm-6.4.3/lib:$ROCM_PATH:$LD_LIBRARY_PATH
===

As you know, /opt/rocm-6.4.3 is the default ROCm installation path.

Looks like a bug.

@MorrisLu-Taipei commented on GitHub (Dec 14, 2025):

Same issue here; inference runs pretty slowly with ollama.

AMD-SMI: 26.2.0+021c61fc
amdgpu version: 6.16.6
ROCm version: 7.1.1
VBIOS version: 00131162
Ubuntu 24.04.3

docker/images/sha256:e49d21dfbf72b5fea88b512c83d41764de462b3efd059f625d3a1d2a855852ed

@BloodyIron commented on GitHub (Jan 8, 2026):

I'm seeing this issue reported in my logs too: RX 9070 XT, ROCm 7.1.1, Ubuntu 25.04, and I'm unsure what to do about it. Yes, I used the install script, and I even ran it again today to update to the latest version, 0.13.5. Am I leaving performance on the table with this error?

@androiddrew commented on GitHub (Feb 5, 2026):

@inpure check out https://github.com/ollama/ollama/issues/12908#issuecomment-3854823325

@inpure commented on GitHub (Apr 17, 2026):

> @inpure check out #12908 (comment)

Updated to ROCm 7.2.2; done.

Reference: github-starred/ollama#69683