[GH-ISSUE #9516] Ollama not using AMD GPU despite detecting it correctly #68260

Open
opened 2026-05-04 13:02:46 -05:00 by GiteaMirror · 3 comments

Originally created by @SteelPh0enix on GitHub (Mar 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9516

What is the issue?

I'm currently trying to build ollama from source on the latest Arch Linux with ROCm 6.3.2.
My GPU is an RX 7900 XT.
I have ROCm installed system-wide (from the Arch repository). I was able to successfully build and install ollama system-wide with the following commands:

cmake -DCMAKE_BUILD_TYPE=Release -G Ninja -B build
cmake --build build
sudo cmake --install build # not sure if that's needed
go build
go install
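
For reference, one thing that can be checked after the build (the build-tree layout below is my assumption and may vary between ollama versions) is whether any GGML backend libraries ended up next to the installed binary, since the runner loads backends from the binary's own directory (see the "ggml backend load all from path" line in the logs below):

```bash
# Where `go install` put the ollama binary -- per the runner's
# "ggml backend load all from path" log line, backends are loaded
# from this directory:
ls "$(go env GOPATH)/bin"

# Where the CMake build stages the backend libraries; the exact
# layout is an assumption and may differ between versions:
find build -name 'libggml*' 2>/dev/null
```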

I have set the following environment variables before running any ollama command:

export GIN_MODE="release"
export GPU_ARCHS="gfx1100"
export HSA_OVERRIDE_GFX_VERSION="11.0.0"
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_LLM_LIBRARY="rocm_v6"
export OLLAMA_DEBUG=1
export AMD_LOG_LEVEL=3
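
As a sanity check independent of ollama (this assumes the rocminfo utility from the Arch ROCm packages is installed), the ROCm runtime can be asked directly which gfx target it sees:

```bash
# For an RX 7900 XT this should report gfx1100. Since
# HSA_OVERRIDE_GFX_VERSION changes what the HSA runtime reports,
# it's worth running this both with and without the override set.
rocminfo | grep -i gfx
```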

I have also added my user to the render and video groups, per troubleshooting.md.
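
Group membership only takes effect for new login sessions, so it's worth confirming the change is actually active and that the device nodes ROCm opens are accessible:

```bash
# The current session's groups must include render and/or video,
# and /dev/kfd plus the DRM render nodes must be accessible to them.
id -nG
ls -l /dev/kfd /dev/dri/renderD*
```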

When I run ollama serve, I get the following logs:

2025/03/05 11:26:34 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION:11.0.0 HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY:rocm_v6 OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/steelph0enix/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-03-05T11:26:34.639+01:00 level=INFO source=images.go:432 msg="total blobs: 5"
time=2025-03-05T11:26:34.639+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func3 (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func4 (5 handlers)
[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
time=2025-03-05T11:26:34.640+01:00 level=INFO source=routes.go:1277 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2025-03-05T11:26:34.640+01:00 level=DEBUG source=sched.go:106 msg="starting llm scheduler"
time=2025-03-05T11:26:34.640+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-05T11:26:34.641+01:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-03-05T11:26:34.641+01:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-03-05T11:26:34.641+01:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/home/steelph0enix/gopath/bin/libcuda.so* /home/steelph0enix/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-03-05T11:26:34.656+01:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[]
time=2025-03-05T11:26:34.656+01:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcudart.so*
time=2025-03-05T11:26:34.656+01:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/home/steelph0enix/gopath/bin/libcudart.so* /home/steelph0enix/libcudart.so* /home/steelph0enix/gopath/bin/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
time=2025-03-05T11:26:34.662+01:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[]
time=2025-03-05T11:26:34.662+01:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-03-05T11:26:34.662+01:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/0/properties"
time=2025-03-05T11:26:34.662+01:00 level=DEBUG source=amd_linux.go:121 msg="detected CPU /sys/class/kfd/kfd/topology/nodes/0/properties"
time=2025-03-05T11:26:34.662+01:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/1/properties"
time=2025-03-05T11:26:34.662+01:00 level=DEBUG source=amd_linux.go:206 msg="mapping amdgpu to drm sysfs nodes" amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties vendor=4098 device=29772 unique_id=1584538289711511137
time=2025-03-05T11:26:34.662+01:00 level=DEBUG source=amd_linux.go:240 msg=matched amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties drm=/sys/class/drm/card1/device
time=2025-03-05T11:26:34.662+01:00 level=DEBUG source=amd_linux.go:318 msg="amdgpu memory" gpu=0 total="20.0 GiB"
time=2025-03-05T11:26:34.662+01:00 level=DEBUG source=amd_linux.go:319 msg="amdgpu memory" gpu=0 available="18.2 GiB"
time=2025-03-05T11:26:34.662+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /home/steelph0enix/gopath/bin/rocm"
time=2025-03-05T11:26:34.662+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /opt/rocm/lib"
time=2025-03-05T11:26:34.662+01:00 level=INFO source=amd_linux.go:389 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=11.0.0
time=2025-03-05T11:26:34.662+01:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-15fd692de3427661 library=rocm variant="" compute=gfx1100 driver=0.0 name=1002:744c total="20.0 GiB" available="18.2 GiB"

Note that even without setting HSA_OVERRIDE_GFX_VERSION and OLLAMA_LLM_LIBRARY, the output is similar and Ollama has no issues detecting my GPU.
However, when a model is loaded, the LLM back-end crashes (without any meaningful logs) and Ollama falls back to CPU inference instead of using the GPU. I've attached the logs below.

The ollama-rocm package from the Arch repository works correctly! This issue only occurs when I build ollama myself!
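
Since the packaged build works, comparing what it installs against the from-source layout might narrow things down (standard pacman query; which of the listed files actually matter is my guess):

```bash
# Shared libraries shipped by the working Arch package, for comparison
# against what the self-built install has next to its binary:
pacman -Ql ollama-rocm | grep '\.so'
```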

I have no idea what else I can do to force it to use my GPU.

I had a very similar issue when I tried to build and run Ollama on Windows 11 on the same hardware; that's why I tried Linux instead, also without success.

Relevant log output

time=2025-03-05T11:26:34.662+01:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-15fd692de3427661 library=rocm variant="" compute=gfx1100 driver=0.0 name=1002:744c total="20.0 GiB" available="18.2 GiB"
[GIN] 2025/03/05 - 11:27:50 | 200 |       28.34µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/05 - 11:27:50 | 200 |   12.946166ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-05T11:27:50.718+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="26.5 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="26.8 GiB" now.free_swap="0 B"
time=2025-03-05T11:27:50.718+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-15fd692de3427661 name=1002:744c before="18.2 GiB" now="18.2 GiB"
time=2025-03-05T11:27:50.718+01:00 level=DEBUG source=sched.go:182 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-03-05T11:27:50.742+01:00 level=DEBUG source=sched.go:225 msg="loading first model" model=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9
time=2025-03-05T11:27:50.742+01:00 level=DEBUG source=memory.go:108 msg=evaluating library=rocm gpu_count=1 available="[18.2 GiB]"
time=2025-03-05T11:27:50.742+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="26.8 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="26.8 GiB" now.free_swap="0 B"
time=2025-03-05T11:27:50.742+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-15fd692de3427661 name=1002:744c before="18.2 GiB" now="18.2 GiB"
time=2025-03-05T11:27:50.742+01:00 level=DEBUG source=memory.go:108 msg=evaluating library=rocm gpu_count=1 available="[18.2 GiB]"
time=2025-03-05T11:27:50.743+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="26.8 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="26.8 GiB" now.free_swap="0 B"
time=2025-03-05T11:27:50.743+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-15fd692de3427661 name=1002:744c before="18.2 GiB" now="18.2 GiB"
time=2025-03-05T11:27:50.743+01:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 gpu=GPU-15fd692de3427661 parallel=1 available=19547545600 required="12.6 GiB"
time=2025-03-05T11:27:50.743+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="26.8 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="26.8 GiB" now.free_swap="0 B"
time=2025-03-05T11:27:50.743+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-15fd692de3427661 name=1002:744c before="18.2 GiB" now="18.2 GiB"
time=2025-03-05T11:27:50.743+01:00 level=INFO source=server.go:105 msg="system memory" total="31.3 GiB" free="26.8 GiB" free_swap="0 B"
time=2025-03-05T11:27:50.743+01:00 level=DEBUG source=memory.go:108 msg=evaluating library=rocm gpu_count=1 available="[18.2 GiB]"
time=2025-03-05T11:27:50.743+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="26.8 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="26.8 GiB" now.free_swap="0 B"
time=2025-03-05T11:27:50.743+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-15fd692de3427661 name=1002:744c before="18.2 GiB" now="18.2 GiB"
time=2025-03-05T11:27:50.743+01:00 level=INFO source=server.go:138 msg=offload library=rocm layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[18.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="12.6 GiB" memory.required.partial="12.6 GiB" memory.required.kv="4.0 GiB" memory.required.allocations="[12.6 GiB]" memory.weights.total="9.3 GiB" memory.weights.repeating="8.9 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="2.1 GiB" memory.graph.partial="2.2 GiB"
time=2025-03-05T11:27:50.743+01:00 level=INFO source=server.go:185 msg="enabling flash attention"
time=2025-03-05T11:27:50.743+01:00 level=WARN source=server.go:193 msg="kv cache type not supported by model" type=""
time=2025-03-05T11:27:50.743+01:00 level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible=[]
llama_model_loader: loaded meta data with 40 key-value pairs and 292 tensors from /home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B
llama_model_loader: - kv   3:                       general.organization str              = Unsloth
llama_model_loader: - kv   4:                           general.finetune str              = Preview
llama_model_loader: - kv   5:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   6:                         general.size_label str              = 8B
llama_model_loader: - kv   7:                            general.license str              = llama3
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Meta Llama 3.1 8B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Meta Llama
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/meta-llama/Met...
llama_model_loader: - kv  12:                               general.tags arr[str,15]      = ["Llama-3", "instruct", "finetune", "...
llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  14:                          llama.block_count u32              = 32
llama_model_loader: - kv  15:                       llama.context_length u32              = 131072
llama_model_loader: - kv  16:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  17:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  18:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  19:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  20:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  21:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  22:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  23:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  24:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  25:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  26:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  27:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  28:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  29:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  30:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  31:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  32:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  33:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  34:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  35:           tokenizer.chat_template.tool_use str              = {%- macro json_to_python_type(json_sp...
llama_model_loader: - kv  36:                   tokenizer.chat_templates arr[str,1]       = ["tool_use"]
llama_model_loader: - kv  37:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  38:               general.quantization_version u32              = 2
llama_model_loader: - kv  39:                          general.file_type u32              = 18
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q6_K:  226 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q6_K
print_info: file size   = 6.14 GiB (6.56 BPW)
init_tokenizer: initializing tokenizer for type 2
load: control token: 128254 '<|reserved_special_token_246|>' is not marked as EOG
[...]
load: control token: 128126 '<|reserved_special_token_118|>' is not marked as EOG
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 8.03 B
print_info: general.name     = Meta Llama 3.1 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end_of_text|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-03-05T11:27:50.913+01:00 level=DEBUG source=server.go:343 msg="adding gpu dependency paths" paths=[/opt/rocm/lib]
time=2025-03-05T11:27:50.913+01:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/home/steelph0enix/gopath/bin/ollama runner --model /home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 --ctx-size 32768 --batch-size 512 --n-gpu-layers 33 --verbose --threads 12 --flash-attn --parallel 1 --port 46009"
time=2025-03-05T11:27:50.913+01:00 level=DEBUG source=server.go:423 msg=subprocess environment="[PATH=/home/steelph0enix/gopath/bin:/home/steelph0enix/gopath/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/opt/rocm/bin:/usr/lib/rustup/bin:/home/steelph0enix/.cargo/bin ROCM_PATH=/opt/rocm GPU_ARCHS=gfx1100 HSA_OVERRIDE_GFX_VERSION=11.0.0 LD_LIBRARY_PATH=/opt/rocm/lib:/home/steelph0enix/gopath/bin ROCR_VISIBLE_DEVICES=GPU-15fd692de3427661]"
time=2025-03-05T11:27:50.913+01:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-05T11:27:50.913+01:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-03-05T11:27:50.914+01:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-03-05T11:27:50.920+01:00 level=INFO source=runner.go:931 msg="starting go runner"
time=2025-03-05T11:27:50.920+01:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=/opt/rocm/lib
time=2025-03-05T11:27:50.920+01:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/home/steelph0enix/gopath/bin
time=2025-03-05T11:27:50.920+01:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-03-05T11:27:50.921+01:00 level=INFO source=runner.go:991 msg="Server listening on 127.0.0.1:46009"
llama_model_loader: loaded meta data with 40 key-value pairs and 292 tensors from /home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B
llama_model_loader: - kv   3:                       general.organization str              = Unsloth
llama_model_loader: - kv   4:                           general.finetune str              = Preview
llama_model_loader: - kv   5:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   6:                         general.size_label str              = 8B
llama_model_loader: - kv   7:                            general.license str              = llama3
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Meta Llama 3.1 8B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Meta Llama
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/meta-llama/Met...
llama_model_loader: - kv  12:                               general.tags arr[str,15]      = ["Llama-3", "instruct", "finetune", "...
llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  14:                          llama.block_count u32              = 32
llama_model_loader: - kv  15:                       llama.context_length u32              = 131072
llama_model_loader: - kv  16:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  17:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  18:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  19:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  20:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  21:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  22:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  23:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  24:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  25:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  26:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  27:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  28:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  29:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  30:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  31:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  32:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  33:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  34:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  35:           tokenizer.chat_template.tool_use str              = {%- macro json_to_python_type(json_sp...
llama_model_loader: - kv  36:                   tokenizer.chat_templates arr[str,1]       = ["tool_use"]
llama_model_loader: - kv  37:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  38:               general.quantization_version u32              = 2
llama_model_loader: - kv  39:                          general.file_type u32              = 18
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q6_K:  226 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q6_K
print_info: file size   = 6.14 GiB (6.56 BPW)
init_tokenizer: initializing tokenizer for type 2
load: control token: 128254 '<|reserved_special_token_246|>' is not marked as EOG
[...]
load: control token: 128126 '<|reserved_special_token_118|>' is not marked as EOG
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 14336
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 8B
print_info: model params     = 8.03 B
print_info: general.name     = Meta Llama 3.1 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end_of_text|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: layer   0 assigned to device CPU
load_tensors: layer   1 assigned to device CPU
load_tensors: layer   2 assigned to device CPU
load_tensors: layer   3 assigned to device CPU
load_tensors: layer   4 assigned to device CPU
load_tensors: layer   5 assigned to device CPU
load_tensors: layer   6 assigned to device CPU
load_tensors: layer   7 assigned to device CPU
load_tensors: layer   8 assigned to device CPU
load_tensors: layer   9 assigned to device CPU
load_tensors: layer  10 assigned to device CPU
load_tensors: layer  11 assigned to device CPU
load_tensors: layer  12 assigned to device CPU
load_tensors: layer  13 assigned to device CPU
load_tensors: layer  14 assigned to device CPU
load_tensors: layer  15 assigned to device CPU
load_tensors: layer  16 assigned to device CPU
load_tensors: layer  17 assigned to device CPU
load_tensors: layer  18 assigned to device CPU
load_tensors: layer  19 assigned to device CPU
load_tensors: layer  20 assigned to device CPU
load_tensors: layer  21 assigned to device CPU
load_tensors: layer  22 assigned to device CPU
load_tensors: layer  23 assigned to device CPU
load_tensors: layer  24 assigned to device CPU
load_tensors: layer  25 assigned to device CPU
load_tensors: layer  26 assigned to device CPU
load_tensors: layer  27 assigned to device CPU
load_tensors: layer  28 assigned to device CPU
load_tensors: layer  29 assigned to device CPU
load_tensors: layer  30 assigned to device CPU
load_tensors: layer  31 assigned to device CPU
load_tensors: layer  32 assigned to device CPU
time=2025-03-05T11:27:51.165+01:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
load_tensors:   CPU_Mapped model buffer size =  6282.97 MiB
llama_init_from_model: n_seq_max     = 1
llama_init_from_model: n_ctx         = 32768
llama_init_from_model: n_ctx_per_seq = 32768
llama_init_from_model: n_batch       = 512
llama_init_from_model: n_ubatch      = 512
llama_init_from_model: flash_attn    = 1
llama_init_from_model: freq_base     = 500000.0
llama_init_from_model: freq_scale    = 1
llama_init_from_model: n_ctx_per_seq (32768) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 32768, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1
llama_kv_cache_init: layer 0: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
[...]
llama_kv_cache_init: layer 31: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
time=2025-03-05T11:27:51.666+01:00 level=DEBUG source=server.go:630 msg="model load progress 1.00"
llama_kv_cache_init:        CPU KV buffer size =  4096.00 MiB
llama_init_from_model: KV self size  = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB
llama_init_from_model:        CPU  output buffer size =     0.50 MiB
llama_init_from_model:        CPU compute buffer size =   258.50 MiB
llama_init_from_model: graph nodes  = 903
llama_init_from_model: graph splits = 1
time=2025-03-05T11:27:51.918+01:00 level=INFO source=server.go:624 msg="llama runner started in 1.00 seconds"
time=2025-03-05T11:27:51.918+01:00 level=DEBUG source=sched.go:463 msg="finished setting up runner" model=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9
[GIN] 2025/03/05 - 11:27:51 | 200 |  1.214550081s |       127.0.0.1 | POST     "/api/generate"
time=2025-03-05T11:27:51.918+01:00 level=DEBUG source=sched.go:467 msg="context for request finished"
time=2025-03-05T11:27:51.918+01:00 level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 duration=5m0s
time=2025-03-05T11:27:51.918+01:00 level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 refCount=0
time=2025-03-05T11:27:55.380+01:00 level=DEBUG source=sched.go:576 msg="evaluating already loaded" model=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9
time=2025-03-05T11:27:55.380+01:00 level=DEBUG source=routes.go:1501 msg="chat request" images=0 prompt="<|start_header_id|>system<|end_header_id|>\n\nYou are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\ntest<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
time=2025-03-05T11:27:55.381+01:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=85 used=0 remaining=85

OS

No response

GPU

No response

CPU

No response

Ollama version

No response

llama_kv_cache_init: layer 31: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 time=2025-03-05T11:27:51.666+01:00 level=DEBUG source=server.go:630 msg="model load progress 1.00" llama_kv_cache_init: CPU KV buffer size = 4096.00 MiB llama_init_from_model: KV self size = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB llama_init_from_model: CPU output buffer size = 0.50 MiB llama_init_from_model: CPU compute buffer size = 258.50 MiB llama_init_from_model: graph nodes = 903 llama_init_from_model: graph splits = 1 time=2025-03-05T11:27:51.918+01:00 level=INFO source=server.go:624 msg="llama runner started in 1.00 seconds" time=2025-03-05T11:27:51.918+01:00 level=DEBUG source=sched.go:463 msg="finished setting up runner" model=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 [GIN] 2025/03/05 - 11:27:51 | 200 | 1.214550081s | 127.0.0.1 | POST "/api/generate" time=2025-03-05T11:27:51.918+01:00 level=DEBUG source=sched.go:467 msg="context for request finished" time=2025-03-05T11:27:51.918+01:00 level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 duration=5m0s time=2025-03-05T11:27:51.918+01:00 level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 refCount=0 time=2025-03-05T11:27:55.380+01:00 level=DEBUG source=sched.go:576 msg="evaluating already loaded" model=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 time=2025-03-05T11:27:55.380+01:00 level=DEBUG source=routes.go:1501 msg="chat request" images=0 prompt="<|start_header_id|>system<|end_header_id|>\n\nYou are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\ntest<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" time=2025-03-05T11:27:55.381+01:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=85 used=0 remaining=85 ``` ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version _No response_
GiteaMirror added the bug label 2026-05-04 13:02:46 -05:00

@rick-github commented on GitHub (Mar 5, 2025):

time=2025-03-05T11:27:50.913+01:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/home/steelph0enix/gopath/bin/ollama runner --model /home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 --ctx-size 32768 --batch-size 512 --n-gpu-layers 33 --verbose --threads 12 --flash-attn --parallel 1 --port 46009"

The binary has been installed in /home/steelph0enix/gopath/bin/ollama, are the libraries in /home/steelph0enix/gopath/lib?

https://github.com/ollama/ollama/issues/8532#issuecomment-2616281903
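
A quick sanity check, using the paths from the logs in this thread (the ls commands are only an illustration, not an official diagnostic): per the "detected ROCM next to ollama executable" log line, the runner expects a lib/ollama directory next to its bin directory, so both of these should exist:

ls -l /home/steelph0enix/gopath/bin/ollama    # the runner binary from the "starting llama server" line
ls -l /home/steelph0enix/gopath/lib/ollama    # sibling directory scanned for the ggml/ROCm backend libraries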


@SteelPh0enix commented on GitHub (Mar 5, 2025):

> time=2025-03-05T11:27:50.913+01:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/home/steelph0enix/gopath/bin/ollama runner --model /home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 --ctx-size 32768 --batch-size 512 --n-gpu-layers 33 --verbose --threads 12 --flash-attn --parallel 1 --port 46009"
>
> The binary has been installed in /home/steelph0enix/gopath/bin/ollama, are the libraries in /home/steelph0enix/gopath/lib?
>
> #8532 (comment)

Nope, I don't have a lib subdirectory there.
I have also added gopath/bin to PATH, as this comment suggests (I think?):

export GOPATH="$HOME/gopath"
export PATH="$GOPATH/bin:$PATH"

A bit counter-intuitive, considering that the logs explicitly state that the ROCm libraries were found...

time=2025-03-05T11:27:50.913+01:00 level=DEBUG source=server.go:343 msg="adding gpu dependency paths" paths=[/opt/rocm/lib]
time=2025-03-05T11:27:50.913+01:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/home/steelph0enix/gopath/bin/ollama runner --model /home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 --ctx-size 32768 --batch-size 512 --n-gpu-layers 33 --verbose --threads 12 --flash-attn --parallel 1 --port 46009"
time=2025-03-05T11:27:50.913+01:00 level=DEBUG source=server.go:423 msg=subprocess environment="[PATH=/home/steelph0enix/gopath/bin:/home/steelph0enix/gopath/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/opt/rocm/bin:/usr/lib/rustup/bin:/home/steelph0enix/.cargo/bin ROCM_PATH=/opt/rocm GPU_ARCHS=gfx1100 HSA_OVERRIDE_GFX_VERSION=11.0.0 LD_LIBRARY_PATH=/opt/rocm/lib:/home/steelph0enix/gopath/bin ROCR_VISIBLE_DEVICES=GPU-15fd692de3427661]"
time=2025-03-05T11:27:50.913+01:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-05T11:27:50.913+01:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-03-05T11:27:50.914+01:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-03-05T11:27:50.920+01:00 level=INFO source=runner.go:931 msg="starting go runner"
time=2025-03-05T11:27:50.920+01:00 level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=/opt/rocm/lib
time=2025-03-05T11:27:50.920+01:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/home/steelph0enix/gopath/bin

I guess I have to symlink the ollama lib directory there?
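
Assuming the back-end libraries really are under /usr/local/lib/ollama (where the cmake install step put them on this system), one possible fix is a symlink so that a lib directory exists next to the binary's bin directory, e.g.:

ln -s /usr/local/lib /home/steelph0enix/gopath/lib    # gopath/lib/ollama then resolves to /usr/local/lib/ollama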


@SteelPh0enix commented on GitHub (Mar 5, 2025):

I have symlinked /usr/local/lib (where the ollama/ subdirectory containing the back-end libraries resides) to gopath/lib, and it did try to use ROCm this time, but it crashes. I guess this solves my primary issue, but something is still wrong there. Is this a ROCm compatibility issue? Here's the log:

2025/03/05 11:47:19 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION:11.0.0 HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY:rocm_v6 OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/steelph0enix/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-03-05T11:47:19.925+01:00 level=INFO source=images.go:432 msg="total blobs: 5"
time=2025-03-05T11:47:19.926+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func3 (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func4 (5 handlers)
[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
time=2025-03-05T11:47:19.926+01:00 level=INFO source=routes.go:1277 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2025-03-05T11:47:19.926+01:00 level=DEBUG source=sched.go:106 msg="starting llm scheduler"
time=2025-03-05T11:47:19.926+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-05T11:47:19.927+01:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-03-05T11:47:19.927+01:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-03-05T11:47:19.927+01:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/home/steelph0enix/gopath/lib/ollama/libcuda.so* /home/steelph0enix/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-03-05T11:47:19.942+01:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[]
time=2025-03-05T11:47:19.942+01:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcudart.so*
time=2025-03-05T11:47:19.942+01:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/home/steelph0enix/gopath/lib/ollama/libcudart.so* /home/steelph0enix/libcudart.so* /home/steelph0enix/gopath/lib/ollama/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
time=2025-03-05T11:47:19.947+01:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[]
time=2025-03-05T11:47:19.947+01:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-03-05T11:47:19.947+01:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/0/properties"
time=2025-03-05T11:47:19.947+01:00 level=DEBUG source=amd_linux.go:121 msg="detected CPU /sys/class/kfd/kfd/topology/nodes/0/properties"
time=2025-03-05T11:47:19.947+01:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/1/properties"
time=2025-03-05T11:47:19.948+01:00 level=DEBUG source=amd_linux.go:206 msg="mapping amdgpu to drm sysfs nodes" amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties vendor=4098 device=29772 unique_id=1584538289711511137
time=2025-03-05T11:47:19.948+01:00 level=DEBUG source=amd_linux.go:240 msg=matched amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties drm=/sys/class/drm/card1/device
time=2025-03-05T11:47:19.948+01:00 level=DEBUG source=amd_linux.go:318 msg="amdgpu memory" gpu=0 total="20.0 GiB"
time=2025-03-05T11:47:19.948+01:00 level=DEBUG source=amd_linux.go:319 msg="amdgpu memory" gpu=0 available="18.2 GiB"
time=2025-03-05T11:47:19.948+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /home/steelph0enix/gopath/lib/ollama/rocm"
time=2025-03-05T11:47:19.948+01:00 level=DEBUG source=amd_common.go:44 msg="detected ROCM next to ollama executable /home/steelph0enix/gopath/lib/ollama/rocm"
time=2025-03-05T11:47:19.948+01:00 level=INFO source=amd_linux.go:389 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=11.0.0
time=2025-03-05T11:47:19.948+01:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-15fd692de3427661 library=rocm variant="" compute=gfx1100 driver=0.0 name=1002:744c total="20.0 GiB" available="18.2 GiB"
[GIN] 2025/03/05 - 11:47:22 | 200 |       30.29µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/05 - 11:47:22 | 200 |   12.234028ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-05T11:47:22.417+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="26.2 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="26.2 GiB" now.free_swap="0 B"
time=2025-03-05T11:47:22.417+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-15fd692de3427661 name=1002:744c before="18.2 GiB" now="18.2 GiB"
time=2025-03-05T11:47:22.417+01:00 level=DEBUG source=sched.go:182 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-03-05T11:47:22.440+01:00 level=DEBUG source=sched.go:225 msg="loading first model" model=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9
time=2025-03-05T11:47:22.440+01:00 level=DEBUG source=memory.go:108 msg=evaluating library=rocm gpu_count=1 available="[18.2 GiB]"
time=2025-03-05T11:47:22.440+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="26.2 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="26.2 GiB" now.free_swap="0 B"
time=2025-03-05T11:47:22.440+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-15fd692de3427661 name=1002:744c before="18.2 GiB" now="18.2 GiB"
time=2025-03-05T11:47:22.441+01:00 level=DEBUG source=memory.go:108 msg=evaluating library=rocm gpu_count=1 available="[18.2 GiB]"
time=2025-03-05T11:47:22.441+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="26.2 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="26.2 GiB" now.free_swap="0 B"
time=2025-03-05T11:47:22.441+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-15fd692de3427661 name=1002:744c before="18.2 GiB" now="18.2 GiB"
time=2025-03-05T11:47:22.441+01:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 gpu=GPU-15fd692de3427661 parallel=1 available=19536535552 required="12.6 GiB"
time=2025-03-05T11:47:22.441+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="26.2 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="26.2 GiB" now.free_swap="0 B"
time=2025-03-05T11:47:22.441+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-15fd692de3427661 name=1002:744c before="18.2 GiB" now="18.2 GiB"
time=2025-03-05T11:47:22.441+01:00 level=INFO source=server.go:105 msg="system memory" total="31.3 GiB" free="26.2 GiB" free_swap="0 B"
time=2025-03-05T11:47:22.441+01:00 level=DEBUG source=memory.go:108 msg=evaluating library=rocm gpu_count=1 available="[18.2 GiB]"
time=2025-03-05T11:47:22.441+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="26.2 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="26.2 GiB" now.free_swap="0 B"
time=2025-03-05T11:47:22.441+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-15fd692de3427661 name=1002:744c before="18.2 GiB" now="18.2 GiB"
time=2025-03-05T11:47:22.442+01:00 level=INFO source=server.go:138 msg=offload library=rocm layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[18.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="12.6 GiB" memory.required.partial="12.6 GiB" memory.required.kv="4.0 GiB" memory.required.allocations="[12.6 GiB]" memory.weights.total="9.3 GiB" memory.weights.repeating="8.9 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="2.1 GiB" memory.graph.partial="2.2 GiB"
time=2025-03-05T11:47:22.442+01:00 level=INFO source=server.go:185 msg="enabling flash attention"
time=2025-03-05T11:47:22.442+01:00 level=WARN source=server.go:193 msg="kv cache type not supported by model" type=""
time=2025-03-05T11:47:22.442+01:00 level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible=[rocm]
llama_model_loader: loaded meta data with 40 key-value pairs and 292 tensors from /home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B
llama_model_loader: - kv   3:                       general.organization str              = Unsloth
llama_model_loader: - kv   4:                           general.finetune str              = Preview
llama_model_loader: - kv   5:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   6:                         general.size_label str              = 8B
llama_model_loader: - kv   7:                            general.license str              = llama3
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Meta Llama 3.1 8B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Meta Llama
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/meta-llama/Met...
llama_model_loader: - kv  12:                               general.tags arr[str,15]      = ["Llama-3", "instruct", "finetune", "...
llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  14:                          llama.block_count u32              = 32
llama_model_loader: - kv  15:                       llama.context_length u32              = 131072
llama_model_loader: - kv  16:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  17:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  18:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  19:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  20:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  21:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  22:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  23:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  24:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  25:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  26:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  27:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  28:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  29:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  30:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  31:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  32:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  33:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  34:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  35:           tokenizer.chat_template.tool_use str              = {%- macro json_to_python_type(json_sp...
llama_model_loader: - kv  36:                   tokenizer.chat_templates arr[str,1]       = ["tool_use"]
llama_model_loader: - kv  37:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  38:               general.quantization_version u32              = 2
llama_model_loader: - kv  39:                          general.file_type u32              = 18
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q6_K:  226 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q6_K
print_info: file size   = 6.14 GiB (6.56 BPW)
init_tokenizer: initializing tokenizer for type 2
load: control token: 128254 '<|reserved_special_token_246|>' is not marked as EOG
[...]
load: control token: 128126 '<|reserved_special_token_118|>' is not marked as EOG
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 8.03 B
print_info: general.name     = Meta Llama 3.1 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end_of_text|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-03-05T11:47:22.601+01:00 level=DEBUG source=server.go:335 msg="adding gpu library" path=/home/steelph0enix/gopath/lib/ollama/rocm
time=2025-03-05T11:47:22.601+01:00 level=DEBUG source=server.go:343 msg="adding gpu dependency paths" paths=[/home/steelph0enix/gopath/lib/ollama/rocm]
time=2025-03-05T11:47:22.601+01:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/home/steelph0enix/gopath/bin/ollama runner --model /home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 --ctx-size 32768 --batch-size 512 --n-gpu-layers 33 --verbose --threads 12 --flash-attn --parallel 1 --port 34555"
time=2025-03-05T11:47:22.601+01:00 level=DEBUG source=server.go:423 msg=subprocess environment="[PATH=/home/steelph0enix/gopath/bin:/home/steelph0enix/gopath/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/opt/rocm/bin:/usr/lib/rustup/bin:/home/steelph0enix/.cargo/bin ROCM_PATH=/opt/rocm GPU_ARCHS=gfx1100 HSA_OVERRIDE_GFX_VERSION=11.0.0 LD_LIBRARY_PATH=/home/steelph0enix/gopath/lib/ollama/rocm:/home/steelph0enix/gopath/lib/ollama/rocm:/home/steelph0enix/gopath/lib/ollama ROCR_VISIBLE_DEVICES=GPU-15fd692de3427661]"
time=2025-03-05T11:47:22.601+01:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-05T11:47:22.601+01:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-03-05T11:47:22.601+01:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-03-05T11:47:22.607+01:00 level=INFO source=runner.go:931 msg="starting go runner"
time=2025-03-05T11:47:22.608+01:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/home/steelph0enix/gopath/lib/ollama/rocm
:3:rocdevice.cpp            :469 : 2793179545d us:  Initializing HSA stack.
:3:rocdevice.cpp            :555 : 2793187431d us:  Enumerated GPU agents = 1
:3:rocdevice.cpp            :233 : 2793187452d us:  Numa selects cpu agent[0]=0xf5aaa30(fine=0xf130fa0,coarse=0xf5ab4b0) for gpu agent=0xf5ab870 CPU<->GPU XGMI=0
:3:rocsettings.cpp          :287 : 2793187458d us:  Using dev kernel arg wa = 0
:3:comgrctx.cpp             :33  : 2793187462d us:  Loading COMGR library.
:3:comgrctx.cpp             :126 : 2793187482d us:  Loaded COMGR library version 2.8.
:3:rocdevice.cpp            :1800: 2793187601d us:  Gfx Major/Minor/Stepping: 11/0/0
:3:rocdevice.cpp            :1802: 2793187604d us:  HMM support: 1, XNACK: 0, Direct host access: 0
:3:rocdevice.cpp            :1804: 2793187605d us:  Max SDMA Read Mask: 0x3, Max SDMA Write Mask: 0x3
:3:hip_context.cpp          :49  : 2793188539d us:  Direct Dispatch: 1
:3:hip_code_object.cpp      :839 : 2793194196d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf62ebe0(compressed), Size=76376, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793260209d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x7ad456076010(compressed), Size=9525448, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793265628d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf75bdc0(compressed), Size=706488, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793267821d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf67e760(compressed), Size=113712, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793270007d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf6c3000(compressed), Size=166984, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793272425d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf808580(compressed), Size=331784, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793272817d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf69a3a0(compressed), Size=48800, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793364523d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x7ad455430010(compressed), Size=12867216, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793365537d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf6ebc50(compressed), Size=139864, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793366113d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf641640(compressed), Size=116568, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793366602d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf6b2110(compressed), Size=48816, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793390958d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xfcbdcf0(compressed), Size=3107416, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793391775d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf725b60(compressed), Size=97432, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793392350d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf65dda0(compressed), Size=81168, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793392954d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf859590(compressed), Size=93896, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793522350d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x7ad454419010(compressed), Size=16869472, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793524259d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf8ff020(compressed), Size=296520, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793524647d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf61e010(compressed), Size=48800, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793588724d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x108ca020(compressed), Size=9525432, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793589319d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf73d800(compressed), Size=54136, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793589894d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf8852f0(compressed), Size=85632, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793590809d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf947670(compressed), Size=169720, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793591340d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf89a180(compressed), Size=80056, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793613116d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x1021be60(compressed), Size=2520832, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793748190d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x11e5b3d0(compressed), Size=13089504, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793749476d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xfffffa0(compressed), Size=309312, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793750209d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xffd6240(compressed), Size=137944, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793776300d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x111df8e0(compressed), Size=2694928, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793777134d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf8ada40(compressed), Size=99376, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793778711d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x100d3cb0(compressed), Size=293680, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793779345d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf70deb0(compressed), Size=66504, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793827438d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x12ad6ec0(compressed), Size=5543512, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793835660d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xfab5550(compressed), Size=1329104, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793836775d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x10072860(compressed), Size=159832, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793844803d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x105f73b0(compressed), Size=1523248, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793845877d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x100998c0(compressed), Size=159320, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793846478d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x1004b7f0(compressed), Size=109072, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793848160d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x1011b7f0(compressed), Size=296592, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793848913d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf870460(compressed), Size=72704, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793850452d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xfbf9d30(compressed), Size=325504, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793852782d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf970d70(compressed), Size=532024, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793854466d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x101ae960(compressed), Size=305856, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793855417d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xfc494c0(compressed), Size=194864, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793920007d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x13935db0(compressed), Size=9525376, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793958040d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x11471800(compressed), Size=4650920, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793959258d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x10163e90(compressed), Size=223688, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793993658d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x13467290(compressed), Size=4484440, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793995420d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xfa3b220(compressed), Size=296536, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793996427d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xfc78e00(compressed), Size=159288, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793997362d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf8c5e80(compressed), Size=181080, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793997976d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x101f9430(compressed), Size=129576, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793998826d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xf9f2bb0(compressed), Size=163176, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2793999520d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xffb4750(compressed), Size=115128, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2794001751d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x107d2c50(compressed), Size=424520, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2794003238d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x138adff0(compressed), Size=378664, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2794005255d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x1083a6a0(compressed), Size=290232, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2794006082d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xfa1a920(compressed), Size=96040, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2794007124d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x1390a720(compressed), Size=122912, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2794009248d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x1076b1f0(compressed), Size=237000, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2794089395d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x154ab2c0(compressed), Size=9957352, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2794090456d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0xfa83880(compressed), Size=126200, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2794095325d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x13020520(compressed), Size=846928, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2794097639d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x105081e0(compressed), Size=543832, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2794234605d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x17155770(compressed), Size=13083272, num_code_objs=0
:3:hip_code_object.cpp      :839 : 2794236626d us:  Found agent_triple_target_ids[0]=amdgcn-amd-amdhsa--gfx1100: item: Data=0x10881460(compressed), Size=98824, num_code_objs=0
:3:hip_device_runtime.cpp   :649 : 2794237317d us:   hipGetDeviceCount ( 0x7ad5d22c8f70 )
:3:hip_device_runtime.cpp   :651 : 2794237321d us:  hipGetDeviceCount: Returned hipSuccess :
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
:3:hip_device.cpp           :647 : 2794237328d us:   hipGetDevicePropertiesR0600 ( 0x7ffeb1b98838, 0 )
:3:hip_device.cpp           :649 : 2794237330d us:  hipGetDevicePropertiesR0600: Returned hipSuccess :
  Device 0: AMD Radeon RX 7900 XT, gfx1100 (0x1100), VMM: no, Wave Size: 32
:3:hip_device_runtime.cpp   :634 : 2794237334d us:   hipGetDevice ( 0x7ffeb1b98e1c )
:3:hip_device_runtime.cpp   :642 : 2794237335d us:  hipGetDevice: Returned hipSuccess :
:3:hip_device.cpp           :647 : 2794237336d us:   hipGetDevicePropertiesR0600 ( 0x7ffeb1b98e68, 0 )
:3:hip_device.cpp           :649 : 2794237337d us:  hipGetDevicePropertiesR0600: Returned hipSuccess :
load_backend: loaded ROCm backend from /home/steelph0enix/gopath/lib/ollama/rocm/libggml-hip.so
time=2025-03-05T11:47:23.690+01:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/home/steelph0enix/gopath/lib/ollama
ggml_backend_load_best: /home/steelph0enix/gopath/lib/ollama/libggml-cpu-haswell.so score: 55
ggml_backend_load_best: /home/steelph0enix/gopath/lib/ollama/libggml-cpu-icelake.so score: 0
ggml_backend_load_best: /home/steelph0enix/gopath/lib/ollama/libggml-cpu-sandybridge.so score: 20
ggml_backend_load_best: /home/steelph0enix/gopath/lib/ollama/libggml-cpu-skylakex.so score: 0
ggml_backend_load_best: /home/steelph0enix/gopath/lib/ollama/libggml-cpu-alderlake.so score: 0
load_backend: loaded CPU backend from /home/steelph0enix/gopath/lib/ollama/libggml-cpu-haswell.so
time=2025-03-05T11:47:23.691+01:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
:3:hip_device_runtime.cpp   :634 : 2794239495d us:   hipGetDevice ( 0x7ad5f93fca4c )
:3:hip_device_runtime.cpp   :642 : 2794239500d us:  hipGetDevice: Returned hipSuccess :
:3:hip_memory.cpp           :845 : 2794239507d us:   hipMemGetInfo ( 0x7ad5f93fcab8, 0x7ad5f93fcac0 )
:3:hip_memory.cpp           :869 : 2794239516d us:  hipMemGetInfo: Returned hipSuccess :
llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon RX 7900 XT) - 20388 MiB free
time=2025-03-05T11:47:23.692+01:00 level=INFO source=runner.go:991 msg="Server listening on 127.0.0.1:34555"
llama_model_loader: loaded meta data with 40 key-value pairs and 292 tensors from /home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B
llama_model_loader: - kv   3:                       general.organization str              = Unsloth
llama_model_loader: - kv   4:                           general.finetune str              = Preview
llama_model_loader: - kv   5:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   6:                         general.size_label str              = 8B
llama_model_loader: - kv   7:                            general.license str              = llama3
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Meta Llama 3.1 8B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Meta Llama
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/meta-llama/Met...
llama_model_loader: - kv  12:                               general.tags arr[str,15]      = ["Llama-3", "instruct", "finetune", "...
llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  14:                          llama.block_count u32              = 32
llama_model_loader: - kv  15:                       llama.context_length u32              = 131072
llama_model_loader: - kv  16:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  17:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  18:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  19:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  20:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  21:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  22:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  23:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  24:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  25:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  26:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  27:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  28:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  29:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  30:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  31:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  32:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  33:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  34:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  35:           tokenizer.chat_template.tool_use str              = {%- macro json_to_python_type(json_sp...
llama_model_loader: - kv  36:                   tokenizer.chat_templates arr[str,1]       = ["tool_use"]
llama_model_loader: - kv  37:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  38:               general.quantization_version u32              = 2
llama_model_loader: - kv  39:                          general.file_type u32              = 18
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q6_K:  226 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q6_K
print_info: file size   = 6.14 GiB (6.56 BPW)
init_tokenizer: initializing tokenizer for type 2
load: control token: 128254 '<|reserved_special_token_246|>' is not marked as EOG
[...]
load: control token: 128126 '<|reserved_special_token_118|>' is not marked as EOG
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 14336
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 8B
print_info: model params     = 8.03 B
print_info: general.name     = Meta Llama 3.1 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end_of_text|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
:3:hip_device_runtime.cpp   :634 : 2794389430d us:   hipGetDevice ( 0x7ad5f93fc75c )
:3:hip_device_runtime.cpp   :642 : 2794389434d us:  hipGetDevice: Returned hipSuccess :
:3:hip_memory.cpp           :845 : 2794389436d us:   hipMemGetInfo ( 0x7ad5f93fc9d0, 0x7ad5f93fc9a0 )
:3:hip_memory.cpp           :869 : 2794389443d us:  hipMemGetInfo: Returned hipSuccess :
load_tensors: layer   0 assigned to device ROCm0
load_tensors: layer   1 assigned to device ROCm0
load_tensors: layer   2 assigned to device ROCm0
load_tensors: layer   3 assigned to device ROCm0
load_tensors: layer   4 assigned to device ROCm0
load_tensors: layer   5 assigned to device ROCm0
load_tensors: layer   6 assigned to device ROCm0
load_tensors: layer   7 assigned to device ROCm0
load_tensors: layer   8 assigned to device ROCm0
load_tensors: layer   9 assigned to device ROCm0
load_tensors: layer  10 assigned to device ROCm0
load_tensors: layer  11 assigned to device ROCm0
load_tensors: layer  12 assigned to device ROCm0
load_tensors: layer  13 assigned to device ROCm0
load_tensors: layer  14 assigned to device ROCm0
load_tensors: layer  15 assigned to device ROCm0
load_tensors: layer  16 assigned to device ROCm0
load_tensors: layer  17 assigned to device ROCm0
load_tensors: layer  18 assigned to device ROCm0
load_tensors: layer  19 assigned to device ROCm0
load_tensors: layer  20 assigned to device ROCm0
load_tensors: layer  21 assigned to device ROCm0
load_tensors: layer  22 assigned to device ROCm0
load_tensors: layer  23 assigned to device ROCm0
load_tensors: layer  24 assigned to device ROCm0
load_tensors: layer  25 assigned to device ROCm0
load_tensors: layer  26 assigned to device ROCm0
load_tensors: layer  27 assigned to device ROCm0
load_tensors: layer  28 assigned to device ROCm0
load_tensors: layer  29 assigned to device ROCm0
load_tensors: layer  30 assigned to device ROCm0
load_tensors: layer  31 assigned to device ROCm0
load_tensors: layer  32 assigned to device ROCm0
load_tensors: tensor 'token_embd.weight' (q6_K) (and 0 others) cannot be used with preferred buffer type ROCm_Host, using CPU instead
time=2025-03-05T11:47:23.856+01:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
:3:hip_device_runtime.cpp   :634 : 2794693911d us:   hipGetDevice ( 0x7ad5f93fc75c )
:3:hip_device_runtime.cpp   :642 : 2794693917d us:  hipGetDevice: Returned hipSuccess :
:3:hip_memory.cpp           :845 : 2794693920d us:   hipMemGetInfo ( 0x7ad5f93fc980, 0x7ad5f93fc988 )
:3:hip_memory.cpp           :869 : 2794693927d us:  hipMemGetInfo: Returned hipSuccess :
:3:hip_device_runtime.cpp   :634 : 2794693938d us:   hipGetDevice ( 0x7ad5f93fc5ec )
:3:hip_device_runtime.cpp   :642 : 2794693939d us:  hipGetDevice: Returned hipSuccess :
:3:hip_device_runtime.cpp   :634 : 2794693940d us:   hipGetDevice ( 0x7ad5f93fc5ec )
:3:hip_device_runtime.cpp   :642 : 2794693941d us:  hipGetDevice: Returned hipSuccess :
:3:hip_memory.cpp           :703 : 2794693947d us:   hipMalloc ( 0x7ad5f93fc648, 6157230336 )
:3:rocdevice.cpp            :2425: 2794695136d us:  Device=0xf5c1000, freeMem_ = 0x390001f00
:3:hip_memory.cpp           :705 : 2794695144d us:  hipMalloc: Returned hipSuccess : 0x7ad14fa00000: duration: 1197d us
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors:        ROCm0 model buffer size =  5871.99 MiB
load_tensors:   CPU_Mapped model buffer size =   410.98 MiB
:3:hip_device_runtime.cpp   :634 : 2794695245d us:   hipGetDevice ( 0x7ad5f93fc53c )
:3:hip_device_runtime.cpp   :642 : 2794695248d us:  hipGetDevice: Returned hipSuccess :
:3:hip_memory.cpp           :1572: 2794695262d us:   hipMemcpyAsync ( 0x7ad14fa00000, 0x7ad2d8e74980, 16384, hipMemcpyHostToDevice, stream:0x2 )
:3:rocdevice.cpp            :3060: 2794695275d us:  Number of allocated hardware queues with low priority: 0, with normal priority: 0, with high priority: 0, maximum per priority is: 4
:3:rocdevice.cpp            :3138: 2794703230d us:  Created SWq=0x7ad5f8056000 to map on HWq=0x7ad456a00000 with size 16384 with priority 1, cooperative: 0
:3:rocdevice.cpp            :3231: 2794703239d us:  acquireQueue refCount: 0x7ad456a00000 (1)
:3:devprogram.cpp           :2648: 2794907232d us:  Using Code Object V5.
:3:rocvirtual.hpp           :66  : 2794909318d us:  Host active wait for Signal = (0x7ad5587ff700) for -1 ns
:3:rocvirtual.cpp           :483 : 2794909451d us:  Set Handler: handle(0x7ad5587ff680), timestamp(0x7ad44c524590)
:3:rocvirtual.hpp           :66  : 2794909453d us:  Host active wait for Signal = (0x7ad5587ff680) for -1 ns
:3:hip_memory.cpp           :1573: 2794909478d us:  hipMemcpyAsync: Returned hipSuccess : : duration: 214216d us
:3:hip_stream.cpp           :371 : 2794909487d us:   hipStreamSynchronize ( stream:0x2 )
:3:hip_stream.cpp           :372 : 2794909493d us:  hipStreamSynchronize: Returned hipSuccess :
:3:rocvirtual.cpp           :226 : 2794909494d us:  Handler: value(0), timestamp(0x7ad44c78cc90), handle(0x7ad5587ff680)
:3:hip_device_runtime.cpp   :634 : 2794909514d us:   hipGetDevice ( 0x7ad5f93fc53c )
:3:hip_device_runtime.cpp   :642 : 2794909519d us:  hipGetDevice: Returned hipSuccess :
:3:hip_memory.cpp           :1572: 2794909522d us:   hipMemcpyAsync ( 0x7ad14fa04000, 0x7ad2bf37a980, 430940160, hipMemcpyHostToDevice, stream:0x2 )
:3:rocvirtual.hpp           :66  : 2794912292d us:  Host active wait for Signal = (0x7ad5587ff600) for -1 ns
:3:rocvirtual.hpp           :66  : 2794919710d us:  Host active wait for Signal = (0x7ad5587ff580) for -1 ns
:3:rocvirtual.hpp           :66  : 2794926984d us:  Host active wait for Signal = (0x7ad5587ff500) for -1 ns
:3:rocvirtual.hpp           :66  : 2794933196d us:  Host active wait for Signal = (0x7ad5587ff480) for -1 ns
:3:rocvirtual.cpp           :483 : 2794934406d us:  Set Handler: handle(0x7ad5587ff400), timestamp(0x7ad44d0f6080)
:3:rocvirtual.hpp           :66  : 2794934409d us:  Host active wait for Signal = (0x7ad5587ff400) for -1 ns
:3:hip_memory.cpp           :1573: 2794934427d us:  hipMemcpyAsync: Returned hipSuccess : : duration: 24905d us
:3:hip_stream.cpp           :371 : 2794934431d us:   hipStreamSynchronize ( stream:0x2 )
:3:hip_stream.cpp           :372 : 2794934433d us:  hipStreamSynchronize: Returned hipSuccess :
:3:rocvirtual.cpp           :226 : 2794934435d us:  Handler: value(0), timestamp(0x7ad44d16a230), handle(0x7ad5587ff400)
:3:hip_device_runtime.cpp   :634 : 2794934444d us:   hipGetDevice ( 0x7ad5f93fc53c )
:3:hip_device_runtime.cpp   :642 : 2794934445d us:  hipGetDevice: Returned hipSuccess :
:3:hip_memory.cpp           :1572: 2794934448d us:   hipMemcpyAsync ( 0x7ad1694fe000, 0x7ad2f2cbaa80, 16384, hipMemcpyHostToDevice, stream:0x2 )
:3:rocvirtual.hpp           :66  : 2794934457d us:  Host active wait for Signal = (0x7ad5587ff380) for -1 ns
:3:rocvirtual.cpp           :483 : 2794934469d us:  Set Handler: handle(0x7ad5587ff300), timestamp(0x7ad44d0f62e0)
:3:rocvirtual.hpp           :66  : 2794934472d us:  Host active wait for Signal = (0x7ad5587ff300) for -1 ns
:3:hip_memory.cpp           :1573: 2794934486d us:  hipMemcpyAsync: Returned hipSuccess : : duration: 38d us
:3:hip_stream.cpp           :371 : 2794934487d us:   hipStreamSynchronize ( stream:0x2 )
:3:hip_stream.cpp           :372 : 2794934489d us:  hipStreamSynchronize: Returned hipSuccess :
:3:hip_device_runtime.cpp   :634 : 2794934491d us:   hipGetDevice ( 0x7ad5f93fc53c )
:3:hip_device_runtime.cpp   :642 : 2794934492d us:  hipGetDevice: Returned hipSuccess :
:3:hip_memory.cpp           :1572: 2794934494d us:   hipMemcpyAsync ( 0x7ad169502000, 0x7ad2f39dea80, 13762560, hipMemcpyHostToDevice, stream:0x2 )
:3:rocvirtual.cpp           :226 : 2794934536d us:  Handler: value(0), timestamp(0x7ad44d16a230), handle(0x7ad5587ff300)
:3:rocvirtual.hpp           :66  : 2794934809d us:  Host active wait for Signal = (0x7ad5587ff280) for -1 ns
:3:rocvirtual.cpp           :483 : 2794935414d us:  Set Handler: handle(0x7ad5587ff200), timestamp(0x7ad44d0f78f0)
:3:rocvirtual.hpp           :66  : 2794935416d us:  Host active wait for Signal = (0x7ad5587ff200) for -1 ns
:3:hip_memory.cpp           :1573: 2794935433d us:  hipMemcpyAsync: Returned hipSuccess : : duration: 939d us
:3:hip_stream.cpp           :371 : 2794935436d us:   hipStreamSynchronize ( stream:0x2 )
:3:hip_stream.cpp           :372 : 2794935437d us:  hipStreamSynchronize: Returned hipSuccess :
:3:hip_device_runtime.cpp   :634 : 2794935442d us:   hipGetDevice ( 0x7ad5f93fc53c )
:3:hip_device_runtime.cpp   :642 : 2794935443d us:  hipGetDevice: Returned hipSuccess :
:3:hip_memory.cpp           :1572: 2794935445d us:   hipMemcpyAsync ( 0x7ad16a222000, 0x7ad2f2972a80, 3440640, hipMemcpyHostToDevice, stream:0x2 )
:3:rocvirtual.cpp           :226 : 2794935446d us:  Handler: value(0), timestamp(0x7ad44c710e50), handle(0x7ad5587ff200)
:3:rocvirtual.hpp           :66  : 2794935685d us:  Host active wait for Signal = (0x7ad5587ff180) for -1 ns
:3:rocvirtual.cpp           :483 : 2794935843d us:  Set Handler: handle(0x7ad5587ff100), timestamp(0x7ad44d0f7b10)
:3:rocvirtual.hpp           :66  : 2794935845d us:  Host active wait for Signal = (0x7ad5587ff100) for -1 ns
:3:hip_memory.cpp           :1573: 2794935859d us:  hipMemcpyAsync: Returned hipSuccess : : duration: 414d us
:3:hip_stream.cpp           :371 : 2794935860d us:   hipStreamSynchronize ( stream:0x2 )
:3:hip_stream.cpp           :372 : 2794935862d us:  hipStreamSynchronize: Returned hipSuccess :
:3:hip_device_runtime.cpp   :634 : 2794935866d us:   hipGetDevice ( 0x7ad5f93fc53c )
:3:hip_device_runtime.cpp   :642 : 2794935867d us:  hipGetDevice: Returned hipSuccess :
:3:hip_memory.cpp           :1572: 2794935869d us:   hipMemcpyAsync ( 0x7ad16a56a000, 0x7ad2f46fea80, 3440640, hipMemcpyHostToDevice, stream:0x2 )
:3:rocvirtual.cpp           :226 : 2794935878d us:  Handler: value(0), timestamp(0x7ad44c710f20), handle(0x7ad5587ff100)
:3:rocvirtual.hpp           :66  : 2794935925d us:  Host active wait for Signal = (0x7ad5587ff080) for -1 ns
:3:rocvirtual.cpp           :483 : 2794936082d us:  Set Handler: handle(0x7ad5587ff000), timestamp(0x7ad44c53a770)
:3:rocvirtual.hpp           :66  : 2794936084d us:  Host active wait for Signal = (0x7ad5587ff000) for -1 ns
:3:hip_memory.cpp           :1573: 2794936098d us:  hipMemcpyAsync: Returned hipSuccess : : duration: 229d us
:3:hip_stream.cpp           :371 : 2794936100d us:   hipStreamSynchronize ( stream:0x2 )
:3:hip_stream.cpp           :372 : 2794936101d us:  hipStreamSynchronize: Returned hipSuccess :
:3:hip_device_runtime.cpp   :634 : 2794936105d us:   hipGetDevice ( 0x7ad5f93fc53c )
:3:hip_device_runtime.cpp   :642 : 2794936106d us:  hipGetDevice: Returned hipSuccess :
:3:hip_memory.cpp           :1572: 2794936108d us:   hipMemcpyAsync ( 0x7ad16a8b2000, 0x7ad2f2cbea80, 13762560, hipMemcpyHostToDevice, stream:0x2 )
:3:rocvirtual.cpp           :226 : 2794936111d us:  Handler: value(0), timestamp(0x7ad44c710ff0), handle(0x7ad5587ff000)
:3:rocvirtual.hpp           :66  : 2794936458d us:  Host active wait for Signal = (0x7ad5587fef80) for -1 ns
:3:rocvirtual.cpp           :483 : 2794937051d us:  Set Handler: handle(0x7ad5587fef00), timestamp(0x7ad44c53a9d0)
:3:rocvirtual.hpp           :66  : 2794937053d us:  Host active wait for Signal = (0x7ad5587fef00) for -1 ns
:3:hip_memory.cpp           :1573: 2794937068d us:  hipMemcpyAsync: Returned hipSuccess : : duration: 960d us
:3:hip_stream.cpp           :371 : 2794937070d us:   hipStreamSynchronize ( stream:0x2 )
:3:hip_stream.cpp           :372 : 2794937072d us:  hipStreamSynchronize: Returned hipSuccess :
:3:hip_device_runtime.cpp   :634 : 2794937077d us:   hipGetDevice ( 0x7ad5f93fc53c )
:3:hip_device_runtime.cpp   :642 : 2794937078d us:  hipGetDevice: Returned hipSuccess :
:3:hip_memory.cpp           :1572: 2794937080d us:   hipMemcpyAsync ( 0x7ad16b5d2000, 0x7ad2fa626a80, 16384, hipMemcpyHostToDevice, stream:0x2 )
:3:rocvirtual.hpp           :66  : 2794937086d us:  Host active wait for Signal = (0x7ad5587fee80) for -1 ns
/usr/include/c++/14.2.1/bits/stl_vector.h:1149: std::vector<_Tp, _Alloc>::const_reference std::vector<_Tp, _Alloc>::operator[](size_type) const [with _Tp = amd::roc::ProfilingSignal*; _Alloc = std::allocator<amd::roc::ProfilingSignal*>; const_reference = amd::roc::ProfilingSignal* const&; size_type = long unsigned int]: Assertion '__n < this->size()' failed.
:3:rocvirtual.cpp           :483 : 2794937096d us:  Set Handler: handle(0x7ad5587fee00), timestamp(0x7ad44d0631a0)
:3:rocvirtual.hpp           :66  : 2794937097d us:  Host active wait for Signal = (0x7ad5587fee00) for -1 ns
:3:hip_memory.cpp           :1573: 2794937110d us:  hipMemcpyAsync: Returned hipSuccess : : duration: 30d us
:3:hip_stream.cpp           :371 : 2794937112d us:   hipStreamSynchronize ( stream:0x2 )
:3:hip_stream.cpp           :372 : 2794937113d us:  hipStreamSynchronize: Returned hipSuccess :
:3:hip_device_runtime.cpp   :634 : 2794937116d us:   hipGetDevice ( 0x7ad5f93fc53c )
:3:hip_device_runtime.cpp   :642 : 2794937117d us:  hipGetDevice: Returned hipSuccess :
:3:hip_memory.cpp           :1572: 2794937118d us:   hipMemcpyAsync ( 0x7ad16b5d6000, 0x7ad2d8e78980, 256, hipMemcpyHostToDevice, stream:0x2 )
time=2025-03-05T11:47:24.809+01:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server not responding"
time=2025-03-05T11:47:26.057+01:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-03-05T11:47:26.308+01:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped)"
time=2025-03-05T11:47:26.308+01:00 level=DEBUG source=sched.go:459 msg="triggering expiration for failed load" model=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9
time=2025-03-05T11:47:26.308+01:00 level=DEBUG source=sched.go:361 msg="runner expired event received" modelPath=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9
time=2025-03-05T11:47:26.308+01:00 level=DEBUG source=sched.go:376 msg="got lock to unload" modelPath=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9
time=2025-03-05T11:47:26.308+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="26.2 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="26.2 GiB" now.free_swap="0 B"
time=2025-03-05T11:47:26.308+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-15fd692de3427661 name=1002:744c before="18.2 GiB" now="18.2 GiB"
[GIN] 2025/03/05 - 11:47:26 | 500 |  3.904392486s |       127.0.0.1 | POST     "/api/generate"
<!-- gh-comment-id:2700553801 -->

@SteelPh0enix commented on GitHub (Mar 5, 2025):

I have symlinked `/usr/local/lib` (where the `ollama/` subdirectory containing the back-end libraries resides) to `gopath/lib`, and it did try to use ROCm this time, but it crashes. I guess this solves my primary issue, but something is still wrong there. Is this a ROCm compatibility issue? Here's the relevant part of the log:

```
[...]
time=2025-03-05T11:47:19.926+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-05T11:47:19.947+01:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-03-05T11:47:19.948+01:00 level=DEBUG source=amd_linux.go:318 msg="amdgpu memory" gpu=0 total="20.0 GiB"
time=2025-03-05T11:47:19.948+01:00 level=DEBUG source=amd_linux.go:319 msg="amdgpu memory" gpu=0 available="18.2 GiB"
time=2025-03-05T11:47:19.948+01:00 level=DEBUG source=amd_common.go:44 msg="detected ROCM next to ollama executable /home/steelph0enix/gopath/lib/ollama/rocm"
time=2025-03-05T11:47:19.948+01:00 level=INFO source=amd_linux.go:389 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=11.0.0
time=2025-03-05T11:47:19.948+01:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-15fd692de3427661 library=rocm variant="" compute=gfx1100 driver=0.0 name=1002:744c total="20.0 GiB" available="18.2 GiB"
[...]
time=2025-03-05T11:47:22.442+01:00 level=INFO source=server.go:138 msg=offload library=rocm layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[18.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="12.6 GiB" memory.required.partial="12.6 GiB" memory.required.kv="4.0 GiB" memory.required.allocations="[12.6 GiB]" memory.weights.total="9.3 GiB" memory.weights.repeating="8.9 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="2.1 GiB" memory.graph.partial="2.2 GiB"
time=2025-03-05T11:47:22.601+01:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/home/steelph0enix/gopath/bin/ollama runner --model /home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 --ctx-size 32768 --batch-size 512 --n-gpu-layers 33 --verbose --threads 12 --flash-attn --parallel 1 --port 34555"
time=2025-03-05T11:47:22.608+01:00 level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/home/steelph0enix/gopath/lib/ollama/rocm
[...]
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon RX 7900 XT, gfx1100 (0x1100), VMM: no, Wave Size: 32
load_backend: loaded ROCm backend from /home/steelph0enix/gopath/lib/ollama/rocm/libggml-hip.so
[...]
llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon RX 7900 XT) - 20388 MiB free
[...]
```

The model-load portion of this run, up to the `stl_vector.h` assertion in `amd::roc::ProfilingSignal` and the `signal: aborted (core dumped)` crash, is the log quoted above.
:3:hip_memory.cpp :1572: 2794934494d us: hipMemcpyAsync ( 0x7ad169502000, 0x7ad2f39dea80, 13762560, hipMemcpyHostToDevice, stream:0x2 ) :3:rocvirtual.cpp :226 : 2794934536d us: Handler: value(0), timestamp(0x7ad44d16a230), handle(0x7ad5587ff300) :3:rocvirtual.hpp :66 : 2794934809d us: Host active wait for Signal = (0x7ad5587ff280) for -1 ns :3:rocvirtual.cpp :483 : 2794935414d us: Set Handler: handle(0x7ad5587ff200), timestamp(0x7ad44d0f78f0) :3:rocvirtual.hpp :66 : 2794935416d us: Host active wait for Signal = (0x7ad5587ff200) for -1 ns :3:hip_memory.cpp :1573: 2794935433d us: hipMemcpyAsync: Returned hipSuccess : : duration: 939d us :3:hip_stream.cpp :371 : 2794935436d us: hipStreamSynchronize ( stream:0x2 ) :3:hip_stream.cpp :372 : 2794935437d us: hipStreamSynchronize: Returned hipSuccess : :3:hip_device_runtime.cpp :634 : 2794935442d us: hipGetDevice ( 0x7ad5f93fc53c ) :3:hip_device_runtime.cpp :642 : 2794935443d us: hipGetDevice: Returned hipSuccess : :3:hip_memory.cpp :1572: 2794935445d us: hipMemcpyAsync ( 0x7ad16a222000, 0x7ad2f2972a80, 3440640, hipMemcpyHostToDevice, stream:0x2 ) :3:rocvirtual.cpp :226 : 2794935446d us: Handler: value(0), timestamp(0x7ad44c710e50), handle(0x7ad5587ff200) :3:rocvirtual.hpp :66 : 2794935685d us: Host active wait for Signal = (0x7ad5587ff180) for -1 ns :3:rocvirtual.cpp :483 : 2794935843d us: Set Handler: handle(0x7ad5587ff100), timestamp(0x7ad44d0f7b10) :3:rocvirtual.hpp :66 : 2794935845d us: Host active wait for Signal = (0x7ad5587ff100) for -1 ns :3:hip_memory.cpp :1573: 2794935859d us: hipMemcpyAsync: Returned hipSuccess : : duration: 414d us :3:hip_stream.cpp :371 : 2794935860d us: hipStreamSynchronize ( stream:0x2 ) :3:hip_stream.cpp :372 : 2794935862d us: hipStreamSynchronize: Returned hipSuccess : :3:hip_device_runtime.cpp :634 : 2794935866d us: hipGetDevice ( 0x7ad5f93fc53c ) :3:hip_device_runtime.cpp :642 : 2794935867d us: hipGetDevice: Returned hipSuccess : :3:hip_memory.cpp :1572: 2794935869d us: hipMemcpyAsync ( 0x7ad16a56a000, 0x7ad2f46fea80, 3440640, hipMemcpyHostToDevice, stream:0x2 ) :3:rocvirtual.cpp :226 : 2794935878d us: Handler: value(0), timestamp(0x7ad44c710f20), handle(0x7ad5587ff100) :3:rocvirtual.hpp :66 : 2794935925d us: Host active wait for Signal = (0x7ad5587ff080) for -1 ns :3:rocvirtual.cpp :483 : 2794936082d us: Set Handler: handle(0x7ad5587ff000), timestamp(0x7ad44c53a770) :3:rocvirtual.hpp :66 : 2794936084d us: Host active wait for Signal = (0x7ad5587ff000) for -1 ns :3:hip_memory.cpp :1573: 2794936098d us: hipMemcpyAsync: Returned hipSuccess : : duration: 229d us :3:hip_stream.cpp :371 : 2794936100d us: hipStreamSynchronize ( stream:0x2 ) :3:hip_stream.cpp :372 : 2794936101d us: hipStreamSynchronize: Returned hipSuccess : :3:hip_device_runtime.cpp :634 : 2794936105d us: hipGetDevice ( 0x7ad5f93fc53c ) :3:hip_device_runtime.cpp :642 : 2794936106d us: hipGetDevice: Returned hipSuccess : :3:hip_memory.cpp :1572: 2794936108d us: hipMemcpyAsync ( 0x7ad16a8b2000, 0x7ad2f2cbea80, 13762560, hipMemcpyHostToDevice, stream:0x2 ) :3:rocvirtual.cpp :226 : 2794936111d us: Handler: value(0), timestamp(0x7ad44c710ff0), handle(0x7ad5587ff000) :3:rocvirtual.hpp :66 : 2794936458d us: Host active wait for Signal = (0x7ad5587fef80) for -1 ns :3:rocvirtual.cpp :483 : 2794937051d us: Set Handler: handle(0x7ad5587fef00), timestamp(0x7ad44c53a9d0) :3:rocvirtual.hpp :66 : 2794937053d us: Host active wait for Signal = (0x7ad5587fef00) for -1 ns :3:hip_memory.cpp :1573: 2794937068d us: hipMemcpyAsync: Returned hipSuccess : : duration: 960d us 
:3:hip_stream.cpp :371 : 2794937070d us: hipStreamSynchronize ( stream:0x2 ) :3:hip_stream.cpp :372 : 2794937072d us: hipStreamSynchronize: Returned hipSuccess : :3:hip_device_runtime.cpp :634 : 2794937077d us: hipGetDevice ( 0x7ad5f93fc53c ) :3:hip_device_runtime.cpp :642 : 2794937078d us: hipGetDevice: Returned hipSuccess : :3:hip_memory.cpp :1572: 2794937080d us: hipMemcpyAsync ( 0x7ad16b5d2000, 0x7ad2fa626a80, 16384, hipMemcpyHostToDevice, stream:0x2 ) :3:rocvirtual.hpp :66 : 2794937086d us: Host active wait for Signal = (0x7ad5587fee80) for -1 ns /usr/include/c++/14.2.1/bits/stl_vector.h:1149: std::vector<_Tp, _Alloc>::const_reference std::vector<_Tp, _Alloc>::operator[](size_type) const [with _Tp = amd::roc::ProfilingSignal*; _Alloc = std::allocator<amd::roc::ProfilingSignal*>; const_reference = amd::roc::ProfilingSignal* const&; size_type = long unsigned int]: Assertion '__n < this->size()' failed. :3:rocvirtual.cpp :483 : 2794937096d us: Set Handler: handle(0x7ad5587fee00), timestamp(0x7ad44d0631a0) :3:rocvirtual.hpp :66 : 2794937097d us: Host active wait for Signal = (0x7ad5587fee00) for -1 ns :3:hip_memory.cpp :1573: 2794937110d us: hipMemcpyAsync: Returned hipSuccess : : duration: 30d us :3:hip_stream.cpp :371 : 2794937112d us: hipStreamSynchronize ( stream:0x2 ) :3:hip_stream.cpp :372 : 2794937113d us: hipStreamSynchronize: Returned hipSuccess : :3:hip_device_runtime.cpp :634 : 2794937116d us: hipGetDevice ( 0x7ad5f93fc53c ) :3:hip_device_runtime.cpp :642 : 2794937117d us: hipGetDevice: Returned hipSuccess : :3:hip_memory.cpp :1572: 2794937118d us: hipMemcpyAsync ( 0x7ad16b5d6000, 0x7ad2d8e78980, 256, hipMemcpyHostToDevice, stream:0x2 ) time=2025-03-05T11:47:24.809+01:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server not responding" time=2025-03-05T11:47:26.057+01:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error" time=2025-03-05T11:47:26.308+01:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped)" time=2025-03-05T11:47:26.308+01:00 level=DEBUG source=sched.go:459 msg="triggering expiration for failed load" model=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 time=2025-03-05T11:47:26.308+01:00 level=DEBUG source=sched.go:361 msg="runner expired event received" modelPath=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 time=2025-03-05T11:47:26.308+01:00 level=DEBUG source=sched.go:376 msg="got lock to unload" modelPath=/home/steelph0enix/.ollama/models/blobs/sha256-f0d8cbed51de74ed312a645366e24ed0114081b000f1216452f70aea424f7aa9 time=2025-03-05T11:47:26.308+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="26.2 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="26.2 GiB" now.free_swap="0 B" time=2025-03-05T11:47:26.308+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-15fd692de3427661 name=1002:744c before="18.2 GiB" now="18.2 GiB" [GIN] 2025/03/05 - 11:47:26 | 500 | 3.904392486s | 127.0.0.1 | POST "/api/generate"
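For clarity on what actually fails here: the libstdc++ assertion buried in the log above (Assertion '__n < this->size()' failed, raised from std::vector<amd::roc::ProfilingSignal*>::operator[] inside the HIP runtime) aborts the runner in the middle of a hipMemcpyAsync, and that abort is what the scheduler then reports as "llama runner process has terminated: signal: aborted (core dumped)". GPU detection and offload themselves succeed, since all 33/33 layers are placed on ROCm0 before the crash. ProfilingSignal bookkeeping belongs to the runtime's tracing path, so my (unverified) suspicion is that AMD_LOG_LEVEL=3 itself trips the assertion. A quick check of that hypothesis, assuming nothing else in the environment changes:

# re-run without HIP debug tracing; if the runner no longer aborts,
# the assertion lives in ROCm's logging/profiling path, not in the copy itself
unset AMD_LOG_LEVEL
ollama serve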
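If the runner still aborts with tracing off, the core dump should pin down the faulting frame. A sketch, assuming the default systemd-coredump setup that Arch ships:

# list recent core dumps, then open gdb on the newest one
coredumpctl list
coredumpctl gdb        # type bt at the (gdb) prompt for the backtrace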