[GH-ISSUE #10445] ollama ps shows 100% on GPU VRAM, but CPU/RAM is actually being used #53380

Closed
opened 2026-04-29 02:49:13 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @samteezy on GitHub (Apr 28, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10445

What is the issue?

Been running into an intermittent issue with an AMD card that is causing me to lose hair (more quickly).

I have a 16GB AMD 7600 XT in a Ryzen system, running openSUSE Tumbleweed on the 6.14.x kernel. I'm running Ollama in the official Docker image with the :rocm tag.

Sometimes this works fine, but often I find the system is actually using the CPU for inference, not the GPU. I can confirm this from the lower performance and from top showing the CPU being loaded during inference.

The container is set up as follows:

version: '3.9'
services:
    ollama:
        image: ollama/ollama:rocm
        container_name: ollama
        user: "${UID}:${GID}"
        group_add:
            - 488 # group ID from checking ls -lnd /dev/kfd /dev/dri /dev/dri/*
        devices:
         #   - /dev/dri/renderD128:/dev/dri/renderD128 # tried setting only this vs. exposing both devices below; same result either way
            - /dev/kfd:/dev/kfd
            - /dev/dri/:/dev/dri/
        ports:
            - '11434:11434'
        volumes:
            - 'ollama:/root/.ollama'
        environment:
            - OLLAMA_HOST=0.0.0.0
         #   - HSA_OVERRIDE_GFX_VERSION='11.0.2' # tried both with and without this setting
            - OLLAMA_FLASH_ATTENTION=1
            - OLLAMA_KV_CACHE_TYPE=q8_0
volumes:
    ollama:
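
For what it's worth, a quick sanity check I use (assuming the container is named ollama as above) is to confirm the device nodes are visible inside the container and that the render group (488 here) actually applied:

docker exec ollama ls -l /dev/kfd /dev/dri
docker exec ollama id    # group 488 should appear in the groups list

One note on the config: in the list form of environment, I believe the single quotes in HSA_OVERRIDE_GFX_VERSION='11.0.2' become part of the value, so if that line gets re-enabled it should read HSA_OVERRIDE_GFX_VERSION=11.0.2.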

When the container starts, I get this:

time=2025-04-28T18:40:33.151Z level=INFO source=routes.go:1299 msg="Listening on [::]:11434 (version 0.6.6)"

time=2025-04-28T18:40:33.151Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"

time=2025-04-28T18:40:33.153Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"

time=2025-04-28T18:40:33.154Z level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=0 gpu_type=gfx1102

time=2025-04-28T18:40:33.155Z level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=1 total="512.0 MiB"

time=2025-04-28T18:40:33.155Z level=INFO source=types.go:130 msg="inference compute" id=0 library=rocm variant="" compute=gfx1102 driver=0.0 name=1002:7480 total="16.0 GiB" available="16.0 GiB"
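
Side note: the amdgpu version-file warning above seems benign with the distro's in-kernel driver; as far as I can tell, /sys/module/amdgpu/version only exists with AMD's out-of-tree (DKMS) driver. A quick check on the host:

cat /sys/module/amdgpu/version 2>/dev/null || echo "in-kernel amdgpu (no version file)"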

When processing a query (in this example, using deepcoder 14b at Q4), I see in the logs that it intends to fit it all in the GPU:

time=2025-04-28T18:55:46.083Z level=INFO source=sched.go:722 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-a814bd1f5db7a8d0f387769dd58462e75c8f19ce830b57be6fdf7de3084302e8 gpu=0 parallel=4 available=17135964160 required="11.4 GiB"

...

time=2025-04-28T18:55:46.084Z level=INFO source=server.go:138 msg=offload library=rocm layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[16.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.4 GiB" memory.required.partial="11.4 GiB" memory.required.kv="1.5 GiB" memory.required.allocations="[11.4 GiB]" memory.weights.total="8.0 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="1.3 GiB" memory.graph.partial="1.6 GiB"

And checking on the loaded model:

user@machine:> docker exec ollama ollama ps
NAME                ID              SIZE     PROCESSOR    UNTIL              
deepcoder:latest    12bdda054d23    10 GB    100% GPU     3 minutes from now  
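
Since ollama ps claims 100% GPU, I've also been cross-checking real VRAM usage on the host. A rough check via sysfs (the card index may differ on other machines), plus rocm-smi if the host has the ROCm tools installed:

cat /sys/class/drm/card0/device/mem_info_vram_used   # bytes; should jump by roughly 10 GB if the model is really resident
rocm-smi --showmeminfo vram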

But then later, when attempting inference, it reports that no ROCm-capable device is detected:

time=2025-04-28T18:56:21.992Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-a814bd1f5db7a8d0f387769dd58462e75c8f19ce830b57be6fdf7de3084302e8 --ctx-size 16384 --batch-size 512 --n-gpu-layers 49 --threads 6 --flash-attn --kv-cache-type q8_0 --mlock --parallel 4 --port 42207"

time=2025-04-28T18:56:21.992Z level=INFO source=sched.go:451 msg="loaded runners" count=1

time=2025-04-28T18:56:21.992Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"

time=2025-04-28T18:56:21.992Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"

time=2025-04-28T18:56:22.001Z level=INFO source=runner.go:853 msg="starting go runner"

ggml_cuda_init: failed to initialize ROCm: no ROCm-capable device is detected

load_backend: loaded ROCm backend from /usr/lib/ollama/rocm/libggml-hip.so

load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so

time=2025-04-28T18:56:22.031Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)

time=2025-04-28T18:56:22.032Z level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:42207"

And my performance reflects this: CPU usage maxes out, with generation under 10 tokens/sec.
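
An easy way to catch the silent fallback without watching top is to grep the container logs for the backend init lines; on a bad load, the ggml_cuda_init failure shows up even though the ROCm backend library itself loads:

docker logs ollama 2>&1 | grep -E 'ggml_cuda_init|load_backend'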

Relevant log output

2025/04/28 19:02:11 routes.go:1232: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"

time=2025-04-28T19:02:11.458Z level=INFO source=images.go:458 msg="total blobs: 5"

time=2025-04-28T19:02:11.458Z level=INFO source=images.go:465 msg="total unused blobs removed: 0"

time=2025-04-28T19:02:11.458Z level=INFO source=routes.go:1299 msg="Listening on [::]:11434 (version 0.6.6)"

time=2025-04-28T19:02:11.458Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"

time=2025-04-28T19:02:11.459Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"

time=2025-04-28T19:02:11.461Z level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=0 gpu_type=gfx1102

time=2025-04-28T19:02:11.462Z level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=1 total="512.0 MiB"

time=2025-04-28T19:02:11.462Z level=INFO source=types.go:130 msg="inference compute" id=0 library=rocm variant="" compute=gfx1102 driver=0.0 name=1002:7480 total="16.0 GiB" available="16.0 GiB"

[GIN] 2025/04/28 - 19:02:14 | 200 |       159.3µs |      10.10.1.10 | GET      "/api/ps"

[GIN] 2025/04/28 - 19:02:19 | 200 |       20.42µs |      10.10.1.10 | GET      "/api/ps"

[GIN] 2025/04/28 - 19:02:24 | 200 |       35.33µs |      10.10.1.10 | GET      "/api/ps"

[GIN] 2025/04/28 - 19:02:29 | 200 |       28.44µs |      10.10.1.10 | GET      "/api/ps"

[GIN] 2025/04/28 - 19:02:34 | 200 |        16.3µs |      10.10.1.10 | GET      "/api/ps"

[GIN] 2025/04/28 - 19:02:39 | 200 |       12.91µs |      10.10.1.10 | GET      "/api/ps"

[GIN] 2025/04/28 - 19:02:44 | 200 |      16.119µs |      10.10.1.10 | GET      "/api/ps"

[GIN] 2025/04/28 - 19:02:49 | 200 |       18.06µs |      10.10.1.10 | GET      "/api/ps"

[GIN] 2025/04/28 - 19:02:54 | 200 |       28.31µs |      10.10.1.10 | GET      "/api/ps"

[GIN] 2025/04/28 - 19:02:59 | 200 |        31.7µs |      10.10.1.10 | GET      "/api/ps"

[GIN] 2025/04/28 - 19:03:04 | 200 |       26.73µs |      10.10.1.10 | GET      "/api/ps"

[GIN] 2025/04/28 - 19:03:09 | 200 |       21.97µs |      10.10.1.10 | GET      "/api/ps"

[GIN] 2025/04/28 - 19:03:14 | 200 |       20.91µs |      10.10.1.10 | GET      "/api/ps"

time=2025-04-28T19:03:15.314Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32

time=2025-04-28T19:03:15.327Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32

time=2025-04-28T19:03:15.339Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32

time=2025-04-28T19:03:15.340Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.vision.block_count default=0

time=2025-04-28T19:03:15.340Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.key_length default=128

time=2025-04-28T19:03:15.340Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.value_length default=128

time=2025-04-28T19:03:15.340Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.key_length default=128

time=2025-04-28T19:03:15.340Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.value_length default=128

time=2025-04-28T19:03:15.340Z level=INFO source=sched.go:722 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-a814bd1f5db7a8d0f387769dd58462e75c8f19ce830b57be6fdf7de3084302e8 gpu=0 parallel=4 available=17135964160 required="10.0 GiB"

time=2025-04-28T19:03:15.340Z level=INFO source=server.go:105 msg="system memory" total="22.8 GiB" free="21.7 GiB" free_swap="2.0 GiB"

time=2025-04-28T19:03:15.340Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.vision.block_count default=0

time=2025-04-28T19:03:15.340Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.key_length default=128

time=2025-04-28T19:03:15.340Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.value_length default=128

time=2025-04-28T19:03:15.340Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.key_length default=128

time=2025-04-28T19:03:15.340Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.value_length default=128

time=2025-04-28T19:03:15.341Z level=INFO source=server.go:138 msg=offload library=rocm layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[16.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.0 GiB" memory.required.partial="10.0 GiB" memory.required.kv="768.0 MiB" memory.required.allocations="[10.0 GiB]" memory.weights.total="8.0 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="676.0 MiB" memory.graph.partial="916.1 MiB"

time=2025-04-28T19:03:15.341Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.key_length default=128

time=2025-04-28T19:03:15.341Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.value_length default=128

time=2025-04-28T19:03:15.341Z level=INFO source=server.go:185 msg="enabling flash attention"

llama_model_loader: loaded meta data with 45 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-a814bd1f5db7a8d0f387769dd58462e75c8f19ce830b57be6fdf7de3084302e8 (version GGUF V3 (latest))

llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.

llama_model_loader: - kv   0:                       general.architecture str              = qwen2

llama_model_loader: - kv   1:                               general.type str              = model

llama_model_loader: - kv   2:                               general.name str              = DeepCoder 14B Preview

llama_model_loader: - kv   3:                       general.organization str              = Agentica Org

llama_model_loader: - kv   4:                           general.finetune str              = Preview

llama_model_loader: - kv   5:                           general.basename str              = DeepCoder

llama_model_loader: - kv   6:                         general.size_label str              = 14B

llama_model_loader: - kv   7:                            general.license str              = mit

llama_model_loader: - kv   8:                   general.base_model.count u32              = 1

llama_model_loader: - kv   9:                  general.base_model.0.name str              = DeepSeek R1 Distill Qwen 14B

llama_model_loader: - kv  10:          general.base_model.0.organization str              = Deepseek Ai

llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/deepseek-ai/De...

llama_model_loader: - kv  12:                      general.dataset.count u32              = 3

llama_model_loader: - kv  13:                     general.dataset.0.name str              = Verifiable Coding Problems

llama_model_loader: - kv  14:             general.dataset.0.organization str              = PrimeIntellect

llama_model_loader: - kv  15:                 general.dataset.0.repo_url str              = https://huggingface.co/PrimeIntellect...

llama_model_loader: - kv  16:                     general.dataset.1.name str              = TACO Verified

llama_model_loader: - kv  17:             general.dataset.1.organization str              = Likaixin

llama_model_loader: - kv  18:                 general.dataset.1.repo_url str              = https://huggingface.co/likaixin/TACO-...

llama_model_loader: - kv  19:                     general.dataset.2.name str              = Code_Generation_Lite

llama_model_loader: - kv  20:             general.dataset.2.organization str              = Livecodebench

llama_model_loader: - kv  21:                 general.dataset.2.repo_url str              = https://huggingface.co/livecodebench/...

llama_model_loader: - kv  22:                               general.tags arr[str,1]       = ["text-generation"]

llama_model_loader: - kv  23:                          general.languages arr[str,1]       = ["en"]

llama_model_loader: - kv  24:                          qwen2.block_count u32              = 48

llama_model_loader: - kv  25:                       qwen2.context_length u32              = 131072

llama_model_loader: - kv  26:                     qwen2.embedding_length u32              = 5120

llama_model_loader: - kv  27:                  qwen2.feed_forward_length u32              = 13824

llama_model_loader: - kv  28:                 qwen2.attention.head_count u32              = 40

llama_model_loader: - kv  29:              qwen2.attention.head_count_kv u32              = 8

llama_model_loader: - kv  30:                       qwen2.rope.freq_base f32              = 1000000.000000

llama_model_loader: - kv  31:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010

llama_model_loader: - kv  32:                       tokenizer.ggml.model str              = gpt2

llama_model_loader: - kv  33:                         tokenizer.ggml.pre str              = deepseek-r1-qwen

llama_model_loader: - kv  34:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...

llama_model_loader: - kv  35:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...

llama_model_loader: - kv  36:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...

llama_model_loader: - kv  37:                tokenizer.ggml.bos_token_id u32              = 151646

llama_model_loader: - kv  38:                tokenizer.ggml.eos_token_id u32              = 151643

llama_model_loader: - kv  39:            tokenizer.ggml.padding_token_id u32              = 151643

llama_model_loader: - kv  40:               tokenizer.ggml.add_bos_token bool             = true

llama_model_loader: - kv  41:               tokenizer.ggml.add_eos_token bool             = false

llama_model_loader: - kv  42:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...

llama_model_loader: - kv  43:               general.quantization_version u32              = 2

llama_model_loader: - kv  44:                          general.file_type u32              = 15

llama_model_loader: - type  f32:  241 tensors

llama_model_loader: - type q4_K:  289 tensors

llama_model_loader: - type q6_K:   49 tensors

print_info: file format = GGUF V3 (latest)

print_info: file type   = Q4_K - Medium

print_info: file size   = 8.37 GiB (4.87 BPW) 

load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect

load: special tokens cache size = 22

load: token to piece cache size = 0.9310 MB

print_info: arch             = qwen2

print_info: vocab_only       = 1

print_info: model type       = ?B

print_info: model params     = 14.77 B

print_info: general.name     = DeepCoder 14B Preview

print_info: vocab type       = BPE

print_info: n_vocab          = 152064

print_info: n_merges         = 151387

print_info: BOS token        = 151646 '<|begin▁of▁sentence|>'

print_info: EOS token        = 151643 '<|end▁of▁sentence|>'

print_info: EOT token        = 151643 '<|end▁of▁sentence|>'

print_info: PAD token        = 151643 '<|end▁of▁sentence|>'

print_info: LF token         = 198 'Ċ'

print_info: FIM PRE token    = 151659 '<|fim_prefix|>'

print_info: FIM SUF token    = 151661 '<|fim_suffix|>'

print_info: FIM MID token    = 151660 '<|fim_middle|>'

print_info: FIM PAD token    = 151662 '<|fim_pad|>'

print_info: FIM REP token    = 151663 '<|repo_name|>'

print_info: FIM SEP token    = 151664 '<|file_sep|>'

print_info: EOG token        = 151643 '<|end▁of▁sentence|>'

print_info: EOG token        = 151662 '<|fim_pad|>'

print_info: EOG token        = 151663 '<|repo_name|>'

print_info: EOG token        = 151664 '<|file_sep|>'

print_info: max token length = 256

llama_model_load: vocab only - skipping tensors

time=2025-04-28T19:03:15.507Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-a814bd1f5db7a8d0f387769dd58462e75c8f19ce830b57be6fdf7de3084302e8 --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 6 --flash-attn --kv-cache-type q8_0 --mlock --parallel 4 --port 44005"

time=2025-04-28T19:03:15.507Z level=INFO source=sched.go:451 msg="loaded runners" count=1

time=2025-04-28T19:03:15.507Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"

time=2025-04-28T19:03:15.507Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"

time=2025-04-28T19:03:15.515Z level=INFO source=runner.go:853 msg="starting go runner"

ggml_cuda_init: failed to initialize ROCm: no ROCm-capable device is detected

load_backend: loaded ROCm backend from /usr/lib/ollama/rocm/libggml-hip.so

load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so

time=2025-04-28T19:03:15.546Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)

time=2025-04-28T19:03:15.546Z level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:44005"

llama_model_loader: loaded meta data with 45 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-a814bd1f5db7a8d0f387769dd58462e75c8f19ce830b57be6fdf7de3084302e8 (version GGUF V3 (latest))

llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.

llama_model_loader: - kv   0:                       general.architecture str              = qwen2

llama_model_loader: - kv   1:                               general.type str              = model

llama_model_loader: - kv   2:                               general.name str              = DeepCoder 14B Preview

llama_model_loader: - kv   3:                       general.organization str              = Agentica Org

llama_model_loader: - kv   4:                           general.finetune str              = Preview

llama_model_loader: - kv   5:                           general.basename str              = DeepCoder

llama_model_loader: - kv   6:                         general.size_label str              = 14B

llama_model_loader: - kv   7:                            general.license str              = mit

llama_model_loader: - kv   8:                   general.base_model.count u32              = 1

llama_model_loader: - kv   9:                  general.base_model.0.name str              = DeepSeek R1 Distill Qwen 14B

llama_model_loader: - kv  10:          general.base_model.0.organization str              = Deepseek Ai

llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/deepseek-ai/De...

llama_model_loader: - kv  12:                      general.dataset.count u32              = 3

llama_model_loader: - kv  13:                     general.dataset.0.name str              = Verifiable Coding Problems

llama_model_loader: - kv  14:             general.dataset.0.organization str              = PrimeIntellect

llama_model_loader: - kv  15:                 general.dataset.0.repo_url str              = https://huggingface.co/PrimeIntellect...

llama_model_loader: - kv  16:                     general.dataset.1.name str              = TACO Verified

llama_model_loader: - kv  17:             general.dataset.1.organization str              = Likaixin

llama_model_loader: - kv  18:                 general.dataset.1.repo_url str              = https://huggingface.co/likaixin/TACO-...

llama_model_loader: - kv  19:                     general.dataset.2.name str              = Code_Generation_Lite

llama_model_loader: - kv  20:             general.dataset.2.organization str              = Livecodebench

llama_model_loader: - kv  21:                 general.dataset.2.repo_url str              = https://huggingface.co/livecodebench/...

llama_model_loader: - kv  22:                               general.tags arr[str,1]       = ["text-generation"]

llama_model_loader: - kv  23:                          general.languages arr[str,1]       = ["en"]

llama_model_loader: - kv  24:                          qwen2.block_count u32              = 48

llama_model_loader: - kv  25:                       qwen2.context_length u32              = 131072

llama_model_loader: - kv  26:                     qwen2.embedding_length u32              = 5120

llama_model_loader: - kv  27:                  qwen2.feed_forward_length u32              = 13824

llama_model_loader: - kv  28:                 qwen2.attention.head_count u32              = 40

llama_model_loader: - kv  29:              qwen2.attention.head_count_kv u32              = 8

llama_model_loader: - kv  30:                       qwen2.rope.freq_base f32              = 1000000.000000

llama_model_loader: - kv  31:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010

llama_model_loader: - kv  32:                       tokenizer.ggml.model str              = gpt2

llama_model_loader: - kv  33:                         tokenizer.ggml.pre str              = deepseek-r1-qwen

llama_model_loader: - kv  34:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...

llama_model_loader: - kv  35:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...

llama_model_loader: - kv  36:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...

llama_model_loader: - kv  37:                tokenizer.ggml.bos_token_id u32              = 151646

llama_model_loader: - kv  38:                tokenizer.ggml.eos_token_id u32              = 151643

llama_model_loader: - kv  39:            tokenizer.ggml.padding_token_id u32              = 151643

llama_model_loader: - kv  40:               tokenizer.ggml.add_bos_token bool             = true

llama_model_loader: - kv  41:               tokenizer.ggml.add_eos_token bool             = false

llama_model_loader: - kv  42:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...

llama_model_loader: - kv  43:               general.quantization_version u32              = 2

llama_model_loader: - kv  44:                          general.file_type u32              = 15

llama_model_loader: - type  f32:  241 tensors

llama_model_loader: - type q4_K:  289 tensors

llama_model_loader: - type q6_K:   49 tensors

print_info: file format = GGUF V3 (latest)

print_info: file type   = Q4_K - Medium

print_info: file size   = 8.37 GiB (4.87 BPW) 

load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect

load: special tokens cache size = 22

load: token to piece cache size = 0.9310 MB

print_info: arch             = qwen2

print_info: vocab_only       = 0

print_info: n_ctx_train      = 131072

print_info: n_embd           = 5120

print_info: n_layer          = 48

print_info: n_head           = 40

print_info: n_head_kv        = 8

print_info: n_rot            = 128

print_info: n_swa            = 0

print_info: n_swa_pattern    = 1

print_info: n_embd_head_k    = 128

print_info: n_embd_head_v    = 128

print_info: n_gqa            = 5

print_info: n_embd_k_gqa     = 1024

print_info: n_embd_v_gqa     = 1024

print_info: f_norm_eps       = 0.0e+00

print_info: f_norm_rms_eps   = 1.0e-05

print_info: f_clamp_kqv      = 0.0e+00

print_info: f_max_alibi_bias = 0.0e+00

print_info: f_logit_scale    = 0.0e+00

print_info: f_attn_scale     = 0.0e+00

print_info: n_ff             = 13824

print_info: n_expert         = 0

print_info: n_expert_used    = 0

print_info: causal attn      = 1

print_info: pooling type     = 0

print_info: rope type        = 2

print_info: rope scaling     = linear

print_info: freq_base_train  = 1000000.0

print_info: freq_scale_train = 1

print_info: n_ctx_orig_yarn  = 131072

print_info: rope_finetuned   = unknown

print_info: ssm_d_conv       = 0

print_info: ssm_d_inner      = 0

print_info: ssm_d_state      = 0

print_info: ssm_dt_rank      = 0

print_info: ssm_dt_b_c_rms   = 0

print_info: model type       = 14B

print_info: model params     = 14.77 B

print_info: general.name     = DeepCoder 14B Preview

print_info: vocab type       = BPE

print_info: n_vocab          = 152064

print_info: n_merges         = 151387

print_info: BOS token        = 151646 '<|begin▁of▁sentence|>'

print_info: EOS token        = 151643 '<|end▁of▁sentence|>'

print_info: EOT token        = 151643 '<|end▁of▁sentence|>'

print_info: PAD token        = 151643 '<|end▁of▁sentence|>'

print_info: LF token         = 198 'Ċ'

print_info: FIM PRE token    = 151659 '<|fim_prefix|>'

print_info: FIM SUF token    = 151661 '<|fim_suffix|>'

print_info: FIM MID token    = 151660 '<|fim_middle|>'

print_info: FIM PAD token    = 151662 '<|fim_pad|>'

print_info: FIM REP token    = 151663 '<|repo_name|>'

print_info: FIM SEP token    = 151664 '<|file_sep|>'

print_info: EOG token        = 151643 '<|end▁of▁sentence|>'

print_info: EOG token        = 151662 '<|fim_pad|>'

print_info: EOG token        = 151663 '<|repo_name|>'

print_info: EOG token        = 151664 '<|file_sep|>'

print_info: max token length = 256

load_tensors: loading model tensors, this can take a while... (mmap = true)

time=2025-04-28T19:03:15.958Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server not responding"

time=2025-04-28T19:03:16.210Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"

load_tensors:   CPU_Mapped model buffer size =  8566.04 MiB

warning: failed to mlock 1082605568-byte buffer (after previously locking 0 bytes): Cannot allocate memory

Try increasing RLIMIT_MEMLOCK ('ulimit -l' as root).
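
Aside: the mlock failure above looks separate from the GPU fallback; if it needs addressing, the container's memlock limit can be raised in Compose. A minimal sketch, added under the same ollama service as above:

services:
    ollama:
        ulimits:
            memlock:
                soft: -1
                hard: -1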

llama_context: constructing llama_context

llama_context: n_seq_max     = 4

llama_context: n_ctx         = 8192

llama_context: n_ctx_per_seq = 2048

llama_context: n_batch       = 2048

llama_context: n_ubatch      = 512

llama_context: causal_attn   = 1

llama_context: flash_attn    = 1

llama_context: freq_base     = 1000000.0

llama_context: freq_scale    = 1

llama_context: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized

llama_context:        CPU  output buffer size =     2.40 MiB

init: kv_size = 8192, offload = 1, type_k = 'q8_0', type_v = 'q8_0', n_layer = 48, can_shift = 1

init:        CPU KV buffer size =   816.00 MiB

llama_context: KV self size  =  816.00 MiB, K (q8_0):  408.00 MiB, V (q8_0):  408.00 MiB

llama_context:        CPU compute buffer size =   307.00 MiB

llama_context: graph nodes  = 1591

llama_context: graph splits = 1

time=2025-04-28T19:03:16.461Z level=INFO source=server.go:619 msg="llama runner started in 0.95 seconds"

[GIN] 2025/04/28 - 19:03:19 | 200 |      194.41µs |      10.10.1.10 | GET      "/api/ps"

OS

Docker

GPU

AMD

CPU

AMD

Ollama version

0.6.6

GiteaMirror added the bug label 2026-04-29 02:49:13 -05:00
Author
Owner
@rick-github commented on GitHub (Apr 28, 2025): https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#linux-docker
Reference: github-starred/ollama#53380