[GH-ISSUE #10993] crash on Radeon 8060S Graphics, gfx1151 on windows #33008

Open
opened 2026-04-22 15:06:53 -05:00 by GiteaMirror · 7 comments

Originally created by @FAIpang on GitHub (Jun 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10993

What is the issue?

server-2.log (https://github.com/user-attachments/files/20622674/server-2.log)

system: windows

GPU: amd

Relevant log output

llama_model_load: vocab only - skipping tensors
time=2025-06-05T18:11:09.001+08:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\AI PC\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\AI PC\\.ollama\\models\\blobs\\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c --ctx-size 4096 --batch-size 512 --n-gpu-layers 25 --threads 16 --parallel 1 --port 58515"
time=2025-06-05T18:11:09.005+08:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-05T18:11:09.005+08:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-05T18:11:09.005+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-06-05T18:11:09.051+08:00 level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\AI PC\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: Radeon 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32
load_backend: loaded ROCm backend from C:\Users\AI PC\AppData\Local\Programs\Ollama\lib\ollama\rocm\ggml-hip.dll
time=2025-06-05T18:11:09.121+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.NO_PEER_COPY=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-06-05T18:11:09.122+08:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:58515"
time=2025-06-05T18:11:09.257+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device ROCm0 (Radeon 8060S Graphics) - 49176 MiB free
llama_model_loader: loaded meta data with 33 key-value pairs and 389 tensors from C:\Users\AI PC\.ollama\models\blobs\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = bert
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                         general.size_label str              = 567M
llama_model_loader: - kv   3:                            general.license str              = mit
llama_model_loader: - kv   4:                               general.tags arr[str,4]       = ["sentence-transformers", "feature-ex...
llama_model_loader: - kv   5:                           bert.block_count u32              = 24
llama_model_loader: - kv   6:                        bert.context_length u32              = 8192
llama_model_loader: - kv   7:                      bert.embedding_length u32              = 1024
llama_model_loader: - kv   8:                   bert.feed_forward_length u32              = 4096
llama_model_loader: - kv   9:                  bert.attention.head_count u32              = 16
llama_model_loader: - kv  10:          bert.attention.layer_norm_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                          general.file_type u32              = 1
llama_model_loader: - kv  12:                      bert.attention.causal bool             = false
llama_model_loader: - kv  13:                          bert.pooling_type u32              = 2
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = t5
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,250002]  = ["<s>", "<pad>", "</s>", "<unk>", ","...
llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,250002]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,250002]  = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:            tokenizer.ggml.add_space_prefix bool             = true
llama_model_loader: - kv  20:            tokenizer.ggml.token_type_count u32              = 1
llama_model_loader: - kv  21:    tokenizer.ggml.remove_extra_whitespaces bool             = true
llama_model_loader: - kv  22:        tokenizer.ggml.precompiled_charsmap arr[u8,237539]   = [0, 180, 2, 0, 0, 132, 0, 0, 0, 0, 0,...
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  25:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  26:          tokenizer.ggml.seperator_token_id u32              = 2
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  28:                tokenizer.ggml.cls_token_id u32              = 0
llama_model_loader: - kv  29:               tokenizer.ggml.mask_token_id u32              = 250001
llama_model_loader: - kv  30:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  31:               tokenizer.ggml.add_eos_token bool             = true
llama_model_loader: - kv  32:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  244 tensors
llama_model_loader: - type  f16:  145 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = F16
print_info: file size   = 1.07 GiB (16.25 BPW) 
load: model vocab missing newline token, using special_pad_id instead
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 4
load: token to piece cache size = 2.1668 MB
print_info: arch             = bert
print_info: vocab_only       = 0
print_info: n_ctx_train      = 8192
print_info: n_embd           = 1024
print_info: n_layer          = 24
print_info: n_head           = 16
print_info: n_head_kv        = 16
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 64
print_info: n_embd_head_v    = 64
print_info: n_gqa            = 1
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 1.0e-05
print_info: f_norm_rms_eps   = 0.0e+00
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 4096
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 0
print_info: pooling type     = 2
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 8192
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 335M
print_info: model params     = 566.70 M
print_info: general.name     = n/a
print_info: vocab type       = UGM
print_info: n_vocab          = 250002
print_info: n_merges         = 0
print_info: BOS token        = 0 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 3 '<unk>'
print_info: SEP token        = 2 '</s>'
print_info: PAD token        = 1 '<pad>'
print_info: MASK token       = 250001 '[PAD250000]'
print_info: LF token         = 0 '<s>'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 24 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 25/25 layers to GPU
load_tensors:   CPU_Mapped model buffer size =   520.30 MiB
load_tensors:        ROCm0 model buffer size =   577.22 MiB
time=2025-06-05T18:11:11.510+08:00 level=ERROR source=sched.go:489 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409"
[GIN] 2025/06/05 - 18:11:11 | 500 |    5.4939804s |   192.168.1.243 | POST     "/v1/embeddings"

OS

No response

GPU

AMD

CPU

AMD

Ollama version

0.7.0

GiteaMirror added the amd, bug, windows labels 2026-04-22 15:06:55 -05:00
@JasonHonKL commented on GitHub (Jun 6, 2025):

I can't reproduce your bug. Mind if you share the code?

@FAIpang commented on GitHub (Jun 9, 2025):

> I can't reproduce your bug. Mind if you share the code?

The code may not be convenient to share, but the same code works fine with GPU acceleration on both Intel and NVIDIA graphics cards. I used the AMD GPU acceleration library package linked here: https://github.com/likelovewant/ollama-for-amd.

@dhiltgen commented on GitHub (Jul 5, 2025):

My suspicion is that the AMD ROCm library is crashing on the iGPU. You might get some more details by setting AMD_LOG_LEVEL=3.

Quit Ollama from the tray, then in a PowerShell terminal run:

$env:AMD_LOG_LEVEL="3"
$env:OLLAMA_DEBUG="1"
ollama serve  2>&1 | % ToString | Tee-Object serve.log

Then try to load the same model in another terminal.
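
A rough Linux equivalent, for anyone hitting this outside Windows (a sketch: it assumes a systemd-managed install and that the same AMD_LOG_LEVEL/OLLAMA_DEBUG variables apply there):

# Stop the background service so a foreground instance can own the port
sudo systemctl stop ollama
# AMD_LOG_LEVEL=3 makes the ROCm runtime verbose; tee keeps a copy of the log
AMD_LOG_LEVEL=3 OLLAMA_DEBUG=1 ollama serve 2>&1 | tee serve.log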

@expnn commented on GitHub (Sep 3, 2025):

This crashes on Linux (Ubuntu 24.04) too. Maybe we can remove "windows" from the title and labels @dhiltgen

I'm using ollama 0.11.8. Here is an example to reproduce using the gpt-oss-20b model.

curl -X POST http://localhost:11434/api/chat -H "Content-Type: application/json" -d @example.json

It returns

{
  "error": "an error was encountered while running the model: unexpected EOF"
}

Here is the example.json file (https://github.com/user-attachments/files/22114394/example.json) the above command reads, and here is the truncated log, last 40k lines (https://github.com/user-attachments/files/22114475/ollam-trunc.log), from ollama serve. The whole log file is too large (~1.8 GB).
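
For readers who don't want to open the attachment, a minimal stand-in request body of the same /api/chat shape (hypothetical, not the original example.json; the prompt topic is taken from the llama.cpp output quoted below):

{
  "model": "gpt-oss:20b",
  "messages": [
    {"role": "user", "content": "What are the best ways to store my kids' original artwork before digitizing it?"}
  ],
  "stream": false
}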

However, it exits normally with the qwen3:32b or phi4:14b models, though the qwen3:32b model returns garbage.

{
  "model": "qwen3:32b",
  "created_at": "2025-09-03T09:31:04.979002242Z",
  "message": {
    "role": "assistant",
    "content": "3333333333333333333333333333333"
  },
  "done": false
}

Note that the gpt-oss-20b model works perfectly using llama.cpp with the Vulkan backend:

llama-server -hf ggml-org/gpt-oss-20b-GGUF -c 0  --jinja -ub 2048 -b 2048 --port 8080
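
For completeness, reproducing the Vulkan path requires a llama.cpp build with Vulkan enabled; a sketch, assuming the standard CMake flow and an installed Vulkan SDK:

cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j
# then run the same server command as above from build/bin
./build/bin/llama-server -hf ggml-org/gpt-oss-20b-GGUF -c 0 --jinja -ub 2048 -b 2048 --port 8080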
@galets commented on GitHub (Sep 3, 2025):

I understand that Vulkan is not supported by ollama. Is it completely out of the question to get such support going?

@expnn commented on GitHub (Sep 4, 2025):

Based on my testing, I've found that Ollama's ROCm support has bugs in two aspects:

  1. The program crashes with certain models, such as gpt-oss-20b and gpt-oss-120b.
  2. The output quality for the same model drops significantly, potentially resulting in garbled or repetitive characters.

My previous post above illustrates these two issues. Below, I'll provide more details about the second one.

Text generated by ollama using the qwen3:32b model

The model's output was just a repeating sequence of the digit "3" or tabs ("\t").

curl -X POST http://localhost:8080/api/chat -H "Content-Type: application/json" -d @example.json | jq .

returns

{
  "model": "qwen3:32b",
  "created_at": "2025-09-03T09:31:04.979002242Z",
  "message": {
    "role": "assistant",
    "content": "3333333333333333333333333333333"
  },
  "done": false
}

or

{
  "model": "qwen3:32b",
  "created_at": "2025-09-04T04:49:20.564292217Z",
  "message": {
    "role": "assistant",
    "content": "\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t"
  },
  "done": false
}

Text generated by llama.cpp using the qwen3:32b model

To make sure ollama and llama.cpp use the same model, I created a symlink to the GGUF file in the ollama model storage and used this link as input to llama.cpp.

ln -s /usr/share/ollama/.ollama/models/blobs/sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312 qwen3-32b
llama-server -m qwen3-32b -c 0  --jinja -ub 2048 -b 2048
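
(Since the blob filename embeds its SHA-256 digest, you can also confirm both runtimes read identical bytes before comparing outputs; a quick check, assuming the default Linux install path shown above:)

# the printed digest should match the hex part of the blob's filename
sha256sum /usr/share/ollama/.ollama/models/blobs/sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312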

Then in another terminal pane, the following command

curl -X POST http://localhost:8080/api/chat -H "Content-Type: application/json" -d @example.json | jq .

returns

{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "role": "assistant",
        "reasoning_content": "Okay, the user is asking about the best ways to store their kids' original artwork before digitizing it. <...many words ommitted>",
        "content": "Storing your kids’ original artwork properly is key to preserving their <...many words ommitted>"
      }
    }
  ],
  "created": 1756953911,
  "system_fingerprint": "b6355-25f1045f",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 1612,
    "prompt_tokens": 922,
    "total_tokens": 2534
  },
  "id": "chatcmpl-V8ahAdnOA9mDiKRG8hLyiNy7aZPrKpAz",
  "timings": {
    "prompt_n": 922,
    "prompt_ms": 6924.668,
    "prompt_per_token_ms": 7.51048590021692,
    "prompt_per_second": 133.1471775975397,
    "predicted_n": 1612,
    "predicted_ms": 159450.818,
    "predicted_per_token_ms": 98.91489950372208,
    "predicted_per_second": 10.109700409313675
  }
}

Let's extract the content in markdown format:



Storing your kids’ original artwork properly is key to preserving their creativity while keeping it safe and organized before digitizing. Here are **creative and practical solutions** to balance preservation, space, and accessibility:

---

### **1. Use Archival-Quality Storage**
- **Acid-Free Boxes/Binders**: Store artwork in **acid-free, lignin-free folders** or **archival storage boxes** to prevent yellowing or degradation over time.  
- **Clear Page Protectors**: For small pieces, use **clear, plastic page protectors** (avoid PVC if possible) in binders for easy viewing.  
- **Vacuum-Sealed Bags (with Caution)**: For large quantities, use **non-heat vacuum-sealed bags** (to avoid creasing) for compact storage. Label each bag by date or theme.  
- **Why It Works**: Protects art from dust, humidity, and light while saving space.

---

### **2. Create a "Digital-First" Organizing System**
- **Sort by Theme/Year**: Organize artwork into labeled folders (e.g., “2024 – Animals” or “2023 – Holidays”) and rotate a small selection into display or digitization monthly.  
- **Digitize First, Then Store**: Use a **scanning schedule** to digitize 1–2 pieces weekly. Store the originals in a **"Digitized Art" box** until you’re ready to archive them.  
- **Why It Works**: Keeps the pile manageable and ensures no art is forgotten.

---

### **3. Wall-Mounted or Vertical Storage**
- **Corkboard with Clips**: Hang a **vertical corkboard** in a hallway or playroom. Use clips to display 5–10 pieces at a time, and rotate them weekly.  
- **Magnetic Whiteboard**: Use **magnetic whiteboard panels** and attach small artwork with magnets.  
- **Why It Works**: Turns storage into a rotating gallery that uses vertical space.

---

### **4. "Art Museum" Storage Boxes**
- **Theme-Based Boxes**: Label boxes with themes like **"Space Explorers"** or **"Under the Sea"** and fill them with matching artworks.  
- **Luggage or Totes**: Use **stackable plastic bins** or old suitcases for easy transport and storage.  
- **Why It Works**: Makes it fun to revisit the art later and avoids chaotic piles.

---

### **5. Involve Your Kids in the Process**
- **"Art Time" Storage**: Let kids choose which pieces to keep in a **"Favourite Art" box** and which to digitize. This teaches responsibility and helps them value their work.  
- **DIY Storage Projects**: Turn storage into a craft activity—kids can decorate a **personal art portfolio** or label boxes themselves.  
- **Why It Works**: Keeps them engaged and reduces clutter by letting them prioritize what matters most.

---

### **6. Use Space-Saving Hacks**
- **Stackable Storage Cubes**: Use clear, stackable bins to group artworks by child or age.  
- **Vertical File Folders**: For large pieces, use **vertical file folders** in a closet or under-the-bed storage.  
- **Why It Works**: Maximizes small spaces and keeps art accessible.

---

### **7. Archive with Care**
- **Avoid Plastic Bags**: Use **acid-free tissue paper** or **archival-grade paper** to wrap delicate art (e.g., crayon-heavy drawings).  
- **Climate Control**: Store in a **cool, dry area** (avoid basements or attics) to prevent mold or warping.  
- **Why It Works**: Ensures longevity for future generations.

---

### **8. Plan a "Goodbye" Strategy**
- **Letting Go Gracefully**: Set a rule to **rotate out 10% of stored art yearly** (e.g., donate older pieces to a local school or museum).  
- **Create a "Memory Jar"**: Let kids write a short note about their favorite art and seal it in a jar for future reflection.  
- **Why It Works**: Keeps the collection from growing too large and honors sentimental pieces.

---

### **Bonus: Digital Backup Plan**
- While digitizing, save high-res scans in a **cloud drive** (e.g., Google Drive, Dropbox) or **physical external hard drive**. Label files clearly (e.g., "ChildName_ArtDate_Description"). This way, even if the physical art is eventually stored away, you’ll have a digital archive forever.

---

By combining these methods, you’ll preserve your kids’ creativity while keeping your home clutter-free. The goal is to **honor their art without letting it overwhelm your space**! 🎨✨


It's pretty good, at least much, much better than the text generated by ollama, which returned repeated '3's.

Note: the above testing was performed on a Linux box with:

  • OS: Ubuntu 24.04.3 LTS x86_64
  • Kernel: 6.14.0-29-generic
  • CPU: AMD RYZEN AI MAX+ 395 w/ Radeon 8060S (32) @ 5.187GHz
  • GPU: AMD ATI Radeon Graphics / Radeon 8050S Graphics / Radeon 8060S Graphics
  • ollama version is 0.11.8
  • llama.cpp version is b6358 (https://github.com/ggml-org/llama.cpp/releases/tag/b6358)

I have noticed that the Vulkan backend will not be supported by ollama (https://github.com/ollama/ollama/issues/2033, https://github.com/ollama/ollama/issues/11247).

There are several related issues (https://github.com/ollama/ollama/issues/11714, https://github.com/ollama/ollama/issues/2637); they may be duplicates of this one. Following @Crandel's advice, I have switched to llama.cpp + llama-swap for now on AMD hardware.
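
For anyone taking the same route, a minimal llama-swap setup might look like the sketch below; the config keys and the ${PORT} placeholder follow llama-swap's documented format as best I recall, so treat the flags and paths as assumptions to adapt:

# hypothetical config.yaml for llama-swap; the model path is illustrative
cat > config.yaml <<'EOF'
models:
  "qwen3-32b":
    cmd: llama-server --port ${PORT} -m /path/to/qwen3-32b.gguf -c 0 --jinja
EOF
llama-swap --config config.yaml --listen :8080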

@linuxd3v commented on GitHub (Oct 12, 2025):

> I understand that Vulkan is not supported by ollama. Is it completely out of the question to get such support going?

I think that's why there is ramalama: it supports Vulkan.
