[GH-ISSUE #9553] Windows 11 Ollama 0.5.13/0.6.0 ROCm on gfx1151 is broken #31990

Open
opened 2026-04-22 12:51:12 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @zztop007 on GitHub (Mar 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9553

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

The installation and startup seem to work as they should. After downloading the first model, the error below occurs.

Relevant log output

OS
W11 24H2 Home

GPU
AMD Radeon(TM) 8060S Graphics, gfx1151

CPU
AMD Ryzen AI Max+ 395 (Asus Flow 2025, 32 GB)

Ollama version
v0.5.13


Clean install, first try: 

pulling manifest
pulling dde5aa3fc5ff... 100% ▕████████████████████████████████████████████████████████▏ 2.0 GB
pulling 966de95ca8a6... 100% ▕████████████████████████████████████████████████████████▏ 1.4 KB
pulling fcc5a6bec9da... 100% ▕████████████████████████████████████████████████████████▏ 7.7 KB
pulling a70ff7e570d9... 100% ▕████████████████████████████████████████████████████████▏ 6.0 KB
pulling 56bb8bd477a5... 100% ▕████████████████████████████████████████████████████████▏   96 B
pulling 34bb5ab01051... 100% ▕████████████████████████████████████████████████████████▏  561 B
verifying sha256 digest
writing manifest
success
>>> Hi
Error: POST predict: Post "http://127.0.0.1:49910/completion": read tcp 127.0.0.1:50064->127.0.0.1:49910: wsarecv: An existing connection was forcibly closed by the remote host.


------------------

Start up:

set DEBUG=1 && ollama serve
2025/03/06 18:14:04 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\donda\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-03-06T18:14:04.654+01:00 level=INFO source=images.go:432 msg="total blobs: 6"
time=2025-03-06T18:14:04.654+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-06T18:14:04.656+01:00 level=INFO source=routes.go:1277 msg="Listening on 127.0.0.1:11434 (version 0.5.13)"
time=2025-03-06T18:14:04.656+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-06T18:14:04.656+01:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-03-06T18:14:04.656+01:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=16 efficiency=0 threads=32
time=2025-03-06T18:14:05.000+01:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=rocm variant="" compute=gfx1151 driver=6.3 name="AMD Radeon(TM) 8060S Graphics" total="16.9 GiB" available="16.7 GiB"


Error log after the first "Hi" on ollama run llama3.2:

set DEBUG=1 && ollama serve
2025/03/06 18:14:04 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\donda\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-03-06T18:14:04.654+01:00 level=INFO source=images.go:432 msg="total blobs: 6"
time=2025-03-06T18:14:04.654+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-06T18:14:04.656+01:00 level=INFO source=routes.go:1277 msg="Listening on 127.0.0.1:11434 (version 0.5.13)"
time=2025-03-06T18:14:04.656+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-06T18:14:04.656+01:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-03-06T18:14:04.656+01:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=16 efficiency=0 threads=32
time=2025-03-06T18:14:05.000+01:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=rocm variant="" compute=gfx1151 driver=6.3 name="AMD Radeon(TM) 8060S Graphics" total="16.9 GiB" available="16.7 GiB"
[GIN] 2025/03/06 - 18:15:57 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/06 - 18:15:57 | 200 |     24.7169ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-06T18:15:58.342+01:00 level=INFO source=sched.go:186 msg="one or more GPUs detected that are unable to accurately report free memory - disabling default concurrency"
time=2025-03-06T18:15:58.381+01:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\donda\.ollama\models\blobs\sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff gpu=0 parallel=4 available=17779830784 required="3.7 GiB"
time=2025-03-06T18:15:58.855+01:00 level=INFO source=server.go:97 msg="system memory" total="23.6 GiB" free="16.1 GiB" free_swap="16.2 GiB"
time=2025-03-06T18:15:58.855+01:00 level=INFO source=server.go:130 msg=offload library=rocm layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[16.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.7 GiB" memory.required.partial="3.7 GiB" memory.required.kv="896.0 MiB" memory.required.allocations="[3.7 GiB]" memory.weights.total="2.4 GiB" memory.weights.repeating="2.1 GiB" memory.weights.nonrepeating="308.2 MiB" memory.graph.full="424.0 MiB" memory.graph.partial="570.7 MiB"
time=2025-03-06T18:15:58.867+01:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\\Users\\donda\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model C:\\Users\\donda\\.ollama\\models\\blobs\\sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 16 --parallel 4 --port 50089"
time=2025-03-06T18:15:58.872+01:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-06T18:15:58.872+01:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-03-06T18:15:58.873+01:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-03-06T18:15:58.898+01:00 level=INFO source=runner.go:931 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon(TM) 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32
load_backend: loaded ROCm backend from C:\Users\donda\AppData\Local\Programs\Ollama\lib\ollama\rocm\ggml-hip.dll
load_backend: loaded CPU backend from C:\Users\donda\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
time=2025-03-06T18:15:58.986+01:00 level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | ROCm : NO_VMM = 1 | NO_PEER_COPY = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | cgo(clang)" threads=16
time=2025-03-06T18:15:58.987+01:00 level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:50089"
time=2025-03-06T18:15:59.125+01:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon(TM) 8060S Graphics) - 17112 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from C:\Users\donda\.ollama\models\blobs\sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.2 3B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2
llama_model_loader: - kv   5:                         general.size_label str              = 3B
llama_model_loader: - kv   6:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   7:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   8:                          llama.block_count u32              = 28
llama_model_loader: - kv   9:                       llama.context_length u32              = 131072
llama_model_loader: - kv  10:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv  11:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv  12:                 llama.attention.head_count u32              = 24
llama_model_loader: - kv  13:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  15:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  16:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  17:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  18:                          general.file_type u32              = 15
llama_model_loader: - kv  19:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  20:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  27:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   58 tensors
llama_model_loader: - type q4_K:  168 tensors
llama_model_loader: - type q6_K:   29 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 1.87 GiB (5.01 BPW)
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 3072
print_info: n_layer          = 28
print_info: n_head           = 24
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 3
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 8192
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 3B
print_info: model params     = 3.21 B
print_info: general.name     = Llama 3.2 3B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 28 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 29/29 layers to GPU
load_tensors:        ROCm0 model buffer size =  1918.35 MiB
load_tensors:   CPU_Mapped model buffer size =   308.23 MiB
llama_init_from_model: n_seq_max     = 4
llama_init_from_model: n_ctx         = 8192
llama_init_from_model: n_ctx_per_seq = 2048
llama_init_from_model: n_batch       = 2048
llama_init_from_model: n_ubatch      = 512
llama_init_from_model: flash_attn    = 0
llama_init_from_model: freq_base     = 500000.0
llama_init_from_model: freq_scale    = 1
llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1
llama_kv_cache_init:      ROCm0 KV buffer size =   896.00 MiB
llama_init_from_model: KV self size  =  896.00 MiB, K (f16):  448.00 MiB, V (f16):  448.00 MiB
llama_init_from_model:  ROCm_Host  output buffer size =     2.00 MiB
llama_init_from_model:      ROCm0 compute buffer size =   424.00 MiB
llama_init_from_model:  ROCm_Host compute buffer size =    22.01 MiB
llama_init_from_model: graph nodes  = 902
llama_init_from_model: graph splits = 2
time=2025-03-06T18:16:01.638+01:00 level=INFO source=server.go:596 msg="llama runner started in 2.77 seconds"
[GIN] 2025/03/06 - 18:16:01 | 200 |    3.8171883s |       127.0.0.1 | POST     "/api/generate"
ggml_cuda_compute_forward: RMS_NORM failed
ROCm error: invalid device function
  current device: 0, in function ggml_cuda_compute_forward at C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:2315
  err
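
The `ROCm error: invalid device function` above typically means the bundled ROCm libraries contain no kernels compiled for the reported gfx target (gfx1151 here). A workaround sometimes used on unsupported targets is to set `HSA_OVERRIDE_GFX_VERSION` so the runtime loads kernels built for a nearby supported target. This is only a hedged sketch: whether `11.0.0` (the gfx1100 kernel set) is a valid override for gfx1151, and whether the override is honored by the Windows ROCm runtime at all, are assumptions, not confirmed fixes.

```shell
# Assumption: the ROCm runtime honors HSA_OVERRIDE_GFX_VERSION on this
# platform; 11.0.0 selects the gfx1100 kernel set.
# Windows cmd equivalent: set HSA_OVERRIDE_GFX_VERSION=11.0.0
export HSA_OVERRIDE_GFX_VERSION=11.0.0
echo "HSA_OVERRIDE_GFX_VERSION=$HSA_OVERRIDE_GFX_VERSION"
# Then restart the server from the same environment, e.g.: ollama serve
```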


GiteaMirror added the gpu, amd, bug labels 2026-04-22 12:51:13 -05:00

@zztop007 commented on GitHub (Mar 6, 2025):

  1. Using [ollama-windows-amd64.zip](https://github.com/ollama/ollama/releases/download/v0.5.13/ollama-windows-amd64.zip) works fine.
  2. After adding [ollama-windows-amd64-rocm.zip](https://github.com/ollama/ollama/releases/download/v0.5.13/ollama-windows-amd64-rocm.zip), the first >>> Hi fails with:
     Error: POST predict: Post "http://127.0.0.1:52945/completion": read tcp 127.0.0.1:53005->127.0.0.1:52945: wsarecv: An existing connection was forcibly closed by the remote host.

@zztop007 commented on GitHub (Mar 12, 2025):

The iGPU has 8 GB of VRAM allocated (not sure if it's relevant): 8 GB VRAM, with the rest of the 32 GB left for Windows.
time=2025-03-06T18:14:05.000+01:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=rocm variant="" compute=gfx1151 driver=6.3 name="AMD Radeon(TM) 8060S Graphics" total="16.9 GiB" available="16.7 GiB"

After first Hi!

C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:73: ROCm error
[GIN] 2025/03/12 - 11:10:54 | 200 | 3.6227445s | 127.0.0.1 | POST "/api/chat"
time=2025-03-12T11:10:54.266+01:00 level=ERROR source=server.go:449 msg="llama runner terminated" error="exit status 0xc0000409"

Is 0xc0000409 memory related?
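
On that exit status: 0xc0000409 is the Windows NTSTATUS code STATUS_STACK_BUFFER_OVERRUN, which modern Windows also reports for generic fast-fail aborts (for example an abort() inside a native library), so it usually indicates the runner process crashed rather than that the machine ran out of memory. A small illustrative decoder (the table entries are well-known NTSTATUS values; the helper name is made up for this sketch):

```python
# Interpret the runner's Windows exit status. 0xC0000409 is the NTSTATUS
# code STATUS_STACK_BUFFER_OVERRUN, which also covers generic fast-fail
# aborts -- it does not by itself mean out-of-memory.
NTSTATUS_NAMES = {
    0xC0000409: "STATUS_STACK_BUFFER_OVERRUN (fast-fail abort)",
    0xC0000005: "STATUS_ACCESS_VIOLATION",
    0xC0000017: "STATUS_NO_MEMORY",
}

def describe_exit_status(code: int) -> str:
    """Map a 32-bit NTSTATUS exit code to a human-readable name."""
    return NTSTATUS_NAMES.get(code & 0xFFFFFFFF, f"unknown NTSTATUS 0x{code:08X}")

print(describe_exit_status(0xC0000409))
# -> STATUS_STACK_BUFFER_OVERRUN (fast-fail abort)
```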


@winstonma commented on GitHub (Mar 13, 2025):

Sorry, I am not familiar with Windows, but there is a patch for Linux in the [pull request](https://github.com/ollama/ollama/pull/6282).

Reference: github-starred/ollama#31990