[GH-ISSUE #3107] Windows ROCm: HSA_OVERRIDE_GFX_VERSION doesn't work #63946

Open
opened 2026-05-03 15:32:37 -05:00 by GiteaMirror · 32 comments
Owner

Originally created by @julian-ben on GitHub (Mar 13, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3107

Originally assigned to: @dhiltgen on GitHub.

I'm eager to explore the new Windows ROCm compatibility feature, but I'm encountering an issue with forcing the GFX version. Currently, I'm using the 0.1.29 pre-release.

My setup includes an RX 6600 XT (GFX1032), which isn't fully supported in the ROCm library. According to the troubleshooting guide available at https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md, it's recommended to override the version to a similar one (in my case HSA_OVERRIDE_GFX_VERSION="10.3.0"). Despite setting the environment variable, the logs continue to display the following error:

```
rocBLAS error: Cannot read C:\Users\user\AppData\Local\Programs\Ollama\rocm\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1032.
```

However, based on the overridden version, shouldn't it be looking at gfx1030 instead?
After that, Ollama crashes.
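For context on the expected mapping: an override value like 10.3.0 should correspond to the gfx1030 target (major component in decimal, minor and stepping as hex digits). A minimal sketch of that naming convention, using a hypothetical helper (not Ollama's or ROCm's actual code):

```python
def override_to_gfx(version: str) -> str:
    """Map an HSA_OVERRIDE_GFX_VERSION value such as "10.3.0" to the
    corresponding gfx target name, e.g. "gfx1030". Hypothetical helper
    illustrating the naming convention only."""
    major, minor, step = (int(part) for part in version.split("."))
    # Major is decimal; minor and stepping are hex digits (e.g. 9.0.10 -> gfx90a).
    return f"gfx{major}{minor:x}{step:x}"

print(override_to_gfx("10.3.0"))  # gfx1030
print(override_to_gfx("9.0.6"))   # gfx906
```

So with the override set to 10.3.0, rocBLAS would be expected to load the gfx1030 tensile library rather than look for a gfx1032 one.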

GiteaMirror added the bug, amd, windows labels 2026-05-03 15:32:39 -05:00

@christiaangoossens commented on GitHub (Mar 13, 2024):

Can confirm, same card, version 0.1.29, same error.

```
AppData\Local\Programs\Ollama\rocm\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1032
```

I have the env var set in both my user and system environment variables. It also doesn't work when running `$env:HSA_OVERRIDE_GFX_VERSION="10.3.0"; ollama serve` directly in PowerShell.

Seems that the fix in https://github.com/ollama/ollama/issues/2598 doesn't fully work?
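For what it's worth, a generic way to sanity-check that the variable actually reaches a child process (a hedged sketch, not specific to Ollama — it only verifies environment inheritance, not that rocBLAS honors the override):

```python
import os
import subprocess
import sys

# Launch a child process with the override set and confirm the child sees it.
env = dict(os.environ, HSA_OVERRIDE_GFX_VERSION="10.3.0")
child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['HSA_OVERRIDE_GFX_VERSION'])"],
    env=env, capture_output=True, text=True, check=True,
)
print(child.stdout.strip())  # 10.3.0
```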


@zharklm commented on GitHub (Mar 15, 2024):

Can also confirm similar issues with the Windows 0.1.29 release using a Vega 64 card. Looking at the debug output, the environment variable seems to be recognized, but I still get the gfx900 error. I have tried environment variable values of: 9.0.0, "9.0.0", 9.0.6, "9.0.6", 10.3.0, "10.3.0".
Full debug server log follows:

time=2024-03-15T15:31:54.275+11:00 level=INFO source=images.go:806 msg="total blobs: 34"
time=2024-03-15T15:31:54.277+11:00 level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-15T15:31:54.278+11:00 level=INFO source=routes.go:1110 msg="Listening on 127.0.0.1:11434 (version 0.1.29)"
time=2024-03-15T15:31:54.278+11:00 level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to C:\\Users\\Admin\\AppData\\Local\\Temp\\ollama2659182147\\runners ..."
time=2024-03-15T15:31:54.317+11:00 level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu cpu_avx cuda_v11.3 rocm_v5.7 cpu_avx2]"
time=2024-03-15T15:31:54.317+11:00 level=DEBUG source=payload_common.go:140 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
[GIN] 2024/03/15 - 15:32:07 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/03/15 - 15:32:07 | 200 |      1.5012ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/03/15 - 15:32:07 | 200 |       500.8µs |       127.0.0.1 | POST     "/api/show"
time=2024-03-15T15:32:07.391+11:00 level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-15T15:32:07.391+11:00 level=INFO source=gpu.go:191 msg="Searching for GPU management library nvml.dll"
time=2024-03-15T15:32:07.392+11:00 level=DEBUG source=gpu.go:209 msg="gpu management search paths: [c:\\Windows\\System32\\nvml.dll C:\\Program Files\\Docker\\Docker\\resources\\bin\\nvml.dll* C:\\Windows\\System32\\nvml.dll* C:\\Windows\\nvml.dll* C:\\Windows\\System32\\wbem\\nvml.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll* C:\\Windows\\System32\\OpenSSH\\nvml.dll* C:\\Windows\\System32\\nvml.dll* C:\\Windows\\nvml.dll* C:\\Windows\\System32\\wbem\\nvml.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll* C:\\Windows\\System32\\OpenSSH\\nvml.dll* C:\\Program Files\\nodejs\\nvml.dll* C:\\Program Files (x86)\\Bluetooth Command Line Tools\\bin\\nvml.dll* C:\\Program Files (x86)\\GnuPG\\bin\\nvml.dll* C:\\Program Files\\Microsoft SQL Server\\Client SDK\\ODBC\\170\\Tools\\Binn\\nvml.dll* C:\\Program Files (x86)\\Microsoft SQL Server\\150\\Tools\\Binn\\nvml.dll* C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn\\nvml.dll* C:\\Program Files\\Microsoft SQL Server\\150\\DTS\\Binn\\nvml.dll* C:\\Program Files (x86)\\Microsoft SQL Server\\150\\DTS\\Binn\\nvml.dll* C:\\Program Files\\Azure Data Studio\\bin\\nvml.dll* C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python37\\Scripts\\nvml.dll* C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python37\\nvml.dll* C:\\Users\\Admin\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll* C:\\Users\\Admin\\AppData\\Local\\Programs\\Microsoft VS Code\\bin\\nvml.dll* C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python37\\Scripts\\nvml.dll* C:\\Users\\Admin\\AppData\\Local\\Programs\\Microsoft VS Code\\nvml.dll* C:\\Users\\Admin\\AppData\\Local\\Programs\\ffmpeg-4.1-win64-static\\bin\\nvml.dll* C:\\Users\\Admin\\AppData\\Roaming\\npm\\nvml.dll* C:\\Users\\Admin\\AppData\\Local\\Programs\\Ollama\\nvml.dll* C:\\Program Files\\Azure Data Studio\\bin\\nvml.dll* C:\\Users\\Admin\\AppData\\Local\\Programs\\Ollama\\nvml.dll*]"
time=2024-03-15T15:32:07.400+11:00 level=INFO source=gpu.go:237 msg="Discovered GPU libraries: []"
time=2024-03-15T15:32:07.400+11:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T15:32:07.415+11:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50731541"
time=2024-03-15T15:32:07.415+11:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\Admin\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:32:07.415+11:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\\Users\\Admin\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:32:07.415+11:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=\"10.3.0\""
time=2024-03-15T15:32:07.415+11:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"
time=2024-03-15T15:32:07.415+11:00 level=INFO source=amd_windows.go:87 msg="[0] Name: Radeon RX Vega"
time=2024-03-15T15:32:07.415+11:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx900:xnack-"
time=2024-03-15T15:32:07.750+11:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 8438939648"
time=2024-03-15T15:32:07.751+11:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem:  8573157376"
time=2024-03-15T15:32:07.751+11:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\\Users\\Admin\\AppData\\Local\\Programs\\Ollama\\rocm;C:\\Program Files\\Docker\\Docker\\resources\\bin;C:\\Windows\\System32;C:\\Windows;C:\\Windows\\System32\\wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Windows\\System32;C:\\Windows;C:\\Windows\\System32\\wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\nodejs\\;C:\\Program Files (x86)\\Bluetooth Command Line Tools\\bin;C:\\Program Files (x86)\\GnuPG\\bin;C:\\Program Files\\Microsoft SQL Server\\Client SDK\\ODBC\\170\\Tools\\Binn\\;C:\\Program Files (x86)\\Microsoft SQL Server\\150\\Tools\\Binn\\;C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn\\;C:\\Program Files\\Microsoft SQL Server\\150\\DTS\\Binn\\;C:\\Program Files (x86)\\Microsoft SQL Server\\150\\DTS\\Binn\\;C:\\Program Files\\Azure Data Studio\\bin;C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python37\\Scripts\\;C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python37\\;C:\\Users\\Admin\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\Admin\\AppData\\Local\\Programs\\Microsoft VS Code\\bin;C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python37\\Scripts;C:\\Users\\Admin\\AppData\\Local\\Programs\\Microsoft VS Code\\;C:\\Users\\Admin\\AppData\\Local\\Programs\\ffmpeg-4.1-win64-static\\bin;C:\\Users\\Admin\\AppData\\Roaming\\npm;;C:\\Program Files\\Azure Data Studio\\bin;C:\\Users\\Admin\\AppData\\Local\\Programs\\Ollama"
time=2024-03-15T15:32:07.817+11:00 level=DEBUG source=gpu.go:180 msg="rocm detected 1 devices with 7152M available memory"
time=2024-03-15T15:32:07.817+11:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T15:32:07.829+11:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50731541"
time=2024-03-15T15:32:07.829+11:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\Admin\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:32:07.829+11:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\\Users\\Admin\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:32:07.829+11:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=\"10.3.0\""
time=2024-03-15T15:32:07.829+11:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"
time=2024-03-15T15:32:07.829+11:00 level=INFO source=amd_windows.go:87 msg="[0] Name: Radeon RX Vega"
time=2024-03-15T15:32:07.829+11:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx900:xnack-"
time=2024-03-15T15:32:08.160+11:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 8438939648"
time=2024-03-15T15:32:08.160+11:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem:  8573157376"
time=2024-03-15T15:32:08.221+11:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T15:32:08.221+11:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [C:\\Users\\Admin\\AppData\\Local\\Temp\\ollama2659182147\\runners\\rocm_v5.7\\ext_server.dll C:\\Users\\Admin\\AppData\\Local\\Temp\\ollama2659182147\\runners\\cpu_avx2\\ext_server.dll]"
time=2024-03-15T15:32:08.221+11:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\\Users\\Admin\\AppData\\Local\\Temp\\ollama2659182147\\runners\\rocm_v5.7;C:\\Users\\Admin\\AppData\\Local\\Programs\\Ollama\\rocm;C:\\Program Files\\Docker\\Docker\\resources\\bin;C:\\Windows\\System32;C:\\Windows;C:\\Windows\\System32\\wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Windows\\System32;C:\\Windows;C:\\Windows\\System32\\wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\nodejs\\;C:\\Program Files (x86)\\Bluetooth Command Line Tools\\bin;C:\\Program Files (x86)\\GnuPG\\bin;C:\\Program Files\\Microsoft SQL Server\\Client SDK\\ODBC\\170\\Tools\\Binn\\;C:\\Program Files (x86)\\Microsoft SQL Server\\150\\Tools\\Binn\\;C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn\\;C:\\Program Files\\Microsoft SQL Server\\150\\DTS\\Binn\\;C:\\Program Files (x86)\\Microsoft SQL Server\\150\\DTS\\Binn\\;C:\\Program Files\\Azure Data Studio\\bin;C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python37\\Scripts\\;C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python37\\;C:\\Users\\Admin\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\Admin\\AppData\\Local\\Programs\\Microsoft VS Code\\bin;C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python37\\Scripts;C:\\Users\\Admin\\AppData\\Local\\Programs\\Microsoft VS Code\\;C:\\Users\\Admin\\AppData\\Local\\Programs\\ffmpeg-4.1-win64-static\\bin;C:\\Users\\Admin\\AppData\\Roaming\\npm;;C:\\Program Files\\Azure Data Studio\\bin;C:\\Users\\Admin\\AppData\\Local\\Programs\\Ollama"
time=2024-03-15T15:32:08.241+11:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\Admin\\AppData\\Local\\Temp\\ollama2659182147\\runners\\rocm_v5.7\\ext_server.dll"
time=2024-03-15T15:32:08.241+11:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
time=2024-03-15T15:32:08.241+11:00 level=DEBUG source=dyn_ext_server.go:151 msg="server params: {model:0x1c9390bc1b0 n_ctx:2048 n_batch:512 n_threads:0 n_parallel:1 rope_freq_base:0 rope_freq_scale:0 memory_f16:true n_gpu_layers:33 main_gpu:0 use_mlock:false use_mmap:true numa:0 embedding:true lora_adapters:<nil> mmproj:0x1c9390bc220 verbose_logging:true _:[0 0 0 0 0 0 0]}"
[1710477128] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | 
[1710477128] Performing pre-initialization of GPU

rocBLAS error: Cannot read C:\Users\Admin\AppData\Local\Programs\Ollama\rocm\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx900
 List of available TensileLibrary Files : 


@moozoo64 commented on GitHub (Mar 15, 2024):

I'll one up you
"amdgpu [0] gfx906:sramecc-:xnack- is not supported by C:\Users\michael\AppData\Local\Programs\Ollama\rocm [gfx1030 gfx1100 gfx1101 gfx1102 gfx906]"

I have a Radeon VII, which is actually on the list as gfx906. Well, kind of, if you ignore the :sramecc-:xnack- suffix.
Forcing it by setting HSA_OVERRIDE_GFX_VERSION=9.0.6 results in a GPU crash (the screen goes blank and the AMD reporting tool comes up).
Past the crash I do get full GPU acceleration, but it soon crashes again.

The real problem is that llama.cpp's ggml-cuda.cu doesn't support gfx906 even though LLVM/Clang does (look for the conditionals on gfx).
Yes, I debugged the HIP support in llama.cpp with lldb...
https://github.com/ggerganov/llama.cpp/issues/5712

And FYI, the Vulkan llama.cpp backend works great with my Radeon VII.


@ftoppi commented on GitHub (Mar 15, 2024):

Hello,

RX 6700 XT here and facing the same issue.

time=2024-03-15T15:31:52.043+01:00 level=INFO source=images.go:806 msg="total blobs: 12"
time=2024-03-15T15:31:52.044+01:00 level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-15T15:31:52.045+01:00 level=INFO source=routes.go:1110 msg="Listening on 127.0.0.1:11434 (version 0.1.29)"
time=2024-03-15T15:31:52.045+01:00 level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama3707111582\\runners ..."
time=2024-03-15T15:31:52.183+01:00 level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu_avx rocm_v5.7 cpu_avx2 cuda_v11.3]"
time=2024-03-15T15:31:52.183+01:00 level=DEBUG source=payload_common.go:140 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
[GIN] 2024/03/15 - 15:32:06 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/03/15 - 15:32:06 | 200 |      1.0411ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/03/15 - 15:32:06 | 200 |       525.2µs |       127.0.0.1 | POST     "/api/show"
time=2024-03-15T15:32:06.738+01:00 level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-15T15:32:06.738+01:00 level=INFO source=gpu.go:191 msg="Searching for GPU management library nvml.dll"
time=2024-03-15T15:32:06.739+01:00 level=DEBUG source=gpu.go:209 msg="gpu management search paths: [c:\\Windows\\System32\\nvml.dll C:\\Windows\\system32\\nvml.dll* C:\\Windows\\nvml.dll* C:\\Windows\\System32\\Wbem\\nvml.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll* C:\\Windows\\System32\\OpenSSH\\nvml.dll* C:\\Program Files\\Git\\cmd\\nvml.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll* C:\\Program Files\\Calibre2\\nvml.dll* C:\\Program Files\\dotnet\\nvml.dll* C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\nvml.dll*]"
time=2024-03-15T15:32:06.742+01:00 level=INFO source=gpu.go:237 msg="Discovered GPU libraries: []"
time=2024-03-15T15:32:06.742+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T15:32:06.755+01:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50732000"
time=2024-03-15T15:32:06.755+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:32:06.755+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:32:06.755+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"
time=2024-03-15T15:32:06.755+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"
time=2024-03-15T15:32:06.755+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT"
time=2024-03-15T15:32:06.755+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031"
time=2024-03-15T15:32:06.984+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944"
time=2024-03-15T15:32:06.984+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem:  12868124672"
time=2024-03-15T15:32:06.984+01:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Calibre2\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama"
time=2024-03-15T15:32:07.050+01:00 level=DEBUG source=gpu.go:180 msg="rocm detected 1 devices with 11044M available memory"
time=2024-03-15T15:32:07.050+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T15:32:07.059+01:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50732000"
time=2024-03-15T15:32:07.059+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:32:07.059+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:32:07.059+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"
time=2024-03-15T15:32:07.059+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"
time=2024-03-15T15:32:07.059+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT"
time=2024-03-15T15:32:07.059+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031"
time=2024-03-15T15:32:07.279+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944"
time=2024-03-15T15:32:07.279+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem:  12868124672"
time=2024-03-15T15:32:07.345+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T15:32:07.345+01:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama3707111582\\runners\\rocm_v5.7\\ext_server.dll C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama3707111582\\runners\\cpu_avx2\\ext_server.dll]"
time=2024-03-15T15:32:07.346+01:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama3707111582\\runners\\rocm_v5.7;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Calibre2\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama"
time=2024-03-15T15:32:07.352+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama3707111582\\runners\\rocm_v5.7\\ext_server.dll"
time=2024-03-15T15:32:07.352+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
time=2024-03-15T15:32:07.352+01:00 level=DEBUG source=dyn_ext_server.go:151 msg="server params: {model:0x28e6670ee00 n_ctx:2048 n_batch:512 n_threads:0 n_parallel:1 rope_freq_base:0 rope_freq_scale:0 memory_f16:true n_gpu_layers:33 main_gpu:0 use_mlock:false use_mmap:true numa:0 embedding:true lora_adapters:<nil> mmproj:<nil> verbose_logging:true _:[0 0 0 0 0 0 0]}"
[1710513127] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | 
[1710513127] Performing pre-initialization of GPU

rocBLAS error: Cannot read C:\Users\myuser\AppData\Local\Programs\Ollama\rocm\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1031
 List of available TensileLibrary Files : 

So I copied all the gfx1030 library files under gfx1031 names, and it got a bit further, but then it crashed again:
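The copy step above can be sketched as follows. The directory comes from the rocBLAS error in the log, but the individual file names (`TensileLibrary_gfx1030.dat`, `*.hsaco`) are illustrative, since the exact set shipped under `rocblas/library` varies by ROCm build. The snippet simulates the layout in a temporary directory so it can be run safely; to apply the workaround for real, point `ROCBLAS_LIB` at `...\Ollama\rocm\rocblas\library` from Git Bash (and back the directory up first):

```shell
# Simulate the rocblas/library layout in a temp dir; on this setup the real
# path is C:\Users\<user>\AppData\Local\Programs\Ollama\rocm\rocblas\library.
# The file names below are illustrative, not the exact shipped set.
ROCBLAS_LIB="$(mktemp -d)"
touch "$ROCBLAS_LIB/TensileLibrary_gfx1030.dat" \
      "$ROCBLAS_LIB/Kernels.so-000-gfx1030.hsaco"

# Duplicate every gfx1030 file under the gfx1031 name, so rocBLAS finds a
# match for the GcnArchName it actually detected.
for f in "$ROCBLAS_LIB"/*gfx1030*; do
  cp "$f" "${f//gfx1030/gfx1031}"
done

ls "$ROCBLAS_LIB"
```

Note this only renames the lookup key; the kernels inside are still the ones compiled for gfx1030, which is presumably why it loads further and then dies later.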

time=2024-03-15T15:37:06.151+01:00 level=INFO source=images.go:806 msg="total blobs: 12"
time=2024-03-15T15:37:06.152+01:00 level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-15T15:37:06.152+01:00 level=INFO source=routes.go:1110 msg="Listening on 127.0.0.1:11434 (version 0.1.29)"
time=2024-03-15T15:37:06.152+01:00 level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama1387933699\\runners ..."
time=2024-03-15T15:37:06.289+01:00 level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu cuda_v11.3 cpu_avx cpu_avx2 rocm_v5.7]"
time=2024-03-15T15:37:06.289+01:00 level=DEBUG source=payload_common.go:140 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
[GIN] 2024/03/15 - 15:37:15 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/03/15 - 15:37:15 | 200 |      1.0443ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/03/15 - 15:37:15 | 200 |       525.5µs |       127.0.0.1 | POST     "/api/show"
time=2024-03-15T15:37:15.633+01:00 level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-15T15:37:15.633+01:00 level=INFO source=gpu.go:191 msg="Searching for GPU management library nvml.dll"
time=2024-03-15T15:37:15.633+01:00 level=DEBUG source=gpu.go:209 msg="gpu management search paths: [c:\\Windows\\System32\\nvml.dll C:\\Windows\\system32\\nvml.dll* C:\\Windows\\nvml.dll* C:\\Windows\\System32\\Wbem\\nvml.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll* C:\\Windows\\System32\\OpenSSH\\nvml.dll* C:\\Program Files\\Git\\cmd\\nvml.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll* C:\\Program Files\\Calibre2\\nvml.dll* C:\\Program Files\\dotnet\\nvml.dll* C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\nvml.dll*]"
time=2024-03-15T15:37:15.636+01:00 level=INFO source=gpu.go:237 msg="Discovered GPU libraries: []"
time=2024-03-15T15:37:15.636+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T15:37:15.649+01:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50732000"
time=2024-03-15T15:37:15.649+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:37:15.649+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:37:15.649+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"
time=2024-03-15T15:37:15.649+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"
time=2024-03-15T15:37:15.649+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT"
time=2024-03-15T15:37:15.649+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031"
time=2024-03-15T15:37:15.878+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944"
time=2024-03-15T15:37:15.879+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem:  12868124672"
time=2024-03-15T15:37:15.879+01:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Calibre2\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama"
time=2024-03-15T15:37:15.932+01:00 level=DEBUG source=gpu.go:180 msg="rocm detected 1 devices with 11044M available memory"
time=2024-03-15T15:37:15.932+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50732000"
time=2024-03-15T15:37:15.941+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:37:15.941+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:37:15.941+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"
time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"
time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT"
time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031"
time=2024-03-15T15:37:16.159+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944"
time=2024-03-15T15:37:16.159+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem:  12868124672"
time=2024-03-15T15:37:16.215+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T15:37:16.215+01:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama1387933699\\runners\\rocm_v5.7\\ext_server.dll C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama1387933699\\runners\\cpu_avx2\\ext_server.dll]"
time=2024-03-15T15:37:16.215+01:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama1387933699\\runners\\rocm_v5.7;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Calibre2\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama"
time=2024-03-15T15:37:16.221+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama1387933699\\runners\\rocm_v5.7\\ext_server.dll"
time=2024-03-15T15:37:16.221+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
time=2024-03-15T15:37:16.221+01:00 level=DEBUG source=dyn_ext_server.go:151 msg="server params: {model:0x2314baeee40 n_ctx:2048 n_batch:512 n_threads:0 n_parallel:1 rope_freq_base:0 rope_freq_scale:0 memory_f16:true n_gpu_layers:33 main_gpu:0 use_mlock:false use_mmap:true numa:0 embedding:true lora_adapters:<nil> mmproj:<nil> verbose_logging:true _:[0 0 0 0 0 0 0]}"
[1710513436] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | 
[1710513436] Performing pre-initialization of GPU
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 ROCm devices:
  Device 0: AMD Radeon RX 6700 XT, compute capability 10.3, VMM: no
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from C:\Users\myuser\.ollama\models\blobs\sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = mistralai
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 7.24 B
llm_load_print_meta: model size       = 3.83 GiB (4.54 BPW) 
llm_load_print_meta: general.name     = mistralai
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size =  3847.55 MiB
llm_load_tensors:        CPU buffer size =    70.31 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  ROCm_Host input buffer size   =    13.02 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =   164.00 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =     8.00 MiB
llama_new_context_with_model: graph splits (measure): 2
[1710513438] warming up the model with an empty run
CUDA error: invalid device function
  current device: 0, in function ggml_cuda_op_flatten at C:/Users/jeff/git/ollama/llm/llama.cpp/ggml-cuda.cu:10110
  hipGetLastError()
GGML_ASSERT: C:/Users/jeff/git/ollama/llm/llama.cpp/ggml-cuda.cu:256: !"CUDA error"
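For anyone reproducing this, the override was applied by setting the environment variable in the shell that starts the server (sh syntax below; the PowerShell equivalent would be `$env:HSA_OVERRIDE_GFX_VERSION = "10.3.0"`). As the `amd_windows.go` lines above show, Ollama's own detection code honors the variable, but the rocBLAS/HIP layer evidently does not on Windows:

```shell
# Set the override in the shell that will launch ollama, so the child
# process inherits it; starting the server from a different shell (or the
# tray app) would not pick it up.
export HSA_OVERRIDE_GFX_VERSION="10.3.0"
echo "$HSA_OVERRIDE_GFX_VERSION"   # prints 10.3.0
# ollama serve   # start the server from this same shell
```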
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2 llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess... llama_model_loader: - kv 23: general.quantization_version u32 = 2 llama_model_loader: - type f32: 65 tensors llama_model_loader: - type q4_0: 225 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: special tokens definition check successful ( 259/32000 ). llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32000 llm_load_print_meta: n_merges = 0 llm_load_print_meta: n_ctx_train = 32768 llm_load_print_meta: n_embd = 4096 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_layer = 32 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 4 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: n_ff = 14336 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 1000000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_yarn_orig_ctx = 32768 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: model type = 7B llm_load_print_meta: model ftype = Q4_0 llm_load_print_meta: model 
params = 7.24 B llm_load_print_meta: model size = 3.83 GiB (4.54 BPW) llm_load_print_meta: general.name = mistralai llm_load_print_meta: BOS token = 1 '<s>' llm_load_print_meta: EOS token = 2 '</s>' llm_load_print_meta: UNK token = 0 '<unk>' llm_load_print_meta: LF token = 13 '<0x0A>' llm_load_tensors: ggml ctx size = 0.22 MiB llm_load_tensors: offloading 32 repeating layers to GPU llm_load_tensors: offloading non-repeating layers to GPU llm_load_tensors: offloaded 33/33 layers to GPU llm_load_tensors: ROCm0 buffer size = 3847.55 MiB llm_load_tensors: CPU buffer size = 70.31 MiB .................................................................................................. llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: freq_base = 1000000.0 llama_new_context_with_model: freq_scale = 1 llama_kv_cache_init: ROCm0 KV buffer size = 256.00 MiB llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB llama_new_context_with_model: ROCm_Host input buffer size = 13.02 MiB llama_new_context_with_model: ROCm0 compute buffer size = 164.00 MiB llama_new_context_with_model: ROCm_Host compute buffer size = 8.00 MiB llama_new_context_with_model: graph splits (measure): 2 [1710513438] warming up the model with an empty run CUDA error: invalid device function current device: 0, in function ggml_cuda_op_flatten at C:/Users/jeff/git/ollama/llm/llama.cpp/ggml-cuda.cu:10110 hipGetLastError() GGML_ASSERT: C:/Users/jeff/git/ollama/llm/llama.cpp/ggml-cuda.cu:256: !"CUDA error" ```
Author
Owner

@eL1fe commented on GitHub (Mar 15, 2024):

Hello,

RX 6650 XT, the same issue.

```
rocBLAS error: Cannot read C:\Users\username\AppData\Local\Programs\Ollama\rocm\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1032
 List of available TensileLibrary Files :
```

@dhiltgen commented on GitHub (Mar 15, 2024):

Thanks for the logs @zharklm. The line `skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=\"10.3.0\""` looks suspicious to me, and I'm wondering if people are setting the variable with quotes, which might be why it isn't working. If you're setting it at the system level through the GUI, don't put quotes around the version string. If you're in PowerShell, setting the value to test and then running `ollama serve` in that same terminal, quotes are required to assign a literal string.

Update: Actually, @ftoppi's log shows it without quotes, yet ROCm still isn't respecting it as it's supposed to.
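For anyone double-checking their setup, the quoting rules differ per shell. The cmd.exe and PowerShell forms below are shown as comments (this is a sketch of the usual conventions, not Ollama-specific behavior); the POSIX lines at the bottom are the runnable equivalent:

```shell
# Setting HSA_OVERRIDE_GFX_VERSION per shell. The stored value must be the
# bare string 10.3.0 -- quoting syntax differs, but no quote characters may
# end up inside the value itself.
#
#   cmd.exe:     set HSA_OVERRIDE_GFX_VERSION=10.3.0
#   PowerShell:  $env:HSA_OVERRIDE_GFX_VERSION = "10.3.0"   (quotes required here)
#   System GUI:  enter 10.3.0 with no quotes
#
# POSIX sh equivalent, for illustration:
HSA_OVERRIDE_GFX_VERSION=10.3.0
export HSA_OVERRIDE_GFX_VERSION
echo "$HSA_OVERRIDE_GFX_VERSION"
```

In PowerShell the quotes are part of the assignment syntax, not the value, so the variable still ends up holding the bare string `10.3.0`.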


@ftoppi commented on GitHub (Mar 15, 2024):

Hello,

No quotes in the GUI:
![image](https://github.com/ollama/ollama/assets/4704016/eec72557-4001-4d20-afc9-4515d5f0b997)
![image](https://github.com/ollama/ollama/assets/4704016/14f0b709-baed-40c2-b4cb-fd69348b70ac)

No quotes when checking with set:
![image](https://github.com/ollama/ollama/assets/4704016/2dd39080-137e-4c71-a4a2-83b7a6f9adf3)

Logs when trying to load mistral:

C:\Users\myuser>set | findstr HSA
HSA_OVERRIDE_GFX_VERSION=10.3.0

C:\Users\myuser>set | findstr H

C:\Users\myuser>ollama serve
time=2024-03-15T21:25:52.057+01:00 level=INFO source=images.go:806 msg="total blobs: 16"
time=2024-03-15T21:25:52.058+01:00 level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-15T21:25:52.059+01:00 level=INFO source=routes.go:1110 msg="Listening on 127.0.0.1:11434 (version 0.1.29)"
time=2024-03-15T21:25:52.059+01:00 level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama4018497180\\runners ..."
time=2024-03-15T21:25:52.194+01:00 level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [rocm_v5.7 cuda_v11.3 cpu_avx2 cpu_avx cpu]"
time=2024-03-15T21:25:52.194+01:00 level=DEBUG source=payload_common.go:140 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
[GIN] 2024/03/15 - 21:26:11 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/03/15 - 21:26:11 | 200 |      1.0429ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/03/15 - 21:26:11 | 200 |       524.7µs |       127.0.0.1 | POST     "/api/show"
time=2024-03-15T21:26:12.143+01:00 level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-15T21:26:12.144+01:00 level=INFO source=gpu.go:191 msg="Searching for GPU management library nvml.dll"
time=2024-03-15T21:26:12.144+01:00 level=DEBUG source=gpu.go:209 msg="gpu management search paths: [c:\\Windows\\System32\\nvml.dll C:\\Windows\\system32\\nvml.dll* C:\\Windows\\nvml.dll* C:\\Windows\\System32\\Wbem\\nvml.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll* C:\\Windows\\System32\\OpenSSH\\nvml.dll* C:\\Program Files\\Git\\cmd\\nvml.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll* C:\\Program Files\\Calibre2\\nvml.dll* C:\\Program Files\\dotnet\\nvml.dll* C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\nvml.dll*]"
time=2024-03-15T21:26:12.147+01:00 level=INFO source=gpu.go:237 msg="Discovered GPU libraries: []"
time=2024-03-15T21:26:12.147+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T21:26:12.163+01:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50732000"
time=2024-03-15T21:26:12.163+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T21:26:12.163+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T21:26:12.163+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"
time=2024-03-15T21:26:12.163+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"
time=2024-03-15T21:26:12.163+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT"
time=2024-03-15T21:26:12.163+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031"
time=2024-03-15T21:26:12.395+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944"
time=2024-03-15T21:26:12.395+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem:  12868124672"
time=2024-03-15T21:26:12.395+01:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Calibre2\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama"
time=2024-03-15T21:26:12.448+01:00 level=DEBUG source=gpu.go:180 msg="rocm detected 1 devices with 11044M available memory"
time=2024-03-15T21:26:12.448+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T21:26:12.457+01:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50732000"
time=2024-03-15T21:26:12.457+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T21:26:12.458+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T21:26:12.458+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"
time=2024-03-15T21:26:12.458+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"
time=2024-03-15T21:26:12.458+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT"
time=2024-03-15T21:26:12.459+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031"
time=2024-03-15T21:26:12.687+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944"
time=2024-03-15T21:26:12.687+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem:  12868124672"
time=2024-03-15T21:26:12.745+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T21:26:12.745+01:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama4018497180\\runners\\rocm_v5.7\\ext_server.dll C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama4018497180\\runners\\cpu_avx2\\ext_server.dll]"
time=2024-03-15T21:26:12.746+01:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama4018497180\\runners\\rocm_v5.7;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Calibre2\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama"
loading library C:\Users\myuser\AppData\Local\Temp\ollama4018497180\runners\rocm_v5.7\ext_server.dll
time=2024-03-15T21:26:12.757+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama4018497180\\runners\\rocm_v5.7\\ext_server.dll"
time=2024-03-15T21:26:12.757+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
time=2024-03-15T21:26:12.757+01:00 level=DEBUG source=dyn_ext_server.go:151 msg="server params: {model:0x200dca0ee30 n_ctx:2048 n_batch:512 n_threads:0 n_parallel:1 rope_freq_base:0 rope_freq_scale:0 memory_f16:true n_gpu_layers:33 main_gpu:0 use_mlock:false use_mmap:true numa:0 embedding:true lora_adapters:<nil> mmproj:<nil> verbose_logging:true _:[0 0 0 0 0 0 0]}"
[1710534372] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
[1710534372] Performing pre-initialization of GPU

rocBLAS error: Cannot read C:\Users\myuser\AppData\Local\Programs\Ollama\rocm\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1031
 List of available TensileLibrary Files :

Not much happened on the VRAM side:
![image](https://github.com/ollama/ollama/assets/4704016/6238b990-5017-4ad1-ac62-7a4416206d0b)


@dhiltgen commented on GitHub (Mar 15, 2024):

Unfortunately it appears this might not be possible on Windows ROCm yet. (I didn't realize this was a Linux only capability)

https://github.com/ROCm/ROCm/issues/2654


@ftoppi commented on GitHub (Mar 15, 2024):

That's unfortunate.
What do you think about the error from my second attempt, where I copied the gfx1030 files and renamed them gfx1031? Do you think that approach could work?

```
CUDA error: invalid device function
  current device: 0, in function ggml_cuda_op_flatten at C:/Users/jeff/git/ollama/llm/llama.cpp/ggml-cuda.cu:10110
  hipGetLastError()
GGML_ASSERT: C:/Users/jeff/git/ollama/llm/llama.cpp/ggml-cuda.cu:256: !"CUDA error"
```

When trying to load the model, it seems to do something, the VRAM usage increases:
![image](https://github.com/ollama/ollama/assets/4704016/94511ec6-1334-4aa2-ab33-e40ab4c45851)
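The copy-and-rename experiment described above can be sketched roughly as follows. This is a hypothetical illustration: `LIB` defaults to a local demo directory and a stand-in file is created for demonstration; Ollama's real directory on Windows would be `%LOCALAPPDATA%\Programs\Ollama\rocm\rocblas\library`.

```shell
# Sketch: duplicate each gfx1030 Tensile file under a gfx1031 name.
LIB="${LIB:-./demo_library}"
mkdir -p "$LIB"
touch "$LIB/TensileLibrary_gfx1030.dat"   # stand-in for the real file
for f in "$LIB"/*gfx1030*; do
  [ -e "$f" ] || continue                 # skip if nothing matched the glob
  cp -- "$f" "$(printf '%s\n' "$f" | sed 's/gfx1030/gfx1031/g')"
done
ls "$LIB"
```

As the `invalid device function` error above suggests, renaming the files only satisfies rocBLAS's lookup; the kernels inside are still compiled for gfx1030, so the GPU may reject them at launch.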


@dhiltgen commented on GitHub (Mar 15, 2024):

@ftoppi unfortunately I've never tried to perform surgery on the tensor files, so I'm not sure how complex that is (or if it's even possible.)

I'll explore if there's some workaround we can do at our level, but we may have to wait for an updated ROCm release on windows to get the override support.


@ftoppi commented on GitHub (Mar 15, 2024):

I think ROCm 6.0.2 is available on Windows, but I still don't understand whether the HIP SDK is required or not.
https://rocm.docs.amd.com/projects/install-on-windows/en/latest/reference/system-requirements.html


@zharklm commented on GitHub (Mar 19, 2024):

Thanks for the response @dhiltgen. As @ftoppi mentioned, I also tried variations with and without quotes, using 9.0.0, 9.0.6, and 10.3.0. As you said, it seems we're waiting for an updated version of ROCm. If it's of any benefit to #3192 or #2033, I have been able to get Vulkan working on the same hardware using [this release of nod.ai/SHARK](https://github.com/nod-ai/SHARK/releases/tag/20231229.1091).


@likelovewant commented on GitHub (Mar 30, 2024):

> Unfortunately it appears this might not be possible on Windows ROCm yet. (I didn't realize this was a Linux only capability)
>
> https://github.com/ROCm/ROCm/issues/2654

Actually, there is a solution. The error occurs because rocblas.dll doesn't link against the lazy-load files in ROCm. The fix is to build a dedicated library for your card from AMD's rocBLAS and Tensile repositories on GitHub, using a card that shares a similar architecture: for my gfx1103, I used the gfx1102/gfx1101 rocBLAS data to build and compile my own library, then replaced the official files in ROCm's rocblas folder with the resulting rocblas.dll and library. The method is described in the last step on my GitHub, under the title about Forge on AMD. However, I found that this ROCm setup conflicts with ZLUDA on my PATH: Ollama then detects my GPU as NVIDIA rather than ROCm and falls back to running on the CPU, so I have to remove ZLUDA from PATH each time I want to use Ollama, which is quite inconvenient. A topic has been opened to discuss this, and I checked that the code has been updated, but that doesn't solve the ZLUDA issue.
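For reference, the rebuild workflow might look roughly like the dry-run plan below. This is a sketch under assumptions: the repo URL is AMD's public rocBLAS repository, and the `--architecture` flag is an assumption about rocBLAS's install script (check `./install.sh --help` before relying on it). Nothing here executes a build; the plan is only printed.

```shell
# Dry-run sketch: print the commands one would run to rebuild rocBLAS/Tensile
# for a near-match gfx architecture, then swap the output into Ollama's
# bundled rocm directory.
ARCH=gfx1031
PLAN=$(cat <<EOF
git clone https://github.com/ROCm/rocBLAS
cd rocBLAS
./install.sh --architecture $ARCH
# then replace rocblas.dll and rocblas/library/ under Ollama's rocm folder
EOF
)
echo "$PLAN"
```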


@TheLapinMalin commented on GitHub (Mar 31, 2024):

Have you looked at what YellowRose did on the KoboldCPP-ROCm fork to get the 6700 cards to work? It runs on my 6750 with no issues.

https://github.com/YellowRoseCx/koboldcpp-rocm


@Sohnny0 commented on GitHub (Apr 23, 2024):

Hello,

The RX 6700 XT runs into the same problem.

time=2024-03-15T15:31:52.043+01:00 level=INFO source=images.go:806 msg="total blobs: 12"
time=2024-03-15T15:31:52.044+01:00 level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-15T15:31:52.045+01:00 level=INFO source=routes.go:1110 msg="Listening on 127.0.0.1:11434 (version 0.1.29)"
time=2024-03-15T15:31:52.045+01:00 level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama3707111582\\runners ..."
time=2024-03-15T15:31:52.183+01:00 level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu_avx rocm_v5.7 cpu_avx2 cuda_v11.3]"
time=2024-03-15T15:31:52.183+01:00 level=DEBUG source=payload_common.go:140 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
[GIN] 2024/03/15 - 15:32:06 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/03/15 - 15:32:06 | 200 |      1.0411ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/03/15 - 15:32:06 | 200 |       525.2µs |       127.0.0.1 | POST     "/api/show"
time=2024-03-15T15:32:06.738+01:00 level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-15T15:32:06.738+01:00 level=INFO source=gpu.go:191 msg="Searching for GPU management library nvml.dll"
time=2024-03-15T15:32:06.739+01:00 level=DEBUG source=gpu.go:209 msg="gpu management search paths: [c:\\Windows\\System32\\nvml.dll C:\\Windows\\system32\\nvml.dll* C:\\Windows\\nvml.dll* C:\\Windows\\System32\\Wbem\\nvml.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll* C:\\Windows\\System32\\OpenSSH\\nvml.dll* C:\\Program Files\\Git\\cmd\\nvml.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll* C:\\Program Files\\Calibre2\\nvml.dll* C:\\Program Files\\dotnet\\nvml.dll* C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\nvml.dll*]"
time=2024-03-15T15:32:06.742+01:00 level=INFO source=gpu.go:237 msg="Discovered GPU libraries: []"
time=2024-03-15T15:32:06.742+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T15:32:06.755+01:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50732000"
time=2024-03-15T15:32:06.755+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:32:06.755+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:32:06.755+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"
time=2024-03-15T15:32:06.755+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"
time=2024-03-15T15:32:06.755+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT"
time=2024-03-15T15:32:06.755+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031"
time=2024-03-15T15:32:06.984+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944"
time=2024-03-15T15:32:06.984+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem:  12868124672"
time=2024-03-15T15:32:06.984+01:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Calibre2\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama"
time=2024-03-15T15:32:07.050+01:00 level=DEBUG source=gpu.go:180 msg="rocm detected 1 devices with 11044M available memory"
time=2024-03-15T15:32:07.050+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T15:32:07.059+01:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50732000"
time=2024-03-15T15:32:07.059+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:32:07.059+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:32:07.059+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"
time=2024-03-15T15:32:07.059+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"
time=2024-03-15T15:32:07.059+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT"
time=2024-03-15T15:32:07.059+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031"
time=2024-03-15T15:32:07.279+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944"
time=2024-03-15T15:32:07.279+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem:  12868124672"
time=2024-03-15T15:32:07.345+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T15:32:07.345+01:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama3707111582\\runners\\rocm_v5.7\\ext_server.dll C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama3707111582\\runners\\cpu_avx2\\ext_server.dll]"
time=2024-03-15T15:32:07.346+01:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama3707111582\\runners\\rocm_v5.7;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Calibre2\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama"
time=2024-03-15T15:32:07.352+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama3707111582\\runners\\rocm_v5.7\\ext_server.dll"
time=2024-03-15T15:32:07.352+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
time=2024-03-15T15:32:07.352+01:00 level=DEBUG source=dyn_ext_server.go:151 msg="server params: {model:0x28e6670ee00 n_ctx:2048 n_batch:512 n_threads:0 n_parallel:1 rope_freq_base:0 rope_freq_scale:0 memory_f16:true n_gpu_layers:33 main_gpu:0 use_mlock:false use_mmap:true numa:0 embedding:true lora_adapters:<nil> mmproj:<nil> verbose_logging:true _:[0 0 0 0 0 0 0]}"
[1710513127] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | 
[1710513127] Performing pre-initialization of GPU

rocBLAS error: Cannot read C:\Users\myuser\AppData\Local\Programs\Ollama\rocm\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1031
 List of available TensileLibrary Files : 

So I copied all the gfx1030 files to gfx1031 names. It got further, but then crashed again:
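That copy step can be sketched as follows. The library path and file-name patterns here are assumptions based on the error message; note the copies only satisfy rocBLAS's file lookup, since the kernels inside remain gfx1030 binaries and can still fail at launch, as the log below shows:

```python
import shutil
from pathlib import Path

# Hypothetical location of the rocBLAS kernel library shipped with Ollama.
ROCBLAS_LIB = Path(r"C:\Users\myuser\AppData\Local\Programs\Ollama\rocm\rocblas\library")

def clone_arch_files(lib_dir: Path, src_arch: str = "gfx1030",
                     dst_arch: str = "gfx1031") -> list[Path]:
    """Copy every file whose name mentions src_arch to a twin named for dst_arch."""
    copies = []
    for src in lib_dir.glob(f"*{src_arch}*"):
        dst = src.with_name(src.name.replace(src_arch, dst_arch))
        if not dst.exists():
            shutil.copy2(src, dst)
            copies.append(dst)
    return copies
```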

time=2024-03-15T15:37:06.151+01:00 level=INFO source=images.go:806 msg="total blobs: 12"
time=2024-03-15T15:37:06.152+01:00 level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-15T15:37:06.152+01:00 level=INFO source=routes.go:1110 msg="Listening on 127.0.0.1:11434 (version 0.1.29)"
time=2024-03-15T15:37:06.152+01:00 level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama1387933699\\runners ..."
time=2024-03-15T15:37:06.289+01:00 level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu cuda_v11.3 cpu_avx cpu_avx2 rocm_v5.7]"
time=2024-03-15T15:37:06.289+01:00 level=DEBUG source=payload_common.go:140 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
[GIN] 2024/03/15 - 15:37:15 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/03/15 - 15:37:15 | 200 |      1.0443ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/03/15 - 15:37:15 | 200 |       525.5µs |       127.0.0.1 | POST     "/api/show"
time=2024-03-15T15:37:15.633+01:00 level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-15T15:37:15.633+01:00 level=INFO source=gpu.go:191 msg="Searching for GPU management library nvml.dll"
time=2024-03-15T15:37:15.633+01:00 level=DEBUG source=gpu.go:209 msg="gpu management search paths: [c:\\Windows\\System32\\nvml.dll C:\\Windows\\system32\\nvml.dll* C:\\Windows\\nvml.dll* C:\\Windows\\System32\\Wbem\\nvml.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll* C:\\Windows\\System32\\OpenSSH\\nvml.dll* C:\\Program Files\\Git\\cmd\\nvml.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll* C:\\Program Files\\Calibre2\\nvml.dll* C:\\Program Files\\dotnet\\nvml.dll* C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll* C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\nvml.dll*]"
time=2024-03-15T15:37:15.636+01:00 level=INFO source=gpu.go:237 msg="Discovered GPU libraries: []"
time=2024-03-15T15:37:15.636+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T15:37:15.649+01:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50732000"
time=2024-03-15T15:37:15.649+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:37:15.649+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:37:15.649+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"
time=2024-03-15T15:37:15.649+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"
time=2024-03-15T15:37:15.649+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT"
time=2024-03-15T15:37:15.649+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031"
time=2024-03-15T15:37:15.878+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944"
time=2024-03-15T15:37:15.879+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem:  12868124672"
time=2024-03-15T15:37:15.879+01:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Calibre2\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama"
time=2024-03-15T15:37:15.932+01:00 level=DEBUG source=gpu.go:180 msg="rocm detected 1 devices with 11044M available memory"
time=2024-03-15T15:37:15.932+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50732000"
time=2024-03-15T15:37:15.941+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:37:15.941+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm"
time=2024-03-15T15:37:15.941+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"
time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"
time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT"
time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031"
time=2024-03-15T15:37:16.159+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944"
time=2024-03-15T15:37:16.159+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem:  12868124672"
time=2024-03-15T15:37:16.215+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-15T15:37:16.215+01:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama1387933699\\runners\\rocm_v5.7\\ext_server.dll C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama1387933699\\runners\\cpu_avx2\\ext_server.dll]"
time=2024-03-15T15:37:16.215+01:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama1387933699\\runners\\rocm_v5.7;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Calibre2\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama"
time=2024-03-15T15:37:16.221+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama1387933699\\runners\\rocm_v5.7\\ext_server.dll"
time=2024-03-15T15:37:16.221+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
time=2024-03-15T15:37:16.221+01:00 level=DEBUG source=dyn_ext_server.go:151 msg="server params: {model:0x2314baeee40 n_ctx:2048 n_batch:512 n_threads:0 n_parallel:1 rope_freq_base:0 rope_freq_scale:0 memory_f16:true n_gpu_layers:33 main_gpu:0 use_mlock:false use_mmap:true numa:0 embedding:true lora_adapters:<nil> mmproj:<nil> verbose_logging:true _:[0 0 0 0 0 0 0]}"
[1710513436] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | 
[1710513436] Performing pre-initialization of GPU
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 ROCm devices:
  Device 0: AMD Radeon RX 6700 XT, compute capability 10.3, VMM: no
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from C:\Users\myuser\.ollama\models\blobs\sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = mistralai
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 7.24 B
llm_load_print_meta: model size       = 3.83 GiB (4.54 BPW) 
llm_load_print_meta: general.name     = mistralai
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size =  3847.55 MiB
llm_load_tensors:        CPU buffer size =    70.31 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  ROCm_Host input buffer size   =    13.02 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =   164.00 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =     8.00 MiB
llama_new_context_with_model: graph splits (measure): 2
[1710513438] warming up the model with an empty run
CUDA error: invalid device function
  current device: 0, in function ggml_cuda_op_flatten at C:/Users/jeff/git/ollama/llm/llama.cpp/ggml-cuda.cu:10110
  hipGetLastError()
GGML_ASSERT: C:/Users/jeff/git/ollama/llm/llama.cpp/ggml-cuda.cu:256: !"CUDA error"

GPT4All supports at least the 6700 XT GPU, but only within their chat client; attempting to access it through the API interface causes the program to crash. I wonder whether the approach GPT4All uses could be shared to help those affected here?

source=amd_windows.go:40 msg="AMD Driver: 50732000" > time=2024-03-15T15:37:15.649+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm" > time=2024-03-15T15:37:15.649+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm" > time=2024-03-15T15:37:15.649+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0" > time=2024-03-15T15:37:15.649+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices" > time=2024-03-15T15:37:15.649+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT" > time=2024-03-15T15:37:15.649+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031" > time=2024-03-15T15:37:15.878+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944" > time=2024-03-15T15:37:15.879+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem: 12868124672" > time=2024-03-15T15:37:15.879+01:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Calibre2\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama" > 
time=2024-03-15T15:37:15.932+01:00 level=DEBUG source=gpu.go:180 msg="rocm detected 1 devices with 11044M available memory" > time=2024-03-15T15:37:15.932+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2" > time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50732000" > time=2024-03-15T15:37:15.941+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm" > time=2024-03-15T15:37:15.941+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm" > time=2024-03-15T15:37:15.941+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0" > time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices" > time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT" > time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031" > time=2024-03-15T15:37:16.159+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944" > time=2024-03-15T15:37:16.159+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem: 12868124672" > time=2024-03-15T15:37:16.215+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2" > time=2024-03-15T15:37:16.215+01:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama1387933699\\runners\\rocm_v5.7\\ext_server.dll C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama1387933699\\runners\\cpu_avx2\\ext_server.dll]" > time=2024-03-15T15:37:16.215+01:00 level=INFO source=assets.go:63 msg="Updating PATH to 
C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama1387933699\\runners\\rocm_v5.7;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama\\rocm;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\Git\\cmd;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Calibre2\\;C:\\Program Files\\dotnet\\;C:\\Program Files\\Intel\\PresentMon\\PresentMonApplication\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\;C:\\Users\\myuser\\AppData\\Local\\Programs\\Python\\Python310\\;C:\\Users\\myuser\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\myuser\\AppData\\Local\\Programs\\Ollama" > time=2024-03-15T15:37:16.221+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\myuser\\AppData\\Local\\Temp\\ollama1387933699\\runners\\rocm_v5.7\\ext_server.dll" > time=2024-03-15T15:37:16.221+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server" > time=2024-03-15T15:37:16.221+01:00 level=DEBUG source=dyn_ext_server.go:151 msg="server params: {model:0x2314baeee40 n_ctx:2048 n_batch:512 n_threads:0 n_parallel:1 rope_freq_base:0 rope_freq_scale:0 memory_f16:true n_gpu_layers:33 main_gpu:0 use_mlock:false use_mmap:true numa:0 embedding:true lora_adapters:<nil> mmproj:<nil> verbose_logging:true _:[0 0 0 0 0 0 0]}" > [1710513436] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | > [1710513436] Performing pre-initialization of GPU > ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no > ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes > ggml_init_cublas: found 1 ROCm devices: > 
Device 0: AMD Radeon RX 6700 XT, compute capability 10.3, VMM: no > llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from C:\Users\myuser\.ollama\models\blobs\sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest)) > llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. > llama_model_loader: - kv 0: general.architecture str = llama > llama_model_loader: - kv 1: general.name str = mistralai > llama_model_loader: - kv 2: llama.context_length u32 = 32768 > llama_model_loader: - kv 3: llama.embedding_length u32 = 4096 > llama_model_loader: - kv 4: llama.block_count u32 = 32 > llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336 > llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 > llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 > llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8 > llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 > llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000 > llama_model_loader: - kv 11: general.file_type u32 = 2 > llama_model_loader: - kv 12: tokenizer.ggml.model str = llama > llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... > llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... > llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... > llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,58980] = ["▁ t", "i n", "e r", "▁ a", "h e... 
> llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1 > llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2 > llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0 > llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true > llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false > llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess... > llama_model_loader: - kv 23: general.quantization_version u32 = 2 > llama_model_loader: - type f32: 65 tensors > llama_model_loader: - type q4_0: 225 tensors > llama_model_loader: - type q6_K: 1 tensors > llm_load_vocab: special tokens definition check successful ( 259/32000 ). > llm_load_print_meta: format = GGUF V3 (latest) > llm_load_print_meta: arch = llama > llm_load_print_meta: vocab type = SPM > llm_load_print_meta: n_vocab = 32000 > llm_load_print_meta: n_merges = 0 > llm_load_print_meta: n_ctx_train = 32768 > llm_load_print_meta: n_embd = 4096 > llm_load_print_meta: n_head = 32 > llm_load_print_meta: n_head_kv = 8 > llm_load_print_meta: n_layer = 32 > llm_load_print_meta: n_rot = 128 > llm_load_print_meta: n_embd_head_k = 128 > llm_load_print_meta: n_embd_head_v = 128 > llm_load_print_meta: n_gqa = 4 > llm_load_print_meta: n_embd_k_gqa = 1024 > llm_load_print_meta: n_embd_v_gqa = 1024 > llm_load_print_meta: f_norm_eps = 0.0e+00 > llm_load_print_meta: f_norm_rms_eps = 1.0e-05 > llm_load_print_meta: f_clamp_kqv = 0.0e+00 > llm_load_print_meta: f_max_alibi_bias = 0.0e+00 > llm_load_print_meta: n_ff = 14336 > llm_load_print_meta: n_expert = 0 > llm_load_print_meta: n_expert_used = 0 > llm_load_print_meta: pooling type = 0 > llm_load_print_meta: rope type = 0 > llm_load_print_meta: rope scaling = linear > llm_load_print_meta: freq_base_train = 1000000.0 > llm_load_print_meta: freq_scale_train = 1 > llm_load_print_meta: n_yarn_orig_ctx = 32768 > llm_load_print_meta: rope_finetuned = unknown > llm_load_print_meta: 
model type = 7B > llm_load_print_meta: model ftype = Q4_0 > llm_load_print_meta: model params = 7.24 B > llm_load_print_meta: model size = 3.83 GiB (4.54 BPW) > llm_load_print_meta: general.name = mistralai > llm_load_print_meta: BOS token = 1 '<s>' > llm_load_print_meta: EOS token = 2 '</s>' > llm_load_print_meta: UNK token = 0 '<unk>' > llm_load_print_meta: LF token = 13 '<0x0A>' > llm_load_tensors: ggml ctx size = 0.22 MiB > llm_load_tensors: offloading 32 repeating layers to GPU > llm_load_tensors: offloading non-repeating layers to GPU > llm_load_tensors: offloaded 33/33 layers to GPU > llm_load_tensors: ROCm0 buffer size = 3847.55 MiB > llm_load_tensors: CPU buffer size = 70.31 MiB > .................................................................................................. > llama_new_context_with_model: n_ctx = 2048 > llama_new_context_with_model: freq_base = 1000000.0 > llama_new_context_with_model: freq_scale = 1 > llama_kv_cache_init: ROCm0 KV buffer size = 256.00 MiB > llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB > llama_new_context_with_model: ROCm_Host input buffer size = 13.02 MiB > llama_new_context_with_model: ROCm0 compute buffer size = 164.00 MiB > llama_new_context_with_model: ROCm_Host compute buffer size = 8.00 MiB > llama_new_context_with_model: graph splits (measure): 2 > [1710513438] warming up the model with an empty run > CUDA error: invalid device function > current device: 0, in function ggml_cuda_op_flatten at C:/Users/jeff/git/ollama/llm/llama.cpp/ggml-cuda.cu:10110 > hipGetLastError() > GGML_ASSERT: C:/Users/jeff/git/ollama/llm/llama.cpp/ggml-cuda.cu:256: !"CUDA error" > ``` GPT4ALL supports at least the 6700XT GPU, but only within their chat client. Attempting to access it through the API interface causes the program to crash. I wonder if the support principles behind GPT4ALL could be shared to help those in need?
Author
Owner

@likelovewant commented on GitHub (Apr 24, 2024):

Hello,

The RX 6700 XT runs into the same problem.
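For context, the override only takes effect if the variable is visible to the ollama process before it starts. A minimal sketch (POSIX shell shown for brevity; on Windows the equivalent would be `setx HSA_OVERRIDE_GFX_VERSION 10.3.0` or the System Properties dialog, followed by restarting ollama):

```shell
# Make the ROCm gfx override visible to ollama before launch.
# "10.3.0" tells the HSA runtime to report gfx1031/gfx1032 parts as gfx1030.
export HSA_OVERRIDE_GFX_VERSION="10.3.0"
echo "HSA_OVERRIDE_GFX_VERSION=$HSA_OVERRIDE_GFX_VERSION"
# ollama serve   # launch with the override active
```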

time=2024-03-15T15:31:52.043+01:00 level=INFO source=images.go:806 msg="total blobs: 12"

time=2024-03-15T15:31:52.044+01:00 level=INFO source=images.go:813 msg="total unused blobs removed: 0"

time=2024-03-15T15:31:52.045+01:00 level=INFO source=routes.go:1110 msg="Listening on 127.0.0.1:11434 (version 0.1.29)"

time=2024-03-15T15:31:52.045+01:00 level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to C:\Users\myuser\AppData\Local\Temp\ollama3707111582\runners ..."

time=2024-03-15T15:31:52.183+01:00 level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu_avx rocm_v5.7 cpu_avx2 cuda_v11.3]"

time=2024-03-15T15:31:52.183+01:00 level=DEBUG source=payload_common.go:140 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"

[GIN] 2024/03/15 - 15:32:06 | 200 | 0s | 127.0.0.1 | HEAD "/"

[GIN] 2024/03/15 - 15:32:06 | 200 | 1.0411ms | 127.0.0.1 | POST "/api/show"

[GIN] 2024/03/15 - 15:32:06 | 200 | 525.2µs | 127.0.0.1 | POST "/api/show"

time=2024-03-15T15:32:06.738+01:00 level=INFO source=gpu.go:77 msg="Detecting GPU type"

time=2024-03-15T15:32:06.738+01:00 level=INFO source=gpu.go:191 msg="Searching for GPU management library nvml.dll"

time=2024-03-15T15:32:06.739+01:00 level=DEBUG source=gpu.go:209 msg="gpu management search paths: [c:\Windows\System32\nvml.dll C:\Windows\system32\nvml.dll* C:\Windows\nvml.dll* C:\Windows\System32\Wbem\nvml.dll* C:\Windows\System32\WindowsPowerShell\v1.0\nvml.dll* C:\Windows\System32\OpenSSH\nvml.dll* C:\Program Files\Git\cmd\nvml.dll* C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvml.dll* C:\Program Files\Calibre2\nvml.dll* C:\Program Files\dotnet\nvml.dll* C:\Program Files\Intel\PresentMon\PresentMonApplication\nvml.dll* C:\Users\myuser\AppData\Local\Programs\Python\Python311\Scripts\nvml.dll* C:\Users\myuser\AppData\Local\Programs\Python\Python311\nvml.dll* C:\Users\myuser\AppData\Local\Programs\Python\Python310\Scripts\nvml.dll* C:\Users\myuser\AppData\Local\Programs\Python\Python310\nvml.dll* C:\Users\myuser\AppData\Local\Microsoft\WindowsApps\nvml.dll* C:\Users\myuser\AppData\Local\Programs\Ollama\nvml.dll*]"

time=2024-03-15T15:32:06.742+01:00 level=INFO source=gpu.go:237 msg="Discovered GPU libraries: []"

time=2024-03-15T15:32:06.742+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"

time=2024-03-15T15:32:06.755+01:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50732000"

time=2024-03-15T15:32:06.755+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\Users\myuser\AppData\Local\Programs\Ollama\rocm"

time=2024-03-15T15:32:06.755+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\Users\myuser\AppData\Local\Programs\Ollama\rocm"

time=2024-03-15T15:32:06.755+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"

time=2024-03-15T15:32:06.755+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"

time=2024-03-15T15:32:06.755+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT"

time=2024-03-15T15:32:06.755+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031"

time=2024-03-15T15:32:06.984+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944"

time=2024-03-15T15:32:06.984+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem: 12868124672"

time=2024-03-15T15:32:06.984+01:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\Users\myuser\AppData\Local\Programs\Ollama\rocm;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files\Git\cmd;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\Calibre2\;C:\Program Files\dotnet\;C:\Program Files\Intel\PresentMon\PresentMonApplication\;C:\Users\myuser\AppData\Local\Programs\Python\Python311\Scripts\;C:\Users\myuser\AppData\Local\Programs\Python\Python311\;C:\Users\myuser\AppData\Local\Programs\Python\Python310\Scripts\;C:\Users\myuser\AppData\Local\Programs\Python\Python310\;C:\Users\myuser\AppData\Local\Microsoft\WindowsApps;C:\Users\myuser\AppData\Local\Programs\Ollama"

time=2024-03-15T15:32:07.050+01:00 level=DEBUG source=gpu.go:180 msg="rocm detected 1 devices with 11044M available memory"

time=2024-03-15T15:32:07.050+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"

time=2024-03-15T15:32:07.059+01:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50732000"

time=2024-03-15T15:32:07.059+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\Users\myuser\AppData\Local\Programs\Ollama\rocm"

time=2024-03-15T15:32:07.059+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\Users\myuser\AppData\Local\Programs\Ollama\rocm"

time=2024-03-15T15:32:07.059+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"

time=2024-03-15T15:32:07.059+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"

time=2024-03-15T15:32:07.059+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT"

time=2024-03-15T15:32:07.059+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031"

time=2024-03-15T15:32:07.279+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944"

time=2024-03-15T15:32:07.279+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem: 12868124672"

time=2024-03-15T15:32:07.345+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"

time=2024-03-15T15:32:07.345+01:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [C:\Users\myuser\AppData\Local\Temp\ollama3707111582\runners\rocm_v5.7\ext_server.dll C:\Users\myuser\AppData\Local\Temp\ollama3707111582\runners\cpu_avx2\ext_server.dll]"

time=2024-03-15T15:32:07.346+01:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\Users\myuser\AppData\Local\Temp\ollama3707111582\runners\rocm_v5.7;C:\Users\myuser\AppData\Local\Programs\Ollama\rocm;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files\Git\cmd;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\Calibre2\;C:\Program Files\dotnet\;C:\Program Files\Intel\PresentMon\PresentMonApplication\;C:\Users\myuser\AppData\Local\Programs\Python\Python311\Scripts\;C:\Users\myuser\AppData\Local\Programs\Python\Python311\;C:\Users\myuser\AppData\Local\Programs\Python\Python310\Scripts\;C:\Users\myuser\AppData\Local\Programs\Python\Python310\;C:\Users\myuser\AppData\Local\Microsoft\WindowsApps;C:\Users\myuser\AppData\Local\Programs\Ollama"

time=2024-03-15T15:32:07.352+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\Users\myuser\AppData\Local\Temp\ollama3707111582\runners\rocm_v5.7\ext_server.dll"

time=2024-03-15T15:32:07.352+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"

time=2024-03-15T15:32:07.352+01:00 level=DEBUG source=dyn_ext_server.go:151 msg="server params: {model:0x28e6670ee00 n_ctx:2048 n_batch:512 n_threads:0 n_parallel:1 rope_freq_base:0 rope_freq_scale:0 memory_f16:true n_gpu_layers:33 main_gpu:0 use_mlock:false use_mmap:true numa:0 embedding:true lora_adapters:<nil> mmproj:<nil> verbose_logging:true _:[0 0 0 0 0 0 0]}"

[1710513127] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |

[1710513127] Performing pre-initialization of GPU

rocBLAS error: Cannot read C:\Users\myuser\AppData\Local\Programs\Ollama\rocm/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1031

List of available TensileLibrary Files :

So I copied all the gfx1030 files under the gfx1031 name; it got further, but then crashed again:
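The copy-as-gfx1031 workaround can be sketched as below. The loop runs on hypothetical dummy file names in a temp dir; the real files live in `rocm\rocblas\library` next to the ollama executable, and their exact names differ. Note this only papers over the missing library files, so the kernels themselves may still fail at run time:

```shell
# Demo of the workaround on dummy files (names are illustrative, not the
# real rocBLAS file names). For the real thing, run the loop inside
# ollama's rocm\rocblas\library directory.
libdir="$(mktemp -d)"
touch "$libdir/TensileLibrary_gfx1030.dat" "$libdir/Kernels_gfx1030.hsaco"
for f in "$libdir"/*gfx1030*; do
  cp "$f" "${f//gfx1030/gfx1031}"   # bash substitution: swap the arch suffix
done
ls "$libdir"   # now holds both gfx1030 originals and gfx1031 copies
```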

time=2024-03-15T15:37:06.151+01:00 level=INFO source=images.go:806 msg="total blobs: 12"

time=2024-03-15T15:37:06.152+01:00 level=INFO source=images.go:813 msg="total unused blobs removed: 0"

time=2024-03-15T15:37:06.152+01:00 level=INFO source=routes.go:1110 msg="Listening on 127.0.0.1:11434 (version 0.1.29)"

time=2024-03-15T15:37:06.152+01:00 level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to C:\Users\myuser\AppData\Local\Temp\ollama1387933699\runners ..."

time=2024-03-15T15:37:06.289+01:00 level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu cuda_v11.3 cpu_avx cpu_avx2 rocm_v5.7]"

time=2024-03-15T15:37:06.289+01:00 level=DEBUG source=payload_common.go:140 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"

[GIN] 2024/03/15 - 15:37:15 | 200 | 0s | 127.0.0.1 | HEAD "/"

[GIN] 2024/03/15 - 15:37:15 | 200 | 1.0443ms | 127.0.0.1 | POST "/api/show"

[GIN] 2024/03/15 - 15:37:15 | 200 | 525.5µs | 127.0.0.1 | POST "/api/show"

time=2024-03-15T15:37:15.633+01:00 level=INFO source=gpu.go:77 msg="Detecting GPU type"

time=2024-03-15T15:37:15.633+01:00 level=INFO source=gpu.go:191 msg="Searching for GPU management library nvml.dll"

time=2024-03-15T15:37:15.633+01:00 level=DEBUG source=gpu.go:209 msg="gpu management search paths: [c:\Windows\System32\nvml.dll C:\Windows\system32\nvml.dll* C:\Windows\nvml.dll* C:\Windows\System32\Wbem\nvml.dll* C:\Windows\System32\WindowsPowerShell\v1.0\nvml.dll* C:\Windows\System32\OpenSSH\nvml.dll* C:\Program Files\Git\cmd\nvml.dll* C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvml.dll* C:\Program Files\Calibre2\nvml.dll* C:\Program Files\dotnet\nvml.dll* C:\Program Files\Intel\PresentMon\PresentMonApplication\nvml.dll* C:\Users\myuser\AppData\Local\Programs\Python\Python311\Scripts\nvml.dll* C:\Users\myuser\AppData\Local\Programs\Python\Python311\nvml.dll* C:\Users\myuser\AppData\Local\Programs\Python\Python310\Scripts\nvml.dll* C:\Users\myuser\AppData\Local\Programs\Python\Python310\nvml.dll* C:\Users\myuser\AppData\Local\Microsoft\WindowsApps\nvml.dll* C:\Users\myuser\AppData\Local\Programs\Ollama\nvml.dll*]"

time=2024-03-15T15:37:15.636+01:00 level=INFO source=gpu.go:237 msg="Discovered GPU libraries: []"

time=2024-03-15T15:37:15.636+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"

time=2024-03-15T15:37:15.649+01:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50732000"

time=2024-03-15T15:37:15.649+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\Users\myuser\AppData\Local\Programs\Ollama\rocm"

time=2024-03-15T15:37:15.649+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\Users\myuser\AppData\Local\Programs\Ollama\rocm"

time=2024-03-15T15:37:15.649+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"

time=2024-03-15T15:37:15.649+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"

time=2024-03-15T15:37:15.649+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT"

time=2024-03-15T15:37:15.649+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031"

time=2024-03-15T15:37:15.878+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944"

time=2024-03-15T15:37:15.879+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem: 12868124672"

time=2024-03-15T15:37:15.879+01:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\Users\myuser\AppData\Local\Programs\Ollama\rocm;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files\Git\cmd;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\Calibre2\;C:\Program Files\dotnet\;C:\Program Files\Intel\PresentMon\PresentMonApplication\;C:\Users\myuser\AppData\Local\Programs\Python\Python311\Scripts\;C:\Users\myuser\AppData\Local\Programs\Python\Python311\;C:\Users\myuser\AppData\Local\Programs\Python\Python310\Scripts\;C:\Users\myuser\AppData\Local\Programs\Python\Python310\;C:\Users\myuser\AppData\Local\Microsoft\WindowsApps;C:\Users\myuser\AppData\Local\Programs\Ollama"

time=2024-03-15T15:37:15.932+01:00 level=DEBUG source=gpu.go:180 msg="rocm detected 1 devices with 11044M available memory"

time=2024-03-15T15:37:15.932+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"

time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:40 msg="AMD Driver: 50732000"

time=2024-03-15T15:37:15.941+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir C:\Users\myuser\AppData\Local\Programs\Ollama\rocm"

time=2024-03-15T15:37:15.941+01:00 level=DEBUG source=amd_windows.go:148 msg="detected ROCM next to ollama executable C:\Users\myuser\AppData\Local\Programs\Ollama\rocm"

time=2024-03-15T15:37:15.941+01:00 level=DEBUG source=amd_windows.go:66 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"

time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:69 msg="detected 1 hip devices"

time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:87 msg="[0] Name: AMD Radeon RX 6700 XT"

time=2024-03-15T15:37:15.941+01:00 level=INFO source=amd_windows.go:90 msg="[0] GcnArchName: gfx1031"

time=2024-03-15T15:37:16.159+01:00 level=INFO source=amd_windows.go:117 msg="[0] Total Mem: 12733906944"

time=2024-03-15T15:37:16.159+01:00 level=INFO source=amd_windows.go:118 msg="[0] Free Mem: 12868124672"

time=2024-03-15T15:37:16.215+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"

time=2024-03-15T15:37:16.215+01:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [C:\Users\myuser\AppData\Local\Temp\ollama1387933699\runners\rocm_v5.7\ext_server.dll C:\Users\myuser\AppData\Local\Temp\ollama1387933699\runners\cpu_avx2\ext_server.dll]"

time=2024-03-15T15:37:16.215+01:00 level=INFO source=assets.go:63 msg="Updating PATH to C:\Users\myuser\AppData\Local\Temp\ollama1387933699\runners\rocm_v5.7;C:\Users\myuser\AppData\Local\Programs\Ollama\rocm;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files\Git\cmd;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\Calibre2\;C:\Program Files\dotnet\;C:\Program Files\Intel\PresentMon\PresentMonApplication\;C:\Users\myuser\AppData\Local\Programs\Python\Python311\Scripts\;C:\Users\myuser\AppData\Local\Programs\Python\Python311\;C:\Users\myuser\AppData\Local\Programs\Python\Python310\Scripts\;C:\Users\myuser\AppData\Local\Programs\Python\Python310\;C:\Users\myuser\AppData\Local\Microsoft\WindowsApps;C:\Users\myuser\AppData\Local\Programs\Ollama"

time=2024-03-15T15:37:16.221+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\Users\myuser\AppData\Local\Temp\ollama1387933699\runners\rocm_v5.7\ext_server.dll"

time=2024-03-15T15:37:16.221+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"

time=2024-03-15T15:37:16.221+01:00 level=DEBUG source=dyn_ext_server.go:151 msg="server params: {model:0x2314baeee40 n_ctx:2048 n_batch:512 n_threads:0 n_parallel:1 rope_freq_base:0 rope_freq_scale:0 memory_f16:true n_gpu_layers:33 main_gpu:0 use_mlock:false use_mmap:true numa:0 embedding:true lora_adapters:<nil> mmproj:<nil> verbose_logging:true _:[0 0 0 0 0 0 0]}"

[1710513436] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |

[1710513436] Performing pre-initialization of GPU

ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no

ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes

ggml_init_cublas: found 1 ROCm devices:

Device 0: AMD Radeon RX 6700 XT, compute capability 10.3, VMM: no

llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from C:\Users\myuser\.ollama\models\blobs\sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest))

llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.

llama_model_loader: - kv 0: general.architecture str = llama

llama_model_loader: - kv 1: general.name str = mistralai

llama_model_loader: - kv 2: llama.context_length u32 = 32768

llama_model_loader: - kv 3: llama.embedding_length u32 = 4096

llama_model_loader: - kv 4: llama.block_count u32 = 32

llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336

llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128

llama_model_loader: - kv 7: llama.attention.head_count u32 = 32

llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8

llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010

llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000

llama_model_loader: - kv 11: general.file_type u32 = 2

llama_model_loader: - kv 12: tokenizer.ggml.model str = llama

llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...

llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...

llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...

llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,58980] = ["▁ t", "i n", "e r", "▁ a", "h e...

llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1

llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2

llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0

llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true

llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false

llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...

llama_model_loader: - kv 23: general.quantization_version u32 = 2

llama_model_loader: - type f32: 65 tensors

llama_model_loader: - type q4_0: 225 tensors

llama_model_loader: - type q6_K: 1 tensors

llm_load_vocab: special tokens definition check successful ( 259/32000 ).

llm_load_print_meta: format = GGUF V3 (latest)

llm_load_print_meta: arch = llama

llm_load_print_meta: vocab type = SPM

llm_load_print_meta: n_vocab = 32000

llm_load_print_meta: n_merges = 0

llm_load_print_meta: n_ctx_train = 32768

llm_load_print_meta: n_embd = 4096

llm_load_print_meta: n_head = 32

llm_load_print_meta: n_head_kv = 8

llm_load_print_meta: n_layer = 32

llm_load_print_meta: n_rot = 128

llm_load_print_meta: n_embd_head_k = 128

llm_load_print_meta: n_embd_head_v = 128

llm_load_print_meta: n_gqa = 4

llm_load_print_meta: n_embd_k_gqa = 1024

llm_load_print_meta: n_embd_v_gqa = 1024

llm_load_print_meta: f_norm_eps = 0.0e+00

llm_load_print_meta: f_norm_rms_eps = 1.0e-05

llm_load_print_meta: f_clamp_kqv = 0.0e+00

llm_load_print_meta: f_max_alibi_bias = 0.0e+00

llm_load_print_meta: n_ff = 14336

llm_load_print_meta: n_expert = 0

llm_load_print_meta: n_expert_used = 0

llm_load_print_meta: pooling type = 0

llm_load_print_meta: rope type = 0

llm_load_print_meta: rope scaling = linear

llm_load_print_meta: freq_base_train = 1000000.0

llm_load_print_meta: freq_scale_train = 1

llm_load_print_meta: n_yarn_orig_ctx = 32768

llm_load_print_meta: rope_finetuned = unknown

llm_load_print_meta: model type = 7B

llm_load_print_meta: model ftype = Q4_0

llm_load_print_meta: model params = 7.24 B

llm_load_print_meta: model size = 3.83 GiB (4.54 BPW)

llm_load_print_meta: general.name = mistralai

llm_load_print_meta: BOS token = 1 ''

llm_load_print_meta: EOS token = 2 ''

llm_load_print_meta: UNK token = 0 ''

llm_load_print_meta: LF token = 13 '<0x0A>'

llm_load_tensors: ggml ctx size = 0.22 MiB

llm_load_tensors: offloading 32 repeating layers to GPU

llm_load_tensors: offloading non-repeating layers to GPU

llm_load_tensors: offloaded 33/33 layers to GPU

llm_load_tensors: ROCm0 buffer size = 3847.55 MiB

llm_load_tensors: CPU buffer size = 70.31 MiB

..................................................................................................

llama_new_context_with_model: n_ctx = 2048

llama_new_context_with_model: freq_base = 1000000.0

llama_new_context_with_model: freq_scale = 1

llama_kv_cache_init: ROCm0 KV buffer size = 256.00 MiB

llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB

llama_new_context_with_model: ROCm_Host input buffer size = 13.02 MiB

llama_new_context_with_model: ROCm0 compute buffer size = 164.00 MiB

llama_new_context_with_model: ROCm_Host compute buffer size = 8.00 MiB

llama_new_context_with_model: graph splits (measure): 2

[1710513438] warming up the model with an empty run

CUDA error: invalid device function

current device: 0, in function ggml_cuda_op_flatten at C:/Users/jeff/git/ollama/llm/llama.cpp/ggml-cuda.cu:10110

hipGetLastError()

GGML_ASSERT: C:/Users/jeff/git/ollama/llm/llama.cpp/ggml-cuda.cu:256: !"CUDA error"

> GPT4ALL supports at least the 6700 XT GPU, but only within their chat client. Attempting to access it through the API interface causes the program to crash. I wonder if the support principles behind GPT4ALL could be shared to help those in need?

You have an Nvidia nvml.dll in there. Make sure you don't have another Nvidia card; if not, remove the Nvidia tooling, it might help. Also, there are dedicated gfx1031 and gfx1032 rocblas libraries available on GitHub and Zhihu.
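The `rocBLAS error: Cannot read ... TensileLibrary.dat` failure in the logs means the bundled rocBLAS library ships no Tensile kernels for the detected architecture. A quick way to see which architectures a given `rocblas\library` folder actually supports is to scan it for Tensile files. This is a minimal sketch; the install path is taken from the logs above, and the `TensileLibrary_*` file-name pattern is an assumption based on the error message rather than a documented interface:

```python
from pathlib import Path

# Default location of the rocBLAS kernel library that ships with Ollama on
# Windows (taken from the log messages in this thread); adjust for your install.
LIB_DIR = Path(r"C:\Users\myuser\AppData\Local\Programs\Ollama\rocm\rocblas\library")

def supported_gfx_archs(lib_dir: Path) -> set[str]:
    """Collect the gfx architecture tags that have Tensile kernel files."""
    archs = set()
    for f in lib_dir.glob("TensileLibrary_*"):
        # File names typically look like TensileLibrary_gfx1030.dat (or
        # TensileLibrary_lazy_gfx1030.dat in newer ROCm builds).
        for part in f.stem.split("_"):
            if part.startswith("gfx"):
                archs.add(part)
    return archs

if __name__ == "__main__":
    if LIB_DIR.is_dir():
        print(sorted(supported_gfx_archs(LIB_DIR)))
    else:
        print(f"{LIB_DIR} not found")
```

If your card's architecture (e.g. `gfx1031` or `gfx1032`) is missing from the output, that matches the crash: `HSA_OVERRIDE_GFX_VERSION` is not consulted by this lookup on Windows, so the loader still asks for the native arch.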

<!-- gh-comment-id:2074012039 --> @likelovewant commented on GitHub (Apr 24, 2024).

@FelipeLujan commented on GitHub (May 10, 2024):

> Have you looked at what YellowRose did on the KoboldCPP-ROCm fork to get the 6700 cards to work? It runs on my 6750 with no issues.
>
> https://github.com/YellowRoseCx/koboldcpp-rocm

Is there anything we can borrow from YellowRoseCx/koboldcpp-rocm to make Ollama work on the 6750 XT and lower?

<!-- gh-comment-id:2103641441 -->

@hazzabeee commented on GitHub (May 27, 2024):

Following on from the discussion at issue #3781 (and in particular [this comment](https://github.com/ollama/ollama/issues/3781#issuecomment-2068051898) by @likelovewant), I was able to get ROCm support working on Windows with my RX 6700 XT (gfx1031) by:

  1. Setting up the build environment for ollama, including installing AMD HIP SDK and Strawberry Perl (as described in the Developer Guide)
  2. Extracting the improved gfx1031 libs from https://github.com/brknsoul/ROCmLibs into C:\Program Files\AMD\ROCm\5.7\bin\rocblas\library
  3. Adding gfx1031 to the list of supported GPUs in ollama\llm\generate\gen_windows.ps1
  4. Building ollama
  5. Replacing the normal version of ollama_runners with the newly built one from ollama\dist\windows-amd64. I guess I should also replace the rocm folder (though I now also have the patched version in Program Files).

Since patched ROCm libraries are also available for gfx1032 (RX 6600 and 6600XT) I imagine the same process would work for them too.

Apologies if this is obvious to everyone else, but as someone who just wanted to give ollama a try, it took me a bit of time to figure out :)
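Steps 2 and 5 above both amount to overlaying the files from one directory onto another (patched Tensile files into the HIP SDK's `rocblas\library`, then the freshly built runners over the shipped ones). A small, purely illustrative helper for that overlay step, assuming nothing beyond the paths the list names:

```python
import shutil
from pathlib import Path

def overlay_libs(src: Path, dst: Path) -> list[str]:
    """Copy every file under src into dst, overwriting duplicates.

    Mirrors steps 2 and 5 above: dropping the patched gfx1031 rocBLAS
    files into the HIP SDK's rocblas library folder, and later replacing
    the shipped runner files with freshly built ones.
    """
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # preserves timestamps as well
            copied.append(str(target))
    return copied

# Illustrative paths from the steps above; adjust to your install:
# overlay_libs(Path(r"ROCmLibs\gfx1031"),
#              Path(r"C:\Program Files\AMD\ROCm\5.7\bin\rocblas\library"))
```

The same helper works for step 5 by pointing `src` at `ollama\dist\windows-amd64` and `dst` at the installed `ollama_runners` folder.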

<!-- gh-comment-id:2133970898 -->

@tugzii commented on GitHub (Jun 3, 2024):

> Following on from the discussion at issue #3781, I was able to get ROCm support working on Windows with my RX 6700 XT (gfx1031) by:
>
> 1. Setting up the build environment for ollama, including installing AMD HIP SDK and Strawberry Perl (as described in the Developer Guide)
> 2. Extracting the improved gfx1031 libs from https://github.com/brknsoul/ROCmLibs into C:\Program Files\AMD\ROCm\5.7\bin\rocblas\library
> 3. Adding gfx1031 to the list of supported GPUs in ollama\llm\generate\gen_windows.ps1
> 4. Building ollama
> 5. Replacing the normal version of ollama_runners with the newly built one from ollama\dist\windows-amd64. I guess I should also replace the rocm folder (though I now also have the patched version in Program Files).
>
> Since patched ROCm libraries are also available for gfx1032 (RX 6600 and 6600 XT) I imagine the same process would work for them too.
>
> Apologies if this is obvious to everyone else, but as someone who just wanted to give ollama a try, it took me a bit of time to figure out :)

I can say this was very helpful (and not obvious), so thank you.

With your instructions above, and using ChatGPT to expand on every step you provided, I managed to get this working perfectly on my RX 6750 XT / Windows 11 Pro setup (using gfx1031).

It built with no errors, and I used WinMerge to merge the ROCm DLLs into C:\Program Files\AMD\ROCm\5.7\bin\rocblas\library.

I also noticed that after the first successful run, ollama\ollama-windows-amd64\rocm\rocblas\library automatically copied the files from C:\Program Files\AMD\ROCm\5.7\bin\rocblas\library and thus picked up the RX 6700/6750 XT compatible DLLs.

Note: currently running .\ollama.exe run llama3:8b-instruct-q8_0 (which uses 10 of my 12 GB of VRAM)

![image](https://github.com/ollama/ollama/assets/29587732/90235ac2-2285-439f-a7e8-9e19b7c143e7)
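For anyone who wants to script the library-merge step (the WinMerge pass above) instead of doing it by hand, a rough sketch in Python — `merge_rocblas_libs` is a hypothetical helper, and the flat file layout of the patched archive is an assumption:

```python
import shutil
from pathlib import Path

def merge_rocblas_libs(patched_dir: str, rocm_library_dir: str) -> list[str]:
    """Copy patched rocBLAS tensile files for an unsupported arch (e.g. gfx1031)
    into the HIP SDK's rocblas/library folder, overwriting on name collisions."""
    src, dst = Path(patched_dir), Path(rocm_library_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(src.iterdir()):
        if f.is_file():
            # behaves like a WinMerge "copy to right": last writer wins
            shutil.copy2(f, dst / f.name)
            copied.append(f.name)
    return copied
```

On a real setup, `patched_dir` would be the extracted ROCmLibs archive and `rocm_library_dir` would be C:\Program Files\AMD\ROCm\5.7\bin\rocblas\library.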

@istevkovski commented on GitHub (Jun 4, 2024):

> Following on from the discussion at issue #3781, I was able to get ROCm support working on Windows with my RX 6700 XT (gfx1031) by:
>
> 1. Setting up the build environment for ollama, including installing the AMD HIP SDK and Strawberry Perl (as described in the Developer Guide)
> 2. Extracting the improved gfx1031 libs from https://github.com/brknsoul/ROCmLibs into C:\Program Files\AMD\ROCm\5.7\bin\rocblas\library
> 3. Adding gfx1031 to the list of supported GPUs in ollama\llm\generate\gen_windows.ps1
> 4. Building ollama
> 5. Replacing the normal version of ollama_runners with the newly built one from ollama\dist\windows-amd64. I guess I should also replace the rocm folder (though I now also have the patched version in Program Files).
>
> Since patched ROCm libraries are also available for gfx1032 (RX 6600 and 6600 XT), I imagine the same process would work for them too.
>
> Apologies if this is obvious to everyone else, but as someone who just wanted to give ollama a try, it took me a bit of time to figure out :)

You beautiful soul! Confirmation that RX6700XT works!

SEO Keywords (so more people can come across this): Ollama RX6700XT | RX 6700 XT | Ollama ROCm

@mpugli commented on GitHub (Jun 5, 2024):

Thank you very much, guys. Thanks to this guide I was able to get ollama working with my GPU (6700 XT). It took me a while to download everything, but it worked!

@hcoona commented on GitHub (Jun 9, 2024):

It works! 6600XT on Windows 11.

@spenny42069 commented on GitHub (Jun 27, 2024):

Hello, I have the 6700 XT.

I have been trying for 4 days now to get this to work, with no luck! I am so lost after installing the Windows preview, Linux, and so on.

I see the steps by hazzabeee appear to work. However, they make almost no sense to me at this point. Can anyone provide clearer directions (including links to the developer guides)? I am finding the ollama documentation regarding older AMD GPUs a total disaster.

Really appreciate this in advance, thank you.

@likelovewant commented on GitHub (Jun 27, 2024):

> Hello, I have the 6700 XT.
>
> I have been trying for 4 days now to get this to work, with no luck! I am so lost after installing the Windows preview, Linux, and so on.
>
> I see the steps by hazzabeee appear to work. However, they make almost no sense to me at this point. Can anyone provide clearer directions (including links to the developer guides)? I am finding the ollama documentation regarding older AMD GPUs a total disaster.
>
> Really appreciate this in advance, thank you.

Some prebuilt versions for those unsupported AMD GPU cards are available via this [wiki](https://github.com/likelovewant/ollama-for-amd/wiki); you may try those instead of building it yourself.

@spenny42069 commented on GitHub (Jun 27, 2024):

I tried two more times and was unsuccessful. I've included my error below; I have gotten to this point many, many times now.
I get this error when I attempt to run ollama. I can hear the GPU work a little, and then it crashes before I can type the first prompt:

```
Error: llama runner process has terminated: exit status 0xc0000409 CUDA error: invalid device function
  current device: 0, in function ggml_cuda_compute_forward at C:/a/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:2313
  err
GGML_ASSERT: C:/a/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:100: !"CUDA error"
```

I'm going to try one more time before throwing in the towel.

@likelovewant commented on GitHub (Jun 27, 2024):

> I tried two more times and was unsuccessful. I've included my error below; I have gotten to this point many, many times now.
>
> I get this error when I attempt to run ollama. I can hear the GPU work a little, and then it crashes before I can type the first prompt:
>
> ```
> Error: llama runner process has terminated: exit status 0xc0000409 CUDA error: invalid device function
>   current device: 0, in function ggml_cuda_compute_forward at C:/a/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:2313
>   err
> GGML_ASSERT: C:/a/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:100: !"CUDA error"
> ```
>
> I'm going to try one more time before throwing in the towel.

Then you are using an Nvidia card rather than AMD; otherwise it wouldn't show a CUDA error. Or you clicked the update push notification, which overwrites the install with the unsupported build. Simply don't click the system pop-up update message and try again. Make sure to fully remove the old program and fully close the ollama you had before you retry.

@dhiltgen commented on GitHub (Jul 30, 2024):

If PR #6033 works out, a follow-up we can explore is creating cloned rocblas data files for the known overrides that do work, to work around the lack of support for `HSA_OVERRIDE_GFX_VERSION` on Windows.
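The cloning idea could be sketched as below — `clone_tensile_files` is a hypothetical helper, and the file names in the example are merely illustrative of the rocBLAS TensileLibrary layout, not an exact listing of what ships with the HIP SDK:

```python
import shutil
from pathlib import Path

def clone_tensile_files(library_dir: str, src_arch: str = "gfx1030",
                        dst_arch: str = "gfx1032") -> list[str]:
    """Duplicate every rocBLAS data file mentioning src_arch under a dst_arch
    name, so lookups for the unsupported arch find a compatible file."""
    lib = Path(library_dir)
    created = []
    for f in sorted(lib.iterdir()):
        if src_arch in f.name:
            clone = lib / f.name.replace(src_arch, dst_arch)
            if not clone.exists():  # never clobber a real dst_arch file
                shutil.copy2(f, clone)
                created.append(clone.name)
    return created
```

This only papers over the file lookup; whether the gfx1030 kernels actually run correctly on a gfx1032 part is exactly the compatibility question the override was meant to answer.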

@likelovewant commented on GitHub (Jul 31, 2024):

"Regarding HSA_OVERRIDE_GFX_VERSION on Windows, it currently doesn't work due to an AMD HSA tag check.

The simplest workaround might be to use the ROCBLAS library from the Linux version on Windows, potentially
leveraging the default ROCBLAS.dll provided with HIP SDK.
I've observed partial loading with HIP 6.1.2, but it worked fully on HIP 5.7. However, this approach still faces
limitations with other architectures lacking support.

Alternatively, could build ROCBLAS and its libraries from the official AMD repository. This requires some
experimentation as demonstrated in this project.

Building ROCBLAS for all HIP runtime architectures is a complex task due to the library's extensive size. HIP
6.1.2, for example, the library take from 200MB to 300 MB for per GPU architectures, leading to a significantly large
build volume.

I hope this information is helpful.. @dhiltgen
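For context on what the override value means: `HSA_OVERRIDE_GFX_VERSION=10.3.0` corresponds to the target gfx1030 — major version in decimal, minor and stepping as single hex digits. A small sketch of that naming convention (`override_to_gfx` is a hypothetical helper, not part of ROCm):

```python
def override_to_gfx(version: str) -> str:
    """Map an HSA_OVERRIDE_GFX_VERSION string like "10.3.0" to its gfx target.
    Major is decimal; minor and stepping are hex digits, so 9.0.10 -> gfx90a."""
    major, minor, step = (int(p) for p in version.split("."))
    return f"gfx{major}{minor:x}{step:x}"
```

So when the override is honored, rocBLAS looks for gfx1030 TensileLibrary files instead of gfx1032 ones — which is exactly what fails to happen on Windows because of the HSA tag check described above.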

@tannerstell commented on GitHub (Aug 11, 2024):

I'm experiencing a GPU VRAM recovery timeout and llama runner termination. The GPU usage spikes to 100% for almost exactly 5.5 seconds. I was going to ask if there's a way to override the GPU wait-for-recovery timeout, but that would be trivial, as it's just a warning and this seems to be another issue. I may try pre-built ROCm libraries for gfx1032 or build rocblas myself.

**Specs:**
GPU: RX 6600 XT 8GB (gfx 1032, which isn't supported but users have reported overriding GFX version successfully)
CPU: i5 2500k Sandy Bridge (AVX is supported)
RAM: 16GB
BIOS: IVT disabled; secure boot disabled
System OS: Ubuntu 24 Server X11 (using as desktop)
ROCm: 6.2.0.60200-66~24.04 (used --no-dkms and installed with amdgpu-installer)
Ollama: 0.3.4

Global env (/etc/systemd/system/ollama.service):

```
Environment="OLLAMA_HOST=0.0.0.0:11434" HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve
```

Server parameters (when debugging):

```
sudo ROCR_VISIBLE_DEVICES=0 HSA_OVERRIDE_GFX_VERSION=10.3.0 OLLAMA_DEBUG=1 ollama serve
```

Client parameters:

```
sudo HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama run llama3.1:8b
```
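As an aside, a systemd unit normally wants each variable on its own `Environment=` line, and the command itself does not belong in `Environment=`. A sketch of a drop-in, assuming the standard ollama.service (file path and layout are an assumption):

```
# /etc/systemd/system/ollama.service.d/override.conf (sketch)
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
```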

```
time=2024-08-11T15:39:35.346-07:00 level=DEBUG source=server.go:637 msg="model load progress 0.82"
time=2024-08-11T15:39:35.599-07:00 level=DEBUG source=server.go:637 msg="model load progress 0.84"
time=2024-08-11T15:39:35.850-07:00 level=DEBUG source=server.go:637 msg="model load progress 0.86"
time=2024-08-11T15:39:36.101-07:00 level=DEBUG source=server.go:637 msg="model load progress 0.89"
time=2024-08-11T15:39:36.352-07:00 level=DEBUG source=server.go:637 msg="model load progress 0.90"
time=2024-08-11T15:39:36.603-07:00 level=DEBUG source=server.go:637 msg="model load progress 0.92"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: ROCm0 KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: ROCm_Host output buffer size = 2.02 MiB
time=2024-08-11T15:39:36.854-07:00 level=DEBUG source=server.go:637 msg="model load progress 1.00"
llama_new_context_with_model: ROCm0 compute buffer size = 560.00 MiB
llama_new_context_with_model: ROCm_Host compute buffer size = 24.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 2
time=2024-08-11T15:39:37.150-07:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server error"
time=2024-08-11T15:39:37.150-07:00 level=DEBUG source=server.go:640 msg="model load completed, waiting for server to become available" status="llm server error"
time=2024-08-11T15:39:37.400-07:00 level=ERROR source=sched.go:451 msg="error loading llama server" error="llama runner process has terminated: signal: illegal instruction (core dumped)"
```

```
time=2024-08-11T15:39:42.652-07:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=0 name=1002:73ff before="7.9 GiB" now="7.9 GiB"
time=2024-08-11T15:39:42.901-07:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.501042348 model=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
```

@dhiltgen commented on GitHub (Sep 3, 2024):

@tannerstell it looks like you're on Linux, so your crash is unrelated to the topic of this issue: ROCm not supporting the override environment variable on Windows. If you're still experiencing the crash on the latest version of Ollama, please file a new issue so we can track it.

@Novapixel1010 commented on GitHub (Sep 23, 2025):

I would like to add that I got it working using this [install helper](https://github.com/ByronLeeeee/Ollama-For-AMD-Installer). With that said, I did run into this weird issue:

```
C:\Users\Mike>ollama serve
Error: listen tcp: lookup HSA_OVERRIDE_GFX_VERSION="10.3.0: no such host
```

I also did

```
C:\Users\Mike>ollama run llama3
Error: Head "http://HSA_OVERRIDE_GFX_VERSION=\"10.3.0:11434/": dial tcp: lookup HSA_OVERRIDE_GFX_VERSION="10.3.0: no such host
```

The only way I fixed it was by doing the following to serve ollama; for downloading a model, keep scrolling.

```
C:\Users\Mike>set OLLAMA_HOST=127.0.0.1:11434

C:\Users\Mike>set OLLAMA_MODELS=%USERPROFILE%\.ollama\models

C:\Users\Mike>set HSA_OVERRIDE_GFX_VERSION=

C:\Users\Mike>mkdir "%USERPROFILE%\.ollama\models" 2>nul

C:\Users\Mike>ollama serve
time=2025-09-23T15:13:28.865-06:00 level=INFO source=routes.go:1331 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\Mike\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NEW_ESTIMATES:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-09-23T15:13:28.869-06:00 level=INFO source=images.go:477 msg="total blobs: 0"
time=2025-09-23T15:13:28.869-06:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-09-23T15:13:28.870-06:00 level=INFO source=routes.go:1384 msg="Listening on 127.0.0.1:11434 (version 0.11.10-0-g501cb38)"
time=2025-09-23T15:13:28.870-06:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-09-23T15:13:28.870-06:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-09-23T15:13:28.870-06:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-09-23T15:13:28.870-06:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=12 efficiency=4 threads=20
time=2025-09-23T15:13:29.264-06:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=rocm variant="" compute=gfx1032 driver=6.2 name="AMD Radeon RX 6600" total="8.0 GiB" available="7.8 GiB"
time=2025-09-23T15:13:29.265-06:00 level=INFO source=routes.go:1425 msg="entering low vram mode" "total vram"="8.0 GiB" threshold="20.0 GiB"
```

For downloading a model:

```
set "OLLAMA_HOST=127.0.0.1:11434" & set "HSA_OVERRIDE_GFX_VERSION=" & ollama run llama3
```

The output should be:

```
pulling manifest
pulling 6a0746a1ec1a: 100% ▕████████████████▏ 4.7 GB
pulling 4fa551d4f938: 100% ▕████████████████▏  12 KB
pulling 8ab4849b038c: 100% ▕████████████████▏  254 B
pulling 577073ffcc6c: 100% ▕████████████████▏  110 B
pulling 3f8eb4da87fa: 100% ▕████████████████▏  485 B
verifying sha256 digest
writing manifest
success
```
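The "no such host" error above suggests `OLLAMA_HOST` had somehow been set to the literal text `HSA_OVERRIDE_GFX_VERSION="10.3.0` (e.g. by a mangled setup script), which is why clearing the variables fixed it. A sketch of a plausible mechanism — this is my reading of the symptom, not ollama's actual parsing code, and `to_base_url` is a hypothetical helper:

```python
def to_base_url(ollama_host: str, default_port: str = "11434") -> str:
    """Treat OLLAMA_HOST as host[:port]; with no port given, append the default.
    A value that isn't a hostname then produces a doomed DNS lookup."""
    host = ollama_host
    if ":" not in ollama_host:  # crude check: no port present
        host = f"{ollama_host}:{default_port}"
    return f"http://{host}/"
```

Feeding the bogus value through this yields exactly the URL in the error message, `http://HSA_OVERRIDE_GFX_VERSION="10.3.0:11434/`, whose "hostname" can never resolve.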
@dhiltgen commented on GitHub (Nov 17, 2025):

In 0.12.11, Vulkan is now included in the official binaries, but it is still experimental. To enable it, set `OLLAMA_VULKAN=1` for the server. https://github.com/ollama/ollama/blob/main/docs/faq.mdx#how-do-i-configure-ollama-server

Reference: github-starred/ollama#63946