[GH-ISSUE #3693] Ollama v0.1.32-rocm throws "CUDA error: out of memory" on AMD GPU with model that worked on v0.1.31-rocm #48787

Closed
opened 2026-04-28 09:16:09 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @artem-zinnatullin on GitHub (Apr 17, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3693

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Hi, I've updated the Docker image ollama/ollama:0.1.31-rocm to 0.1.32-rocm and started experiencing CUDA error: out of memory on the mixtral:8x7b (7708c059a8bb) model, which worked fine on 0.1.31-rocm!

CUDA error: out of memory
  current device: 0, in function alloc at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:233
  hipMalloc((void **) &ptr, look_ahead_size)
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !"CUDA error"

I am running on 24GB VRAM AMD 7900 XTX GPU with 64GB of RAM (rocminfo below).

Full log:
time=2024-04-16T22:01:18.558-06:00 level=INFO source=images.go:817 msg="total blobs: 33"
time=2024-04-16T22:01:18.559-06:00 level=INFO source=images.go:824 msg="total unused blobs removed: 0"
time=2024-04-16T22:01:18.559-06:00 level=INFO source=routes.go:1143 msg="Listening on [::]:11434 (version 0.1.32)"
time=2024-04-16T22:01:18.560-06:00 level=INFO source=payload.go:28 msg="extracting embedded files" dir=/tmp/ollama3765403603/runners
time=2024-04-16T22:01:20.061-06:00 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cuda_v11 rocm_v60002 cpu cpu_avx cpu_avx2]"
time=2024-04-16T22:01:20.061-06:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-16T22:01:20.061-06:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-16T22:01:20.064-06:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3765403603/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-16T22:01:20.064-06:00 level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama3765403603/runners/cuda_v11/libcudart.so.11.0: your nvidia driver is too old or missing, please upgrade to run ollama"
time=2024-04-16T22:01:20.064-06:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-04-16T22:01:20.065-06:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: []"
time=2024-04-16T22:01:20.065-06:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-16T22:01:20.065-06:00 level=INFO source=amd_linux.go:50 msg="AMD Driver: 6.3.6"
time=2024-04-16T22:01:20.065-06:00 level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx1100]"
time=2024-04-16T22:01:20.069-06:00 level=INFO source=amd_linux.go:121 msg="amdgpu [0] gfx1100 is supported"
time=2024-04-16T22:01:20.069-06:00 level=INFO source=amd_linux.go:263 msg="[0] amdgpu totalMemory 24560M"
time=2024-04-16T22:01:20.069-06:00 level=INFO source=amd_linux.go:264 msg="[0] amdgpu freeMemory  24560M"
[GIN] 2024/04/17 - 00:22:03 | 200 |     3.41949ms |     10.244.0.71 | GET      "/api/tags"
[GIN] 2024/04/17 - 00:22:03 | 200 |     704.963µs |     10.244.0.71 | GET      "/api/tags"
[GIN] 2024/04/17 - 00:22:03 | 200 |     887.792µs |     10.244.0.71 | GET      "/api/tags"
[GIN] 2024/04/17 - 00:22:03 | 200 |      34.579µs |     10.244.0.71 | GET      "/api/version"
time=2024-04-17T00:22:16.564-06:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-17T00:22:16.564-06:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-17T00:22:16.565-06:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3765403603/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-17T00:22:16.565-06:00 level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama3765403603/runners/cuda_v11/libcudart.so.11.0: your nvidia driver is too old or missing, please upgrade to run ollama"
time=2024-04-17T00:22:16.565-06:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-04-17T00:22:16.566-06:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: []"
time=2024-04-17T00:22:16.566-06:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-17T00:22:16.566-06:00 level=INFO source=amd_linux.go:50 msg="AMD Driver: 6.3.6"
time=2024-04-17T00:22:16.566-06:00 level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx1100]"
time=2024-04-17T00:22:16.568-06:00 level=INFO source=amd_linux.go:121 msg="amdgpu [0] gfx1100 is supported"
time=2024-04-17T00:22:16.568-06:00 level=INFO source=amd_linux.go:263 msg="[0] amdgpu totalMemory 24560M"
time=2024-04-17T00:22:16.568-06:00 level=INFO source=amd_linux.go:264 msg="[0] amdgpu freeMemory  24560M"
time=2024-04-17T00:22:16.568-06:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-17T00:22:16.568-06:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-17T00:22:16.569-06:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3765403603/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-17T00:22:16.569-06:00 level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama3765403603/runners/cuda_v11/libcudart.so.11.0: your nvidia driver is too old or missing, please upgrade to run ollama"
time=2024-04-17T00:22:16.569-06:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-04-17T00:22:16.570-06:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: []"
time=2024-04-17T00:22:16.570-06:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-17T00:22:16.570-06:00 level=INFO source=amd_linux.go:50 msg="AMD Driver: 6.3.6"
time=2024-04-17T00:22:16.570-06:00 level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx1100]"
time=2024-04-17T00:22:16.571-06:00 level=INFO source=amd_linux.go:121 msg="amdgpu [0] gfx1100 is supported"
time=2024-04-17T00:22:16.571-06:00 level=INFO source=amd_linux.go:263 msg="[0] amdgpu totalMemory 24560M"
time=2024-04-17T00:22:16.571-06:00 level=INFO source=amd_linux.go:264 msg="[0] amdgpu freeMemory  24560M"
time=2024-04-17T00:22:16.572-06:00 level=INFO source=server.go:127 msg="offload to gpu" reallayers=29 layers=29 required="26042.6 MiB" used="24319.2 MiB" available="24560.0 MiB" kv="256.0 MiB" fulloffload="184.0 MiB" partialoffload="935.0 MiB"
time=2024-04-17T00:22:16.572-06:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-17T00:22:16.573-06:00 level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama3765403603/runners/rocm_v60002/ollama_llama_server --model /root/.ollama/models/blobs/sha256-e9e56e8bb5f0fcd4860675e6837a8f6a94e659f5fa7dce6a1076279336320f2b --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 29 --port 45603"
time=2024-04-17T00:22:16.573-06:00 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"134859583487040","timestamp":1713334936}
{"build":1,"commit":"7593639","function":"main","level":"INFO","line":2820,"msg":"build info","tid":"134859583487040","timestamp":1713334936}
{"function":"main","level":"INFO","line":2827,"msg":"system info","n_threads":12,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"134859583487040","timestamp":1713334936,"total_threads":24}
llama_model_loader: loaded meta data with 26 key-value pairs and 995 tensors from /root/.ollama/models/blobs/sha256-e9e56e8bb5f0fcd4860675e6837a8f6a94e659f5fa7dce6a1076279336320f2b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = mistralai
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 2
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  21:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q4_0:  833 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8x7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 24.62 GiB (4.53 BPW)
llm_load_print_meta: general.name     = mistralai
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 ROCm devices:
  Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
llm_load_tensors: ggml ctx size =    0.96 MiB
llm_load_tensors: offloading 29 repeating layers to GPU
llm_load_tensors: offloaded 29/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size = 22695.22 MiB
llm_load_tensors:  ROCm_Host buffer size =  2520.65 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =   232.00 MiB
llama_kv_cache_init:  ROCm_Host KV buffer size =    24.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  ROCm_Host  output buffer size =     0.14 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =   826.00 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =    12.01 MiB
llama_new_context_with_model: graph nodes  = 1638
llama_new_context_with_model: graph splits = 41
{"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"134859583487040","timestamp":1713334952}
{"function":"initialize","level":"INFO","line":460,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"134859583487040","timestamp":1713334952}
{"function":"main","level":"INFO","line":3064,"msg":"model loaded","tid":"134859583487040","timestamp":1713334952}
{"function":"main","hostname":"127.0.0.1","level":"INFO","line":3267,"msg":"HTTP server listening","n_threads_http":"23","port":"45603","tid":"134859583487040","timestamp":1713334952}
{"function":"update_slots","level":"INFO","line":1578,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"134859583487040","timestamp":1713334952}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":0,"tid":"134859583487040","timestamp":1713334952}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":2,"tid":"134859583487040","timestamp":1713334952}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":41606,"status":200,"tid":"134849364965120","timestamp":1713334952}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":59396,"status":200,"tid":"134849413240576","timestamp":1713334952}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":3,"tid":"134859583487040","timestamp":1713334952}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":4,"tid":"134859583487040","timestamp":1713334952}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":59382,"status":200,"tid":"134849430025984","timestamp":1713334952}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1,"tid":"134859583487040","timestamp":1713334952}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":59400,"status":200,"tid":"134849421633280","timestamp":1713334952}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":5,"tid":"134859583487040","timestamp":1713334952}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":41588,"status":200,"tid":"134849348179712","timestamp":1713334952}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":41602,"status":200,"tid":"134849356572416","timestamp":1713334952}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":6,"tid":"134859583487040","timestamp":1713334952}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":41236,"status":200,"tid":"134849404847872","timestamp":1713334952}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":7,"tid":"134859583487040","timestamp":1713334952}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":41236,"status":200,"tid":"134849404847872","timestamp":1713334952}
{"function":"log_server_request","level":"INFO","line":2741,"method":"POST","msg":"request","params":{},"path":"/tokenize","remote_addr":"127.0.0.1","remote_port":41236,"status":200,"tid":"134849404847872","timestamp":1713334952}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":8,"tid":"134859583487040","timestamp":1713334953}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":41236,"status":200,"tid":"134849404847872","timestamp":1713334953}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":9,"tid":"134859583487040","timestamp":1713334953}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1816,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":80,"slot_id":0,"task_id":9,"tid":"134859583487040","timestamp":1713334953}
{"function":"update_slots","level":"INFO","line":1840,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":9,"tid":"134859583487040","timestamp":1713334953}
CUDA error: out of memory
  current device: 0, in function alloc at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:233
  hipMalloc((void **) &ptr, look_ahead_size)
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !"CUDA error"
No symbol table is loaded.  Use the "file" command.
[New LWP 37]
[New LWP 62]
[New LWP 63]
[New LWP 64]
[New LWP 65]
[New LWP 66]
[New LWP 67]
[New LWP 68]
[New LWP 69]
[New LWP 70]
[New LWP 71]
[New LWP 72]
[New LWP 73]
[New LWP 74]
[New LWP 75]
[New LWP 76]
[New LWP 77]
[New LWP 78]
[New LWP 79]
[New LWP 80]
[New LWP 81]
[New LWP 82]
[New LWP 83]
[New LWP 84]
[New LWP 85]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007aa73cfe71d9 in waitpid () from /lib64/libpthread.so.0
No symbol table is loaded.  Use the "file" command.

When I try to run a smaller model on v0.1.32-rocm, such as llama2:7b, it works well and I don't see any Nvidia/CUDA-related errors in the log.
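
Adding up the GPU-side buffers reported in the log above (my own back-of-envelope tally; all numbers are copied from the log):

ROCm0 weights buffer    22695.22 MiB
ROCm0 KV buffer           232.00 MiB
ROCm0 compute buffer      826.00 MiB
------------------------------------
total                   23753.22 MiB  (of 24560 MiB reported free)

That leaves only about 800 MiB of headroom, so my guess is that the new VRAM estimation in 0.1.32 offloads more layers than 0.1.31 did, and the failing hipMalloc (the look_ahead_size allocation during prompt processing) simply doesn't fit in what remains.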

Happy to test dev Docker image builds, thank you for this project!
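
In the meantime, a workaround that should avoid the crash (I haven't verified it on this exact setup, and 26 is just an illustrative layer count below the 29 that Ollama chose) is to cap the offloaded layers with the num_gpu option:

curl http://localhost:11434/api/generate -d '{
  "model": "mixtral:8x7b",
  "prompt": "why is the sky blue",
  "options": { "num_gpu": 26 }
}'

The same cap can be baked into a Modelfile with PARAMETER num_gpu 26.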

What did you expect to see?

As per the release notes for v0.1.32 (https://github.com/ollama/ollama/releases/tag/v0.1.32):

Ollama will now better utilize available VRAM, leading to less out-of-memory errors

CUDA error: out of memory
  current device: 0, in function alloc at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:233
  hipMalloc((void **) &ptr, look_ahead_size)
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !"CUDA error"

But in fact, a model that previously fit the GPU now doesn't. I'm not sure whether the error message indicates that the Nvidia stack was activated on my AMD GPU system or is just a generic message, but it does suggest that an unneeded CUDA assert was triggered on an AMD GPU system 🙃
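
My best guess at the "CUDA" wording: the ROCm build appears to compile the same ggml-cuda.cu source through HIP, with the CUDA API #define-mapped to the HIP equivalents, so allocation failures are still reported as "CUDA error" even on AMD hardware. A rough way to check this against the source (the path assumes an ollama checkout; the exact define block may vary by version):

grep -n 'define cudaMalloc' llm/llama.cpp/ggml-cuda.cu
# expected output, roughly: #define cudaMalloc hipMalloc   (guarded by GGML_USE_HIPBLAS)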

My expectation is that v0.1.32-rocm can handle the models that v0.1.31-rocm handled :)

Steps to reproduce

  1. Update the Docker image ollama/ollama:0.1.31-rocm to 0.1.32-rocm on an AMD 7900 XTX system (24GB VRAM, 64GB RAM); see the commands sketched below the list
  2. Select the mixtral:8x7b model
  3. Observe the crash: CUDA error: out of memory
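
For reference, a minimal Docker setup for step 1 (this follows the standard AMD instructions from the Ollama Docker docs; the volume and container names here are just examples):

docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:0.1.32-rocm
docker exec -it ollama ollama run mixtral:8x7b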

Are there any recent changes that introduced the issue?

No response

OS

Linux

Architecture

amd64

Platform

Docker

Ollama version

0.1.32-rocm

GPU

AMD

GPU info

rocminfo
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version:         1.1
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE
System Endianness:       LITTLE
Mwaitx:                  DISABLED
DMAbuf Support:          YES

==========
HSA Agents
==========
*******
Agent 1
*******
  Name:                    AMD Ryzen 9 7900 12-Core Processor
  Uuid:                    CPU-XX
  Marketing Name:          AMD Ryzen 9 7900 12-Core Processor
  Vendor Name:             CPU
  Feature:                 None specified
  Profile:                 FULL_PROFILE
  Float Round Mode:        NEAR
  Max Queue Number:        0(0x0)
  Queue Min Size:          0(0x0)
  Queue Max Size:          0(0x0)
  Queue Type:              MULTI
  Node:                    0
  Device Type:             CPU
  Cache Info:
    L1:                      32768(0x8000) KB
  Chip ID:                 0(0x0)
  ASIC Revision:           0(0x0)
  Cacheline Size:          64(0x40)
  Max Clock Freq. (MHz):   5482
  BDFID:                   0
  Internal Node ID:        0
  Compute Unit:            24
  SIMDs per CU:            0
  Shader Engines:          0
  Shader Arrs. per Eng.:   0
  WatchPts on Addr. Ranges:1
  Features:                None
  Pool Info:
    Pool 1
      Segment:                 GLOBAL; FLAGS: FINE GRAINED
      Size:                    65436972(0x3e67d2c) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       TRUE
    Pool 2
      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
      Size:                    65436972(0x3e67d2c) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       TRUE
    Pool 3
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED
      Size:                    65436972(0x3e67d2c) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       TRUE
  ISA Info:
*******
Agent 2
*******
  Name:                    gfx1100
  Uuid:                    GPU-6ab835b902b859a1
  Marketing Name:          Radeon RX 7900 XTX
  Vendor Name:             AMD
  Feature:                 KERNEL_DISPATCH
  Profile:                 BASE_PROFILE
  Float Round Mode:        NEAR
  Max Queue Number:        128(0x80)
  Queue Min Size:          64(0x40)
  Queue Max Size:          131072(0x20000)
  Queue Type:              MULTI
  Node:                    1
  Device Type:             GPU
  Cache Info:
    L1:                      32(0x20) KB
    L2:                      6144(0x1800) KB
    L3:                      98304(0x18000) KB
  Chip ID:                 29772(0x744c)
  ASIC Revision:           0(0x0)
  Cacheline Size:          64(0x40)
  Max Clock Freq. (MHz):   2482
  BDFID:                   768
  Internal Node ID:        1
  Compute Unit:            96
  SIMDs per CU:            2
  Shader Engines:          6
  Shader Arrs. per Eng.:   2
  WatchPts on Addr. Ranges:4
  Coherent Host Access:    FALSE
  Features:                KERNEL_DISPATCH
  Fast F16 Operation:      TRUE
  Wavefront Size:          32(0x20)
  Workgroup Max Size:      1024(0x400)
  Workgroup Max Size per Dimension:
    x                        1024(0x400)
    y                        1024(0x400)
    z                        1024(0x400)
  Max Waves Per CU:        32(0x20)
  Max Work-item Per CU:    1024(0x400)
  Grid Max Size:           4294967295(0xffffffff)
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)
    y                        4294967295(0xffffffff)
    z                        4294967295(0xffffffff)
  Max fbarriers/Workgrp:   32
  Packet Processor uCode:: 550
  SDMA engine uCode::      19
  IOMMU Support::          None
  Pool Info:
    Pool 1
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED
      Size:                    25149440(0x17fc000) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       FALSE
    Pool 2
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    25149440(0x17fc000) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       FALSE
    Pool 3
      Segment:                 GROUP
      Size:                    64(0x40) KB
      Allocatable:             FALSE
      Alloc Granule:           0KB
      Alloc Alignment:         0KB
      Accessible by all:       FALSE
  ISA Info:
    ISA 1
      Name:                    amdgcn-amd-amdhsa--gfx1100
      Machine Models:          HSA_MACHINE_MODEL_LARGE
      Profiles:                HSA_PROFILE_BASE
      Default Rounding Mode:   NEAR
      Default Rounding Mode:   NEAR
      Fast f16:                TRUE
      Workgroup Max Size:      1024(0x400)
      Workgroup Max Size per Dimension:
        x                        1024(0x400)
        y                        1024(0x400)
        z                        1024(0x400)
      Grid Max Size:           4294967295(0xffffffff)
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)
        y                        4294967295(0xffffffff)
        z                        4294967295(0xffffffff)
      FBarrier Max Size:       32
*** Done ***

CPU

AMD

Other software

No response

GiteaMirror added the bug, amd labels 2026-04-28 09:16:09 -05:00
Author
Owner

@dhiltgen commented on GitHub (Jun 1, 2024):

This should be resolved in the latest release. (verified on an RX 7900 XTX)

> ollama run --verbose mixtral:8x7b why is the sky blue
 The sky appears blue to us because of a process called Rayleigh scattering. As sunlight reaches Earth's atmosphere, it is made up of different colors, which are
...
total duration:       23.8581005s
load duration:        13.31133s
prompt eval count:    13 token(s)
prompt eval duration: 405.856ms
prompt eval rate:     32.03 tokens/s
eval count:           224 token(s)
eval duration:        10.139949s
eval rate:            22.09 tokens/s
> ollama ps
NAME            ID              SIZE    PROCESSOR       UNTIL
mixtral:8x7b    d39eb76ed9c5    28 GB   9%/91% CPU/GPU  4 minutes from now