[GH-ISSUE #6289] some models crash on rocm (7900XT) #3941

Closed
opened 2026-04-12 14:49:05 -05:00 by GiteaMirror · 12 comments
Owner

Originally created by @markg85 on GitHub (Aug 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6289

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I was trying to run the (new) embedding example:

curl http://10.0.3.22:11434/api/embed -d '{
  "model": "all-minilm",
  "input": ["Why is the sky blue?", "Why is the grass green?"]
}'

This triggered a crash (I did pull the model first). Note that it crashes for some models but works for others; Llama 3.1, for instance, works just fine.

Here is the log output, which you can hopefully do something with:

Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.014+02:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=0 name=1002:744c before="19.2 GiB" now="19.2 GiB"
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.014+02:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x604b1c442c80 gpu_count=1
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.018+02:00 level=DEBUG source=sched.go:219 msg="loading first model" model=/var/lib/ollama/.ollama/models/blobs/sha256-797b70c4edf85907fe0a49eb85811256f65fa0f7bf52166b147fd16be2be4662
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.018+02:00 level=DEBUG source=memory.go:101 msg=evaluating library=rocm gpu_count=1 available="[19.2 GiB]"
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.018+02:00 level=INFO source=sched.go:710 msg="new model will fit in available VRAM in single GPU, loading" model=/var/lib/ollama/.ollama/models/blobs/sha256-797b70c4edf85907fe0a49eb85811256f65fa0f7bf52166b147fd16be2be4662 gpu=0 parallel=4 available=20665856000 required="505.5 MiB"
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.018+02:00 level=DEBUG source=server.go:101 msg="system memory" total="62.7 GiB" free="59.4 GiB" free_swap="0 B"
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.018+02:00 level=DEBUG source=memory.go:101 msg=evaluating library=rocm gpu_count=1 available="[19.2 GiB]"
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.018+02:00 level=INFO source=memory.go:309 msg="offload to rocm" layers.requested=-1 layers.model=7 layers.offload=7 layers.split="" memory.available="[19.2 GiB]" memory.required.full="505.5 MiB" memory.required.partial="505.5 MiB" memory.required.kv="768.0 KiB" memory.required.allocations="[505.5 MiB]" memory.weights.total="21.1 MiB" memory.weights.repeating="17179869184.0 GiB" memory.weights.nonrepeating="22.4 MiB" memory.graph.full="1.5 MiB" memory.graph.partial="1.5 MiB"
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.018+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama213676016/runners/cpu/ollama_llama_server
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.018+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama213676016/runners/rocm/ollama_llama_server
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.018+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama213676016/runners/cpu/ollama_llama_server
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.018+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama213676016/runners/rocm/ollama_llama_server
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.021+02:00 level=INFO source=server.go:392 msg="starting llama server" cmd="/tmp/ollama213676016/runners/rocm/ollama_llama_server --model /var/lib/ollama/.ollama/models/blobs/sha256-797b70c4edf85907fe0a49eb85811256f65fa0f7bf52166b147fd16be2be4662 --ctx-size 1024 --batch-size 512 --embedding --log-disable --n-gpu-layers 7 --verbose --parallel 4 --port 35631"
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.021+02:00 level=DEBUG source=server.go:409 msg=subprocess environment="[PATH=/home/mark/.nvm/versions/node/v20.8.0/bin:/home/mark/kde/src/kdesrc-build:/home/mark/bin:/usr/local/bin:/opt/rocm/bin:/home/mark/.local/bin:/opt/miniconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/opt/rocm/bin:/usr/lib/rustup/bin LD_LIBRARY_PATH=/opt/rocm/lib:/tmp/ollama213676016/runners/rocm:/tmp/ollama213676016/runners HIP_VISIBLE_DEVICES=0]"
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.021+02:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.021+02:00 level=INFO source=server.go:592 msg="waiting for llama runner to start responding"
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.022+02:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server error"
Aug 09 20:08:14 newphobos ollama[5524]: INFO [main] build info | build=3535 commit="1e6f6554a" tid="135273391656000" timestamp=1723226894
Aug 09 20:08:14 newphobos ollama[5524]: INFO [main] system info | n_threads=16 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="135273391656000" timestamp=1723226894 total_threads=32
Aug 09 20:08:14 newphobos ollama[5524]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="35631" tid="135273391656000" timestamp=1723226894
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: loaded meta data with 23 key-value pairs and 101 tensors from /var/lib/ollama/.ollama/models/blobs/sha256-797b70c4edf85907fe0a49eb85811256f65fa0f7bf52166b147fd16be2be4662 (version GGUF V3 (latest))
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv   0:                       general.architecture str              = bert
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv   1:                               general.name str              = all-MiniLM-L6-v2
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv   2:                           bert.block_count u32              = 6
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv   3:                        bert.context_length u32              = 512
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv   4:                      bert.embedding_length u32              = 384
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv   5:                   bert.feed_forward_length u32              = 1536
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv   6:                  bert.attention.head_count u32              = 12
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv   7:          bert.attention.layer_norm_epsilon f32              = 0.000000
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv   8:                          general.file_type u32              = 1
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv   9:                      bert.attention.causal bool             = false
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv  10:                          bert.pooling_type u32              = 1
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv  11:            tokenizer.ggml.token_type_count u32              = 2
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv  12:                tokenizer.ggml.bos_token_id u32              = 101
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv  13:                tokenizer.ggml.eos_token_id u32              = 102
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = bert
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,30522]   = ["[PAD]", "[unused0]", "[unused1]", "...
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,30522]   = [-1000.000000, -1000.000000, -1000.00...
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,30522]   = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 100
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv  19:          tokenizer.ggml.seperator_token_id u32              = 102
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 0
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv  21:                tokenizer.ggml.cls_token_id u32              = 101
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - kv  22:               tokenizer.ggml.mask_token_id u32              = 103
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - type  f32:   63 tensors
Aug 09 20:08:14 newphobos ollama[5501]: llama_model_loader: - type  f16:   38 tensors
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_vocab: special tokens cache size = 5
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_vocab: token to piece cache size = 0.2032 MB
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: format           = GGUF V3 (latest)
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: arch             = bert
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: vocab type       = WPM
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_vocab          = 30522
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_merges         = 0
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: vocab_only       = 0
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_ctx_train      = 512
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_embd           = 384
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_layer          = 6
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_head           = 12
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_head_kv        = 12
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_rot            = 32
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_swa            = 0
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_embd_head_k    = 32
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_embd_head_v    = 32
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_gqa            = 1
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_embd_k_gqa     = 384
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_embd_v_gqa     = 384
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: f_norm_eps       = 1.0e-12
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: f_norm_rms_eps   = 0.0e+00
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_ff             = 1536
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_expert         = 0
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_expert_used    = 0
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: causal attn      = 0
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: pooling type     = 1
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: rope type        = 2
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: rope scaling     = linear
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: freq_base_train  = 10000.0
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: freq_scale_train = 1
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: n_ctx_orig_yarn  = 512
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: rope_finetuned   = unknown
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: ssm_d_conv       = 0
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: ssm_d_inner      = 0
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: ssm_d_state      = 0
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: ssm_dt_rank      = 0
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: model type       = 22M
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: model ftype      = F16
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: model params     = 22.57 M
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: model size       = 43.10 MiB (16.02 BPW)
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: general.name     = all-MiniLM-L6-v2
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: BOS token        = 101 '[CLS]'
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: EOS token        = 102 '[SEP]'
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: UNK token        = 100 '[UNK]'
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: SEP token        = 102 '[SEP]'
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: PAD token        = 0 '[PAD]'
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: CLS token        = 101 '[CLS]'
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: MASK token       = 103 '[MASK]'
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: LF token         = 0 '[PAD]'
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_print_meta: max token length = 21
Aug 09 20:08:14 newphobos ollama[5501]: time=2024-08-09T20:08:14.272+02:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server loading model"
Aug 09 20:08:14 newphobos ollama[5501]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Aug 09 20:08:14 newphobos ollama[5501]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Aug 09 20:08:14 newphobos ollama[5501]: ggml_cuda_init: found 1 ROCm devices:
Aug 09 20:08:14 newphobos ollama[5501]:   Device 0: AMD Radeon RX 7900 XT, compute capability 11.0, VMM: no
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_tensors: ggml ctx size =    0.08 MiB
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_tensors: offloading 6 repeating layers to GPU
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_tensors: offloading non-repeating layers to GPU
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_tensors: offloaded 7/7 layers to GPU
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_tensors:      ROCm0 buffer size =    20.37 MiB
Aug 09 20:08:14 newphobos ollama[5501]: llm_load_tensors:        CPU buffer size =    22.73 MiB
Aug 09 20:08:15 newphobos ollama[5501]: llama_new_context_with_model: n_ctx      = 1024
Aug 09 20:08:15 newphobos ollama[5501]: llama_new_context_with_model: n_batch    = 512
Aug 09 20:08:15 newphobos ollama[5501]: llama_new_context_with_model: n_ubatch   = 512
Aug 09 20:08:15 newphobos ollama[5501]: llama_new_context_with_model: flash_attn = 0
Aug 09 20:08:15 newphobos ollama[5501]: llama_new_context_with_model: freq_base  = 10000.0
Aug 09 20:08:15 newphobos ollama[5501]: llama_new_context_with_model: freq_scale = 1
Aug 09 20:08:15 newphobos ollama[5501]: llama_kv_cache_init:      ROCm0 KV buffer size =     9.00 MiB
Aug 09 20:08:15 newphobos ollama[5501]: llama_new_context_with_model: KV self size  =    9.00 MiB, K (f16):    4.50 MiB, V (f16):    4.50 MiB
Aug 09 20:08:15 newphobos ollama[5501]: llama_new_context_with_model:        CPU  output buffer size =     0.00 MiB
Aug 09 20:08:15 newphobos ollama[5501]: llama_new_context_with_model:      ROCm0 compute buffer size =    16.01 MiB
Aug 09 20:08:15 newphobos ollama[5501]: llama_new_context_with_model:  ROCm_Host compute buffer size =     2.51 MiB
Aug 09 20:08:15 newphobos ollama[5501]: llama_new_context_with_model: graph nodes  = 221
Aug 09 20:08:15 newphobos ollama[5501]: llama_new_context_with_model: graph splits = 2
Aug 09 20:08:15 newphobos ollama[5501]: /usr/lib64/gcc/x86_64-pc-linux-gnu/14.2.1/../../../../include/c++/14.2.1/bits/stl_vector.h:1130: reference std::vector<unsigned long>::operator[](size_type) [_Tp = unsigned long, _Alloc = std::allocator<unsigned long>]: Assertion '__n < this->size()' failed.
Aug 09 20:08:15 newphobos ollama[5501]: time=2024-08-09T20:08:15.476+02:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server not responding"
Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.982+02:00 level=ERROR source=sched.go:451 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped)"
Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.982+02:00 level=DEBUG source=sched.go:454 msg="triggering expiration for failed load" model=/var/lib/ollama/.ollama/models/blobs/sha256-797b70c4edf85907fe0a49eb85811256f65fa0f7bf52166b147fd16be2be4662
Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.982+02:00 level=DEBUG source=sched.go:355 msg="runner expired event received" modelPath=/var/lib/ollama/.ollama/models/blobs/sha256-797b70c4edf85907fe0a49eb85811256f65fa0f7bf52166b147fd16be2be4662
Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.982+02:00 level=DEBUG source=sched.go:371 msg="got lock to unload" modelPath=/var/lib/ollama/.ollama/models/blobs/sha256-797b70c4edf85907fe0a49eb85811256f65fa0f7bf52166b147fd16be2be4662
Aug 09 20:08:17 newphobos ollama[5501]: [GIN] 2024/08/09 - 20:08:17 | 500 |  3.969014093s |       10.0.3.96 | POST     "/api/embed"
Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.983+02:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="62.7 GiB" before.free="59.4 GiB" before.free_swap="0 B" now.total="62.7 GiB" now.free="59.3 GiB" now.free_swap="0 B"
Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.983+02:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=0 name=1002:744c before="19.2 GiB" now="19.2 GiB"
Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.983+02:00 level=DEBUG source=server.go:1053 msg="stopping llama server"
Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.983+02:00 level=DEBUG source=sched.go:376 msg="runner released" modelPath=/var/lib/ollama/.ollama/models/blobs/sha256-797b70c4edf85907fe0a49eb85811256f65fa0f7bf52166b147fd16be2be4662
Aug 09 20:08:18 newphobos ollama[5501]: time=2024-08-09T20:08:18.234+02:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="62.7 GiB" before.free="59.3 GiB" before.free_swap="0 B" now.total="62.7 GiB" now.free="59.3 GiB" now.free_swap="0 B"
Aug 09 20:08:18 newphobos ollama[5501]: time=2024-08-09T20:08:18.234+02:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=0 name=1002:744c before="19.2 GiB" now="19.2 GiB"
Aug 09 20:08:18 newphobos ollama[5501]: time=2024-08-09T20:08:18.483+02:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="62.7 GiB" before.free="59.3 GiB" before.free_swap="0 B" now.total="62.7 GiB" now.free="59.3 GiB" now.free_swap="0 B"

Special emphasis on this line:

Aug 09 20:08:15 newphobos ollama[5501]: /usr/lib64/gcc/x86_64-pc-linux-gnu/14.2.1/../../../../include/c++/14.2.1/bits/stl_vector.h:1130: reference std::vector<unsigned long>::operator[](size_type) [_Tp = unsigned long, _Alloc = std::allocator<unsigned long>]: Assertion '__n < this->size()' failed.

An index out of bounds, perhaps? I don't understand how/where/what is going on with that error, since Ollama itself is a Go application and this error is C++ — presumably it comes from the `ollama_llama_server` runner subprocess shown in the log.

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.3.4

Aug 09 20:08:15 newphobos ollama[5501]: time=2024-08-09T20:08:15.476+02:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server not responding" Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.982+02:00 level=ERROR source=sched.go:451 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped)" Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.982+02:00 level=DEBUG source=sched.go:454 msg="triggering expiration for failed load" model=/var/lib/ollama/.ollama/models/blobs/sha256-797b70c4edf85907fe0a49eb85811256f65fa0f7bf52166b147fd16be2be4662 Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.982+02:00 level=DEBUG source=sched.go:355 msg="runner expired event received" modelPath=/var/lib/ollama/.ollama/models/blobs/sha256-797b70c4edf85907fe0a49eb85811256f65fa0f7bf52166b147fd16be2be4662 Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.982+02:00 level=DEBUG source=sched.go:371 msg="got lock to unload" modelPath=/var/lib/ollama/.ollama/models/blobs/sha256-797b70c4edf85907fe0a49eb85811256f65fa0f7bf52166b147fd16be2be4662 Aug 09 20:08:17 newphobos ollama[5501]: [GIN] 2024/08/09 - 20:08:17 | 500 | 3.969014093s | 10.0.3.96 | POST "/api/embed" Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.983+02:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="62.7 GiB" before.free="59.4 GiB" before.free_swap="0 B" now.total="62.7 GiB" now.free="59.3 GiB" now.free_swap="0 B" Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.983+02:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=0 name=1002:744c before="19.2 GiB" now="19.2 GiB" Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.983+02:00 level=DEBUG source=server.go:1053 msg="stopping llama server" Aug 09 20:08:17 newphobos ollama[5501]: time=2024-08-09T20:08:17.983+02:00 level=DEBUG 
source=sched.go:376 msg="runner released" modelPath=/var/lib/ollama/.ollama/models/blobs/sha256-797b70c4edf85907fe0a49eb85811256f65fa0f7bf52166b147fd16be2be4662 Aug 09 20:08:18 newphobos ollama[5501]: time=2024-08-09T20:08:18.234+02:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="62.7 GiB" before.free="59.3 GiB" before.free_swap="0 B" now.total="62.7 GiB" now.free="59.3 GiB" now.free_swap="0 B" Aug 09 20:08:18 newphobos ollama[5501]: time=2024-08-09T20:08:18.234+02:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=0 name=1002:744c before="19.2 GiB" now="19.2 GiB" Aug 09 20:08:18 newphobos ollama[5501]: time=2024-08-09T20:08:18.483+02:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="62.7 GiB" before.free="59.3 GiB" before.free_swap="0 B" now.total="62.7 GiB" now.free="59.3 GiB" now.free_swap="0 B" ``` Special emphasis on this line: ``` Aug 09 20:08:15 newphobos ollama[5501]: /usr/lib64/gcc/x86_64-pc-linux-gnu/14.2.1/../../../../include/c++/14.2.1/bits/stl_vector.h:1130: reference std::vector<unsigned long>::operator[](size_type) [_Tp = unsigned long, _Alloc = std::allocator<unsigned long>]: Assertion '__n < this->size()' failed. ``` Index out of bounds perhaps? I don't get how/where/what with that error as ollama is a Go application and this error is C++.. ### OS Linux ### GPU AMD ### CPU AMD ### Ollama version 0.3.4
GiteaMirror added the linux, needs more info, bug, amd labels 2026-04-12 14:49:05 -05:00
@Cephra commented on GitHub (Aug 9, 2024):

I think the reason you're seeing C++ headers is that Ollama uses [llama.cpp](https://github.com/ggerganov/llama.cpp) as a backend. You might also want to post your issue there, if you haven't already.

@markg85 commented on GitHub (Aug 10, 2024):

I don't see how or why I should go there. I'm not using llama.cpp directly at all, and the logging doesn't show much of it either. It would be a rather useless report with not much to go on, right?

I would love to help out where it makes sense, but here I don't really see how I can help.

@Cephra commented on GitHub (Aug 10, 2024):

> I don't see how or why I should go there. I'm not using llama.cpp directly at all, and the logging doesn't show much of it either. It would be a rather useless report with not much to go on, right?
>
> I would love to help out where it makes sense, but here I don't really see how I can help.

Since it only happens on your specific GPU, ROCm could also be the cause.

However, I think the issue is not related to ollama specifically. 🤔

@Cephra commented on GitHub (Aug 12, 2024):

In case you still have this issue, and if you have access to Docker, give the [Ollama Docker Image](https://hub.docker.com/r/ollama/ollama) a try; look for **AMD GPU** in the README. I suggest this because I've had some odd dependency issues when trying to get GPU acceleration to work on my host system, yet with the Docker image it works flawlessly for me. Maybe something similar is happening for you, except in your case it causes the error you've mentioned. I think it's worth a try, what do you think?
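For reference, the AMD GPU section of that README boils down to running the ROCm build of the image with the GPU device nodes passed through to the container. A sketch of the documented invocation (container name and volume name are just examples):

```shell
# Run the ROCm build of the Ollama image; /dev/kfd and /dev/dri
# expose the AMD GPU to the container.
docker run -d \
  --device /dev/kfd \
  --device /dev/dri \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:rocm
```

Once it's up, the API listens on port 11434 exactly as a native install would.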

@markg85 commented on GitHub (Aug 12, 2024):

Pff, well, I've tried to stay away from Docker images as I'm using distribution updates now (Arch Linux). While they are fast with updates, I do occasionally catch myself wanting to update to a later version that isn't in their repos yet.

In other words, yeah, I should probably try the Docker image. I'm curious about something, even if it's slightly off-topic: is there a way to automatically update Docker images? I'm not really a fan of having to pull the image again, find my exec line, and get it up and running again. Sure, it works, but it's a bit of a hassle, and I already have enough of those in Docker, so adding another one isn't something I look forward to...

@Cephra commented on GitHub (Aug 12, 2024):

> Pff, well, I've tried to stay away from Docker images as I'm using distribution updates now (Arch Linux). While they are fast with updates, I do occasionally catch myself wanting to update to a later version that isn't in their repos yet.
>
> In other words, yeah, I should probably try the Docker image. I'm curious about something, even if it's slightly off-topic: is there a way to automatically update Docker images? I'm not really a fan of having to pull the image again, find my exec line, and get it up and running again. Sure, it works, but it's a bit of a hassle, and I already have enough of those in Docker, so adding another one isn't something I look forward to...

According to the [docs](https://docs.docker.com/reference/cli/docker/container/run/#pull), there is `--pull=always`.

I'm curious if this fixes it for you.
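For example, starting the container with `--pull=always` makes Docker check the registry for a newer image on every run (a sketch; the GPU and volume flags follow the image's AMD instructions):

```shell
# --pull=always re-checks the registry for a newer image each time
# the container is created, instead of reusing the local copy.
docker run -d --pull=always \
  --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm
```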

@dhiltgen commented on GitHub (Aug 18, 2024):

As others have pointed out, this may be a llama.cpp bug. I haven't been able to reproduce it, but I don't have the exact same setup. We did update llama.cpp in 0.3.5, and other ROCm users have reported that it resolved various crash scenarios.

@markg85 commented on GitHub (Aug 18, 2024):

Hmm, okay.

A Docker + ollama question: I'd like to keep using `ollama pull` and all the other ollama tooling you get on a native install. You lose that with Docker (`sudo docker exec ....`); note the extra `sudo` in the Docker case too.

Yes, I know I can make things easier (user-level Docker, command-line aliases) to get some functionality back, but it's a lot of fiddly annoyances to get it working okay-ish. I'd much prefer to tell ollama to use the API that's running in the Docker image. I don't know if that's possible? It would be if ollama were split into a service part and an API part, where the service is essentially a tool connecting to the API and running commands.

And there's still the update part that I'd now have to do myself instead of just updating system packages :( Yeah, not a fan of this approach through Docker. It seems my choices boil down to discomfort on the one side, or waiting for system updates (and taking the hit when that takes long) on the other.

@Cephra commented on GitHub (Aug 19, 2024):

> Hmm, okay.
>
> A Docker + ollama question: I'd like to keep using `ollama pull` and all the other ollama tooling you get on a native install. You lose that with Docker (`sudo docker exec ....`); note the extra `sudo` in the Docker case too.
>
> Yes, I know I can make things easier (user-level Docker, command-line aliases) to get some functionality back, but it's a lot of fiddly annoyances to get it working okay-ish. I'd much prefer to tell ollama to use the API that's running in the Docker image. I don't know if that's possible? It would be if ollama were split into a service part and an API part, where the service is essentially a tool connecting to the API and running commands.
>
> And there's still the update part that I'd now have to do myself instead of just updating system packages :( Yeah, not a fan of this approach through Docker. It seems my choices boil down to discomfort on the one side, or waiting for system updates (and taking the hit when that takes long) on the other.

The way I'm using Ollama is as follows:

I'm using the ollama client supplied by my distro and have the ollama daemon running in Docker. This works because the CLI tool is just trying to reach Ollama on port 11434 locally. My distro is lagging behind a little, but so far I haven't run into any problems; and if I did, I'd probably run ollama within Docker and it'd be just fine. This way I have the best of both worlds, so to speak.
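A minimal sketch of this split setup, assuming the daemon from the Docker image is already listening on the default port: the native `ollama` CLI honors the `OLLAMA_HOST` environment variable, so it can be pointed at the containerized server.

```shell
# Daemon runs in Docker; the distro-packaged CLI talks to it
# over the HTTP API on port 11434.
export OLLAMA_HOST=127.0.0.1:11434   # the default, shown for clarity
ollama pull all-minilm               # pulls via the dockerized daemon
ollama list                          # lists models served by the container
```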

@markg85 commented on GitHub (Aug 19, 2024):

That could work exactly as I want, thank you for that @Cephra! Gotta try that out!

@dhiltgen commented on GitHub (Oct 22, 2024):

@markg85 are you still seeing the crash on the latest release?

@markg85 commented on GitHub (Oct 23, 2024):

Hi @dhiltgen! Nope, it doesn't seem to be an issue anymore. Closing this one.

For future reference: I didn't change anything besides staying up to date.

Reference: github-starred/ollama#3941