GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed #4842

Open
opened 2025-11-12 12:33:52 -06:00 by GiteaMirror · 19 comments

Originally created by @Volker-Weissmann on GitHub (Nov 9, 2024).

What is the issue?

If I try to run the llama3.2-vision model using ollama run llama3.2-vision on my Arch Linux machine, I get this error:

Error: llama runner process has terminated: GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed

ollama run llama3.2 and ollama run llava work fine.

I have an i7-6700K and a GeForce GTX 1060 6GB. I installed ollama using pacman -S ollama.
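
For reference, the failing condition is an element-count invariant: a tensor can only be viewed or reshaped as ne0 × ne1 × ne2 if that product equals its total number of elements. Below is a minimal C sketch of a check of this shape, assuming it fires in a ggml reshape/view-style path; the error message doesn't name the call site, so the helper names are illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* ggml-style tensor: up to 4 dimensions stored in ne[0..3]. */
struct tensor { int64_t ne[4]; };

static int64_t nelements(const struct tensor *t) {
    return t->ne[0] * t->ne[1] * t->ne[2] * t->ne[3];
}

/* The invariant from the error message: viewing a as (ne0, ne1, ne2)
   is only legal if the element count is preserved; otherwise the
   GGML_ASSERT aborts the whole runner process. */
static void reshape_3d_check(const struct tensor *a,
                             int64_t ne0, int64_t ne1, int64_t ne2) {
    assert(nelements(a) == ne0 * ne1 * ne2);
}

int main(void) {
    /* Hypothetical shape; the real tensor comes from the model blobs. */
    struct tensor a = { { 4096, 1601, 1, 1 } };
    reshape_3d_check(&a, 4096, 1601, 1);  /* passes: counts match */
    return 0;
}
```

A failure here means some tensor's actual size disagrees with the shape the graph expects, which would be consistent with the reports below that the crash reproduces across CPU, CUDA, and ROCm backends.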

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.4.1

GiteaMirror added the bug label 2025-11-12 12:33:52 -06:00

@Volker-Weissmann commented on GitHub (Nov 9, 2024):

ollama serve outputs this when I run ollama run llama3.2-vision:

[GIN] 2024/11/09 - 22:02:04 | 200 |      40.171µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/11/09 - 22:02:04 | 200 |     840.598µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/11/09 - 22:02:12 | 200 |      61.987µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/11/09 - 22:02:12 | 200 |   37.288128ms |       127.0.0.1 | POST     "/api/show"
time=2024-11-09T22:02:12.845+01:00 level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
time=2024-11-09T22:02:12.952+01:00 level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-791df0eb-6a6b-6f1e-0efc-0cd5e70d2eca library=cuda total="5.9 GiB" available="5.1 GiB"
time=2024-11-09T22:02:18.015+01:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.059203983 model=/home/volker/.ollama/models/blobs/sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868
time=2024-11-09T22:02:18.260+01:00 level=INFO source=server.go:105 msg="system memory" total="31.2 GiB" free="23.4 GiB" free_swap="24.0 GiB"
time=2024-11-09T22:02:18.262+01:00 level=INFO source=memory.go:343 msg="offload to cuda" projector.weights="1.8 GiB" projector.graph="2.8 GiB" layers.requested=-1 layers.model=41 layers.offload=0 layers.split="" memory.available="[5.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.2 GiB" memory.required.partial="0 B" memory.required.kv="656.2 MiB" memory.required.allocations="[0 B]" memory.weights.total="5.5 GiB" memory.weights.repeating="5.1 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="669.5 MiB"
time=2024-11-09T22:02:18.262+01:00 level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama2537133296/runners/cpu_avx2/ollama_llama_server --model /home/volker/.ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 --ctx-size 2048 --batch-size 512 --mmproj /home/volker/.ollama/models/blobs/sha256-ece5e659647a20a5c28ab9eea1c12a1ad430bc0f2a27021d00ad103b3bf5206f --threads 4 --no-mmap --parallel 1 --port 36581"
time=2024-11-09T22:02:18.263+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-09T22:02:18.263+01:00 level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
time=2024-11-09T22:02:18.263+01:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
time=2024-11-09T22:02:18.265+01:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.309311834 model=/home/volker/.ollama/models/blobs/sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868
time=2024-11-09T22:02:18.265+01:00 level=INFO source=runner.go:863 msg="starting go runner"
time=2024-11-09T22:02:18.265+01:00 level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=4
time=2024-11-09T22:02:18.265+01:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:36581"
llama_model_loader: loaded meta data with 27 key-value pairs and 396 tensors from /home/volker/.ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = mllama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Model
llama_model_loader: - kv   3:                         general.size_label str              = 10B
llama_model_loader: - kv   4:                         mllama.block_count u32              = 40
llama_model_loader: - kv   5:                      mllama.context_length u32              = 131072
llama_model_loader: - kv   6:                    mllama.embedding_length u32              = 4096
llama_model_loader: - kv   7:                 mllama.feed_forward_length u32              = 14336
llama_model_loader: - kv   8:                mllama.attention.head_count u32              = 32
llama_model_loader: - kv   9:             mllama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:                      mllama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  11:    mllama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                          general.file_type u32              = 15
llama_model_loader: - kv  13:                          mllama.vocab_size u32              = 128256
llama_model_loader: - kv  14:                mllama.rope.dimension_count u32              = 128
llama_model_loader: - kv  15:    mllama.attention.cross_attention_layers arr[i32,8]       = [3, 8, 13, 18, 23, 28, 33, 38]
llama_model_loader: - kv  16:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  17:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  18:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  19:                      tokenizer.ggml.tokens arr[str,128257]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  20:                  tokenizer.ggml.token_type arr[i32,128257]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  21:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  23:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  24:            tokenizer.ggml.padding_token_id u32              = 128004
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  114 tensors
llama_model_loader: - type q4_K:  245 tensors
llama_model_loader: - type q6_K:   37 tensors
time=2024-11-09T22:02:18.514+01:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
time=2024-11-09T22:02:18.514+01:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.558393053 model=/home/volker/.ollama/models/blobs/sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868
llm_load_vocab: special tokens cache size = 257
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = mllama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 40
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 11B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 9.78 B
llm_load_print_meta: model size       = 5.55 GiB (4.87 BPW) 
llm_load_print_meta: general.name     = Model
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: PAD token        = 128004 '<|finetune_right_pad_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab mismatch 128256 !- 128257 ...
llm_load_tensors: ggml ctx size =    0.18 MiB
llm_load_tensors:        CPU buffer size =  5679.34 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   656.25 MiB
llama_new_context_with_model: KV self size  =  656.25 MiB, K (f16):  328.12 MiB, V (f16):  328.12 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.50 MiB
llama_new_context_with_model:        CPU compute buffer size =   258.50 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
mllama_model_load: model name:   Llama-3.2-11B-Vision-Instruct
mllama_model_load: description:  vision encoder for Mllama
mllama_model_load: GGUF version: 3
mllama_model_load: alignment:    32
mllama_model_load: n_tensors:    512
mllama_model_load: n_kv:         17
mllama_model_load: ftype:        f16
mllama_model_load: 
mllama_model_load: vision using CPU backend
ggml.c:6712: GGML_ASSERT(a->ne[2] == b->ne[2]) failed
ptrace: Operation not permitted.
No stack.
The program is not being run.
SIGABRT: abort
PC=0x7f087d6a53f4 m=5 sigcode=18446744073709551610
signal arrived during cgo execution

goroutine 7 gp=0xc0000e4000 m=5 mp=0xc000100008 [syscall]:
runtime.cgocall(0x61d256ff6ab0, 0xc000071ca0)
	runtime/cgocall.go:167 +0x4b fp=0xc000071c78 sp=0xc000071c40 pc=0x61d256d9724b
github.com/ollama/ollama/llama._Cfunc_mllama_model_load(0x7f081c0cdd10, 0x1)
	_cgo_gotypes.go:981 +0x50 fp=0xc000071ca0 sp=0xc000071c78 pc=0x61d256e42570
github.com/ollama/ollama/llama.NewMllamaContext(0xc00018e070, {0x7fff654891fc, 0x69})
	github.com/ollama/ollama/llama/llama.go:551 +0x90 fp=0xc000071d60 sp=0xc000071ca0 pc=0x61d256e461d0
main.NewImageContext(0xc00018e070, {0x7fff654891fc, 0x69})
	github.com/ollama/ollama/llama/runner/image.go:39 +0x168 fp=0xc000071de0 sp=0xc000071d60 pc=0x61d256fd9268
main.(*Server).loadModel(0xc0000b21b0, {0x0, 0x0, 0x0, 0x0, {0x0, 0x0, 0x0}, 0xc00002c2b0, 0x0}, ...)
	github.com/ollama/ollama/llama/runner/runner.go:811 +0x25c fp=0xc000071f38 sp=0xc000071de0 pc=0x61d256fde87c
main.main.gowrap1()
	github.com/ollama/ollama/llama/runner/runner.go:896 +0x95 fp=0xc000071fe0 sp=0xc000071f38 pc=0x61d256fdfeb5
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc000071fe8 sp=0xc000071fe0 pc=0x61d256da4c41
created by main.main in goroutine 1
	github.com/ollama/ollama/llama/runner/runner.go:896 +0xb56

goroutine 1 gp=0xc0000061c0 m=nil [IO wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:424 +0xce fp=0xc000031860 sp=0xc000031840 pc=0x61d256d9d00e
runtime.netpollblock(0xc0000318b0?, 0x56d35a26?, 0xd2?)
	runtime/netpoll.go:575 +0xf7 fp=0xc000031898 sp=0xc000031860 pc=0x61d256d61dd7
internal/poll.runtime_pollWait(0x7f087d49b008, 0x72)
	runtime/netpoll.go:351 +0x85 fp=0xc0000318b8 sp=0xc000031898 pc=0x61d256d9c305
internal/poll.(*pollDesc).wait(0xc0000de100?, 0x2c?, 0x0)
	internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0000318e0 sp=0xc0000318b8 pc=0x61d256df4fa7
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0000de100)
	internal/poll/fd_unix.go:620 +0x295 fp=0xc000031988 sp=0xc0000318e0 pc=0x61d256df6515
net.(*netFD).accept(0xc0000de100)
	net/fd_unix.go:172 +0x29 fp=0xc000031a40 sp=0xc000031988 pc=0x61d256e6b589
net.(*TCPListener).accept(0xc000038740)
	net/tcpsock_posix.go:159 +0x1e fp=0xc000031a90 sp=0xc000031a40 pc=0x61d256e7bbde
net.(*TCPListener).Accept(0xc000038740)
	net/tcpsock.go:372 +0x30 fp=0xc000031ac0 sp=0xc000031a90 pc=0x61d256e7af10
net/http.(*onceCloseListener).Accept(0xc0000b2240?)
	<autogenerated>:1 +0x24 fp=0xc000031ad8 sp=0xc000031ac0 pc=0x61d256fb99e4
net/http.(*Server).Serve(0xc0000dc4b0, {0x61d2572abe98, 0xc000038740})
	net/http/server.go:3330 +0x30c fp=0xc000031c08 sp=0xc000031ad8 pc=0x61d256fab72c
main.main()
	github.com/ollama/ollama/llama/runner/runner.go:921 +0xfa7 fp=0xc000031f50 sp=0xc000031c08 pc=0x61d256fdfb67
runtime.main()
	runtime/proc.go:272 +0x29d fp=0xc000031fe0 sp=0xc000031f50 pc=0x61d256d693bd
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc000031fe8 sp=0xc000031fe0 pc=0x61d256da4c41

goroutine 2 gp=0xc000006c40 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:424 +0xce fp=0xc00005afa8 sp=0xc00005af88 pc=0x61d256d9d00e
runtime.goparkunlock(...)
	runtime/proc.go:430
runtime.forcegchelper()
	runtime/proc.go:337 +0xb8 fp=0xc00005afe0 sp=0xc00005afa8 pc=0x61d256d696f8
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc00005afe8 sp=0xc00005afe0 pc=0x61d256da4c41
created by runtime.init.7 in goroutine 1
	runtime/proc.go:325 +0x1a

goroutine 3 gp=0xc000007180 m=nil [GC sweep wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:424 +0xce fp=0xc00005b780 sp=0xc00005b760 pc=0x61d256d9d00e
runtime.goparkunlock(...)
	runtime/proc.go:430
runtime.bgsweep(0xc000028080)
	runtime/mgcsweep.go:277 +0x94 fp=0xc00005b7c8 sp=0xc00005b780 pc=0x61d256d54074
runtime.gcenable.gowrap1()
	runtime/mgc.go:203 +0x25 fp=0xc00005b7e0 sp=0xc00005b7c8 pc=0x61d256d48945
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc00005b7e8 sp=0xc00005b7e0 pc=0x61d256da4c41
created by runtime.gcenable in goroutine 1
	runtime/mgc.go:203 +0x66

goroutine 4 gp=0xc000007340 m=nil [GC scavenge wait]:
runtime.gopark(0xc000028080?, 0x61d2571c97a8?, 0x1?, 0x0?, 0xc000007340?)
	runtime/proc.go:424 +0xce fp=0xc00005bf78 sp=0xc00005bf58 pc=0x61d256d9d00e
runtime.goparkunlock(...)
	runtime/proc.go:430
runtime.(*scavengerState).park(0x61d257492120)
	runtime/mgcscavenge.go:425 +0x49 fp=0xc00005bfa8 sp=0xc00005bf78 pc=0x61d256d51aa9
runtime.bgscavenge(0xc000028080)
	runtime/mgcscavenge.go:653 +0x3c fp=0xc00005bfc8 sp=0xc00005bfa8 pc=0x61d256d5201c
runtime.gcenable.gowrap2()
	runtime/mgc.go:204 +0x25 fp=0xc00005bfe0 sp=0xc00005bfc8 pc=0x61d256d488e5
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc00005bfe8 sp=0xc00005bfe0 pc=0x61d256da4c41
created by runtime.gcenable in goroutine 1
	runtime/mgc.go:204 +0xa5

goroutine 5 gp=0xc000007c00 m=nil [finalizer wait]:
runtime.gopark(0xc00005a648?, 0x61d256d3ee45?, 0xb0?, 0x1?, 0xc0000061c0?)
	runtime/proc.go:424 +0xce fp=0xc00005a620 sp=0xc00005a600 pc=0x61d256d9d00e
runtime.runfinq()
	runtime/mfinal.go:193 +0x107 fp=0xc00005a7e0 sp=0xc00005a620 pc=0x61d256d479c7
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc00005a7e8 sp=0xc00005a7e0 pc=0x61d256da4c41
created by runtime.createfing in goroutine 1
	runtime/mfinal.go:163 +0x3d

goroutine 6 gp=0xc000007dc0 m=nil [chan receive]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:424 +0xce fp=0xc00005c718 sp=0xc00005c6f8 pc=0x61d256d9d00e
runtime.chanrecv(0xc0000940e0, 0x0, 0x1)
	runtime/chan.go:639 +0x41c fp=0xc00005c790 sp=0xc00005c718 pc=0x61d256d3861c
runtime.chanrecv1(0x0?, 0x0?)
	runtime/chan.go:489 +0x12 fp=0xc00005c7b8 sp=0xc00005c790 pc=0x61d256d381f2
runtime.unique_runtime_registerUniqueMapCleanup.func1(...)
	runtime/mgc.go:1732
runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
	runtime/mgc.go:1735 +0x2f fp=0xc00005c7e0 sp=0xc00005c7b8 pc=0x61d256d4b78f
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc00005c7e8 sp=0xc00005c7e0 pc=0x61d256da4c41
created by unique.runtime_registerUniqueMapCleanup in goroutine 1
	runtime/mgc.go:1730 +0x96

goroutine 8 gp=0xc0000e41c0 m=nil [semacquire]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x20?, 0x0?)
	runtime/proc.go:424 +0xce fp=0xc00006ce08 sp=0xc00006cde8 pc=0x61d256d9d00e
runtime.goparkunlock(...)
	runtime/proc.go:430
runtime.semacquire1(0xc0000b2210, 0x0, 0x1, 0x0, 0x12)
	runtime/sema.go:178 +0x22c fp=0xc00006ce70 sp=0xc00006ce08 pc=0x61d256d7c38c
sync.runtime_Semacquire(0x0?)
	runtime/sema.go:71 +0x25 fp=0xc00006cea8 sp=0xc00006ce70 pc=0x61d256d9e245
sync.(*WaitGroup).Wait(0x0?)
	sync/waitgroup.go:118 +0x48 fp=0xc00006ced0 sp=0xc00006cea8 pc=0x61d256daf2e8
main.(*Server).run(0xc0000b21b0, {0x61d2572ac480, 0xc000092050})
	github.com/ollama/ollama/llama/runner/runner.go:311 +0x4e fp=0xc00006cfb8 sp=0xc00006ced0 pc=0x61d256fdb56e
main.main.gowrap2()
	github.com/ollama/ollama/llama/runner/runner.go:901 +0x28 fp=0xc00006cfe0 sp=0xc00006cfb8 pc=0x61d256fdfde8
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc00006cfe8 sp=0xc00006cfe0 pc=0x61d256da4c41
created by main.main in goroutine 1
	github.com/ollama/ollama/llama/runner/runner.go:901 +0xc2b

goroutine 9 gp=0xc0000e4540 m=nil [IO wait]:
runtime.gopark(0x61d256df6245?, 0xc0000de180?, 0x10?, 0x3a?, 0xb?)
	runtime/proc.go:424 +0xce fp=0xc0000d3918 sp=0xc0000d38f8 pc=0x61d256d9d00e
runtime.netpollblock(0x61d256db5598?, 0x56d35a26?, 0xd2?)
	runtime/netpoll.go:575 +0xf7 fp=0xc0000d3950 sp=0xc0000d3918 pc=0x61d256d61dd7
internal/poll.runtime_pollWait(0x7f087d49af00, 0x72)
	runtime/netpoll.go:351 +0x85 fp=0xc0000d3970 sp=0xc0000d3950 pc=0x61d256d9c305
internal/poll.(*pollDesc).wait(0xc0000de180?, 0xc000120000?, 0x0)
	internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0000d3998 sp=0xc0000d3970 pc=0x61d256df4fa7
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0000de180, {0xc000120000, 0x1000, 0x1000})
	internal/poll/fd_unix.go:165 +0x27a fp=0xc0000d3a30 sp=0xc0000d3998 pc=0x61d256df5afa
net.(*netFD).Read(0xc0000de180, {0xc000120000?, 0xc0000d3aa0?, 0x61d256df5465?})
	net/fd_posix.go:55 +0x25 fp=0xc0000d3a78 sp=0xc0000d3a30 pc=0x61d256e6a4a5
net.(*conn).Read(0xc00005e0e8, {0xc000120000?, 0x0?, 0xc000116038?})
	net/net.go:189 +0x45 fp=0xc0000d3ac0 sp=0xc0000d3a78 pc=0x61d256e73ea5
net.(*TCPConn).Read(0xc000116030?, {0xc000120000?, 0xc0000de180?, 0xc0000d3af8?})
	<autogenerated>:1 +0x25 fp=0xc0000d3af0 sp=0xc0000d3ac0 pc=0x61d256e80f45
net/http.(*connReader).Read(0xc000116030, {0xc000120000, 0x1000, 0x1000})
	net/http/server.go:798 +0x14b fp=0xc0000d3b40 sp=0xc0000d3af0 pc=0x61d256fa202b
bufio.(*Reader).fill(0xc000112060)
	bufio/bufio.go:110 +0x103 fp=0xc0000d3b78 sp=0xc0000d3b40 pc=0x61d256f60c43
bufio.(*Reader).Peek(0xc000112060, 0x4)
	bufio/bufio.go:148 +0x53 fp=0xc0000d3b98 sp=0xc0000d3b78 pc=0x61d256f60d73
net/http.(*conn).serve(0xc0000b2240, {0x61d2572ac448, 0xc00009f0e0})
	net/http/server.go:2127 +0x738 fp=0xc0000d3fb8 sp=0xc0000d3b98 pc=0x61d256fa7378
net/http.(*Server).Serve.gowrap3()
	net/http/server.go:3360 +0x28 fp=0xc0000d3fe0 sp=0xc0000d3fb8 pc=0x61d256fabb28
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc0000d3fe8 sp=0xc0000d3fe0 pc=0x61d256da4c41
created by net/http.(*Server).Serve in goroutine 1
	net/http/server.go:3360 +0x485

rax    0x0
rbx    0xee1d
rcx    0x7f087d6a53f4
rdx    0x6
rdi    0xee19
rsi    0xee1d
rbp    0x7f0834bff630
rsp    0x7f0834bff5f0
r8     0x0
r9     0xfffffffb
r10    0x8
r11    0x246
r12    0x7f0834c006c0
r13    0x61d2571cfe44
r14    0x6
r15    0xe
rip    0x7f087d6a53f4
rflags 0x246
cs     0x33
fs     0x0
gs     0x0
time=2024-11-09T22:02:21.404+01:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
time=2024-11-09T22:02:21.655+01:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: GGML_ASSERT(a->ne[2] == b->ne[2]) failed"
[GIN] 2024/11/09 - 22:02:21 | 500 |  8.831309925s |       127.0.0.1 | POST     "/api/generate"
time=2024-11-09T22:02:26.754+01:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.098750813 model=/home/volker/.ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068
time=2024-11-09T22:02:27.004+01:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.349176485 model=/home/volker/.ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068
time=2024-11-09T22:02:27.254+01:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.598901983 model=/home/volker/.ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068
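
The stack trace above localizes the abort: the Go runner calls llama.NewMllamaContext, which enters C via _Cfunc_mllama_model_load, and the process dies on GGML_ASSERT(a->ne[2] == b->ne[2]) while loading the vision encoder. Here is a sketch of what an assert of that form enforces, assuming a ggml binary op that requires both operands to agree on axis 2; the op at ggml.c:6712 isn't named in the log, so the function below is illustrative.

```c
#include <assert.h>
#include <stdint.h>

struct tensor { int64_t ne[4]; };  /* ggml-style dims, ne[0..3] */

/* The failing invariant from the log: GGML_ASSERT(a->ne[2] == b->ne[2]).
   ggml uses checks of this form in ops that combine two tensors and need
   matching sizes on axis 2 (e.g. per-head or per-batch dimensions). */
static void require_dim2_match(const struct tensor *a, const struct tensor *b) {
    assert(a->ne[2] == b->ne[2]);
}

int main(void) {
    struct tensor q = { { 128, 1601, 16, 1 } };  /* hypothetical shapes */
    struct tensor k = { { 128, 1601, 16, 1 } };
    require_dim2_match(&q, &k);  /* passes; a mismatch would abort */
    return 0;
}
```

When the assert trips, the C side aborts the process, and ollama surfaces it as "llama runner process has terminated", exactly as in the ERROR line above.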

@nonetrix commented on GitHub (Nov 10, 2024):

This happens to me too on AMD with ROCm, so it's not specific to one GPU vendor. I even went out of my way to install the -rocm-git package.

@lilyanatia commented on GitHub (Nov 11, 2024):

Same error, running ollama 0.4.1 on Arch Linux.

@grzjur commented on GitHub (Nov 13, 2024):

I have the same error.
ollama version is 0.4.1
OS: Garuda Linux x86_64
CPU: Intel(R) Core(TM) i7-14700 (28) @ 5.40 GHz
GPU: NVIDIA GeForce RTX 4060 Ti 16GB

@stephensrmmartin commented on GitHub (Nov 16, 2024):

I can confirm this bug still occurs despite the Arch Linux package fixes, the Arch rocblas rebuild, and ollama 0.4.2.

Nov 16 13:39:19 hwkiller-desktop ollama[313373]: ggml.c:5978: GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed
Nov 16 13:39:20 hwkiller-desktop ollama[313373]: ptrace: Operation not permitted.
Nov 16 13:39:20 hwkiller-desktop ollama[313373]: No stack.
Nov 16 13:39:20 hwkiller-desktop ollama[313373]: The program is not being run.
Nov 16 13:39:20 hwkiller-desktop ollama[313373]: SIGABRT: abort
Nov 16 13:39:20 hwkiller-desktop ollama[313373]: PC=0x7951ed0a53f4 m=5 sigcode=18446744073709551610
Nov 16 13:39:20 hwkiller-desktop ollama[313373]: signal arrived during cgo execution

@jonlap commented on GitHub (Nov 19, 2024):

Same error here. Arch Linux, ROCm 6.2.2, ollama 0.4.2, AMD CPU & GPU.

ollama-rocm      | mllama_model_load: vision using CUDA backend
ollama-rocm      | ggml.c:6712: GGML_ASSERT(a->ne[2] == b->ne[2]) failed
ollama-rocm      | SIGSEGV: segmentation violation
ollama-rocm      | PC=0x714f1e459a1f m=5 sigcode=1 addr=0x714deb802018
ollama-rocm      | signal arrived during cgo execution

@The-afroman commented on GitHub (Nov 25, 2024):

Same here with ollama-rocm 0.4.4, ROCm 6.2.4, and an AMD 6950 XT:

Nov 25 10:43:19 archmain ollama[6848]: time=2024-11-25T10:43:19.472-08:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
Nov 25 10:43:19 archmain ollama[6848]: time=2024-11-25T10:43:19.472-08:00 level=INFO source=routes.go:1248 msg="Listening on 127.0.0.1:11434 (version 0.4.4)"
Nov 25 10:43:19 archmain ollama[6848]: time=2024-11-25T10:43:19.472-08:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama2095919014/runners
Nov 25 10:43:19 archmain ollama[6848]: time=2024-11-25T10:43:19.513-08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 rocm]"
Nov 25 10:43:19 archmain ollama[6848]: time=2024-11-25T10:43:19.514-08:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
Nov 25 10:43:19 archmain ollama[6848]: time=2024-11-25T10:43:19.538-08:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Nov 25 10:43:19 archmain ollama[6848]: time=2024-11-25T10:43:19.538-08:00 level=INFO source=amd_linux.go:386 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=10.3.0
Nov 25 10:43:19 archmain ollama[6848]: time=2024-11-25T10:43:19.538-08:00 level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=1 total="512.0 MiB"
Nov 25 10:43:19 archmain ollama[6848]: time=2024-11-25T10:43:19.539-08:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-55a67b9cc7522484 library=rocm variant="" compute=gfx1030 driver=0.0 name=1002:73a5 total="16.0 GiB" available="14.9 GiB"
Nov 25 10:43:33 archmain ollama[6848]: [GIN] 2024/11/25 - 10:43:33 | 200 |      42.999µs |       127.0.0.1 | GET      "/api/version"
Nov 25 10:43:42 archmain ollama[6848]: [GIN] 2024/11/25 - 10:43:42 | 200 |     266.927µs |       127.0.0.1 | GET      "/api/tags"
Nov 25 10:43:45 archmain ollama[6848]: [GIN] 2024/11/25 - 10:43:45 | 200 |       27.18µs |       127.0.0.1 | GET      "/api/version"
Nov 25 10:43:54 archmain ollama[6848]: [GIN] 2024/11/25 - 10:43:54 | 200 |     258.537µs |       127.0.0.1 | GET      "/api/tags"
Nov 25 10:43:54 archmain ollama[6848]: time=2024-11-25T10:43:54.975-08:00 level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
Nov 25 10:43:54 archmain ollama[6848]: time=2024-11-25T10:43:54.998-08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/var/lib/ollama/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 gpu=GPU-55a67b9cc7522484 parallel=1 available=16009924608 required="11.3 GiB"
Nov 25 10:43:54 archmain ollama[6848]: time=2024-11-25T10:43:54.998-08:00 level=INFO source=server.go:105 msg="system memory" total="30.6 GiB" free="26.1 GiB" free_swap="8.0 GiB"
Nov 25 10:43:54 archmain ollama[6848]: time=2024-11-25T10:43:54.999-08:00 level=INFO source=memory.go:343 msg="offload to rocm" projector.weights="1.8 GiB" projector.graph="2.8 GiB" layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[14.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.3 GiB" memory.required.partial="11.3 GiB" memory.required.kv="656.2 MiB" memory.required.allocations="[11.3 GiB]" memory.weights.total="5.5 GiB" memory.weights.repeating="5.1 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="669.5 MiB"
Nov 25 10:43:55 archmain ollama[6848]: time=2024-11-25T10:43:55.000-08:00 level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama2095919014/runners/rocm/ollama_llama_server --model /var/lib/ollama/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 --ctx-size 2048 --batch-size 512 --n-gpu-layers 41 --mmproj /var/lib/ollama/blobs/sha256-ece5e659647a20a5c28ab9eea1c12a1ad430bc0f2a27021d00ad103b3bf5206f --threads 12 --parallel 1 --port 42147"
Nov 25 10:43:55 archmain ollama[6848]: time=2024-11-25T10:43:55.000-08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
Nov 25 10:43:55 archmain ollama[6848]: time=2024-11-25T10:43:55.000-08:00 level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
Nov 25 10:43:55 archmain ollama[6848]: time=2024-11-25T10:43:55.001-08:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
Nov 25 10:43:55 archmain ollama[6848]: time=2024-11-25T10:43:55.028-08:00 level=INFO source=runner.go:916 msg="starting go runner"
Nov 25 10:43:55 archmain ollama[6848]: time=2024-11-25T10:43:55.029-08:00 level=INFO source=runner.go:917 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=12
Nov 25 10:43:55 archmain ollama[6848]: time=2024-11-25T10:43:55.029-08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:42147"
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: loaded meta data with 27 key-value pairs and 396 tensors from /var/lib/ollama/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 (version GGUF V3 (latest))
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv   0:                       general.architecture str              = mllama
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv   1:                               general.type str              = model
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv   2:                               general.name str              = Model
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv   3:                         general.size_label str              = 10B
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv   4:                         mllama.block_count u32              = 40
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv   5:                      mllama.context_length u32              = 131072
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv   6:                    mllama.embedding_length u32              = 4096
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv   7:                 mllama.feed_forward_length u32              = 14336
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv   8:                mllama.attention.head_count u32              = 32
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv   9:             mllama.attention.head_count_kv u32              = 8
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  10:                      mllama.rope.freq_base f32              = 500000.000000
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  11:    mllama.attention.layer_norm_rms_epsilon f32              = 0.000010
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  12:                          general.file_type u32              = 15
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  13:                          mllama.vocab_size u32              = 128256
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  14:                mllama.rope.dimension_count u32              = 128
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  15:    mllama.attention.cross_attention_layers arr[i32,8]       = [3, 8, 13, 18, 23, 28, 33, 38]
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  16:               tokenizer.ggml.add_bos_token bool             = true
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  17:                       tokenizer.ggml.model str              = gpt2
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  18:                         tokenizer.ggml.pre str              = llama-bpe
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  19:                      tokenizer.ggml.tokens arr[str,128257]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  20:                  tokenizer.ggml.token_type arr[i32,128257]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  21:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 128000
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  23:                tokenizer.ggml.eos_token_id u32              = 128009
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  24:            tokenizer.ggml.padding_token_id u32              = 128004
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - kv  26:               general.quantization_version u32              = 2
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - type  f32:  114 tensors
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - type q4_K:  245 tensors
Nov 25 10:43:55 archmain ollama[6848]: llama_model_loader: - type q6_K:   37 tensors
Nov 25 10:43:55 archmain ollama[6848]: llm_load_vocab: special tokens cache size = 257
Nov 25 10:43:55 archmain ollama[6848]: llm_load_vocab: token to piece cache size = 0.7999 MB
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: format           = GGUF V3 (latest)
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: arch             = mllama
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: vocab type       = BPE
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_vocab          = 128256
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_merges         = 280147
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: vocab_only       = 0
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_ctx_train      = 131072
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_embd           = 4096
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_layer          = 40
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_head           = 32
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_head_kv        = 8
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_rot            = 128
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_swa            = 0
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_embd_head_k    = 128
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_embd_head_v    = 128
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_gqa            = 4
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_embd_k_gqa     = 1024
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_embd_v_gqa     = 1024
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_ff             = 14336
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_expert         = 0
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_expert_used    = 0
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: causal attn      = 1
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: pooling type     = 0
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: rope type        = 0
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: rope scaling     = linear
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: freq_base_train  = 500000.0
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: freq_scale_train = 1
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: n_ctx_orig_yarn  = 131072
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: rope_finetuned   = unknown
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: ssm_d_conv       = 0
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: ssm_d_inner      = 0
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: ssm_d_state      = 0
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: ssm_dt_rank      = 0
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: model type       = 11B
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: model ftype      = Q4_K - Medium
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: model params     = 9.78 B
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: model size       = 5.55 GiB (4.87 BPW)
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: general.name     = Model
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: PAD token        = 128004 '<|finetune_right_pad_id|>'
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: LF token         = 128 'Ä'
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
Nov 25 10:43:55 archmain ollama[6848]: llm_load_print_meta: max token length = 256
Nov 25 10:43:55 archmain ollama[6848]: llama_model_load: vocab mismatch 128256 !- 128257 ...
Nov 25 10:43:55 archmain ollama[6848]: time=2024-11-25T10:43:55.252-08:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
Nov 25 10:43:55 archmain ollama[6848]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Nov 25 10:43:55 archmain ollama[6848]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 25 10:43:55 archmain ollama[6848]: ggml_cuda_init: found 1 ROCm devices:
Nov 25 10:43:55 archmain ollama[6848]:   Device 0: AMD Radeon RX 6950 XT, compute capability 10.3, VMM: no
Nov 25 10:43:55 archmain ollama[6848]: llm_load_tensors: ggml ctx size =    0.36 MiB
Nov 25 10:43:55 archmain ollama[6848]: llm_load_tensors: offloading 40 repeating layers to GPU
Nov 25 10:43:55 archmain ollama[6848]: llm_load_tensors: offloading non-repeating layers to GPU
Nov 25 10:43:55 archmain ollama[6848]: llm_load_tensors: offloaded 41/41 layers to GPU
Nov 25 10:43:55 archmain ollama[6848]: llm_load_tensors:      ROCm0 buffer size =  5397.51 MiB
Nov 25 10:43:55 archmain ollama[6848]: llm_load_tensors:        CPU buffer size =   281.83 MiB
Nov 25 10:43:56 archmain ollama[6848]: llama_new_context_with_model: n_ctx      = 2048
Nov 25 10:43:56 archmain ollama[6848]: llama_new_context_with_model: n_batch    = 512
Nov 25 10:43:56 archmain ollama[6848]: llama_new_context_with_model: n_ubatch   = 512
Nov 25 10:43:56 archmain ollama[6848]: llama_new_context_with_model: flash_attn = 0
Nov 25 10:43:56 archmain ollama[6848]: llama_new_context_with_model: freq_base  = 500000.0
Nov 25 10:43:56 archmain ollama[6848]: llama_new_context_with_model: freq_scale = 1
Nov 25 10:43:56 archmain ollama[6848]: llama_kv_cache_init:      ROCm0 KV buffer size =   656.25 MiB
Nov 25 10:43:56 archmain ollama[6848]: llama_new_context_with_model: KV self size  =  656.25 MiB, K (f16):  328.12 MiB, V (f16):  328.12 MiB
Nov 25 10:43:56 archmain ollama[6848]: llama_new_context_with_model:  ROCm_Host  output buffer size =     0.50 MiB
Nov 25 10:43:56 archmain ollama[6848]: llama_new_context_with_model:      ROCm0 compute buffer size =   258.50 MiB
Nov 25 10:43:56 archmain ollama[6848]: llama_new_context_with_model:  ROCm_Host compute buffer size =    12.01 MiB
Nov 25 10:43:56 archmain ollama[6848]: llama_new_context_with_model: graph nodes  = 1030
Nov 25 10:43:56 archmain ollama[6848]: llama_new_context_with_model: graph splits = 2
Nov 25 10:43:56 archmain ollama[6848]: mllama_model_load: model name:   Llama-3.2-11B-Vision-Instruct
Nov 25 10:43:56 archmain ollama[6848]: mllama_model_load: description:  vision encoder for Mllama
Nov 25 10:43:56 archmain ollama[6848]: mllama_model_load: GGUF version: 3
Nov 25 10:43:56 archmain ollama[6848]: mllama_model_load: alignment:    32
Nov 25 10:43:56 archmain ollama[6848]: mllama_model_load: n_tensors:    512
Nov 25 10:43:56 archmain ollama[6848]: mllama_model_load: n_kv:         17
Nov 25 10:43:56 archmain ollama[6848]: mllama_model_load: ftype:        f16
Nov 25 10:43:56 archmain ollama[6848]: mllama_model_load:
Nov 25 10:43:56 archmain ollama[6848]: mllama_model_load: vision using CUDA backend
Nov 25 10:43:56 archmain ollama[6848]: ggml.c:6712: GGML_ASSERT(a->ne[2] == b->ne[2]) failed
Nov 25 10:43:56 archmain ollama[6848]: ptrace: Operation not permitted.
Nov 25 10:43:56 archmain ollama[6848]: No stack.
Nov 25 10:43:56 archmain ollama[6848]: The program is not being run.
Nov 25 10:43:56 archmain ollama[6848]: SIGABRT: abort
Nov 25 10:43:56 archmain ollama[6848]: PC=0x70fedbaa53f4 m=5 sigcode=18446744073709551610
Nov 25 10:43:56 archmain ollama[6848]: signal arrived during cgo execution
Nov 25 10:43:56 archmain ollama[6848]: goroutine 7 gp=0xc000182000 m=5 mp=0xc000100008 [syscall]:
Nov 25 10:43:56 archmain ollama[6848]: runtime.cgocall(0x574ce3eb7fc0, 0xc000085cd0)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/cgocall.go:167 +0x4b fp=0xc000085ca8 sp=0xc000085c70 pc=0x574ce3c69d2b
Nov 25 10:43:56 archmain ollama[6848]: github.com/ollama/ollama/llama._Cfunc_mllama_model_load(0x70fc77cb4f80, 0x1)
Nov 25 10:43:56 archmain ollama[6848]:         _cgo_gotypes.go:1010 +0x50 fp=0xc000085cd0 sp=0xc000085ca8 pc=0x574ce3d15730
Nov 25 10:43:56 archmain ollama[6848]: github.com/ollama/ollama/llama.NewMllamaContext(0xc00020c030, {0x7fffc03c1ccb, 0x5d})
Nov 25 10:43:56 archmain ollama[6848]:         github.com/ollama/ollama/llama/llama.go:561 +0x90 fp=0xc000085d90 sp=0xc000085cd0 pc=0x574ce3d193b0
Nov 25 10:43:56 archmain ollama[6848]: main.NewImageContext(0xc00020c030, {0x7fffc03c1ccb, 0x5d})
Nov 25 10:43:56 archmain ollama[6848]:         github.com/ollama/ollama/llama/runner/image.go:39 +0x168 fp=0xc000085e10 sp=0xc000085d90 pc=0x574ce3ead2a8
Nov 25 10:43:56 archmain ollama[6848]: main.(*Server).loadModel(0xc0000c61b0, {0x29, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc0000222b0, 0x0}, ...)
Nov 25 10:43:56 archmain ollama[6848]:         github.com/ollama/ollama/llama/runner/runner.go:861 +0x236 fp=0xc000085f38 sp=0xc000085e10 pc=0x574ce3eb2cd6
Nov 25 10:43:56 archmain ollama[6848]: main.main.gowrap1()
Nov 25 10:43:56 archmain ollama[6848]:         github.com/ollama/ollama/llama/runner/runner.go:950 +0x95 fp=0xc000085fe0 sp=0xc000085f38 pc=0x574ce3eb4175
Nov 25 10:43:56 archmain ollama[6848]: runtime.goexit({})
Nov 25 10:43:56 archmain ollama[6848]:         runtime/asm_amd64.s:1700 +0x1 fp=0xc000085fe8 sp=0xc000085fe0 pc=0x574ce3c77721
Nov 25 10:43:56 archmain ollama[6848]: created by main.main in goroutine 1
Nov 25 10:43:56 archmain ollama[6848]:         github.com/ollama/ollama/llama/runner/runner.go:950 +0xb7e
Nov 25 10:43:56 archmain ollama[6848]: goroutine 1 gp=0xc0000061c0 m=nil [IO wait]:
Nov 25 10:43:56 archmain ollama[6848]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/proc.go:424 +0xce fp=0xc000027858 sp=0xc000027838 pc=0x574ce3c6faee
Nov 25 10:43:56 archmain ollama[6848]: runtime.netpollblock(0xc0000278a8?, 0xe3c084e6?, 0x4c?)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/netpoll.go:575 +0xf7 fp=0xc000027890 sp=0xc000027858 pc=0x574ce3c34897
Nov 25 10:43:56 archmain ollama[6848]: internal/poll.runtime_pollWait(0x70fe6978bf90, 0x72)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/netpoll.go:351 +0x85 fp=0xc0000278b0 sp=0xc000027890 pc=0x574ce3c6ede5
Nov 25 10:43:56 archmain ollama[6848]: internal/poll.(*pollDesc).wait(0xc0000fa100?, 0x2c?, 0x0)
Nov 25 10:43:56 archmain ollama[6848]:         internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0000278d8 sp=0xc0000278b0 pc=0x574ce3cc4c27
Nov 25 10:43:56 archmain ollama[6848]: internal/poll.(*pollDesc).waitRead(...)
Nov 25 10:43:56 archmain ollama[6848]:         internal/poll/fd_poll_runtime.go:89
Nov 25 10:43:56 archmain ollama[6848]: internal/poll.(*FD).Accept(0xc0000fa100)
Nov 25 10:43:56 archmain ollama[6848]:         internal/poll/fd_unix.go:620 +0x295 fp=0xc000027980 sp=0xc0000278d8 pc=0x574ce3cc6195
Nov 25 10:43:56 archmain ollama[6848]: net.(*netFD).accept(0xc0000fa100)
Nov 25 10:43:56 archmain ollama[6848]:         net/fd_unix.go:172 +0x29 fp=0xc000027a38 sp=0xc000027980 pc=0x574ce3d3e769
Nov 25 10:43:56 archmain ollama[6848]: net.(*TCPListener).accept(0xc00009a700)
Nov 25 10:43:56 archmain ollama[6848]:         net/tcpsock_posix.go:159 +0x1e fp=0xc000027a88 sp=0xc000027a38 pc=0x574ce3d4edbe
Nov 25 10:43:56 archmain ollama[6848]: net.(*TCPListener).Accept(0xc00009a700)
Nov 25 10:43:56 archmain ollama[6848]:         net/tcpsock.go:372 +0x30 fp=0xc000027ab8 sp=0xc000027a88 pc=0x574ce3d4e0f0
Nov 25 10:43:56 archmain ollama[6848]: net/http.(*onceCloseListener).Accept(0xc0000c6240?)
Nov 25 10:43:56 archmain ollama[6848]:         <autogenerated>:1 +0x24 fp=0xc000027ad0 sp=0xc000027ab8 pc=0x574ce3e8ccc4
Nov 25 10:43:56 archmain ollama[6848]: net/http.(*Server).Serve(0xc0000f84b0, {0x574ce41a8b58, 0xc00009a700})
Nov 25 10:43:56 archmain ollama[6848]:         net/http/server.go:3330 +0x30c fp=0xc000027c00 sp=0xc000027ad0 pc=0x574ce3e7ea0c
Nov 25 10:43:56 archmain ollama[6848]: main.main()
Nov 25 10:43:56 archmain ollama[6848]:         github.com/ollama/ollama/llama/runner/runner.go:975 +0xfc7 fp=0xc000027f50 sp=0xc000027c00 pc=0x574ce3eb3e27
Nov 25 10:43:56 archmain ollama[6848]: runtime.main()
Nov 25 10:43:56 archmain ollama[6848]:         runtime/proc.go:272 +0x29d fp=0xc000027fe0 sp=0xc000027f50 pc=0x574ce3c3be7d
Nov 25 10:43:56 archmain ollama[6848]: runtime.goexit({})
Nov 25 10:43:56 archmain ollama[6848]:         runtime/asm_amd64.s:1700 +0x1 fp=0xc000027fe8 sp=0xc000027fe0 pc=0x574ce3c77721
Nov 25 10:43:56 archmain ollama[6848]: goroutine 2 gp=0xc000006c40 m=nil [force gc (idle)]:
Nov 25 10:43:56 archmain ollama[6848]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/proc.go:424 +0xce fp=0xc00006afa8 sp=0xc00006af88 pc=0x574ce3c6faee
Nov 25 10:43:56 archmain ollama[6848]: runtime.goparkunlock(...)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/proc.go:430
Nov 25 10:43:56 archmain ollama[6848]: runtime.forcegchelper()
Nov 25 10:43:56 archmain ollama[6848]:         runtime/proc.go:337 +0xb8 fp=0xc00006afe0 sp=0xc00006afa8 pc=0x574ce3c3c1b8
Nov 25 10:43:56 archmain ollama[6848]: runtime.goexit({})
Nov 25 10:43:56 archmain ollama[6848]:         runtime/asm_amd64.s:1700 +0x1 fp=0xc00006afe8 sp=0xc00006afe0 pc=0x574ce3c77721
Nov 25 10:43:56 archmain ollama[6848]: created by runtime.init.7 in goroutine 1
Nov 25 10:43:56 archmain ollama[6848]:         runtime/proc.go:325 +0x1a
Nov 25 10:43:56 archmain ollama[6848]: goroutine 3 gp=0xc000007180 m=nil [GC sweep wait]:
Nov 25 10:43:56 archmain ollama[6848]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/proc.go:424 +0xce fp=0xc00006b780 sp=0xc00006b760 pc=0x574ce3c6faee
Nov 25 10:43:56 archmain ollama[6848]: runtime.goparkunlock(...)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/proc.go:430
Nov 25 10:43:56 archmain ollama[6848]: runtime.bgsweep(0xc000098000)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/mgcsweep.go:277 +0x94 fp=0xc00006b7c8 sp=0xc00006b780 pc=0x574ce3c26b34
Nov 25 10:43:56 archmain ollama[6848]: runtime.gcenable.gowrap1()
Nov 25 10:43:56 archmain ollama[6848]:         runtime/mgc.go:203 +0x25 fp=0xc00006b7e0 sp=0xc00006b7c8 pc=0x574ce3c1b405
Nov 25 10:43:56 archmain ollama[6848]: runtime.goexit({})
Nov 25 10:43:56 archmain ollama[6848]:         runtime/asm_amd64.s:1700 +0x1 fp=0xc00006b7e8 sp=0xc00006b7e0 pc=0x574ce3c77721
Nov 25 10:43:56 archmain ollama[6848]: created by runtime.gcenable in goroutine 1
Nov 25 10:43:56 archmain ollama[6848]:         runtime/mgc.go:203 +0x66
Nov 25 10:43:56 archmain ollama[6848]: goroutine 4 gp=0xc000007340 m=nil [GC scavenge wait]:
Nov 25 10:43:56 archmain ollama[6848]: runtime.gopark(0xc000098000?, 0x574ce40bbd88?, 0x1?, 0x0?, 0xc000007340?)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/proc.go:424 +0xce fp=0xc00006bf78 sp=0xc00006bf58 pc=0x574ce3c6faee
Nov 25 10:43:56 archmain ollama[6848]: runtime.goparkunlock(...)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/proc.go:430
Nov 25 10:43:56 archmain ollama[6848]: runtime.(*scavengerState).park(0x574ce4391180)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/mgcscavenge.go:425 +0x49 fp=0xc00006bfa8 sp=0xc00006bf78 pc=0x574ce3c24569
Nov 25 10:43:56 archmain ollama[6848]: runtime.bgscavenge(0xc000098000)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/mgcscavenge.go:653 +0x3c fp=0xc00006bfc8 sp=0xc00006bfa8 pc=0x574ce3c24adc
Nov 25 10:43:56 archmain ollama[6848]: runtime.gcenable.gowrap2()
Nov 25 10:43:56 archmain ollama[6848]:         runtime/mgc.go:204 +0x25 fp=0xc00006bfe0 sp=0xc00006bfc8 pc=0x574ce3c1b3a5
Nov 25 10:43:56 archmain ollama[6848]: runtime.goexit({})
Nov 25 10:43:56 archmain ollama[6848]:         runtime/asm_amd64.s:1700 +0x1 fp=0xc00006bfe8 sp=0xc00006bfe0 pc=0x574ce3c77721
Nov 25 10:43:56 archmain ollama[6848]: created by runtime.gcenable in goroutine 1
Nov 25 10:43:56 archmain ollama[6848]:         runtime/mgc.go:204 +0xa5
Nov 25 10:43:56 archmain ollama[6848]: goroutine 5 gp=0xc000007c00 m=nil [finalizer wait]:
Nov 25 10:43:56 archmain ollama[6848]: runtime.gopark(0xc00006a648?, 0x574ce3c11905?, 0xb0?, 0x1?, 0xc0000061c0?)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/proc.go:424 +0xce fp=0xc00006a620 sp=0xc00006a600 pc=0x574ce3c6faee
Nov 25 10:43:56 archmain ollama[6848]: runtime.runfinq()
Nov 25 10:43:56 archmain ollama[6848]:         runtime/mfinal.go:193 +0x107 fp=0xc00006a7e0 sp=0xc00006a620 pc=0x574ce3c1a487
Nov 25 10:43:56 archmain ollama[6848]: runtime.goexit({})
Nov 25 10:43:56 archmain ollama[6848]:         runtime/asm_amd64.s:1700 +0x1 fp=0xc00006a7e8 sp=0xc00006a7e0 pc=0x574ce3c77721
Nov 25 10:43:56 archmain ollama[6848]: created by runtime.createfing in goroutine 1
Nov 25 10:43:56 archmain ollama[6848]:         runtime/mfinal.go:163 +0x3d
Nov 25 10:43:56 archmain ollama[6848]: goroutine 6 gp=0xc000007dc0 m=nil [chan receive]:
Nov 25 10:43:56 archmain ollama[6848]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/proc.go:424 +0xce fp=0xc00006c718 sp=0xc00006c6f8 pc=0x574ce3c6faee
Nov 25 10:43:56 archmain ollama[6848]: runtime.chanrecv(0xc0000a60e0, 0x0, 0x1)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/chan.go:639 +0x41c fp=0xc00006c790 sp=0xc00006c718 pc=0x574ce3c0b0dc
Nov 25 10:43:56 archmain ollama[6848]: runtime.chanrecv1(0x0?, 0x0?)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/chan.go:489 +0x12 fp=0xc00006c7b8 sp=0xc00006c790 pc=0x574ce3c0acb2
Nov 25 10:43:56 archmain ollama[6848]: runtime.unique_runtime_registerUniqueMapCleanup.func1(...)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/mgc.go:1732
Nov 25 10:43:56 archmain ollama[6848]: runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
Nov 25 10:43:56 archmain ollama[6848]:         runtime/mgc.go:1735 +0x2f fp=0xc00006c7e0 sp=0xc00006c7b8 pc=0x574ce3c1e24f
Nov 25 10:43:56 archmain ollama[6848]: runtime.goexit({})
Nov 25 10:43:56 archmain ollama[6848]:         runtime/asm_amd64.s:1700 +0x1 fp=0xc00006c7e8 sp=0xc00006c7e0 pc=0x574ce3c77721
Nov 25 10:43:56 archmain ollama[6848]: created by unique.runtime_registerUniqueMapCleanup in goroutine 1
Nov 25 10:43:56 archmain ollama[6848]:         runtime/mgc.go:1730 +0x96
Nov 25 10:43:56 archmain ollama[6848]: goroutine 8 gp=0xc0001821c0 m=nil [semacquire]:
Nov 25 10:43:56 archmain ollama[6848]: runtime.gopark(0x0?, 0x0?, 0x60?, 0x20?, 0x0?)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/proc.go:424 +0xce fp=0xc00006d608 sp=0xc00006d5e8 pc=0x574ce3c6faee
Nov 25 10:43:56 archmain ollama[6848]: runtime.goparkunlock(...)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/proc.go:430
Nov 25 10:43:56 archmain ollama[6848]: runtime.semacquire1(0xc0000c61b8, 0x0, 0x1, 0x0, 0x12)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/sema.go:178 +0x22c fp=0xc00006d670 sp=0xc00006d608 pc=0x574ce3c4ee4c
Nov 25 10:43:56 archmain ollama[6848]: sync.runtime_Semacquire(0x0?)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/sema.go:71 +0x25 fp=0xc00006d6a8 sp=0xc00006d670 pc=0x574ce3c70d25
Nov 25 10:43:56 archmain ollama[6848]: sync.(*WaitGroup).Wait(0x0?)
Nov 25 10:43:56 archmain ollama[6848]:         sync/waitgroup.go:118 +0x48 fp=0xc00006d6d0 sp=0xc00006d6a8 pc=0x574ce3c8cfc8
Nov 25 10:43:56 archmain ollama[6848]: main.(*Server).run(0xc0000c61b0, {0x574ce41a9140, 0xc000180050})
Nov 25 10:43:56 archmain ollama[6848]:         github.com/ollama/ollama/llama/runner/runner.go:307 +0x47 fp=0xc00006d7b8 sp=0xc00006d6d0 pc=0x574ce3eaf3a7
Nov 25 10:43:56 archmain ollama[6848]: main.main.gowrap2()
Nov 25 10:43:56 archmain ollama[6848]:         github.com/ollama/ollama/llama/runner/runner.go:955 +0x28 fp=0xc00006d7e0 sp=0xc00006d7b8 pc=0x574ce3eb40a8
Nov 25 10:43:56 archmain ollama[6848]: runtime.goexit({})
Nov 25 10:43:56 archmain ollama[6848]:         runtime/asm_amd64.s:1700 +0x1 fp=0xc00006d7e8 sp=0xc00006d7e0 pc=0x574ce3c77721
Nov 25 10:43:56 archmain ollama[6848]: created by main.main in goroutine 1
Nov 25 10:43:56 archmain ollama[6848]:         github.com/ollama/ollama/llama/runner/runner.go:955 +0xc52
Nov 25 10:43:56 archmain ollama[6848]: goroutine 9 gp=0xc000182540 m=nil [IO wait]:
Nov 25 10:43:56 archmain ollama[6848]: runtime.gopark(0x574ce3cc5ec5?, 0xc0000fa180?, 0x10?, 0xba?, 0xb?)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/proc.go:424 +0xce fp=0xc0000eb918 sp=0xc0000eb8f8 pc=0x574ce3c6faee
Nov 25 10:43:56 archmain ollama[6848]: runtime.netpollblock(0x574ce3cab318?, 0xe3c084e6?, 0x4c?)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/netpoll.go:575 +0xf7 fp=0xc0000eb950 sp=0xc0000eb918 pc=0x574ce3c34897
Nov 25 10:43:56 archmain ollama[6848]: internal/poll.runtime_pollWait(0x70fe6978be78, 0x72)
Nov 25 10:43:56 archmain ollama[6848]:         runtime/netpoll.go:351 +0x85 fp=0xc0000eb970 sp=0xc0000eb950 pc=0x574ce3c6ede5
Nov 25 10:43:56 archmain ollama[6848]: internal/poll.(*pollDesc).wait(0xc0000fa180?, 0xc000196000?, 0x0)
Nov 25 10:43:56 archmain ollama[6848]:         internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0000eb998 sp=0xc0000eb970 pc=0x574ce3cc4c27
Nov 25 10:43:56 archmain ollama[6848]: internal/poll.(*pollDesc).waitRead(...)
Nov 25 10:43:56 archmain ollama[6848]:         internal/poll/fd_poll_runtime.go:89
Nov 25 10:43:56 archmain ollama[6848]: internal/poll.(*FD).Read(0xc0000fa180, {0xc000196000, 0x1000, 0x1000})
Nov 25 10:43:56 archmain ollama[6848]:         internal/poll/fd_unix.go:165 +0x27a fp=0xc0000eba30 sp=0xc0000eb998 pc=0x574ce3cc577a
Nov 25 10:43:56 archmain ollama[6848]: net.(*netFD).Read(0xc0000fa180, {0xc000196000?, 0xc0000ebaa0?, 0x574ce3cc50e5?})
Nov 25 10:43:56 archmain ollama[6848]:         net/fd_posix.go:55 +0x25 fp=0xc0000eba78 sp=0xc0000eba30 pc=0x574ce3d3d685
Nov 25 10:43:56 archmain ollama[6848]: net.(*conn).Read(0xc00006e0e8, {0xc000196000?, 0x0?, 0xc0000b3208?})
Nov 25 10:43:56 archmain ollama[6848]:         net/net.go:189 +0x45 fp=0xc0000ebac0 sp=0xc0000eba78 pc=0x574ce3d47085
Nov 25 10:43:56 archmain ollama[6848]: net.(*TCPConn).Read(0xc0000b3200?, {0xc000196000?, 0xc0000fa180?, 0xc0000ebaf8?})
Nov 25 10:43:56 archmain ollama[6848]:         <autogenerated>:1 +0x25 fp=0xc0000ebaf0 sp=0xc0000ebac0 pc=0x574ce3d54125
Nov 25 10:43:56 archmain ollama[6848]: net/http.(*connReader).Read(0xc0000b3200, {0xc000196000, 0x1000, 0x1000})
Nov 25 10:43:56 archmain ollama[6848]:         net/http/server.go:798 +0x14b fp=0xc0000ebb40 sp=0xc0000ebaf0 pc=0x574ce3e7530b
Nov 25 10:43:56 archmain ollama[6848]: bufio.(*Reader).fill(0xc00009c4e0)
Nov 25 10:43:56 archmain ollama[6848]:         bufio/bufio.go:110 +0x103 fp=0xc0000ebb78 sp=0xc0000ebb40 pc=0x574ce3e33f23
Nov 25 10:43:56 archmain ollama[6848]: bufio.(*Reader).Peek(0xc00009c4e0, 0x4)
Nov 25 10:43:56 archmain ollama[6848]:         bufio/bufio.go:148 +0x53 fp=0xc0000ebb98 sp=0xc0000ebb78 pc=0x574ce3e34053
Nov 25 10:43:56 archmain ollama[6848]: net/http.(*conn).serve(0xc0000c6240, {0x574ce41a9108, 0xc0000b30e0})
Nov 25 10:43:56 archmain ollama[6848]:         net/http/server.go:2127 +0x738 fp=0xc0000ebfb8 sp=0xc0000ebb98 pc=0x574ce3e7a658
Nov 25 10:43:56 archmain ollama[6848]: net/http.(*Server).Serve.gowrap3()
Nov 25 10:43:56 archmain ollama[6848]:         net/http/server.go:3360 +0x28 fp=0xc0000ebfe0 sp=0xc0000ebfb8 pc=0x574ce3e7ee08
Nov 25 10:43:56 archmain ollama[6848]: runtime.goexit({})
Nov 25 10:43:56 archmain ollama[6848]:         runtime/asm_amd64.s:1700 +0x1 fp=0xc0000ebfe8 sp=0xc0000ebfe0 pc=0x574ce3c77721
Nov 25 10:43:56 archmain ollama[6848]: created by net/http.(*Server).Serve in goroutine 1
Nov 25 10:43:56 archmain ollama[6848]:         net/http/server.go:3360 +0x485
Nov 25 10:43:56 archmain ollama[6848]: rax    0x0
Nov 25 10:43:56 archmain ollama[6848]: rbx    0x1b91
Nov 25 10:43:56 archmain ollama[6848]: rcx    0x70fedbaa53f4
Nov 25 10:43:56 archmain ollama[6848]: rdx    0x6
Nov 25 10:43:56 archmain ollama[6848]: rdi    0x1b8d
Nov 25 10:43:56 archmain ollama[6848]: rsi    0x1b91
Nov 25 10:43:56 archmain ollama[6848]: rbp    0x70fdc3ff69b0
Nov 25 10:43:56 archmain ollama[6848]: rsp    0x70fdc3ff6970
Nov 25 10:43:56 archmain ollama[6848]: r8     0x0
Nov 25 10:43:56 archmain ollama[6848]: r9     0x0
Nov 25 10:43:56 archmain ollama[6848]: r10    0x8
Nov 25 10:43:56 archmain ollama[6848]: r11    0x246
Nov 25 10:43:56 archmain ollama[6848]: r12    0x70fdc3fff6c0
Nov 25 10:43:56 archmain ollama[6848]: r13    0x574ce40c23c0
Nov 25 10:43:56 archmain ollama[6848]: r14    0x6
Nov 25 10:43:56 archmain ollama[6848]: r15    0x1
Nov 25 10:43:56 archmain ollama[6848]: rip    0x70fedbaa53f4
Nov 25 10:43:56 archmain ollama[6848]: rflags 0x246
Nov 25 10:43:56 archmain ollama[6848]: cs     0x33
Nov 25 10:43:56 archmain ollama[6848]: fs     0x0
Nov 25 10:43:56 archmain ollama[6848]: gs     0x0
Nov 25 10:43:56 archmain ollama[6848]: time=2024-11-25T10:43:56.756-08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: GGML_ASSERT(a->ne[2] == b->ne[2]) failed"
Nov 25 10:43:56 archmain ollama[6848]: [GIN] 2024/11/25 - 10:43:56 | 500 |  1.882700432s |       127.0.0.1 | POST     "/api/chat"
Nov 25 10:44:01 archmain ollama[6848]: time=2024-11-25T10:44:01.757-08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.001085507 model=/var/lib/ollama/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068

@The-afroman commented on GitHub (Nov 25, 2024):

I reinstalled with the install script, https://ollama.com/install.sh, and llama3.2-vision seems to be working for me now; it must be a problem with the Arch package.
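For reference, the script-based install mentioned above is normally run like this (a minimal sketch; it assumes `curl` is available and that you are comfortable piping a remote script into a shell — inspect it first if not):

```sh
# Download and run the upstream Ollama install script
curl -fsSL https://ollama.com/install.sh | sh
```

Since this installs upstream-built binaries rather than the distro-compiled package, it is consistent with the build-flag problem identified later in the thread.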


@shvedes commented on GitHub (Nov 27, 2024):

Installing via the script helped me as well. It looks like the Arch package is indeed broken.


@stephensrmmartin commented on GitHub (Nov 28, 2024):

Has anyone reported this issue on the Arch packaging GitLab for ollama-rocm?


@shvedes commented on GitHub (Nov 28, 2024):

Not yet.


@The-afroman commented on GitHub (Nov 28, 2024):

I would, but I don't have an account on the Arch Linux GitLab. It also seems that the compiler in the ROCm 6.2.2-1 SDK is producing broken ollama binaries, according to the comments on the ollama-rocm-git AUR package (https://aur.archlinux.org/packages/ollama-rocm-git#comment-997768). That seems like the likely culprit for this issue, if the official ollama-rocm Arch package was compiled with that version.
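A quick way to check whether your install matches that scenario (a sketch; it assumes the SDK is installed as the `rocm-hip-sdk` package alongside `ollama-rocm`):

```sh
# Show versions and build dates of the suspect packages, to compare
# the ollama-rocm build against the ROCm 6.2.2-1 SDK release
pacman -Qi rocm-hip-sdk ollama-rocm | grep -E '^(Name|Version|Build Date)'
```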


@ManuLinares commented on GitHub (Dec 4, 2024):

https://gitlab.archlinux.org/archlinux/packaging/packages/ollama/-/issues/8


@ABHIRAMSHIBU commented on GitHub (Dec 6, 2024):

The problem is due to one of the compile flags used in Arch Linux's makepkg.conf. I have not yet found exactly which flag it is, but here is the commit that works around it: https://github.com/ABHIRAMSHIBU/ollama-archlinux/commit/28449fdaf89a32a72b4bda89268092a50261ca7b.
Please feel free to clone the repo and execute `makepkg -i`.
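Spelled out, that suggestion looks roughly like this (a sketch; it assumes `git` and the `base-devel` group are installed):

```sh
# Fetch the PKGBUILD with the patched build flags
git clone https://github.com/ABHIRAMSHIBU/ollama-archlinux.git
cd ollama-archlinux

# Build and install the package (add -s to have makepkg pull build deps)
makepkg -i
```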


@shvedes commented on GitHub (Dec 6, 2024):

> The problem is due to one of the compile flags used in Arch Linux's makepkg.conf. I have not yet found exactly which flag it is, but here is the commit that works around it: https://github.com/ABHIRAMSHIBU/ollama-archlinux/commit/28449fdaf89a32a72b4bda89268092a50261ca7b.
> Please feel free to clone the repo and execute `makepkg -i`

That's a weird solution. It's better to wait for a fix in the official package.


@ManuLinares commented on GitHub (Dec 7, 2024):

I just built 0.5.1 on Arch Linux and can confirm this error still happens.


@ABHIRAMSHIBU commented on GitHub (Dec 7, 2024):

> > The problem is due to one of the compile flags used in Arch Linux's makepkg.conf. I have not yet found exactly which flag it is, but here is the commit that works around it: [ABHIRAMSHIBU/ollama-archlinux@28449fd](https://github.com/ABHIRAMSHIBU/ollama-archlinux/commit/28449fdaf89a32a72b4bda89268092a50261ca7b).
> > Please feel free to clone the repo and execute `makepkg -i`
>
> That's a weird solution. It's better to wait for a fix in the official package.

True. I have narrowed it down to:

```
export CGO_CXXFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions \
        -Wformat -Werror=format-security"
```

Builds take a lot of time on my machine, approximately 15 minutes each. Once I narrow it down further, I will post it here. I have also posted the same on the Arch Linux GitLab issue.
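To make that bisection concrete: the flag set above is the reduced set already known to build a working binary, so the search is over the flags that were dropped from it. A hypothetical loop (assuming the PKGBUILD contains the export line quoted above, and using the two candidate flags named in the next comment) might look like:

```sh
# Hypothetical flag bisection: re-add one suspect flag at a time,
# rebuild cleanly, and test until the crash reappears.
for suspect in -fstack-clash-protection -fcf-protection; do
    echo "=== rebuilding with ${suspect} re-added ==="
    sed -i "s|-Werror=format-security\"|-Werror=format-security ${suspect}\"|" PKGBUILD
    makepkg -Cf        # clean rebuild with the modified flags (~15 min each)
    # ...install and test: ollama run llama3.2-vision
    git checkout -- PKGBUILD    # revert before trying the next flag
done
```

Whichever re-added flag reproduces the GGML_ASSERT crash is the culprit.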


@ABHIRAMSHIBU commented on GitHub (Dec 7, 2024):

Looks like I narrowed it down: either `-fstack-clash-protection` or `-fcf-protection` needs to be disabled on my machine. I don't know whether this solution will work for others.
I have disabled `-fstack-clash-protection` and committed the change here: https://github.com/ABHIRAMSHIBU/ollama-archlinux

Please feel free to try it out and let me know the result.
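For anyone who would rather keep the official PKGBUILD untouched, one possible workaround (an untested sketch: it relies on makepkg reading `~/.makepkg.conf` after `/etc/makepkg.conf` so that user settings take precedence, and it only helps where the build derives its flags from `CFLAGS`/`CXXFLAGS`) is to drop the flag for your own builds:

```sh
# Override the C/C++ flags for local builds only: the reduced set from
# earlier in the thread, plus -fcf-protection, minus -fstack-clash-protection
cat >> ~/.makepkg.conf <<'EOF'
CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions \
        -Wformat -Werror=format-security -fcf-protection"
CXXFLAGS="$CFLAGS"
EOF
```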


@The-afroman commented on GitHub (Dec 8, 2024):

llama3.2-vision is working for me now with ollama-rocm 0.5.1-2; a fix was merged. Thanks, all: https://gitlab.archlinux.org/archlinux/packaging/packages/ollama/-/issues/8
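For readers hitting the same crash from the pacman-packaged build, the takeaway is simply to update once the rebuilt package reaches your mirror:

```sh
# Pull the rebuilt package with the problematic flag removed
sudo pacman -Syu ollama-rocm    # or: sudo pacman -Syu ollama
```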
