[GH-ISSUE #7618] llama runner process has terminated: signal: segmentation fault (core dumped) #66917

Closed
opened 2026-05-04 08:48:56 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @Dhruv-1212 on GitHub (Nov 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7618

What is the issue?

Segmentation fault (core dumped) error for snowflake-arctic-embed:latest; other models are working fine.

These are the system logs:

Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:04.252Z level=INFO source=server.go:108 msg="system memory" total="29.4 GiB" free="26.9 GiB" free_swap="0 B"
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:04.253Z level=INFO source=memory.go:326 msg="offload to cpu" layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[26.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="705.4 MiB" memory.required.partial="0 B" memory.required.kv="12.0 MiB" memory.required.allocations="[705.4 MiB]" memory.weights.total="589.2 MiB" memory.weights.repeating="529.6 MiB" memory.weights.nonrepeating="59.6 MiB" memory.graph.full="32.0 MiB" memory.graph.partial="32.0 MiB"
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:04.255Z level=INFO source=server.go:399 msg="starting llama server" cmd="/tmp/ollama1595154785/runners/cpu_avx2/ollama_llama_server --model /var/snap/ollama/common/models/blobs/sha256-fb3b66c7bdf6dabbb2edbc22627f4cb2df021c9e9545b54feafd8a7c09fe8ec5 --ctx-size 2048 --batch-size 512 --embedding --log-disable --no-mmap --parallel 1 --port 35273"
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:04.256Z level=INFO source=sched.go:449 msg="loaded runners" count=1
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:04.256Z level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:04.256Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[13066]: INFO [main] starting c++ runner | tid="134564798229440" timestamp=1731313564
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[13066]: INFO [main] build info | build=10 commit="3cd3d45b" tid="134564798229440" timestamp=1731313564
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[13066]: INFO [main] system info | n_threads=4 n_threads_batch=4 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="134564798229440" timestamp=1731313564 total_threads=8
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[13066]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="35273" tid="134564798229440" timestamp=1731313564
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: loaded meta data with 20 key-value pairs and 389 tensors from /var/snap/ollama/common/models/blobs/sha256-fb3b66c7bdf6dabbb2edbc22627f4cb2df021c9e9545b54feafd8a7c09fe8ec5 (version GGUF V3 (latest))
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 0: general.architecture str = bert
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 1: general.name str = snowflake-arctic-embed-l
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 2: bert.block_count u32 = 24
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 3: bert.context_length u32 = 512
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 4: bert.embedding_length u32 = 1024
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 5: bert.feed_forward_length u32 = 4096
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 6: bert.attention.head_count u32 = 16
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 7: bert.attention.layer_norm_epsilon f32 = 0.000000
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 8: general.file_type u32 = 1
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 9: bert.attention.causal bool = false
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 10: bert.pooling_type u32 = 2
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 11: tokenizer.ggml.token_type_count u32 = 2
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 12: tokenizer.ggml.model str = bert
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,30522] = ["[PAD]", "[unused0]", "[unused1]", "...
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,30522] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 15: tokenizer.ggml.unknown_token_id u32 = 100
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 16: tokenizer.ggml.seperator_token_id u32 = 102
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 17: tokenizer.ggml.padding_token_id u32 = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 18: tokenizer.ggml.cls_token_id u32 = 101
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 19: tokenizer.ggml.mask_token_id u32 = 103
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - type f32: 243 tensors
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - type f16: 146 tensors
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_vocab: special tokens cache size = 5
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_vocab: token to piece cache size = 0.2032 MB
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: format = GGUF V3 (latest)
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: arch = bert
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: vocab type = WPM
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_vocab = 30522
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_merges = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: vocab_only = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_ctx_train = 512
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_embd = 1024
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_layer = 24
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_head = 16
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_head_kv = 16
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_rot = 64
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_swa = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_embd_head_k = 64
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_embd_head_v = 64
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_gqa = 1
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_embd_k_gqa = 1024
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_embd_v_gqa = 1024
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: f_norm_eps = 1.0e-12
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: f_norm_rms_eps = 0.0e+00
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: f_logit_scale = 0.0e+00
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_ff = 4096
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_expert = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_expert_used = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: causal attn = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: pooling type = 2
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: rope type = 2
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: rope scaling = linear
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: freq_base_train = 10000.0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: freq_scale_train = 1
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_ctx_orig_yarn = 512
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: rope_finetuned = unknown
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: ssm_d_conv = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: ssm_d_inner = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: ssm_d_state = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: ssm_dt_rank = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: model type = 335M
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: model ftype = F16
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: model params = 334.09 M
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: model size = 637.85 MiB (16.02 BPW)
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: general.name = snowflake-arctic-embed-l
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: UNK token = 100 '[UNK]'
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: SEP token = 102 '[SEP]'
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: PAD token = 0 '[PAD]'
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: CLS token = 101 '[CLS]'
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: MASK token = 103 '[MASK]'
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: LF token = 0 '[PAD]'
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: max token length = 21
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_tensors: ggml ctx size = 0.16 MiB
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_tensors: CPU buffer size = 637.85 MiB
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:04.508Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: n_ctx = 2048
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: n_batch = 512
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: n_ubatch = 512
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: flash_attn = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: freq_base = 10000.0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: freq_scale = 1
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_kv_cache_init: CPU KV buffer size = 192.00 MiB
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: KV self size = 192.00 MiB, K (f16): 96.00 MiB, V (f16): 96.00 MiB
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: CPU output buffer size = 0.00 MiB
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: CPU compute buffer size = 25.01 MiB
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: graph nodes = 849
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: graph splits = 1
Nov 11 08:26:04 dev-_-aiml-reco kernel: ollama_llama_se[13066]: segfault at 7a629d9ff820 ip 00007a62cf570dc8 sp 00007fff67c94298 error 4 in libggml.so[7a62cf56e000+98000] likely on CPU 3 (core 3, socket 0)
Nov 11 08:26:04 dev-_-aiml-reco kernel: Code: 00 00 f3 0f 1e fa e9 77 ff ff ff 0f 1f 80 00 00 00 00 f3 0f 1e fa 48 85 d2 7e 27 4c 8b 05 00 52 0b 00 31 c0 66 0f 1f 44 00 00 <0f> b7 0c 47 c4 c1 7a 10 04 88 c5 fa 11 04 86 48 83 c0 01 48 39 c2
Nov 11 08:26:05 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:05.032Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
Nov 11 08:26:05 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:05.282Z level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: signal: segmentation fault (core dumped)"
Nov 11 08:26:05 dev-_-aiml-reco ollama.listener[1455]: [GIN] 2024/11/11 - 08:26:05 | 500 | 1.038579932s | 127.0.0.1 | POST "/api/embeddings"
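
For reference, the failing call in the GIN log above can presumably be reproduced with a single embeddings request against the local server (a minimal sketch; the default port 11434 and a throwaway prompt are assumed):

    # Loads snowflake-arctic-embed and asks for one embedding; per the logs
    # above, the runner segfaults during model load, so this returns a 500.
    curl http://127.0.0.1:11434/api/embeddings \
      -d '{"model": "snowflake-arctic-embed:latest", "prompt": "hello world"}'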

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.0.0

GiteaMirror added the bug label 2026-05-04 08:48:56 -05:00
Author
Owner

@rick-github commented on GitHub (Nov 11, 2024):

Does it work if you use an official release?
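
As a concrete sketch of that suggestion (the blob paths in the logs point at a snap-packaged build rather than the official install), the documented Linux installer fetches the latest tagged release:

    # Official Linux install script from the Ollama docs; replaces the
    # binary with the latest tagged release.
    curl -fsSL https://ollama.com/install.sh | sh
    ollama --version   # should now report a real release number, not 0.0.0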

Author
Owner

@jessegross commented on GitHub (Nov 11, 2024):

I would also recommend trying the current 0.4.1 release, which has the newer runner architecture.

Author
Owner

@Dhruv-1212 commented on GitHub (Nov 12, 2024):

I installed it with the Linux installation command provided on GitHub, so it should be the latest version, but it reports the version as 0.0.0.
Later I installed 0.3.5 and it worked.
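
A reported version of 0.0.0 usually indicates a binary built without release version metadata (a source or third-party build) rather than an official tagged release; checking which binary is actually on the PATH helps confirm which install is in use:

    # Identify the ollama binary being run and the version it reports.
    # Official releases print a tag such as 0.3.5; untagged builds print 0.0.0.
    which ollama
    ollama --version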


Reference: github-starred/ollama#66917