[GH-ISSUE #10394] Ollama since 0.6.4 + Gemma 3 27b - SIGSEGV: segmentation violation after several /api/generate #6830

Closed
opened 2026-04-12 18:38:03 -05:00 by GiteaMirror · 2 comments

Originally created by @jetnet on GitHub (Apr 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10394

What is the issue?

  • Podman container start:
podman run -d \
  --device nvidia.com/gpu=all \
  --memory=100g \
  -v ollama:$HOME/.ollama \
  -v /local_path/ollama/models:/models \
  -p 11434:11434 \
  -e OLLAMA_MODELS=/models \
  -e CUDA_VISIBLE_DEVICES=0,1 \
  -e OLLAMA_SCHED_SPREAD=1 \
  -e OLLAMA_GPU_LAYERS=12 \
  --name ollama ollama/ollama:0.6.6
  • Model:
podman exec -it ollama ollama ps
  NAME          ID              SIZE     PROCESSOR    UNTIL
  gemma3:27b    a418f5838eaf    26 GB    100% GPU     22 seconds from now
  • GPU
nvidia-smi
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.06              Driver Version: 555.42.06      CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA L40                     Off |   00000000:06:10.0 Off |                    0 |
| N/A   40C    P0            106W /  300W |   33380MiB /  46068MiB |      7%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA L40                     Off |   00000000:06:11.0 Off |                    0 |
| N/A   51C    P0            224W /  300W |   38519MiB /  46068MiB |     80%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A    426247      C   python3                                     23422MiB |
|    0   N/A  N/A   1517593      C   /usr/bin/ollama                              9944MiB |
|    1   N/A  N/A    425483      C   /usr/local/bin/python3.12                     850MiB |
|    1   N/A  N/A    425581      C   /usr/local/bin/python                       15962MiB |
|    1   N/A  N/A    425676      C   /usr/local/bin/python3.11                    4004MiB |
|    1   N/A  N/A    426050      C   /usr/bin/python3                             3014MiB |
|    1   N/A  N/A    426359      C   python                                       2242MiB |
|    1   N/A  N/A   1517593      C   /usr/bin/ollama                             12410MiB |
+-----------------------------------------------------------------------------------------+

Relevant log output

time=2025-04-24T05:53:38.718Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-24T05:53:38.718Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-24T05:53:39.182Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-24T05:53:39.221Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-24T05:53:39.225Z level=INFO source=sched.go:738 msg="new model will fit in available VRAM, loading" model=/models/blobs/sha256-e796792eba26c4d3b04b0ac5adb01a453dd9ec2dfd83b6c59cbf6fe5f30b0f68 library=cuda parallel=4 required="24.3 GiB"
time=2025-04-24T05:53:39.641Z level=INFO source=server.go:105 msg="system memory" total="251.8 GiB" free="214.7 GiB" free_swap="0 B"
time=2025-04-24T05:53:39.643Z level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=63 layers.offload=63 layers.split=32,31 memory.available="[18.5 GiB 15.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="24.3 GiB" memory.required.partial="24.3 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[13.4 GiB 10.8 GiB]" memory.weights.total="15.4 GiB" memory.weights.repeating="14.3 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="1.6 GiB" memory.graph.partial="1.6 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-04-24T05:53:39.738Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-24T05:53:39.745Z level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-04-24T05:53:39.754Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-04-24T05:53:39.754Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-04-24T05:53:39.754Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-04-24T05:53:39.754Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-04-24T05:53:39.754Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-04-24T05:53:39.754Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /models/blobs/sha256-e796792eba26c4d3b04b0ac5adb01a453dd9ec2dfd83b6c59cbf6fe5f30b0f68 --ctx-size 8192 --batch-size 512 --n-gpu-layers 63 --threads 8 --parallel 4 --tensor-split 32,31 --port 43909"
time=2025-04-24T05:53:39.754Z level=INFO source=sched.go:451 msg="loaded runners" count=1
time=2025-04-24T05:53:39.754Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-24T05:53:39.755Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-24T05:53:39.767Z level=INFO source=runner.go:866 msg="starting ollama engine"
time=2025-04-24T05:53:39.768Z level=INFO source=runner.go:929 msg="Server listening on 127.0.0.1:43909"
time=2025-04-24T05:53:39.795Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-24T05:53:39.867Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-24T05:53:39.869Z level=WARN source=ggml.go:152 msg="key not found" key=general.name default=""
time=2025-04-24T05:53:39.869Z level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
time=2025-04-24T05:53:39.869Z level=INFO source=ggml.go:72 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1247 num_key_values=37
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA L40, compute capability 8.9, VMM: yes
  Device 1: NVIDIA L40, compute capability 8.9, VMM: yes
time=2025-04-24T05:53:40.006Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2025-04-24T05:53:40.099Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-04-24T05:53:40.246Z level=INFO source=ggml.go:298 msg="model weights" buffer=CUDA1 size="8.8 GiB"
time=2025-04-24T05:53:40.246Z level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="1.1 GiB"
time=2025-04-24T05:53:40.246Z level=INFO source=ggml.go:298 msg="model weights" buffer=CUDA0 size="7.4 GiB"
[GIN] 2025/04/24 - 05:53:41 | 200 |    1.020519ms |      10.0.2.100 | GET      "/api/tags"
time=2025-04-24T05:53:42.276Z level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-04-24T05:53:42.282Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-04-24T05:53:42.282Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-04-24T05:53:42.282Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-04-24T05:53:42.282Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-04-24T05:53:42.282Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-04-24T05:53:42.344Z level=INFO source=ggml.go:556 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="672.5 MiB"
time=2025-04-24T05:53:42.344Z level=INFO source=ggml.go:556 msg="compute graph" backend=CUDA1 buffer_type=CUDA1 size="714.5 MiB"
time=2025-04-24T05:53:42.344Z level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="10.5 MiB"
time=2025-04-24T05:53:42.516Z level=INFO source=server.go:619 msg="llama runner started in 2.76 seconds"
[GIN] 2025/04/24 - 05:53:45 | 200 |  6.732202474s |      10.0.2.100 | POST     "/api/generate"
[GIN] 2025/04/24 - 05:53:45 | 200 |  5.667186798s |      10.0.2.100 | POST     "/api/generate"
[GIN] 2025/04/24 - 05:53:45 | 200 |  6.983914332s |      10.0.2.100 | POST     "/api/generate"
time=2025-04-24T05:53:46.505Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-24T05:53:46.552Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/04/24 - 05:53:48 | 200 |  1.786526636s |      10.0.2.100 | POST     "/api/generate"
[GIN] 2025/04/24 - 05:53:48 | 200 |  2.082799701s |      10.0.2.100 | POST     "/api/generate"
[GIN] 2025/04/24 - 05:53:52 | 200 |     914.588µs |      10.0.2.100 | GET      "/api/tags"
[GIN] 2025/04/24 - 05:54:17 | 200 |     910.382µs |      10.0.2.100 | GET      "/api/tags"
[GIN] 2025/04/24 - 05:54:28 | 200 |     856.408µs |      10.0.2.100 | GET      "/api/tags"
[GIN] 2025/04/24 - 05:54:53 | 200 |     894.086µs |      10.0.2.100 | GET      "/api/tags"
[GIN] 2025/04/24 - 05:55:04 | 200 |     902.539µs |      10.0.2.100 | GET      "/api/tags"
time=2025-04-24T05:55:13.410Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/04/24 - 05:55:15 | 200 |  1.922337536s |      10.0.2.100 | POST     "/api/generate"
[GIN] 2025/04/24 - 05:55:29 | 200 |     904.382µs |      10.0.2.100 | GET      "/api/tags"
[GIN] 2025/04/24 - 05:55:40 | 200 |     958.213µs |      10.0.2.100 | GET      "/api/tags"
[GIN] 2025/04/24 - 05:56:05 | 200 |     1.16004ms |      10.0.2.100 | GET      "/api/tags"
[GIN] 2025/04/24 - 05:56:16 | 200 |     899.695µs |      10.0.2.100 | GET      "/api/tags"
[GIN] 2025/04/24 - 05:56:41 | 200 |     893.896µs |      10.0.2.100 | GET      "/api/tags"
[GIN] 2025/04/24 - 05:56:52 | 200 |     877.171µs |      10.0.2.100 | GET      "/api/tags"
time=2025-04-24T05:57:07.490Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/04/24 - 05:57:08 | 200 |  1.442114545s |      10.0.2.100 | POST     "/api/generate"
[GIN] 2025/04/24 - 05:57:17 | 200 |     878.863µs |      10.0.2.100 | GET      "/api/tags"
time=2025-04-24T05:57:22.735Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/04/24 - 05:57:24 | 200 |  1.491941632s |      10.0.2.100 | POST     "/api/generate"
time=2025-04-24T05:57:26.305Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-24T05:57:26.438Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-24T05:57:26.557Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
ggml.c:1584: GGML_ASSERT(view_src == NULL || data_size == 0 || data_size + view_offs <= ggml_nbytes(view_src)) failed
SIGSEGV: segmentation violation
PC=0x7175327e2817 m=19 sigcode=1 addr=0x20d803f94
signal arrived during cgo execution

goroutine 8 gp=0xc000504c40 m=19 mp=0xc000581008 [syscall]:
runtime.cgocall(0x58bcaaea5480, 0xc002d15808)
        runtime/cgocall.go:167 +0x4b fp=0xc002d157e0 sp=0xc002d157a8 pc=0x58bcaa05e14b
github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_view_1d(0x716f94932e90, 0x716f94937840, 0x150000, 0x680400)
        _cgo_gotypes.go:1475 +0x4b fp=0xc002d15808 sp=0xc002d157e0 pc=0x58bcaa45f70b
github.com/ollama/ollama/ml/backend/ggml.(*Tensor).View.func1(...)
        github.com/ollama/ollama/ml/backend/ggml/ggml.go:1014
github.com/ollama/ollama/ml/backend/ggml.(*Tensor).View(0xc004a06108, {0x58bcab3640b0?, 0xc004a140f0?}, 0x680400, {0xc00661a0e8?, 0xc000504c40?, 0xc000054508?})
        github.com/ollama/ollama/ml/backend/ggml/ggml.go:1014 +0xfa fp=0xc002d15928 sp=0xc002d15808 pc=0x58bcaa46965a
github.com/ollama/ollama/model/models/gemma3.(*TextModel).Forward(0xc000346150, {0x58bcab3640b0, 0xc004a140f0}, {0x58bcab36cab0?, 0xc004a06060?}, {0x58bcab36cab0, 0xc004a060c0}, {0x58bcab36cab0, 0xc004a060d8}, {{0x58bcab36cab0, ...}, ...}, ...)
        github.com/ollama/ollama/model/models/gemma3/model_text.go:182 +0x26d fp=0xc002d15a88 sp=0xc002d15928 pc=0x58bcaa4feb2d
github.com/ollama/ollama/model/models/gemma3.(*Model).Forward(0xc003224300, {0x58bcab3640b0, 0xc004a140f0}, {{0x58bcab36cab0, 0xc004a06060}, {0xc004a14090, 0x2, 0x2}, {0xc00660a000, 0x200, ...}, ...})
        github.com/ollama/ollama/model/models/gemma3/model.go:153 +0x1f1 fp=0xc002d15b88 sp=0xc002d15a88 pc=0x58bcaa4fd631
github.com/ollama/ollama/model.Forward({0x58bcab3640b0, 0xc004a140f0}, {0x58bcab35aa90, 0xc003224300}, {0xc00310d000, 0x200, 0x200}, {{0x58bcab36cab0, 0xc004a06060}, {0xc004a14090, ...}, ...})
        github.com/ollama/ollama/model/model.go:308 +0x1cd fp=0xc002d15c70 sp=0xc002d15b88 pc=0x58bcaa4926ed
github.com/ollama/ollama/runner/ollamarunner.(*Server).processBatch(0xc0000c27e0)
        github.com/ollama/ollama/runner/ollamarunner/runner.go:478 +0x476 fp=0xc002d15f98 sp=0xc002d15c70 pc=0x58bcaa514ab6
github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0xc0000c27e0, {0x58bcab35bdf0, 0xc0000bacd0})
        github.com/ollama/ollama/runner/ollamarunner/runner.go:364 +0x4e fp=0xc002d15fb8 sp=0xc002d15f98 pc=0x58bcaa5145ee
github.com/ollama/ollama/runner/ollamarunner.Execute.gowrap2()
        github.com/ollama/ollama/runner/ollamarunner/runner.go:906 +0x28 fp=0xc002d15fe0 sp=0xc002d15fb8 pc=0x58bcaa5190e8
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc002d15fe8 sp=0xc002d15fe0 pc=0x58bcaa068b81
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
        github.com/ollama/ollama/runner/ollamarunner/runner.go:906 +0xb37
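The assertion at `ggml.c:1584` fires when a requested 1-D view would extend past the end of its source tensor, i.e. when `data_size + view_offs > ggml_nbytes(view_src)`. A minimal Go sketch of that bounds check, using the `ne`/offset arguments visible in the `ggml_view_1d` frame above (the 4-byte element size is an assumption for illustration; the real size depends on the tensor's dtype, and `viewOK` is a hypothetical helper, not ollama's API):

```go
package main

import "fmt"

// viewOK mirrors the failing condition from ggml.c:1584 (simplified:
// the view_src == NULL case is omitted): a view is valid only if its
// byte span fits inside the source tensor's buffer.
func viewOK(srcBytes, viewOffs, dataSize uint64) bool {
	return dataSize == 0 || dataSize+viewOffs <= srcBytes
}

func main() {
	// Arguments from the crash frame: ggml_view_1d(ctx, src, ne=0x150000, offs=0x680400).
	// Assuming 4-byte elements, the view spans dataSize bytes starting at offs.
	const ne, offs = 0x150000, 0x680400
	dataSize := uint64(ne) * 4 // 0x540000 bytes

	srcBytes := uint64(offs) + dataSize - 1 // source one byte too small
	fmt.Println(viewOK(srcBytes, offs, dataSize)) // false -> GGML_ASSERT fires
}
```

Under this reading, the runner computed a KV-cache view whose offset plus length exceeded the allocated source buffer, tripping the assert and the subsequent SIGSEGV.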

OS

Linux, Docker

GPU

Nvidia

CPU

AMD

Ollama version

0.6.6

GiteaMirror added the bug label 2026-04-12 18:38:03 -05:00

@jetnet commented on GitHub (Apr 24, 2025):

[ollama-0.6.6-crash.log.gz](https://github.com/user-attachments/files/19896131/ollama-0.6.6-crash.log.gz)


@jetnet commented on GitHub (May 5, 2025):

FYI: no more crash with 0.6.7, but with a "bonus": warning for every request :)

ollama[1931305]: time=2025-05-05T17:29:03.943Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
ollama[1931305]: [GIN] 2025/05/05 - 17:29:04 | 200 |  3.402782133s |      10.0.2.100 | POST     "/api/chat"
Reference: github-starred/ollama#6830