[GH-ISSUE #7558] llama3.2-vision crash on multiple cuda GPUs - unspecified launch failure #30572

Closed
opened 2026-04-22 10:19:27 -05:00 by GiteaMirror · 13 comments

Originally created by @HuronExplodium on GitHub (Nov 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7558

Originally assigned to: @mxyng on GitHub.

What is the issue?

Running llama3.2-vision:11b: works with text and images
Running llama3.2-vision:90b: works with text, segfaults on images
Running llava: works with text and images
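
For reference, one way to trigger the failing case from the CLI (a sketch: the model tag comes from this report, the image path is a placeholder):

ollama run llama3.2-vision:90b "describe this frame in detail. /path/to/frame.png"

For vision models the CLI detects the file path in the prompt and attaches the image to the request.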

Debug log from segfault with text and image:
mllama_model_load: description: vision encoder for Mllama
mllama_model_load: GGUF version: 3
mllama_model_load: alignment: 32
mllama_model_load: n_tensors: 512
mllama_model_load: n_kv: 17
mllama_model_load: ftype: f16
mllama_model_load:
mllama_model_load: vision using CUDA backend
time=2024-11-07T04:46:42.696Z level=DEBUG source=server.go:615 msg="model load completed, waiting for server to become available" status="llm server loading model"
mllama_model_load: compute allocated memory: 2853.34 MB
time=2024-11-07T04:46:43.199Z level=INFO source=server.go:606 msg="llama runner started in 23.62 seconds"
time=2024-11-07T04:46:43.199Z level=DEBUG source=sched.go:462 msg="finished setting up runner" model=/home/user/.ollama/models/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7
time=2024-11-07T04:46:43.452Z level=DEBUG source=routes.go:1457 msg="chat request" images=1 prompt="<|start_header_id|>user<|end_header_id|>\n\n[img-0]<|image|>this is a random frame. describe in detail everything you can interpret. Ideally keep your response concise and information dense since this will be read in a chat room. ANYTHING OVER 200 CHARACTERS WILL BE CUT OFF.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
time=2024-11-07T04:46:46.139Z level=DEBUG source=image.go:175 msg="storing image embeddings in cache" entry=0 used=0001-01-01T00:00:00.000Z
time=2024-11-07T04:46:46.139Z level=DEBUG source=cache.go:99 msg="loading cache slot" id=0 cache=0 prompt=61 used=0 remaining=61
CUDA error: unspecified launch failure
current device: 3, in function ggml_backend_cuda_synchronize at ggml-cuda.cu:2508
cudaStreamSynchronize(cuda_ctx->stream())
ggml-cuda.cu:132: CUDA error
SIGSEGV: segmentation violation
PC=0x72e7ad3ecc57 m=7 sigcode=1 addr=0x213203fcc
signal arrived during cgo execution

goroutine 7 gp=0xc00029c000 m=7 mp=0xc000302008 [syscall]:
runtime.cgocall(0x60e54306eeb0, 0xc000185b60)
runtime/cgocall.go:157 +0x4b fp=0xc000185b38 sp=0xc000185b00 pc=0x60e542df13cb
github.com/ollama/ollama/llama._Cfunc_llama_decode(0x72e7180064a0, {0x37, 0x72e7183ae990, 0x0, 0x0, 0x72e7183af1a0, 0x72e7183af9b0, 0x72e7183b01c0, 0x72d1281ef3e0, 0x0, ...})
_cgo_gotypes.go:543 +0x52 fp=0xc000185b60 sp=0xc000185b38 pc=0x60e542eee952
github.com/ollama/ollama/llama.(*Context).Decode.func1(0x60e54306aceb?, 0x72e7180064a0?)
github.com/ollama/ollama/llama/llama.go:167 +0xd8 fp=0xc000185c80 sp=0xc000185b60 pc=0x60e542ef0e78
github.com/ollama/ollama/llama.(*Context).Decode(0xc0001ec140?, 0x1?)
github.com/ollama/ollama/llama/llama.go:167 +0x17 fp=0xc000185cc8 sp=0xc000185c80 pc=0x60e542ef0cd7
main.(*Server).processBatch(0xc0001d0120, 0xc000234000, 0xc000234070)
github.com/ollama/ollama/llama/runner/runner.go:424 +0x29e fp=0xc000185ed0 sp=0xc000185cc8 pc=0x60e543069d1e
main.(*Server).run(0xc0001d0120, {0x60e5433a8a40, 0xc0001a60a0})
github.com/ollama/ollama/llama/runner/runner.go:338 +0x1a5 fp=0xc000185fb8 sp=0xc000185ed0 pc=0x60e543069705
main.main.gowrap2()
github.com/ollama/ollama/llama/runner/runner.go:907 +0x28 fp=0xc000185fe0 sp=0xc000185fb8 pc=0x60e54306dee8
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000185fe8 sp=0xc000185fe0 pc=0x60e542e59de1
created by main.main in goroutine 1
github.com/ollama/ollama/llama/runner/runner.go:907 +0xcab

goroutine 1 gp=0xc0000061c0 m=nil [IO wait]:
runtime.gopark(0xc000034008?, 0x0?, 0xc0?, 0x61?, 0xc00002d8c0?)
runtime/proc.go:402 +0xce fp=0xc0001f5888 sp=0xc0001f5868 pc=0x60e542e2800e
runtime.netpollblock(0xc00002d920?, 0x42df0b26?, 0xe5?)

OS

No response

GPU

Nvidia

CPU

No response

Ollama version

0.4

GiteaMirror added the linux, nvidia, bug labels 2026-04-22 10:19:27 -05:00

@jessegross commented on GitHub (Nov 7, 2024):

Can you please post the full log?


@HuronExplodium commented on GitHub (Nov 7, 2024):

> Can you please post the full log?

sure thing
log.txt (https://github.com/user-attachments/files/17667650/log.txt)


@HuronExplodium commented on GitHub (Nov 7, 2024):

Interesting, seemingly the same thing now happens with mistral-large with 32k context on text only. (since 0.4)


@jessegross commented on GitHub (Nov 7, 2024):

Thanks for the logs. If possible, can you try building main from source? This might be the same as #7546, which was recently fixed.

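For reference, the rough shape of a from-source build at the time, per the repository's docs/development.md (the exact make invocation is an assumption and may differ by commit):

git clone https://github.com/ollama/ollama.git
cd ollama
make -j 5        # assumed target: builds the native runners under llama/build/<os>-<arch>/runners/
go build .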

@dhiltgen commented on GitHub (Nov 7, 2024):

Should be fixed by #7560


@HuronExplodium commented on GitHub (Nov 8, 2024):

Thanks @dhiltgen! On current main it looks like mistral-large is OK now, but I still get the SIGSEGV on vision:90b with the image.


@HuronExplodium commented on GitHub (Nov 8, 2024):

log2.txt (https://github.com/user-attachments/files/17671228/log2.txt)


@HuronExplodium commented on GitHub (Nov 8, 2024):

Not sure if I should re-open this or start a new one (?) @dhiltgen


@dhiltgen commented on GitHub (Nov 8, 2024):

@HuronExplodium it looks like you built from source. Can you run ldd on the various binaries to confirm it was correctly linked? If you picked up the link fix and things are correct there but you're still seeing the failure, we should continue investigating.


@HuronExplodium commented on GitHub (Nov 8, 2024):

Hmm... Looks right to me

user@newdev:~/ollama-git/ollama/llama/build/linux-amd64/runners/cuda_v12$ ldd *
libggml_cuda_v12.so:
	linux-vdso.so.1 (0x00007ffee1fab000)
	libcuda.so.1 => /lib/x86_64-linux-gnu/libcuda.so.1 (0x00007ea8cc400000)
	libcublas.so.12 => /usr/local/cuda/targets/x86_64-linux/lib/libcublas.so.12 (0x00007ea8c5800000)
	libcudart.so.12 => /usr/local/cuda/targets/x86_64-linux/lib/libcudart.so.12 (0x00007ea8c5200000)
	libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007ea8c4e00000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007ea8cc317000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007ea91813f000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ea8c4a00000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007ea91813a000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007ea918135000)
	librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007ea918130000)
	libcublasLt.so.12 => /usr/local/cuda/targets/x86_64-linux/lib/libcublasLt.so.12 (0x00007ea8a2e00000)
	/lib64/ld-linux-x86-64.so.2 (0x00007ea918180000)
ollama_llama_server:
	linux-vdso.so.1 (0x00007ffe127c2000)
	libggml_cuda_v12.so => not found
	libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x000072e7b1600000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x000072e7b23c9000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x000072e7b239b000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x000072e7b1200000)
	/lib64/ld-linux-x86-64.so.2 (0x000072e7b24c3000)

@HuronExplodium commented on GitHub (Nov 8, 2024):

> libggml_cuda_v12.so => not found

Or maybe it's this guy on ollama_llama_server
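
Worth noting: ldd reporting "not found" here is not conclusive by itself, since the loader can still resolve the library at runtime through LD_LIBRARY_PATH. One way to confirm, assuming the library sits alongside the binary as in the listing above (ldd honors LD_LIBRARY_PATH):

LD_LIBRARY_PATH=. ldd ollama_llama_server | grep libggml_cuda_v12

If the linkage is correct, this should resolve to the local ./libggml_cuda_v12.so.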


@HuronExplodium commented on GitHub (Nov 8, 2024):

BTW this was from 3d25e7bf8c32391a719336e5d990be9dee263f02


@dhiltgen commented on GitHub (Nov 8, 2024):

It looks like the links are correct. The defect was that libggml_cuda_v12.so was incorrectly linked against the v11 CUDA libraries if they were present on the system.

We've reproduced the failure: there's a bug relating to the cross-attention implementation on CUDA with multiple GPUs.
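
Until a fix lands, a possible interim workaround (an assumption, not something confirmed in this thread) is to restrict the runner to a single CUDA device, if the model fits on one GPU; Ollama honors CUDA_VISIBLE_DEVICES:

CUDA_VISIBLE_DEVICES=0 ollama serve    # expose only GPU 0, avoiding the multi-GPU cross-attention path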
