[GH-ISSUE #10580] 0.6.8 seems unable to run llama4 #53473

Closed
opened 2026-04-29 03:19:18 -05:00 by GiteaMirror · 7 comments

Originally created by @mikehu0 on GitHub (May 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10580

What is the issue?

llama4 worked fine on 0.6.7, but after upgrading to 0.6.8 today it fails.
There is no problem with llama3.2.

mikehu@pcie-test-MS-7D67:~$ ollama --version
ollama version is 0.6.8
mikehu@pcie-test-MS-7D67:~$ ollama pull llama4:latest
pulling manifest
pulling 9d507a36062c: 100% ▕██████████████████████████████████████▏  67 GB
pulling 399a8a5a36db: 100% ▕██████████████████████████████████████▏ 7.8 KB
pulling 24ca191a372b: 100% ▕██████████████████████████████████████▏ 6.0 KB
pulling 8a13cf51fd9e: 100% ▕██████████████████████████████████████▏ 1.1 KB
pulling fc1ffc71ab8e: 100% ▕██████████████████████████████████████▏ 1.6 KB
pulling bee89e20d457: 100% ▕██████████████████████████████████████▏   31 B
pulling f7ce8f326f5d: 100% ▕██████████████████████████████████████▏ 1.1 KB
verifying sha256 digest
writing manifest
removing unused layers
success

mikehu@pcie-test-MS-7D67:~$ ollama run llama3.2-vision:90b
>>> hello
Hello! How can I assist you today?

>>> /bye
mikehu@pcie-test-MS-7D67:~$ ollama stop llama3.2-vision:90b
mikehu@pcie-test-MS-7D67:~$ ollama run llama4:latest
>>> hello
Error: POST predict: Post "http://127.0.0.1:37359/completion": EOF
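
A possible stopgap while this is investigated is pinning the previous release. A minimal sketch, assuming the standard Linux install script honors the OLLAMA_VERSION override:

# roll back to the last known-good version for llama4
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.6.7 sh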

Relevant log output


OS

ubuntu 22.04

GPU

H100

CPU

model name : INTEL(R) XEON(R) GOLD 6526Y

Ollama version

0.6.8

GiteaMirror added the bug label 2026-04-29 03:19:18 -05:00

@rick-github commented on GitHub (May 6, 2025):

Server logs will aid in debugging.
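
For a systemd install, something like this should capture them (a sketch, assuming the default ollama.service unit name):

journalctl -u ollama --no-pager > ollama.log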

@mikehu0 commented on GitHub (May 7, 2025):

Attaching the log file:
ollama.log

Sorry, here is the info for my machine:
mikehu@pcie-test-MS-7D67:~$ nvidia-smi
Tue May 6 13:11:36 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.133.20 Driver Version: 570.133.20 CUDA Version: 12.8 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA H100 PCIe Off | 00000000:17:00.0 Off | 0 |
| N/A 39C P0 50W / 350W | 17MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA L4 Off | 00000000:63:00.0 Off | 0 |
| N/A 36C P8 16W / 72W | 16MiB / 23034MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA L4 Off | 00000000:F1:00.0 Off | 0 |
| N/A 39C P8 17W / 72W | 16MiB / 23034MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 2927 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 2927 G /usr/lib/xorg/Xorg 4MiB |
| 2 N/A N/A 2927 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------------------+

@rick-github commented on GitHub (May 7, 2025):

May 06 13:06:09 pcie-test-MS-7D67 ollama[3244]: [GIN] 2025/05/06 - 13:06:09 | 200 |  53.72143896s |       127.0.0.1 | POST     "/api/generate"
May 06 13:06:17 pcie-test-MS-7D67 ollama[3244]: time=2025-05-06T13:06:17.438+08:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
May 06 13:06:18 pcie-test-MS-7D67 ollama[3244]: CUDA error: an illegal memory access was encountered
May 06 13:06:18 pcie-test-MS-7D67 ollama[3244]:   current device: 0, in function ggml_cuda_mul_mat_q at //ml/backend/ggml/ggml/src/ggml-cuda/mmq.cu:145
May 06 13:06:18 pcie-test-MS-7D67 ollama[3244]:   cudaMemcpyAsync(ids_host.data(), ids->data, ggml_nbytes(ids), cudaMemcpyDeviceToHost, stream)
May 06 13:06:18 pcie-test-MS-7D67 ollama[3244]: //ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:75: CUDA error
May 06 13:06:18 pcie-test-MS-7D67 ollama[7785]: Could not attach to process.  If your uid matches the uid of the target
May 06 13:06:18 pcie-test-MS-7D67 ollama[7785]: process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
May 06 13:06:18 pcie-test-MS-7D67 ollama[7785]: again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
May 06 13:06:18 pcie-test-MS-7D67 ollama[3244]: ptrace: Inappropriate ioctl for device.
May 06 13:06:18 pcie-test-MS-7D67 ollama[3244]: No stack.
May 06 13:06:18 pcie-test-MS-7D67 ollama[3244]: The program is not being run.
May 06 13:06:18 pcie-test-MS-7D67 ollama[3244]: SIGABRT: abort
May 06 13:06:18 pcie-test-MS-7D67 ollama[3244]: PC=0x7cfcb629eb2c m=82 sigcode=18446744073709551610
May 06 13:06:18 pcie-test-MS-7D67 ollama[3244]: signal arrived during cgo execution
May 06 13:06:18 pcie-test-MS-7D67 ollama[3244]: goroutine 70 gp=0xc000508fc0 m=82 mp=0xc001c95808 [syscall]:
May 06 13:06:18 pcie-test-MS-7D67 ollama[3244]: runtime.cgocall(0x5d8fe9324c70, 0xc00032faf8)
May 06 13:06:18 pcie-test-MS-7D67 ollama[3244]:         runtime/cgocall.go:167 +0x4b fp=0xc00032fad0 sp=0xc00032fa98 pc=0x5d8fe84c244b
May 06 13:06:18 pcie-test-MS-7D67 ollama[3244]: github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_sched_graph_compute_async(0x7ce8cc0034b0, 0x7ce890059f80)

Probably same issue as #10590
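
Side note: the "Could not attach to process" / ptrace lines above mean the crash handler could not grab a native backtrace. To get a fuller stack in future logs, temporarily relaxing Yama's ptrace scope should help (a sketch; remember to revert it afterwards):

sudo sysctl -w kernel.yama.ptrace_scope=0   # restore with =1 when done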

@jokeOps commented on GitHub (May 7, 2025):

Same for me.

CPU: AMD Ryzen 9 9950X
RAM: 64 GB
GPU: 3x 3090, 1x P40
Driver: 535.230.02
OS: Ubuntu 24.04.2 + Kubernetes
Ollama image: ollama/ollama:0.6.8

ollama-668448c464-fk99q.log

@rick-github commented on GitHub (May 7, 2025):

[GIN] 2025/05/07 - 10:16:37 | 200 | 44.491917148s |    192.168.88.1 | POST     "/api/generate"
time=2025-05-07T10:17:00.051Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
CUDA error: an illegal memory access was encountered
  current device: 2, in function ggml_cuda_mul_mat_q at //ml/backend/ggml/ggml/src/ggml-cuda/mmq.cu:145
  cudaMemcpyAsync(ids_host.data(), ids->data, ggml_nbytes(ids), cudaMemcpyDeviceToHost, stream)
//ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:75: CUDA error
SIGSEGV: segmentation violation
PC=0x7335216601c7 m=32 sigcode=1 addr=0x20f203f10
signal arrived during cgo execution

goroutine 7 gp=0xc000503180 m=32 mp=0xc001d9a808 [syscall]:
runtime.cgocall(0x5d481e44cc70, 0xc000511af8)
        runtime/cgocall.go:167 +0x4b fp=0xc000511ad0 sp=0xc000511a98 pc=0x5d481d5ea44b
github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_sched_graph_compute_async(0x7323b80046f0, 0x732418005080)

Same issue.
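
For anyone digging further: with asynchronous CUDA calls, the illegal access reported at the cudaMemcpyAsync in mmq.cu may actually have been triggered by an earlier kernel launch. Running the server with CUDA_LAUNCH_BLOCKING=1 serializes launches so the error surfaces at the real culprit (a debugging sketch, assuming a systemd install):

sudo systemctl edit ollama    # add: Environment="CUDA_LAUNCH_BLOCKING=1"
sudo systemctl restart ollama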

@LeleKimi commented on GitHub (May 14, 2025):

Same issue, only with llama4 (llama4:17b-scout-16e-instruct-q4_K_M):
May 14 20:10:55 llama ollama[90778]: time=2025-05-14T20:10:55.554Z level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="10.0 MiB"
May 14 20:10:55 llama ollama[90778]: time=2025-05-14T20:10:55.594Z level=INFO source=server.go:630 msg="llama runner started in 22.93 seconds"
May 14 20:10:56 llama ollama[90778]: CUDA error: an illegal memory access was encountered
May 14 20:10:56 llama ollama[90778]: current device: 0, in function ggml_cuda_mul_mat_q at //ml/backend/ggml/ggml/src/ggml-cuda/mmq.cu:145
May 14 20:10:56 llama ollama[90778]: cudaMemcpyAsync(ids_host.data(), ids->data, ggml_nbytes(ids), cudaMemcpyDeviceToHost, stream)
May 14 20:10:56 llama ollama[90778]: //ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:75: CUDA error
May 14 20:10:56 llama ollama[90778]: SIGSEGV: segmentation violation
May 14 20:10:56 llama ollama[90778]: PC=0x6ffdca2a3e47 m=24 sigcode=1 addr=0x20e403e94
May 14 20:10:56 llama ollama[90778]: signal arrived during cgo execution

@rick-github commented on GitHub (May 14, 2025):

May 14 20:10:56 llama ollama[90778]: current device: 0, in function ggml_cuda_mul_mat_q at //ml/backend/ggml/ggml/src/ggml-cuda/mmq.cu:145

Same issue.
