[GH-ISSUE #3701] llama runner process no longer running: -1 CUDA error: CUBLAS_STATUS_EXECUTION_FAILED #48792

Closed
opened 2026-04-28 09:16:33 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @holycrypto on GitHub (Apr 17, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3701

What is the issue?

Followed the manual: https://github.com/ollama/ollama/blob/9df6c85c3a51ce00d6a65be9dd8a06af07b24af5/docs/tutorials/nvidia-jetson.md
But running the model gives an error:

ollama run mistral-jetson

Errors

Error: llama runner process no longer running: -1 CUDA error: CUBLAS_STATUS_EXECUTION_FAILED
  current device: 0, in function ggml_cuda_mul_mat_batched_cublas at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:1848
  cublasGemmBatchedEx(ctx.cublas_handle(), CUBLAS_OP_T, CUBLAS_OP_N, ne01, ne11, ne10, alpha, (const void **) (ptrs_src.get() + 0*ne23), CUDA_R_16F, nb01/nb00, (const void **) (ptrs_src.get() + 1*ne23), CUDA_R_16F, nb11/nb10, beta, ( void **) (ptrs_dst.get() + 0*ne23), cu_data_type, ne01, ne23, cu_compute_type, CUBLAS_GEMM_DEFAULT_TENSOR_OP)
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !"CUDA error"
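For anyone triaging a similar crash, the server log usually carries more context than the one-line CLI error. A minimal sketch, assuming either a systemd-managed host install or a Docker container (the container name is a placeholder):

```bash
# Host install managed by systemd: show the last lines of the server log
journalctl -u ollama --no-pager | tail -n 80

# Container install: the server logs to stdout/stderr
docker logs <ollama-container> 2>&1 | tail -n 80
```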

What did you expect to see?

No response

Steps to reproduce

No response

Are there any recent changes that introduced the issue?

No response

OS

Linux

Architecture

arm64

Platform

No response

Ollama version

0.1.32

GPU

Nvidia

GPU info

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 540.2.0 Driver Version: N/A CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Orin (nvgpu) N/A | N/A N/A | N/A |
| N/A N/A N/A N/A / N/A | Not Supported | N/A N/A |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+

CPU

No response

Other software

No response

GiteaMirror added the bug label 2026-04-28 09:16:33 -05:00
Author
Owner

@holycrypto commented on GitHub (Apr 17, 2024):

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:08:11_PDT_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0

Author
Owner

@Aayog commented on GitHub (Apr 25, 2024):

Getting the same issue with the llama3:70b

ollama run llama3:70b
Error: llama runner process no longer running: 1

It works fine with other models, so I think the issue in my case is that my GPU only has 24GB while this model needs about 40GB. The other reports here might be down to the same thing.
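A rough way to check whether a model can fit is to compare the pulled model's size against free GPU memory; a minimal sketch, assuming a discrete NVIDIA GPU where nvidia-smi reports memory (the Jetson's integrated GPU shows N/A above):

```bash
# Size of the pulled model on disk (a rough lower bound on the memory it needs)
ollama list

# Used vs. total VRAM on a discrete NVIDIA GPU
nvidia-smi --query-gpu=memory.used,memory.total --format=csv
```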

Author
Owner

@holycrypto commented on GitHub (Apr 28, 2024):

I followed this tutorial: ollama 🆕 - NVIDIA Jetson AI Lab (https://www.jetson-ai-lab.com/tutorial_ollama.html) and still got the same error output.

jetson-containers docker env parameters:

CMAKE_CUDA_COMPILER	/usr/local/cuda/bin/nvcc
CUDA_ARCHITECTURES	87
CUDA_BIN_PATH	/usr/local/cuda/bin
CUDA_HOME	/usr/local/cuda
CUDA_NVCC_EXECUTABLE	/usr/local/cuda/bin/nvcc
CUDA_TOOLKIT_ROOT_DIR	/usr/local/cuda
CUDAARCHS	87
CUDACXX	/usr/local/cuda/bin/nvcc
CUDNN_LIB_INCLUDE_PATH	/usr/include
CUDNN_LIB_PATH	/usr/lib/aarch64-linux-gnu
DEBIAN_FRONTEND	noninteractive
JETSON_JETPACK	6.0
LANG	en_US.UTF-8
LANGUAGE	en_US:en
LC_ALL	en_US.UTF-8
LD_LIBRARY_PATH	/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda/include:/usr/local/cuda/compat:/usr/local/cuda/lib64:
NVCC_PATH	/usr/local/cuda/bin/nvcc
NVIDIA_DRIVER_CAPABILITIES	all
NVIDIA_VISIBLE_DEVICES	all
OLLAMA_HOST	0.0.0.0
OLLAMA_MODELS	/ollama
PATH	/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PIP_INDEX_URL	http://jetson.webredirect.org/jp6/cu122
PIP_TRUSTED_HOST	jetson.webredirect.org
SCP_UPLOAD_PASS	nvidia
SCP_UPLOAD_URL	jao-51:/dist/jp6/cu122
SCP_UPLOAD_USER	nvidia
TAR_INDEX_URL	http://jetson.webredirect.org:8000/jp6/cu122
TORCH_NVCC_FLAGS	-Xfatbin -compress-all
TWINE_PASSWORD	NvidiaJetson24
TWINE_REPOSITORY_URL	http://jao-51/jp6/cu122
TWINE_USERNAME	jp6
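For reference, the Jetson AI Lab tutorial starts the container roughly like this (a sketch, not a verbatim copy of the tutorial; `autotag` picks an image matching the local JetPack/L4T release, and the model name is just an example):

```bash
# Launch the Ollama container built for the detected JetPack version
jetson-containers run --name ollama $(autotag ollama)

# In another shell on the host, run a model inside that container
docker exec -it ollama ollama run mistral
```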
Author
Owner

@holycrypto commented on GitHub (Apr 29, 2024):

> Getting the same issue with the llama3:70b
>
> ollama run llama3:70b Error: llama runner process no longer running: 1
>
> It works fine with other models, so I think this issue is that my GPU only has 24GB and this requires 40GB. The other issues might be due to the same.

I have tried all models on a Jetson AGX Orin 64GB, but they all output the same errors. Can you share which version of Jetson device you have?

Author
Owner

@Aspirinkb commented on GitHub (Apr 29, 2024):

same error on AGX Orin 64GB, Jetpack 6.0 DP [L4T 36.2.0]

Author
Owner

@Aspirinkb commented on GitHub (Apr 30, 2024):

I have resolved the issue because I made a silly mistake. In simple terms, I had already started the Ollama service in the system before launching the container. So when I executed ollama run phi3 inside the container, it was actually being processed by the Ollama service outside the container, not the one inside.

Therefore, when I shut down the Ollama service outside the container, started it inside the container, and tried running the model again, it worked successfully.
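For anyone who hits the same symptom, a minimal sketch of the check and fix, assuming the host service is systemd-managed and Ollama is listening on its default port 11434:

```bash
# See whether a host-side Ollama server already owns the default port
sudo ss -tlnp | grep 11434
systemctl status ollama

# Stop the host service so requests from inside the container reach the container's server
sudo systemctl stop ollama
```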

Author
Owner

@holycrypto commented on GitHub (Apr 30, 2024):

> I have resolved the issue because I made a silly mistake. In simple terms, I had already started the Ollama service in the system before launching the container. So when I executed ollama run phi3 inside the container, it was actually being processed by the Ollama service outside the container, not the one inside.
>
> Therefore, when I shut down the Ollama service outside the container, started it inside the container, and tried running the model again, it worked successfully.

Wow, bro, that's right! I resolved it too; I also had an Ollama server running outside the container.

Reference: github-starred/ollama#48792