[GH-ISSUE #12770] Nvidia Tesla T4 doesn't work since 0.9.3 #8471

Closed
opened 2026-04-12 21:09:49 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @klapaudius on GitHub (Oct 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12770

What is the issue?

I found that since 0.9.3 the Tesla T4 is no longer used and requests fall back to the CPU.
I tried each version and used a llama3.1 model to compare output speed. It drops from 35.54 tokens/s with 0.9.2 to 2.32 tokens/s with 0.9.3.
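For reference, throughput figures like these can be obtained with `ollama run --verbose`, which prints an eval rate after each response (a sketch; the model and invocation here are illustrative, not necessarily the exact commands used for the numbers above):

```shell
# --verbose prints timing stats (including eval rate in tokens/s) after each reply
docker exec -it ollama ollama run llama3.1 --verbose
```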

Relevant log output

[ec2-user@awsxxxxxxxxxxxx ~]$ docker run -d --gpus=all -e OLLAMA_ORIGINS="*" -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_DRIVER_CAPABILITIES=compute,utility -e OLLAMA_HEADERS='Content-Type,Authorization,Access-Control-Allow-Origin' -v ollama:/root/.ollama -p 11434:11434 --name ollama --restart always ollama/ollama:0.9.3
0d531f39a396b46d278cee2462e94c30591d44bf9bd30718f913609818f6a902
[ec2-user@awsxxxxxxxxxxxx ~]$ nvidia-smi
Fri Oct 24 16:42:30 2025
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.105.01   Driver Version: 515.105.01   CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   27C    P8     9W /  70W |      3MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
[ec2-user@awsxxxxxxxxxxxx ~]$ docker logs ollama 2>&1 | grep -E "GPU|CUDA|vram|discovering"
time=2025-10-24T16:42:10.580Z level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-10-24T16:42:10.583Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-10-24T16:42:11.184Z level=WARN source=cuda_common.go:65 msg="old CUDA driver detected - please upgrade to a newer driver" version=11.7
time=2025-10-24T16:42:11.184Z level=INFO source=types.go:130 msg="inference compute" id=GPU-51db25ba-c1ca-57bf-f716-8d5fc17f1694 library=cuda variant=v11 compute=7.5 driver=11.7 name="Tesla T4" total="14.8 GiB" available="14.7 GiB"
time=2025-10-24T16:42:22.986Z level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe gpu=GPU-51db25ba-c1ca-57bf-f716-8d5fc17f1694 parallel=2 available=15738994688 required="6.2 GiB"
ggml_cuda_init: failed to initialize CUDA: CUDA driver version is insufficient for CUDA runtime version
load_backend: loaded CUDA backend from /usr/lib/ollama/libggml-cuda.so
[ec2-user@awsxxxxxxxxxxxx ~]$ docker stop ollama
ollama
[ec2-user@awsxxxxxxxxxxxx ~]$ docker rm ollama
ollama
[ec2-user@awsxxxxxxxxxxxx ~]$ docker run -d --gpus=all -e OLLAMA_ORIGINS="*" -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_DRIVER_CAPABILITIES=compute,utility -e OLLAMA_HEADERS='Content-Type,Authorization,Access-Control-Allow-Origin' -v ollama:/root/.ollama -p 11434:11434 --name ollama --restart always ollama/ollama:0.9.2
c3d55f3fabddf8f7981b04fcf6045c4b3586ff5ef24fe7736f5334e570e4d7d2
[ec2-user@awsxxxxxxxxxxxx ~]$ nvidia-smi
Fri Oct 24 16:43:35 2025
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.105.01   Driver Version: 515.105.01   CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   31C    P0    25W /  70W |   6162MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A     16803      C   /usr/bin/ollama                  6159MiB |
+-----------------------------------------------------------------------------+
[ec2-user@awsxxxxxxxxxxxx ~]$ docker logs ollama 2>&1 | grep -E "GPU|CUDA|vram|discovering"
time=2025-10-24T16:43:12.521Z level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-10-24T16:43:12.522Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-10-24T16:43:13.127Z level=INFO source=types.go:130 msg="inference compute" id=GPU-51db25ba-c1ca-57bf-f716-8d5fc17f1694 library=cuda variant=v11 compute=7.5 driver=11.7 name="Tesla T4" total="14.8 GiB" available="14.7 GiB"
time=2025-10-24T16:43:24.660Z level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe gpu=GPU-51db25ba-c1ca-57bf-f716-8d5fc17f1694 parallel=2 available=15738994688 required="6.2 GiB"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v11/libggml-cuda.so
time=2025-10-24T16:43:25.145Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
llama_model_load_from_file_impl: using device CUDA0 (Tesla T4) - 14917 MiB free
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors:        CUDA0 model buffer size =  4155.99 MiB
llama_context:  CUDA_Host  output buffer size =     1.01 MiB
llama_kv_cache_unified:      CUDA0 KV buffer size =  1024.00 MiB
llama_context:      CUDA0 compute buffer size =   560.00 MiB
llama_context:  CUDA_Host compute buffer size =    24.01 MiB

OS

Docker

GPU

Nvidia

CPU

Intel

Ollama version

0.9.3 and newer

GiteaMirror added the bug label 2026-04-12 21:09:49 -05:00
Author
Owner

@rick-github commented on GitHub (Oct 24, 2025):

time=2025-10-24T16:42:11.184Z level=WARN source=cuda_common.go:65 msg="old CUDA driver detected - please upgrade to a newer driver" version=11.7
ggml_cuda_init: failed to initialize CUDA: CUDA driver version is insufficient for CUDA runtime version

ollama has dropped support for CUDA 11. Upgrade your driver to CUDA 12 or CUDA 13.
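A quick way to check whether the host driver is new enough, plus the pinned-image stopgap from the working run above (a sketch assuming the same container setup; the driver-upgrade procedure itself depends on the distro/AMI):

```shell
# The "CUDA Version" nvidia-smi reports is the maximum the *driver* supports,
# not what is installed in the container. CUDA 12 needs a Linux driver >= 525.60.13;
# driver 515.105.01 only supports up to CUDA 11.7.
nvidia-smi --query-gpu=driver_version,name --format=csv,noheader

# Interim workaround until the host driver is upgraded: stay pinned to 0.9.2,
# which still loads the CUDA v11 backend (see the 0.9.2 run in the report)
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama --restart always ollama/ollama:0.9.2
```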

Author
Owner

@pdevine commented on GitHub (Oct 24, 2025):

As @rick-github mentioned, you'll need to upgrade to CUDA 12. T4s are still supported. I'll go ahead and close the issue as answered.
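After the driver upgrade, GPU use on the current image can be confirmed quickly (a sketch, assuming the container name from the report):

```shell
# The PROCESSOR column should show ~100% GPU for the loaded model
docker exec -it ollama ollama ps
# The server log should again report layers being offloaded to the Tesla T4
docker logs ollama 2>&1 | grep -E "offloaded|inference compute"
```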

Reference: github-starred/ollama#8471