[GH-ISSUE #9503] NVIDIA GPU drivers not loaded on Jetson Orin Nano #6191

Open
opened 2026-04-12 17:33:38 -05:00 by GiteaMirror · 33 comments

Originally created by @virtualJonesie on GitHub (Mar 4, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9503

What is the issue?

Ollama does not load GPU drivers on the NVIDIA Jetson Orin Nano; the regression goes back to version 0.5.8.

The last version to load these drivers successfully was 0.5.7.

I am running the latest NVIDIA JetPack 6.2 on a newly flashed system. All updates have been applied via apt.

Relevant log output

ollama      | time=2025-03-04T20:48:09.206Z level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[/usr/lib/ollama/cuda_v11/libcudart.so.11.3.109 /usr/lib/ollama/cuda_v12/libcudart.so.12.4.127]"
ollama      | cudaSetDevice err: 35
ollama      | time=2025-03-04T20:48:09.209Z level=DEBUG source=gpu.go:574 msg="Unable to load cudart library /usr/lib/ollama/cuda_v11/libcudart.so.11.3.109: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
ollama      | cudaSetDevice err: 35
ollama      | time=2025-03-04T20:48:09.210Z level=DEBUG source=gpu.go:574 msg="Unable to load cudart library /usr/lib/ollama/cuda_v12/libcudart.so.12.4.127: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
ollama      | time=2025-03-04T20:48:09.210Z level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
ollama      | time=2025-03-04T20:48:09.210Z level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
ollama      | time=2025-03-04T20:48:09.211Z level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="7.4 GiB" available="6.5 GiB"

OS

Linux

GPU

Nvidia

CPU

Other

Ollama version

0.5.12

GiteaMirror added the linux, nvidia, bug labels 2026-04-12 17:33:39 -05:00

@dhiltgen commented on GitHub (Mar 5, 2025):

I just tried on an AGX Orin with Jetpack 6 and it was able to run on the iGPU.

time=2025-03-04T17:15:10.605-08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/tmp/daniel_ollama_test/lib/ollama/libcuda.so* /tmp/daniel_ollama_test/bin/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-03-04T17:15:10.614-08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1]
initializing /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1
dlsym: cuInit - 0xffff30366540
dlsym: cuDriverGetVersion - 0xffff30366570
dlsym: cuDeviceGetCount - 0xffff303665d0
dlsym: cuDeviceGet - 0xffff303665a0
dlsym: cuDeviceGetAttribute - 0xffff30366720
dlsym: cuDeviceGetUuid - 0xffff30366630
dlsym: cuDeviceGetName - 0xffff30366600
dlsym: cuCtxCreate_v3 - 0xffff303669f0
dlsym: cuMemGetInfo_v2 - 0xffff3036f0d0
dlsym: cuCtxDestroy - 0xffff303aa9c0
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2025-03-04T17:15:10.746-08:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1
[GPU-67834ba8-0312-50b2-9286-9b3b02e80059] CUDA totalMem 62841 mb
[GPU-67834ba8-0312-50b2-9286-9b3b02e80059] CUDA freeMem 56142 mb
[GPU-67834ba8-0312-50b2-9286-9b3b02e80059] Compute Capability 8.7
time=2025-03-04T17:15:10.874-08:00 level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2025-03-04T17:15:10.874-08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-67834ba8-0312-50b2-9286-9b3b02e80059 library=cuda variant=jetpack6 compute=8.7 driver=12.2 name=Orin total="61.4 GiB" available="54.8 GiB"

Is it possible you somehow uninstalled the base CUDA driver packages on your system?

On my system I see it installed via:

# dpkg -S /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1
nvidia-l4t-cuda: /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1
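
If the package did turn out to be missing or broken, a minimal sketch of restoring it (package name taken from the dpkg output above; untested on this exact setup):

```sh
# Reinstall the L4T CUDA driver package named in the dpkg -S output above.
sudo apt-get install --reinstall nvidia-l4t-cuda
sudo ldconfig  # refresh the dynamic linker cache afterwards
```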

@virtualJonesie commented on GitHub (Mar 5, 2025):

It appears to be there. I installed via SDKManager.

dpkg -S /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1
nvidia-l4t-cuda: /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1

Only Ollama versions older than 0.5.8 work. I ran every version, starting at 0.5.12 and moving backwards, until it worked. I finally had success at 0.5.7.


@dhiltgen commented on GitHub (Mar 5, 2025):

Can you share a more complete server log from startup with OLLAMA_DEBUG=1 set?

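For reference, one way to capture such a log on a systemd-based install is to stop the service and run the server in the foreground with debug logging (adjust if you run Ollama in Docker):

```sh
# Stop the background service, then run ollama serve in the foreground with
# debug logging enabled, capturing output to a file.
sudo systemctl stop ollama
OLLAMA_DEBUG=1 ollama serve 2>&1 | tee ollama-debug.log
```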

@dhiltgen commented on GitHub (Mar 5, 2025):

Also, please confirm you downloaded the jetpack6 bundle ollama-linux-arm64-jetpack6.tgz (https://github.com/ollama/ollama/releases/download/v0.5.13/ollama-linux-arm64-jetpack6.tgz) and extracted it into the same location where you put ollama.

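A sketch of that step, assuming the default /usr install location used by the Linux install script (the release URL is the one for v0.5.13 linked above):

```sh
# Fetch the JetPack 6 bundle and unpack it over the existing install so the
# JetPack runner libraries land under /usr/lib/ollama next to the binary.
curl -fsSL -o ollama-linux-arm64-jetpack6.tgz \
  https://github.com/ollama/ollama/releases/download/v0.5.13/ollama-linux-arm64-jetpack6.tgz
sudo tar -C /usr -xzf ollama-linux-arm64-jetpack6.tgz
```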

@psocik commented on GitHub (Mar 9, 2025):

time=2025/03/08 19:58:07 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES:0 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:8192 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY:12.6 OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:4 OLLAMA_MAX_QUEUE:2048 OLLAMA_MODELS:/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:3 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-03-08T19:58:07.054Z level=INFO source=images.go:432 msg="total blobs: 32"
time=2025-03-08T19:58:07.054Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-08T19:58:07.055Z level=INFO source=routes.go:1277 msg="Listening on [::]:11434 (version 0.5.13)"
time=2025-03-08T19:58:07.055Z level=DEBUG source=sched.go:106 msg="starting llm scheduler"
time=2025-03-08T19:58:07.055Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-08T19:58:07.056Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-03-08T19:58:07.056Z level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-03-08T19:58:07.056Z level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-03-08T19:58:07.057Z level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1]
initializing /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1
library /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1 load err: /usr/lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so)
time=2025-03-08T19:58:07.058Z level=INFO source=gpu.go:612 msg="Unable to load cudart library /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1: Unable to load /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1 library to query for Nvidia GPUs: /usr/lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so)"
time=2025-03-08T19:58:07.058Z level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcudart.so*
time=2025-03-08T19:58:07.058Z level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/local/nvidia/lib/libcudart.so* /usr/local/nvidia/lib64/libcudart.so* /usr/lib/ollama/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
time=2025-03-08T19:58:07.059Z level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[/usr/lib/ollama/cuda_v11/libcudart.so.11.3.109 /usr/lib/ollama/cuda_v12/libcudart.so.12.8.57 /usr/local/lib_jetson/libcudart.so.12.6.68]"
cudaSetDevice err: 35
time=2025-03-08T19:58:07.059Z level=DEBUG source=gpu.go:574 msg="Unable to load cudart library /usr/lib/ollama/cuda_v11/libcudart.so.11.3.109: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
cudaSetDevice err: 35
time=2025-03-08T19:58:07.060Z level=DEBUG source=gpu.go:574 msg="Unable to load cudart library /usr/lib/ollama/cuda_v12/libcudart.so.12.8.57: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
cudaSetDevice err: 35
time=2025-03-08T19:58:07.061Z level=DEBUG source=gpu.go:574 msg="Unable to load cudart library /usr/local/lib_jetson/libcudart.so.12.6.68: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
time=2025-03-08T19:58:07.061Z level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
time=2025-03-08T19:58:07.061Z level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-03-08T19:58:07.061Z level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="15.3 GiB" available="12.8 GiB

Platform
Machine: aarch64
System: Linux
Distribution: Ubuntu 22.04 Jammy Jellyfish
Release: 5.15.148-tegra
Python: 3.10.12

Hardware:
Model: NVIDIA Jetson Orin NX Engineering Reference Developer Kit
Module: NVIDIA Jetson Orin NX (16GB ram)
SoC: tegra234
CUDA Arch BIN: 8.7
L4T: 36.4.0
Jetpack: 6.1

Libraries
CUDA: 12.6.68
cuDNN: 9.3.0.75
TensorRT: 10.3.0.30
VPI: 3.2.4
Vulkan: 1.3.204
OpenCV: 4.8.0 with CUDA: NO

Same issue; only version 0.5.7 works for me. Ollama runs as a Docker container.
The path /usr/local/lib_jetson/ contains the extracted ollama-linux-arm64-jetpack6.tgz, added to the container.
The Jetson runs on a reComputer J4012 from Seeed Studio.
OS image and flashing guide: https://wiki.seeedstudio.com/reComputer_J4012_Flash_Jetpack/

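A hypothetical reconstruction of that container setup, for anyone trying to reproduce it (the image, runtime flag, and mounts are illustrative, not psocik's exact command):

```sh
# Run the official image with the NVIDIA container runtime, mounting the
# extracted jetpack6 bundle at the path that appears in the log above.
docker run -d --runtime nvidia -p 11434:11434 \
  -e OLLAMA_DEBUG=1 \
  -v /usr/local/lib_jetson:/usr/local/lib_jetson \
  -v ollama:/root/.ollama \
  ollama/ollama
```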

@Ronimsenn commented on GitHub (Mar 10, 2025):

Can confirm this happens after a reflash of the Jetson Orin Nano Developer Kit, following NVIDIA's instructions (see https://www.jetson-ai-lab.com/initial_setup_jon_sdkm.html and https://www.jetson-ai-lab.com/tips_ssd-docker.html) to install Docker after flashing.

Downgrading to version 0.5.7 works.


@dhiltgen commented on GitHub (Mar 10, 2025):

@psocik this error is in libraries that are not part of the Ollama install; they are part of the core NVIDIA packages.

library /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1 load err: /usr/lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so)

If the NVIDIA libraries aren't functional on your system, Ollama won't be able to use the GPU. I'm not sure how to resolve that, but my suspicion is there's a missing package dependency.

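One way to test that theory from inside the affected environment is to compare the glibc version symbols the NVIDIA library requires against the glibc actually available (a sketch; requires binutils for objdump, and the paths are from the log above):

```sh
# Highest GLIBC version symbols referenced by the NVIDIA userspace library...
objdump -T /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so \
  | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n 3
# ...versus the glibc version the system (or container) provides.
ldd --version | head -n 1
```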

@virtualJonesie commented on GitHub (Mar 10, 2025):

But it works on older versions of Ollama. I have GLIBC 2.35 installed. Is it possible that Ollama is looking for a hard-coded version of GLIBC?


@psocik commented on GitHub (Mar 10, 2025):

> @psocik this error is in libraries that are not part of the Ollama install; they are part of the core NVIDIA packages.
>
> library /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1 load err: /usr/lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so)
>
> If the NVIDIA libraries aren't functional on your system, Ollama won't be able to use the GPU. I'm not sure how to resolve that, but my suspicion is there's a missing package dependency.

Unfortunately, I disagree.
I have installed: ldd (Ubuntu GLIBC 2.35-0ubuntu3.8) 2.35
And version 0.5.7 works well. IMO there is an issue with how libraries are searched for since this version.


@virtualJonesie commented on GitHub (Mar 10, 2025):

> > @psocik this error is in libraries that are not part of the Ollama install; they are part of the core NVIDIA packages.
> >
> > library /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1 load err: /usr/lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so)
> >
> > If the NVIDIA libraries aren't functional on your system, Ollama won't be able to use the GPU. I'm not sure how to resolve that, but my suspicion is there's a missing package dependency.
>
> Unfortunately, I disagree. I have installed: ldd (Ubuntu GLIBC 2.35-0ubuntu3.8) 2.35. And version 0.5.7 works well. IMO there is an issue with how libraries are searched for since this version.

I agree.


@gonzal40 commented on GitHub (Mar 29, 2025):

Same issue for me. I have been testing each update to Ollama to see if the issue has been resolved, and I have had to roll back to 0.5.7 each time. As soon as I roll back, it sees the GPU and runs as expected. I am running all of the most current updates to the NVIDIA software on an NVIDIA Jetson AGX Orin 64GB developer kit: JetPack 6.2 and L4T 36.4.3.


@conradicus commented on GitHub (Mar 31, 2025):

Same for me as well. Only 0.5.7 works with the Jetson Orin Nano 8GB developer kit.


@gonzal40 commented on GitHub (Apr 6, 2025):

Is there anything else we can try until this is resolved? Being stuck at an older version of Ollama is prohibiting my development work on the Nvidia Jetson platform.


@goldyfruit commented on GitHub (Apr 10, 2025):

Same for me here.


@agrajagco commented on GitHub (Apr 18, 2025):

@dhiltgen & @conradicus - can you please share which LLM you are running with Ollama? I've installed 0.5.7, and it seemed to install the JetPack-related libraries. Did you both also add the v0.5.13 ollama-linux-arm64-jetpack6.tgz? Which LLM have you pulled into Ollama to test with and confirm that things are working?

I'm planning to confirm I can get things working on 0.5.7 first, and then I'll uninstall and repeat the setup on the current Ollama release so I can provide debugging detail as well (thus the question).

I'm on a Jetson AGX Orin 64GB running JetPack 6.2 and am having no luck in getting a performant session going through ollama.

I've confirmed libcuda.so is present.

root@tor-orin:~# dpkg -S /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1
nvidia-l4t-cuda: /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1

My install was via:
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.5.7 sh

I'm ending up with the message below, with the system falling back to CPU, when using llama3.1:70b.

I'm not sure if it's because I'm attempting to use too "new" an LLM with 0.5.7?

Apr 17 23:08:50 tor-orin ollama[63620]: llm_load_tensors: tensor 'token_embd.weight' (q4_K) (and 0 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead
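
Two quick checks that can help tell whether inference is actually on the iGPU (tegrastats ships with L4T, and ollama ps reports which processor a loaded model is using):

```sh
# Ask Ollama which processor the loaded model is running on (CPU vs GPU)...
ollama ps
# ...and watch GPU load on the Jetson while a prompt is being processed.
sudo tegrastats --interval 1000
```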

@driversti commented on GitHub (Apr 21, 2025):

I managed to fix the issue with a workaround mentioned by @cookieshake yesterday. However, their comment is gone at the time of writing.
The workaround was as simple as upgrading the Ubuntu version from 20.04 to 22.04 in the Dockerfile (https://github.com/ollama/ollama/blob/2eb1fb3231063365408155d2fffce9d62ad3c5ee/Dockerfile#L117).
Then I built an image with docker build -t driversti/ollama:22.04.1 . and got it up and running. Works as expected.

I built it on Nvidia's Jetson Orin Nano 8 GB, and it took me about 5.5 hours, so be patient. Use screen or tmux to avoid interrupting the process when using SSH (in my case).

Update. This command built an image in minutes:

docker buildx build --platform=linux/arm64 --build-arg FLAVOR=arm64 -f Dockerfile -t yourname/ollama:$(git describe --tags --first-parent --abbrev=7 --long --dirty --always | sed -e "s/^v//g") .
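
For reference, the one-line change described above can be expressed as a sed edit against an Ollama source checkout; this assumes the final stage at that commit is based on ubuntu:20.04, so verify against the linked Dockerfile line before building:

```sh
# Bump the runtime base image from Ubuntu 20.04 to 22.04, then rebuild.
sed -i 's/ubuntu:20.04/ubuntu:22.04/g' Dockerfile
docker build -t yourname/ollama:22.04 .
```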

@cookieshake commented on GitHub (Apr 21, 2025):

Hello, @driversti

Right after I posted the solution, I realized that although the driver loading error was resolved, I hadn’t actually tested whether the GPU was being used during LLM inference. I didn’t want to share incomplete information, so I deleted my comment to run the proper tests and planned to update once I had accurate results.

I’m glad to hear that your issue has been resolved!


@driversti commented on GitHub (Apr 21, 2025):

Thank you for the update, @cookieshake. I tested it on my Jetson Orin Nano, and it appeared to use the GPU. Gemma3 (https://ollama.com/library/gemma3:4b) responded at about 16 tps, which is a very good result IMHO.
https://i.imgur.com/CPjduHe.png

P.S. It would be nice if you updated your comments next time instead of removing them, because I got confused when I couldn't find yours. I thought the comment might have been removed by the repository owners and thought twice before posting my own. Cheers, mate 🤝


@goldyfruit commented on GitHub (Apr 29, 2025):

Thanks @driversti & @cookieshake! 👍
Any plan to make this official?


@regmibijay commented on GitHub (May 1, 2025):

For anyone looking for an alternative to the approaches mentioned above who doesn't mind the hassle of writing their own small Dockerfile, the following worked for me on JetPack 6.2 with CUDA 12.8 and the NVIDIA container toolkit installed:

FROM dustynv/pytorch:2.6-r36.4.0-cu128

RUN curl -fsSL https://ollama.com/install.sh | sh

CMD ["ollama", "serve"]

The base image is from https://github.com/dusty-nv/jetson-containers and should be compatible with mounting volumes from older Ollama Docker containers.

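A possible build-and-run pair for that Dockerfile (the tag and volume name are illustrative; --runtime nvidia assumes the NVIDIA container toolkit mentioned above):

```sh
# Build the image from the three-line Dockerfile, then run it with GPU access
# and a persistent model volume.
docker build -t ollama-jetson:jp62 .
docker run -d --runtime nvidia -p 11434:11434 \
  -v ollama:/root/.ollama \
  ollama-jetson:jp62
```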

@conradicus commented on GitHub (May 1, 2025):

The dustynv/ollama:r36.4.0-cu128-24.04 Docker image worked great for me, and no Dockerfile was needed. Set the environment variable JETSON_JETPACK=6 for Ollama.

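A sketch of running that prebuilt image with the variable set (flags assume the NVIDIA container runtime; adjust ports and volumes to taste):

```sh
# Run the dusty-nv Ollama image with JETSON_JETPACK pinned to 6.
docker run -d --runtime nvidia -p 11434:11434 \
  -e JETSON_JETPACK=6 \
  -v ollama:/root/.ollama \
  dustynv/ollama:r36.4.0-cu128-24.04
```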

@goldyfruit commented on GitHub (May 1, 2025):

> I managed to fix the issue with a workaround mentioned by @cookieshake yesterday. However, their comment is gone at the time of writing. The workaround was as simple as upgrading the Ubuntu version from 20.04 to 22.04 in the Dockerfile. Then I built an image with docker build -t driversti/ollama:22.04.1 . and got it up and running. Works as expected.
>
> I built it on Nvidia's Jetson Orin Nano 8 GB, and it took me about 5.5 hours, so be patient. Use screen or tmux to avoid interrupting the process when using SSH (in my case).
>
> Update. This command built an image in minutes:
>
> docker buildx build --platform=linux/arm64 --build-arg FLAVOR=arm64 -f Dockerfile -t yourname/ollama:$(git describe --tags --first-parent --abbrev=7 --long --dirty --always | sed -e "s/^v//g") .

Just tried to build an image based on 22.04 but the model doesn't load into the VRAM.

time=2025-05-01T22:01:09.963Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-bd18b7e7-59a3-57ea-bbfd-4413f20f5b6c name=Orin overhead="0 B" before.total="7.4 GiB" before.free="4.3 GiB" now.total="7.4 GiB" now.free="4.3 GiB" now.used="3.2 GiB"
releasing cuda driver library
time=2025-05-01T22:01:09.963Z level=INFO source=server.go:105 msg="system memory" total="7.4 GiB" free="4.3 GiB" free_swap="3.2 GiB"
time=2025-05-01T22:01:09.963Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[4.3 GiB]"
time=2025-05-01T22:01:09.963Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-05-01T22:01:09.963Z level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=27 layers.offload=27 layers.split="" memory.available="[4.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.8 GiB" memory.required.partial="1.8 GiB" memory.required.kv="65.0 MiB" memory.required.allocations="[1.8 GiB]" memory.weights.total="762.5 MiB" memory.weights.repeating="456.5 MiB" memory.weights.nonrepeating="306.0 MiB" memory.graph.full="514.2 MiB" memory.graph.partial="750.5 MiB"
time=2025-05-01T22:01:09.964Z level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible=[cuda_jetpack6]
time=2025-05-01T22:01:10.072Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-01T22:01:10.073Z level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-01T22:01:10.073Z level=DEBUG source=process_text_spm.go:25 msg=Tokens "num tokens"=262144 vals="[<pad> <eos> <bos> <unk> <mask>]" scores="[0 0 0 0 0]" types="[3 3 3 2 1]"
time=2025-05-01T22:01:10.078Z level=DEBUG source=process_text_spm.go:39 msg="Token counts" normal=261882 unknown=1 control=5 "user defined"=0 unused=0 byte=256 "max token len"=93
time=2025-05-01T22:01:10.078Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-05-01T22:01:10.078Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-05-01T22:01:10.078Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.num_channels default=0
time=2025-05-01T22:01:10.078Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-05-01T22:01:10.078Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.embedding_length default=0
time=2025-05-01T22:01:10.078Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.attention.head_count default=0
time=2025-05-01T22:01:10.078Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-05-01T22:01:10.078Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-05-01T22:01:10.078Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0
time=2025-05-01T22:01:10.078Z level=DEBUG source=process_text_spm.go:25 msg=Tokens "num tokens"=262144 vals="[<pad> <eos> <bos> <unk> <mask>]" scores="[0 0 0 0 0]" types="[3 3 3 2 1]"
time=2025-05-01T22:01:10.082Z level=DEBUG source=process_text_spm.go:39 msg="Token counts" normal=261882 unknown=1 control=5 "user defined"=0 unused=0 byte=256 "max token len"=93
time=2025-05-01T22:01:10.082Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-01T22:01:10.082Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-01T22:01:10.083Z level=DEBUG source=server.go:336 msg="adding gpu library" path=/usr/lib/ollama/cuda_jetpack6
time=2025-05-01T22:01:10.083Z level=DEBUG source=server.go:345 msg="adding gpu dependency paths" paths=[/usr/lib/ollama/cuda_jetpack6]
time=2025-05-01T22:01:10.083Z level=INFO source=server.go:409 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-7cd4618c1faf8b7233c6c906dac1694b6a47684b37b8895d470ac688520b9c01 --ctx-size 8192 --batch-size 512 --n-gpu-layers 27 --verbose --threads 6 --parallel 2 --port 40727"
time=2025-05-01T22:01:10.083Z level=DEBUG source=server.go:428 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/lib/ollama/cuda_jetpack6:/usr/lib/aarch64-linux-gnu/nvidia:$LD_LIBRARY_PATH:/usr/lib/ollama/cuda_jetpack6:/usr/lib/ollama OLLAMA_DEBUG=1 OLLAMA_LLM_LIBRARY=cuda_12 OLLAMA_1_1746125209_PORT_11434_TCP_ADDR=10.43.129.52 OLLAMA_1_1746125209_SERVICE_HOST=10.43.129.52 OLLAMA_1_1746125209_SERVICE_PORT_HTTP=11434 OLLAMA_1_1746125209_PORT=tcp://10.43.129.52:11434 OLLAMA_1_1746125209_PORT_11434_TCP_PROTO=tcp OLLAMA_1_1746125209_PORT_11434_TCP=tcp://10.43.129.52:11434 OLLAMA_1_1746125209_SERVICE_PORT=11434 OLLAMA_1_1746125209_PORT_11434_TCP_PORT=11434 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_MAX_LOADED_MODELS=3 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_jetpack6 CUDA_VISIBLE_DEVICES=GPU-bd18b7e7-59a3-57ea-bbfd-4413f20f5b6c]"
time=2025-05-01T22:01:10.084Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-05-01T22:01:10.084Z level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-05-01T22:01:10.084Z level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"

@driversti commented on GitHub (May 2, 2025):

> Just tried to build an image based on 22.04 but the model doesn't load into the VRAM.

I was forced to increase the swap to 64 GB, as it had filled to about 58-59 GB during the building process.


@goldyfruit commented on GitHub (May 2, 2025):

> > Just tried to build an image based on 22.04 but the model doesn't load into the VRAM.
>
> I was forced to increase the swap to 64 GB, as it had filled to about 58-59 GB during the building process.

The build process works fine; it's just the outcome: the built image behaves like the official one.


@driversti commented on GitHub (May 2, 2025):

> The build process works fine; it's just the outcome: the built image behaves like the official one.

Oh, I misunderstood your previous message. I'm sorry.


@goldyfruit commented on GitHub (May 2, 2025):

> > The build process works fine; it's just the outcome: the built image behaves like the official one.
>
> Oh, I misunderstood your previous message. I'm sorry.

No worries at all 👍


@goldyfruit commented on GitHub (May 27, 2025):

I'm still not able to run the latest version of Ollama on JetPack 6.2 (Jetson Orin Nano); only 0.5.7 works, which prevents using the latest models such as Qwen3, Gemma3, etc.

root@jetson01:~# nvidia-smi 
Tue May 27 15:08:46 2025       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 540.4.0                Driver Version: 540.4.0      CUDA Version: 12.6     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Orin (nvgpu)                  N/A  | N/A              N/A |                  N/A |
| N/A   N/A  N/A               N/A /  N/A | Not Supported        |     N/A          N/A |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

Here is the full log from the latest version:

root@jetson01:~# kubectl logs -n cattle-ai -f ollama-1-1748360776-7d64458944-cvlrn
time=2025-05-27T19:05:53.325Z level=INFO source=routes.go:1205 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-05-27T19:05:53.327Z level=INFO source=images.go:463 msg="total blobs: 5"
time=2025-05-27T19:05:53.327Z level=INFO source=images.go:470 msg="total unused blobs removed: 0"
time=2025-05-27T19:05:53.327Z level=INFO source=routes.go:1258 msg="Listening on [::]:11434 (version 0.7.1)"
time=2025-05-27T19:05:53.328Z level=DEBUG source=sched.go:108 msg="starting llm scheduler"
time=2025-05-27T19:05:53.328Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-05-27T19:05:53.328Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-05-27T19:05:53.328Z level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-05-27T19:05:53.328Z level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-05-27T19:05:53.330Z level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1]
initializing /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1
library /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1 load err: /usr/lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so)
time=2025-05-27T19:05:53.331Z level=INFO source=gpu.go:612 msg="Unable to load cudart library /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1: Unable to load /usr/lib/aarch64-linux-gnu/nvidia/libcuda.so.1.1 library to query for Nvidia GPUs: /usr/lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so)"
time=2025-05-27T19:05:53.331Z level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcudart.so*
time=2025-05-27T19:05:53.331Z level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/local/nvidia/lib/libcudart.so* /usr/local/nvidia/lib64/libcudart.so* /usr/lib/ollama/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
time=2025-05-27T19:05:53.332Z level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[/usr/lib/ollama/cuda_v11/libcudart.so.11.3.109 /usr/lib/ollama/cuda_v12/libcudart.so.12.8.90]"
cudaSetDevice err: 35
time=2025-05-27T19:05:53.334Z level=DEBUG source=gpu.go:574 msg="Unable to load cudart library /usr/lib/ollama/cuda_v11/libcudart.so.11.3.109: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
cudaSetDevice err: 35
time=2025-05-27T19:05:53.336Z level=DEBUG source=gpu.go:574 msg="Unable to load cudart library /usr/lib/ollama/cuda_v12/libcudart.so.12.8.90: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
time=2025-05-27T19:05:53.336Z level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
time=2025-05-27T19:05:53.337Z level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-05-27T19:05:53.337Z level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="7.4 GiB" available="4.7 GiB"
[GIN] 2025/05/27 - 19:05:53 | 200 |     103.555µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/27 - 19:05:53 | 200 |     164.613µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/05/27 - 19:05:53 | 200 |      44.513µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/27 - 19:05:53 | 200 |  384.982955ms |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/05/27 - 19:05:53 | 200 |      48.961µs |       127.0.0.1 | HEAD     "/"
time=2025-05-27T19:05:53.810Z level=DEBUG source=ggml.go:155 msg="key not found" key=general.alignment default=32
time=2025-05-27T19:05:53.852Z level=DEBUG source=ggml.go:155 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/27 - 19:05:53 | 200 |   86.109669ms |       127.0.0.1 | POST     "/api/show"
time=2025-05-27T19:05:53.898Z level=DEBUG source=ggml.go:155 msg="key not found" key=general.alignment default=32
time=2025-05-27T19:05:53.898Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="7.4 GiB" before.free="4.7 GiB" before.free_swap="3.7 GiB" now.total="7.4 GiB" now.free="4.6 GiB" now.free_swap="3.7 GiB"
time=2025-05-27T19:05:53.898Z level=DEBUG source=sched.go:185 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-05-27T19:05:53.940Z level=DEBUG source=ggml.go:155 msg="key not found" key=general.alignment default=32
time=2025-05-27T19:05:53.982Z level=DEBUG source=ggml.go:155 msg="key not found" key=general.alignment default=32
time=2025-05-27T19:05:53.982Z level=DEBUG source=sched.go:215 msg="cpu mode with first model, loading"
time=2025-05-27T19:05:53.982Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="7.4 GiB" before.free="4.6 GiB" before.free_swap="3.7 GiB" now.total="7.4 GiB" now.free="4.6 GiB" now.free_swap="3.7 GiB"
time=2025-05-27T19:05:53.982Z level=INFO source=server.go:135 msg="system memory" total="7.4 GiB" free="4.6 GiB" free_swap="3.7 GiB"
time=2025-05-27T19:05:53.982Z level=DEBUG source=memory.go:111 msg=evaluating library=cpu gpu_count=1 available="[4.6 GiB]"
time=2025-05-27T19:05:53.982Z level=DEBUG source=ggml.go:155 msg="key not found" key=gemma2.vision.block_count default=0
time=2025-05-27T19:05:53.983Z level=INFO source=server.go:168 msg=offload library=cpu layers.requested=-1 layers.model=27 layers.offload=0 layers.split="" memory.available="[4.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="2.9 GiB" memory.required.partial="0 B" memory.required.kv="832.0 MiB" memory.required.allocations="[2.9 GiB]" memory.weights.total="1.5 GiB" memory.weights.repeating="1.1 GiB" memory.weights.nonrepeating="461.4 MiB" memory.graph.full="504.5 MiB" memory.graph.partial="965.9 MiB"
time=2025-05-27T19:05:53.983Z level=DEBUG source=server.go:284 msg="compatible gpu libraries" compatible=[]
llama_model_loader: loaded meta data with 34 key-value pairs and 288 tensors from /root/.ollama/models/blobs/sha256-7462734796d67c40ecec2ca98eddf970e171dbb6b370e43fd633ee75b69abe1b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Gemma 2.0 2b It Transformers
llama_model_loader: - kv   3:                           general.finetune str              = it-transformers
llama_model_loader: - kv   4:                           general.basename str              = gemma-2.0
llama_model_loader: - kv   5:                         general.size_label str              = 2B
llama_model_loader: - kv   6:                            general.license str              = gemma
llama_model_loader: - kv   7:                      gemma2.context_length u32              = 8192
llama_model_loader: - kv   8:                    gemma2.embedding_length u32              = 2304
llama_model_loader: - kv   9:                         gemma2.block_count u32              = 26
llama_model_loader: - kv  10:                 gemma2.feed_forward_length u32              = 9216
llama_model_loader: - kv  11:                gemma2.attention.head_count u32              = 8
llama_model_loader: - kv  12:             gemma2.attention.head_count_kv u32              = 4
llama_model_loader: - kv  13:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                gemma2.attention.key_length u32              = 256
llama_model_loader: - kv  15:              gemma2.attention.value_length u32              = 256
llama_model_loader: - kv  16:                          general.file_type u32              = 2
llama_model_loader: - kv  17:              gemma2.attn_logit_softcapping f32              = 50.000000
llama_model_loader: - kv  18:             gemma2.final_logit_softcapping f32              = 30.000000
llama_model_loader: - kv  19:            gemma2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  23:                      tokenizer.ggml.scores arr[f32,256000]  = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  27:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  28:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  29:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  30:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  31:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
llama_model_loader: - kv  32:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  105 tensors
llama_model_loader: - type q4_0:  182 tensors
llama_model_loader: - type q6_K:    1 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_0
print_info: file size   = 1.51 GiB (4.97 BPW) 
init_tokenizer: initializing tokenizer for type 1
load: control-looking token:    107 '<end_of_turn>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control token: 255999 '<unused99>' is not marked as EOG
load: control token:     45 '<unused38>' is not marked as EOG
load: control token:     74 '<unused67>' is not marked as EOG
load: control token:     55 '<unused48>' is not marked as EOG
load: control token:     99 '<unused92>' is not marked as EOG
load: control token:    102 '<unused95>' is not marked as EOG
load: control token:     44 '<unused37>' is not marked as EOG
load: control token:     26 '<unused19>' is not marked as EOG
load: control token:     42 '<unused35>' is not marked as EOG
load: control token:     92 '<unused85>' is not marked as EOG
load: control token:     90 '<unused83>' is not marked as EOG
load: control token:     88 '<unused81>' is not marked as EOG
load: control token:      5 '<2mass>' is not marked as EOG
load: control token:    104 '<unused97>' is not marked as EOG
load: control token:     68 '<unused61>' is not marked as EOG
load: control token:     94 '<unused87>' is not marked as EOG
load: control token:     59 '<unused52>' is not marked as EOG
load: control token:      2 '<bos>' is not marked as EOG
load: control token:     25 '<unused18>' is not marked as EOG
load: control token:     93 '<unused86>' is not marked as EOG
load: control token:     95 '<unused88>' is not marked as EOG
load: control token:     76 '<unused69>' is not marked as EOG
load: control token:     97 '<unused90>' is not marked as EOG
load: control token:     56 '<unused49>' is not marked as EOG
load: control token:     81 '<unused74>' is not marked as EOG
load: control token:     13 '<unused6>' is not marked as EOG
load: control token:     51 '<unused44>' is not marked as EOG
load: control token:     47 '<unused40>' is not marked as EOG
load: control token:      8 '<unused1>' is not marked as EOG
load: control token:    103 '<unused96>' is not marked as EOG
load: control token:     75 '<unused68>' is not marked as EOG
load: control token:     79 '<unused72>' is not marked as EOG
load: control token:     39 '<unused32>' is not marked as EOG
load: control token:     49 '<unused42>' is not marked as EOG
load: control token:     41 '<unused34>' is not marked as EOG
load: control token:     34 '<unused27>' is not marked as EOG
load: control token:      6 '[@BOS@]' is not marked as EOG
load: control token:     40 '<unused33>' is not marked as EOG
load: control token:     33 '<unused26>' is not marked as EOG
load: control token:     86 '<unused79>' is not marked as EOG
load: control token:     43 '<unused36>' is not marked as EOG
load: control token:     35 '<unused28>' is not marked as EOG
load: control token:     32 '<unused25>' is not marked as EOG
load: control token:     28 '<unused21>' is not marked as EOG
load: control token:     19 '<unused12>' is not marked as EOG
load: control token:     67 '<unused60>' is not marked as EOG
load: control token:      9 '<unused2>' is not marked as EOG
load: control token:     52 '<unused45>' is not marked as EOG
load: control token:     16 '<unused9>' is not marked as EOG
load: control token:     98 '<unused91>' is not marked as EOG
load: control token:     80 '<unused73>' is not marked as EOG
load: control token:     71 '<unused64>' is not marked as EOG
load: control token:     36 '<unused29>' is not marked as EOG
load: control token:      0 '<pad>' is not marked as EOG
load: control token:     11 '<unused4>' is not marked as EOG
load: control token:     70 '<unused63>' is not marked as EOG
load: control token:     77 '<unused70>' is not marked as EOG
load: control token:     64 '<unused57>' is not marked as EOG
load: control token:     50 '<unused43>' is not marked as EOG
load: control token:     20 '<unused13>' is not marked as EOG
load: control token:     73 '<unused66>' is not marked as EOG
load: control token:     23 '<unused16>' is not marked as EOG
load: control token:     38 '<unused31>' is not marked as EOG
load: control token:     21 '<unused14>' is not marked as EOG
load: control token:     15 '<unused8>' is not marked as EOG
load: control token:     37 '<unused30>' is not marked as EOG
load: control token:     14 '<unused7>' is not marked as EOG
load: control token:     30 '<unused23>' is not marked as EOG
load: control token:     62 '<unused55>' is not marked as EOG
load: control token:      3 '<unk>' is not marked as EOG
load: control token:     18 '<unused11>' is not marked as EOG
load: control token:     22 '<unused15>' is not marked as EOG
load: control token:     66 '<unused59>' is not marked as EOG
load: control token:     65 '<unused58>' is not marked as EOG
load: control token:     10 '<unused3>' is not marked as EOG
load: control token:    105 '<unused98>' is not marked as EOG
load: control token:     87 '<unused80>' is not marked as EOG
load: control token:    100 '<unused93>' is not marked as EOG
load: control token:     63 '<unused56>' is not marked as EOG
load: control token:     31 '<unused24>' is not marked as EOG
load: control token:     58 '<unused51>' is not marked as EOG
load: control token:     84 '<unused77>' is not marked as EOG
load: control token:     61 '<unused54>' is not marked as EOG
load: control token:      1 '<eos>' is not marked as EOG
load: control token:     60 '<unused53>' is not marked as EOG
load: control token:     91 '<unused84>' is not marked as EOG
load: control token:     83 '<unused76>' is not marked as EOG
load: control token:     85 '<unused78>' is not marked as EOG
load: control token:     27 '<unused20>' is not marked as EOG
load: control token:     96 '<unused89>' is not marked as EOG
load: control token:     72 '<unused65>' is not marked as EOG
load: control token:     53 '<unused46>' is not marked as EOG
load: control token:     82 '<unused75>' is not marked as EOG
load: control token:      7 '<unused0>' is not marked as EOG
load: control token:      4 '<mask>' is not marked as EOG
load: control token:    101 '<unused94>' is not marked as EOG
load: control token:     78 '<unused71>' is not marked as EOG
load: control token:     89 '<unused82>' is not marked as EOG
load: control token:     69 '<unused62>' is not marked as EOG
load: control token:     54 '<unused47>' is not marked as EOG
load: control token:     57 '<unused50>' is not marked as EOG
load: control token:     12 '<unused5>' is not marked as EOG
load: control token:     48 '<unused41>' is not marked as EOG
load: control token:     17 '<unused10>' is not marked as EOG
load: control token:     24 '<unused17>' is not marked as EOG
load: control token:     46 '<unused39>' is not marked as EOG
load: control token:     29 '<unused22>' is not marked as EOG
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 249
load: token to piece cache size = 1.6014 MB
print_info: arch             = gemma2
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 2.61 B
print_info: general.name     = Gemma 2.0 2b It Transformers
print_info: vocab type       = SPM
print_info: n_vocab          = 256000
print_info: n_merges         = 0
print_info: BOS token        = 2 '<bos>'
print_info: EOS token        = 1 '<eos>'
print_info: EOT token        = 107 '<end_of_turn>'
print_info: UNK token        = 3 '<unk>'
print_info: PAD token        = 0 '<pad>'
print_info: LF token         = 227 '<0x0A>'
print_info: EOG token        = 1 '<eos>'
print_info: EOG token        = 107 '<end_of_turn>'
print_info: max token length = 48
llama_model_load: vocab only - skipping tensors
time=2025-05-27T19:05:54.515Z level=DEBUG source=gpu.go:695 msg="no filter required for library cpu"
time=2025-05-27T19:05:54.515Z level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-7462734796d67c40ecec2ca98eddf970e171dbb6b370e43fd633ee75b69abe1b --ctx-size 8192 --batch-size 512 --threads 6 --no-mmap --parallel 2 --port 36331"
time=2025-05-27T19:05:54.515Z level=DEBUG source=server.go:432 msg=subprocess PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/lib/ollama:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib/ollama OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=-1 OLLAMA_DEBUG=1 OLLAMA_1_1748360776_PORT=tcp://10.43.123.207:11434 OLLAMA_1_1748360776_PORT_11434_TCP_PROTO=tcp OLLAMA_1_1748360776_PORT_11434_TCP_PORT=11434 OLLAMA_1_1748360776_PORT_11434_TCP_ADDR=10.43.123.207 OLLAMA_1_1748360776_SERVICE_PORT=11434 OLLAMA_1_1748360776_PORT_11434_TCP=tcp://10.43.123.207:11434 OLLAMA_1_1748360776_SERVICE_HOST=10.43.123.207 OLLAMA_1_1748360776_SERVICE_PORT_HTTP=11434 OLLAMA_MAX_LOADED_MODELS=3 OLLAMA_LIBRARY_PATH=/usr/lib/ollama
time=2025-05-27T19:05:54.516Z level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-05-27T19:05:54.516Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-05-27T19:05:54.516Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
time=2025-05-27T19:05:54.531Z level=INFO source=runner.go:815 msg="starting go runner"
time=2025-05-27T19:05:54.531Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu.so
time=2025-05-27T19:05:54.538Z level=INFO source=ggml.go:104 msg=system CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.LLAMAFILE=1 CPU.1.NEON=1 CPU.1.ARM_FMA=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-05-27T19:05:54.543Z level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:36331"
llama_model_loader: loaded meta data with 34 key-value pairs and 288 tensors from /root/.ollama/models/blobs/sha256-7462734796d67c40ecec2ca98eddf970e171dbb6b370e43fd633ee75b69abe1b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Gemma 2.0 2b It Transformers
llama_model_loader: - kv   3:                           general.finetune str              = it-transformers
llama_model_loader: - kv   4:                           general.basename str              = gemma-2.0
llama_model_loader: - kv   5:                         general.size_label str              = 2B
llama_model_loader: - kv   6:                            general.license str              = gemma
llama_model_loader: - kv   7:                      gemma2.context_length u32              = 8192
llama_model_loader: - kv   8:                    gemma2.embedding_length u32              = 2304
llama_model_loader: - kv   9:                         gemma2.block_count u32              = 26
llama_model_loader: - kv  10:                 gemma2.feed_forward_length u32              = 9216
llama_model_loader: - kv  11:                gemma2.attention.head_count u32              = 8
llama_model_loader: - kv  12:             gemma2.attention.head_count_kv u32              = 4
llama_model_loader: - kv  13:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                gemma2.attention.key_length u32              = 256
llama_model_loader: - kv  15:              gemma2.attention.value_length u32              = 256
llama_model_loader: - kv  16:                          general.file_type u32              = 2
llama_model_loader: - kv  17:              gemma2.attn_logit_softcapping f32              = 50.000000
llama_model_loader: - kv  18:             gemma2.final_logit_softcapping f32              = 30.000000
llama_model_loader: - kv  19:            gemma2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
time=2025-05-27T19:05:54.768Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  23:                      tokenizer.ggml.scores arr[f32,256000]  = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  27:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  28:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  29:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  30:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  31:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
llama_model_loader: - kv  32:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  105 tensors
llama_model_loader: - type q4_0:  182 tensors
llama_model_loader: - type q6_K:    1 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_0
print_info: file size   = 1.51 GiB (4.97 BPW) 
init_tokenizer: initializing tokenizer for type 1
load: control-looking token:    107 '<end_of_turn>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control token: 255999 '<unused99>' is not marked as EOG
load: control token:     45 '<unused38>' is not marked as EOG
load: control token:     74 '<unused67>' is not marked as EOG
load: control token:     55 '<unused48>' is not marked as EOG
load: control token:     99 '<unused92>' is not marked as EOG
load: control token:    102 '<unused95>' is not marked as EOG
load: control token:     44 '<unused37>' is not marked as EOG
load: control token:     26 '<unused19>' is not marked as EOG
load: control token:     42 '<unused35>' is not marked as EOG
load: control token:     92 '<unused85>' is not marked as EOG
load: control token:     90 '<unused83>' is not marked as EOG
load: control token:     88 '<unused81>' is not marked as EOG
load: control token:      5 '<2mass>' is not marked as EOG
load: control token:    104 '<unused97>' is not marked as EOG
load: control token:     68 '<unused61>' is not marked as EOG
load: control token:     94 '<unused87>' is not marked as EOG
load: control token:     59 '<unused52>' is not marked as EOG
load: control token:      2 '<bos>' is not marked as EOG
load: control token:     25 '<unused18>' is not marked as EOG
load: control token:     93 '<unused86>' is not marked as EOG
load: control token:     95 '<unused88>' is not marked as EOG
load: control token:     76 '<unused69>' is not marked as EOG
load: control token:     97 '<unused90>' is not marked as EOG
load: control token:     56 '<unused49>' is not marked as EOG
load: control token:     81 '<unused74>' is not marked as EOG
load: control token:     13 '<unused6>' is not marked as EOG
load: control token:     51 '<unused44>' is not marked as EOG
load: control token:     47 '<unused40>' is not marked as EOG
load: control token:      8 '<unused1>' is not marked as EOG
load: control token:    103 '<unused96>' is not marked as EOG
load: control token:     75 '<unused68>' is not marked as EOG
load: control token:     79 '<unused72>' is not marked as EOG
load: control token:     39 '<unused32>' is not marked as EOG
load: control token:     49 '<unused42>' is not marked as EOG
load: control token:     41 '<unused34>' is not marked as EOG
load: control token:     34 '<unused27>' is not marked as EOG
load: control token:      6 '[@BOS@]' is not marked as EOG
load: control token:     40 '<unused33>' is not marked as EOG
load: control token:     33 '<unused26>' is not marked as EOG
load: control token:     86 '<unused79>' is not marked as EOG
load: control token:     43 '<unused36>' is not marked as EOG
load: control token:     35 '<unused28>' is not marked as EOG
load: control token:     32 '<unused25>' is not marked as EOG
load: control token:     28 '<unused21>' is not marked as EOG
load: control token:     19 '<unused12>' is not marked as EOG
load: control token:     67 '<unused60>' is not marked as EOG
load: control token:      9 '<unused2>' is not marked as EOG
load: control token:     52 '<unused45>' is not marked as EOG
load: control token:     16 '<unused9>' is not marked as EOG
load: control token:     98 '<unused91>' is not marked as EOG
load: control token:     80 '<unused73>' is not marked as EOG
load: control token:     71 '<unused64>' is not marked as EOG
load: control token:     36 '<unused29>' is not marked as EOG
load: control token:      0 '<pad>' is not marked as EOG
load: control token:     11 '<unused4>' is not marked as EOG
load: control token:     70 '<unused63>' is not marked as EOG
load: control token:     77 '<unused70>' is not marked as EOG
load: control token:     64 '<unused57>' is not marked as EOG
load: control token:     50 '<unused43>' is not marked as EOG
load: control token:     20 '<unused13>' is not marked as EOG
load: control token:     73 '<unused66>' is not marked as EOG
load: control token:     23 '<unused16>' is not marked as EOG
load: control token:     38 '<unused31>' is not marked as EOG
load: control token:     21 '<unused14>' is not marked as EOG
load: control token:     15 '<unused8>' is not marked as EOG
load: control token:     37 '<unused30>' is not marked as EOG
load: control token:     14 '<unused7>' is not marked as EOG
load: control token:     30 '<unused23>' is not marked as EOG
load: control token:     62 '<unused55>' is not marked as EOG
load: control token:      3 '<unk>' is not marked as EOG
load: control token:     18 '<unused11>' is not marked as EOG
load: control token:     22 '<unused15>' is not marked as EOG
load: control token:     66 '<unused59>' is not marked as EOG
load: control token:     65 '<unused58>' is not marked as EOG
load: control token:     10 '<unused3>' is not marked as EOG
load: control token:    105 '<unused98>' is not marked as EOG
load: control token:     87 '<unused80>' is not marked as EOG
load: control token:    100 '<unused93>' is not marked as EOG
load: control token:     63 '<unused56>' is not marked as EOG
load: control token:     31 '<unused24>' is not marked as EOG
load: control token:     58 '<unused51>' is not marked as EOG
load: control token:     84 '<unused77>' is not marked as EOG
load: control token:     61 '<unused54>' is not marked as EOG
load: control token:      1 '<eos>' is not marked as EOG
load: control token:     60 '<unused53>' is not marked as EOG
load: control token:     91 '<unused84>' is not marked as EOG
load: control token:     83 '<unused76>' is not marked as EOG
load: control token:     85 '<unused78>' is not marked as EOG
load: control token:     27 '<unused20>' is not marked as EOG
load: control token:     96 '<unused89>' is not marked as EOG
load: control token:     72 '<unused65>' is not marked as EOG
load: control token:     53 '<unused46>' is not marked as EOG
load: control token:     82 '<unused75>' is not marked as EOG
load: control token:      7 '<unused0>' is not marked as EOG
load: control token:      4 '<mask>' is not marked as EOG
load: control token:    101 '<unused94>' is not marked as EOG
load: control token:     78 '<unused71>' is not marked as EOG
load: control token:     89 '<unused82>' is not marked as EOG
load: control token:     69 '<unused62>' is not marked as EOG
load: control token:     54 '<unused47>' is not marked as EOG
load: control token:     57 '<unused50>' is not marked as EOG
load: control token:     12 '<unused5>' is not marked as EOG
load: control token:     48 '<unused41>' is not marked as EOG
load: control token:     17 '<unused10>' is not marked as EOG
load: control token:     24 '<unused17>' is not marked as EOG
load: control token:     46 '<unused39>' is not marked as EOG
load: control token:     29 '<unused22>' is not marked as EOG
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 249
load: token to piece cache size = 1.6014 MB
print_info: arch             = gemma2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 8192
print_info: n_embd           = 2304
print_info: n_layer          = 26
print_info: n_head           = 8
print_info: n_head_kv        = 4
print_info: n_rot            = 256
print_info: n_swa            = 4096
print_info: n_swa_pattern    = 2
print_info: n_embd_head_k    = 256
print_info: n_embd_head_v    = 256
print_info: n_gqa            = 2
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 9216
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 8192
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 2B
print_info: model params     = 2.61 B
print_info: general.name     = Gemma 2.0 2b It Transformers
print_info: vocab type       = SPM
print_info: n_vocab          = 256000
print_info: n_merges         = 0
print_info: BOS token        = 2 '<bos>'
print_info: EOS token        = 1 '<eos>'
print_info: EOT token        = 107 '<end_of_turn>'
print_info: UNK token        = 3 '<unk>'
print_info: PAD token        = 0 '<pad>'
print_info: LF token         = 227 '<0x0A>'
print_info: EOG token        = 1 '<eos>'
print_info: EOG token        = 107 '<end_of_turn>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: layer   0 assigned to device CPU, is_swa = 1
load_tensors: layer   1 assigned to device CPU, is_swa = 0
load_tensors: layer   2 assigned to device CPU, is_swa = 1
load_tensors: layer   3 assigned to device CPU, is_swa = 0
load_tensors: layer   4 assigned to device CPU, is_swa = 1
load_tensors: layer   5 assigned to device CPU, is_swa = 0
load_tensors: layer   6 assigned to device CPU, is_swa = 1
load_tensors: layer   7 assigned to device CPU, is_swa = 0
load_tensors: layer   8 assigned to device CPU, is_swa = 1
load_tensors: layer   9 assigned to device CPU, is_swa = 0
load_tensors: layer  10 assigned to device CPU, is_swa = 1
load_tensors: layer  11 assigned to device CPU, is_swa = 0
load_tensors: layer  12 assigned to device CPU, is_swa = 1
load_tensors: layer  13 assigned to device CPU, is_swa = 0
load_tensors: layer  14 assigned to device CPU, is_swa = 1
load_tensors: layer  15 assigned to device CPU, is_swa = 0
load_tensors: layer  16 assigned to device CPU, is_swa = 1
load_tensors: layer  17 assigned to device CPU, is_swa = 0
load_tensors: layer  18 assigned to device CPU, is_swa = 1
load_tensors: layer  19 assigned to device CPU, is_swa = 0
load_tensors: layer  20 assigned to device CPU, is_swa = 1
load_tensors: layer  21 assigned to device CPU, is_swa = 0
load_tensors: layer  22 assigned to device CPU, is_swa = 1
load_tensors: layer  23 assigned to device CPU, is_swa = 0
load_tensors: layer  24 assigned to device CPU, is_swa = 1
load_tensors: layer  25 assigned to device CPU, is_swa = 0
load_tensors: layer  26 assigned to device CPU, is_swa = 0
load_tensors:          CPU model buffer size =  1548.25 MiB
load_all_data: no device found for buffer type CPU for async uploads
time=2025-05-27T19:05:55.521Z level=DEBUG source=server.go:636 msg="model load progress 0.36"
time=2025-05-27T19:05:55.772Z level=DEBUG source=server.go:636 msg="model load progress 0.51"
time=2025-05-27T19:05:56.023Z level=DEBUG source=server.go:636 msg="model load progress 0.71"
time=2025-05-27T19:05:56.274Z level=DEBUG source=server.go:636 msg="model load progress 0.86"
time=2025-05-27T19:05:56.525Z level=DEBUG source=server.go:636 msg="model load progress 0.98"
llama_context: constructing llama_context
llama_context: n_seq_max     = 2
llama_context: n_ctx         = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 1024
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 10000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
set_abort_callback: call
llama_context:        CPU  output buffer size =     1.97 MiB
create_memory: n_ctx = 8192 (padded)
llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 26, can_shift = 1, padding = 32
llama_kv_cache_unified: layer   0: dev = CPU
llama_kv_cache_unified: layer   1: dev = CPU
llama_kv_cache_unified: layer   2: dev = CPU
llama_kv_cache_unified: layer   3: dev = CPU
llama_kv_cache_unified: layer   4: dev = CPU
llama_kv_cache_unified: layer   5: dev = CPU
llama_kv_cache_unified: layer   6: dev = CPU
llama_kv_cache_unified: layer   7: dev = CPU
llama_kv_cache_unified: layer   8: dev = CPU
llama_kv_cache_unified: layer   9: dev = CPU
llama_kv_cache_unified: layer  10: dev = CPU
llama_kv_cache_unified: layer  11: dev = CPU
llama_kv_cache_unified: layer  12: dev = CPU
llama_kv_cache_unified: layer  13: dev = CPU
llama_kv_cache_unified: layer  14: dev = CPU
llama_kv_cache_unified: layer  15: dev = CPU
llama_kv_cache_unified: layer  16: dev = CPU
llama_kv_cache_unified: layer  17: dev = CPU
llama_kv_cache_unified: layer  18: dev = CPU
llama_kv_cache_unified: layer  19: dev = CPU
llama_kv_cache_unified: layer  20: dev = CPU
llama_kv_cache_unified: layer  21: dev = CPU
llama_kv_cache_unified: layer  22: dev = CPU
llama_kv_cache_unified: layer  23: dev = CPU
llama_kv_cache_unified: layer  24: dev = CPU
llama_kv_cache_unified: layer  25: dev = CPU
time=2025-05-27T19:05:56.778Z level=DEBUG source=server.go:636 msg="model load progress 1.00"
llama_kv_cache_unified:        CPU KV buffer size =   832.00 MiB
llama_kv_cache_unified: KV self size  =  832.00 MiB, K (f16):  416.00 MiB, V (f16):  416.00 MiB
llama_context: enumerating backends
llama_context: backend_ptrs.size() = 1
llama_context: max_nodes = 65536
llama_context: worst-case: n_tokens = 512, n_seqs = 1, n_outputs = 0
llama_context: reserving graph for n_tokens = 512, n_seqs = 1
llama_context: reserving graph for n_tokens = 1, n_seqs = 1
llama_context: reserving graph for n_tokens = 512, n_seqs = 1
llama_context:        CPU compute buffer size =   504.50 MiB
llama_context: graph nodes  = 1102
llama_context: graph splits = 1
time=2025-05-27T19:05:57.028Z level=INFO source=server.go:630 msg="llama runner started in 2.51 seconds"
time=2025-05-27T19:05:57.030Z level=DEBUG source=sched.go:495 msg="finished setting up" runner.name=registry.ollama.ai/library/gemma2:2b runner.inference=cpu runner.devices=1 runner.size="2.9 GiB" runner.vram="0 B" runner.parallel=2 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-7462734796d67c40ecec2ca98eddf970e171dbb6b370e43fd633ee75b69abe1b runner.num_ctx=8192
[GIN] 2025/05/27 - 19:05:57 | 200 |  3.176625538s |       127.0.0.1 | POST     "/api/generate"
time=2025-05-27T19:05:57.031Z level=DEBUG source=sched.go:503 msg="context for request finished"
time=2025-05-27T19:05:57.031Z level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma2:2b runner.inference=cpu runner.devices=1 runner.size="2.9 GiB" runner.vram="0 B" runner.parallel=2 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-7462734796d67c40ecec2ca98eddf970e171dbb6b370e43fd633ee75b69abe1b runner.num_ctx=8192 duration=2562047h47m16.854775807s
time=2025-05-27T19:05:57.031Z level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma2:2b runner.inference=cpu runner.devices=1 runner.size="2.9 GiB" runner.vram="0 B" runner.parallel=2 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-7462734796d67c40ecec2ca98eddf970e171dbb6b370e43fd633ee75b69abe1b runner.num_ctx=8192 refCount=0
[GIN] 2025/05/27 - 19:06:27 | 200 |     125.443µs |       10.42.0.1 | GET      "/"
[GIN] 2025/05/27 - 19:06:32 | 200 |      52.162µs |       10.42.0.1 | GET      "/"
[GIN] 2025/05/27 - 19:06:37 | 200 |      52.994µs |       10.42.0.1 | GET      "/"
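
The load failure is the crux here: the host-mounted libcuda pulls in libnvrm_gpu.so, which requires GLIBC_2.34, evidently newer than the glibc shipped in the container image. Ollama then falls back to its bundled cudart, which fails with `cudaSetDevice err: 35` (cudaErrorInsufficientDriver) because no usable driver library could be loaded. A minimal way to confirm the mismatch, assuming `ldd` is present in the image and binutils on the host (the pod name and library path are copied from the log above):

```shell
# glibc version available inside the container image
kubectl exec -n cattle-ai ollama-1-1748360776-7d64458944-cvlrn -- ldd --version | head -n1

# highest GLIBC symbol version the host driver library requires (run on the Jetson host)
objdump -T /usr/lib/aarch64-linux-gnu/nvidia/libnvrm_gpu.so \
  | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n1
```

If the first number is lower than the second, the container cannot dlopen the host driver libraries regardless of the Ollama version inside it.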
its type will be overridden load: control token: 255999 '<unused99>' is not marked as EOG load: control token: 45 '<unused38>' is not marked as EOG load: control token: 74 '<unused67>' is not marked as EOG load: control token: 55 '<unused48>' is not marked as EOG load: control token: 99 '<unused92>' is not marked as EOG load: control token: 102 '<unused95>' is not marked as EOG load: control token: 44 '<unused37>' is not marked as EOG load: control token: 26 '<unused19>' is not marked as EOG load: control token: 42 '<unused35>' is not marked as EOG load: control token: 92 '<unused85>' is not marked as EOG load: control token: 90 '<unused83>' is not marked as EOG load: control token: 88 '<unused81>' is not marked as EOG load: control token: 5 '<2mass>' is not marked as EOG load: control token: 104 '<unused97>' is not marked as EOG load: control token: 68 '<unused61>' is not marked as EOG load: control token: 94 '<unused87>' is not marked as EOG load: control token: 59 '<unused52>' is not marked as EOG load: control token: 2 '<bos>' is not marked as EOG load: control token: 25 '<unused18>' is not marked as EOG load: control token: 93 '<unused86>' is not marked as EOG load: control token: 95 '<unused88>' is not marked as EOG load: control token: 76 '<unused69>' is not marked as EOG load: control token: 97 '<unused90>' is not marked as EOG load: control token: 56 '<unused49>' is not marked as EOG load: control token: 81 '<unused74>' is not marked as EOG load: control token: 13 '<unused6>' is not marked as EOG load: control token: 51 '<unused44>' is not marked as EOG load: control token: 47 '<unused40>' is not marked as EOG load: control token: 8 '<unused1>' is not marked as EOG load: control token: 103 '<unused96>' is not marked as EOG load: control token: 75 '<unused68>' is not marked as EOG load: control token: 79 '<unused72>' is not marked as EOG load: control token: 39 '<unused32>' is not marked as EOG load: control token: 49 '<unused42>' is not marked as EOG load: control token: 41 '<unused34>' is not marked as EOG load: control token: 34 '<unused27>' is not marked as EOG load: control token: 6 '[@BOS@]' is not marked as EOG load: control token: 40 '<unused33>' is not marked as EOG load: control token: 33 '<unused26>' is not marked as EOG load: control token: 86 '<unused79>' is not marked as EOG load: control token: 43 '<unused36>' is not marked as EOG load: control token: 35 '<unused28>' is not marked as EOG load: control token: 32 '<unused25>' is not marked as EOG load: control token: 28 '<unused21>' is not marked as EOG load: control token: 19 '<unused12>' is not marked as EOG load: control token: 67 '<unused60>' is not marked as EOG load: control token: 9 '<unused2>' is not marked as EOG load: control token: 52 '<unused45>' is not marked as EOG load: control token: 16 '<unused9>' is not marked as EOG load: control token: 98 '<unused91>' is not marked as EOG load: control token: 80 '<unused73>' is not marked as EOG load: control token: 71 '<unused64>' is not marked as EOG load: control token: 36 '<unused29>' is not marked as EOG load: control token: 0 '<pad>' is not marked as EOG load: control token: 11 '<unused4>' is not marked as EOG load: control token: 70 '<unused63>' is not marked as EOG load: control token: 77 '<unused70>' is not marked as EOG load: control token: 64 '<unused57>' is not marked as EOG load: control token: 50 '<unused43>' is not marked as EOG load: control token: 20 '<unused13>' is not marked as EOG load: control token: 73 '<unused66>' is not marked as EOG load: 
control token: 23 '<unused16>' is not marked as EOG load: control token: 38 '<unused31>' is not marked as EOG load: control token: 21 '<unused14>' is not marked as EOG load: control token: 15 '<unused8>' is not marked as EOG load: control token: 37 '<unused30>' is not marked as EOG load: control token: 14 '<unused7>' is not marked as EOG load: control token: 30 '<unused23>' is not marked as EOG load: control token: 62 '<unused55>' is not marked as EOG load: control token: 3 '<unk>' is not marked as EOG load: control token: 18 '<unused11>' is not marked as EOG load: control token: 22 '<unused15>' is not marked as EOG load: control token: 66 '<unused59>' is not marked as EOG load: control token: 65 '<unused58>' is not marked as EOG load: control token: 10 '<unused3>' is not marked as EOG load: control token: 105 '<unused98>' is not marked as EOG load: control token: 87 '<unused80>' is not marked as EOG load: control token: 100 '<unused93>' is not marked as EOG load: control token: 63 '<unused56>' is not marked as EOG load: control token: 31 '<unused24>' is not marked as EOG load: control token: 58 '<unused51>' is not marked as EOG load: control token: 84 '<unused77>' is not marked as EOG load: control token: 61 '<unused54>' is not marked as EOG load: control token: 1 '<eos>' is not marked as EOG load: control token: 60 '<unused53>' is not marked as EOG load: control token: 91 '<unused84>' is not marked as EOG load: control token: 83 '<unused76>' is not marked as EOG load: control token: 85 '<unused78>' is not marked as EOG load: control token: 27 '<unused20>' is not marked as EOG load: control token: 96 '<unused89>' is not marked as EOG load: control token: 72 '<unused65>' is not marked as EOG load: control token: 53 '<unused46>' is not marked as EOG load: control token: 82 '<unused75>' is not marked as EOG load: control token: 7 '<unused0>' is not marked as EOG load: control token: 4 '<mask>' is not marked as EOG load: control token: 101 '<unused94>' is not marked as EOG load: control token: 78 '<unused71>' is not marked as EOG load: control token: 89 '<unused82>' is not marked as EOG load: control token: 69 '<unused62>' is not marked as EOG load: control token: 54 '<unused47>' is not marked as EOG load: control token: 57 '<unused50>' is not marked as EOG load: control token: 12 '<unused5>' is not marked as EOG load: control token: 48 '<unused41>' is not marked as EOG load: control token: 17 '<unused10>' is not marked as EOG load: control token: 24 '<unused17>' is not marked as EOG load: control token: 46 '<unused39>' is not marked as EOG load: control token: 29 '<unused22>' is not marked as EOG load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect load: special tokens cache size = 249 load: token to piece cache size = 1.6014 MB print_info: arch = gemma2 print_info: vocab_only = 0 print_info: n_ctx_train = 8192 print_info: n_embd = 2304 print_info: n_layer = 26 print_info: n_head = 8 print_info: n_head_kv = 4 print_info: n_rot = 256 print_info: n_swa = 4096 print_info: n_swa_pattern = 2 print_info: n_embd_head_k = 256 print_info: n_embd_head_v = 256 print_info: n_gqa = 2 print_info: n_embd_k_gqa = 1024 print_info: n_embd_v_gqa = 1024 print_info: f_norm_eps = 0.0e+00 print_info: f_norm_rms_eps = 1.0e-06 print_info: f_clamp_kqv = 0.0e+00 print_info: f_max_alibi_bias = 0.0e+00 print_info: f_logit_scale = 0.0e+00 print_info: f_attn_scale = 0.0e+00 print_info: n_ff = 9216 print_info: n_expert = 0 print_info: n_expert_used = 0 print_info: causal attn = 1 
print_info: pooling type = 0 print_info: rope type = 2 print_info: rope scaling = linear print_info: freq_base_train = 10000.0 print_info: freq_scale_train = 1 print_info: n_ctx_orig_yarn = 8192 print_info: rope_finetuned = unknown print_info: ssm_d_conv = 0 print_info: ssm_d_inner = 0 print_info: ssm_d_state = 0 print_info: ssm_dt_rank = 0 print_info: ssm_dt_b_c_rms = 0 print_info: model type = 2B print_info: model params = 2.61 B print_info: general.name = Gemma 2.0 2b It Transformers print_info: vocab type = SPM print_info: n_vocab = 256000 print_info: n_merges = 0 print_info: BOS token = 2 '<bos>' print_info: EOS token = 1 '<eos>' print_info: EOT token = 107 '<end_of_turn>' print_info: UNK token = 3 '<unk>' print_info: PAD token = 0 '<pad>' print_info: LF token = 227 '<0x0A>' print_info: EOG token = 1 '<eos>' print_info: EOG token = 107 '<end_of_turn>' print_info: max token length = 48 load_tensors: loading model tensors, this can take a while... (mmap = false) load_tensors: layer 0 assigned to device CPU, is_swa = 1 load_tensors: layer 1 assigned to device CPU, is_swa = 0 load_tensors: layer 2 assigned to device CPU, is_swa = 1 load_tensors: layer 3 assigned to device CPU, is_swa = 0 load_tensors: layer 4 assigned to device CPU, is_swa = 1 load_tensors: layer 5 assigned to device CPU, is_swa = 0 load_tensors: layer 6 assigned to device CPU, is_swa = 1 load_tensors: layer 7 assigned to device CPU, is_swa = 0 load_tensors: layer 8 assigned to device CPU, is_swa = 1 load_tensors: layer 9 assigned to device CPU, is_swa = 0 load_tensors: layer 10 assigned to device CPU, is_swa = 1 load_tensors: layer 11 assigned to device CPU, is_swa = 0 load_tensors: layer 12 assigned to device CPU, is_swa = 1 load_tensors: layer 13 assigned to device CPU, is_swa = 0 load_tensors: layer 14 assigned to device CPU, is_swa = 1 load_tensors: layer 15 assigned to device CPU, is_swa = 0 load_tensors: layer 16 assigned to device CPU, is_swa = 1 load_tensors: layer 17 assigned to device CPU, is_swa = 0 load_tensors: layer 18 assigned to device CPU, is_swa = 1 load_tensors: layer 19 assigned to device CPU, is_swa = 0 load_tensors: layer 20 assigned to device CPU, is_swa = 1 load_tensors: layer 21 assigned to device CPU, is_swa = 0 load_tensors: layer 22 assigned to device CPU, is_swa = 1 load_tensors: layer 23 assigned to device CPU, is_swa = 0 load_tensors: layer 24 assigned to device CPU, is_swa = 1 load_tensors: layer 25 assigned to device CPU, is_swa = 0 load_tensors: layer 26 assigned to device CPU, is_swa = 0 load_tensors: CPU model buffer size = 1548.25 MiB load_all_data: no device found for buffer type CPU for async uploads time=2025-05-27T19:05:55.521Z level=DEBUG source=server.go:636 msg="model load progress 0.36" time=2025-05-27T19:05:55.772Z level=DEBUG source=server.go:636 msg="model load progress 0.51" time=2025-05-27T19:05:56.023Z level=DEBUG source=server.go:636 msg="model load progress 0.71" time=2025-05-27T19:05:56.274Z level=DEBUG source=server.go:636 msg="model load progress 0.86" time=2025-05-27T19:05:56.525Z level=DEBUG source=server.go:636 msg="model load progress 0.98" llama_context: constructing llama_context llama_context: n_seq_max = 2 llama_context: n_ctx = 8192 llama_context: n_ctx_per_seq = 4096 llama_context: n_batch = 1024 llama_context: n_ubatch = 512 llama_context: causal_attn = 1 llama_context: flash_attn = 0 llama_context: freq_base = 10000.0 llama_context: freq_scale = 1 llama_context: n_ctx_per_seq (4096) < n_ctx_train (8192) -- the full capacity of the model will not be 
utilized set_abort_callback: call llama_context: CPU output buffer size = 1.97 MiB create_memory: n_ctx = 8192 (padded) llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 26, can_shift = 1, padding = 32 llama_kv_cache_unified: layer 0: dev = CPU llama_kv_cache_unified: layer 1: dev = CPU llama_kv_cache_unified: layer 2: dev = CPU llama_kv_cache_unified: layer 3: dev = CPU llama_kv_cache_unified: layer 4: dev = CPU llama_kv_cache_unified: layer 5: dev = CPU llama_kv_cache_unified: layer 6: dev = CPU llama_kv_cache_unified: layer 7: dev = CPU llama_kv_cache_unified: layer 8: dev = CPU llama_kv_cache_unified: layer 9: dev = CPU llama_kv_cache_unified: layer 10: dev = CPU llama_kv_cache_unified: layer 11: dev = CPU llama_kv_cache_unified: layer 12: dev = CPU llama_kv_cache_unified: layer 13: dev = CPU llama_kv_cache_unified: layer 14: dev = CPU llama_kv_cache_unified: layer 15: dev = CPU llama_kv_cache_unified: layer 16: dev = CPU llama_kv_cache_unified: layer 17: dev = CPU llama_kv_cache_unified: layer 18: dev = CPU llama_kv_cache_unified: layer 19: dev = CPU llama_kv_cache_unified: layer 20: dev = CPU llama_kv_cache_unified: layer 21: dev = CPU llama_kv_cache_unified: layer 22: dev = CPU llama_kv_cache_unified: layer 23: dev = CPU llama_kv_cache_unified: layer 24: dev = CPU llama_kv_cache_unified: layer 25: dev = CPU time=2025-05-27T19:05:56.778Z level=DEBUG source=server.go:636 msg="model load progress 1.00" llama_kv_cache_unified: CPU KV buffer size = 832.00 MiB llama_kv_cache_unified: KV self size = 832.00 MiB, K (f16): 416.00 MiB, V (f16): 416.00 MiB llama_context: enumerating backends llama_context: backend_ptrs.size() = 1 llama_context: max_nodes = 65536 llama_context: worst-case: n_tokens = 512, n_seqs = 1, n_outputs = 0 llama_context: reserving graph for n_tokens = 512, n_seqs = 1 llama_context: reserving graph for n_tokens = 1, n_seqs = 1 llama_context: reserving graph for n_tokens = 512, n_seqs = 1 llama_context: CPU compute buffer size = 504.50 MiB llama_context: graph nodes = 1102 llama_context: graph splits = 1 time=2025-05-27T19:05:57.028Z level=INFO source=server.go:630 msg="llama runner started in 2.51 seconds" time=2025-05-27T19:05:57.030Z level=DEBUG source=sched.go:495 msg="finished setting up" runner.name=registry.ollama.ai/library/gemma2:2b runner.inference=cpu runner.devices=1 runner.size="2.9 GiB" runner.vram="0 B" runner.parallel=2 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-7462734796d67c40ecec2ca98eddf970e171dbb6b370e43fd633ee75b69abe1b runner.num_ctx=8192 [GIN] 2025/05/27 - 19:05:57 | 200 | 3.176625538s | 127.0.0.1 | POST "/api/generate" time=2025-05-27T19:05:57.031Z level=DEBUG source=sched.go:503 msg="context for request finished" time=2025-05-27T19:05:57.031Z level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma2:2b runner.inference=cpu runner.devices=1 runner.size="2.9 GiB" runner.vram="0 B" runner.parallel=2 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-7462734796d67c40ecec2ca98eddf970e171dbb6b370e43fd633ee75b69abe1b runner.num_ctx=8192 duration=2562047h47m16.854775807s time=2025-05-27T19:05:57.031Z level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma2:2b runner.inference=cpu runner.devices=1 runner.size="2.9 GiB" runner.vram="0 B" runner.parallel=2 runner.pid=59 
runner.model=/root/.ollama/models/blobs/sha256-7462734796d67c40ecec2ca98eddf970e171dbb6b370e43fd633ee75b69abe1b runner.num_ctx=8192 refCount=0 [GIN] 2025/05/27 - 19:06:27 | 200 | 125.443µs | 10.42.0.1 | GET "/" [GIN] 2025/05/27 - 19:06:32 | 200 | 52.162µs | 10.42.0.1 | GET "/" [GIN] 2025/05/27 - 19:06:37 | 200 | 52.994µs | 10.42.0.1 | GET "/" ```

@z0rb commented on GitHub (Jun 13, 2025):

I guess @dusty-nv is doing something right here: https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/ollama
Docker Hub: https://hub.docker.com/r/dustynv/ollama

His container is still on Ollama 0.6.8, but that is well past the 0.5.7 referenced here. Would it be possible to combine the two approaches so we get Ollama 0.9 on the NVIDIA Orin? In my case it is an NVIDIA AGX Orin.


@driversti commented on GitHub (Jun 13, 2025):

@z0rb, in the meantime, you can build your own image as described above. Here is my current build: `driversti/ollama:0.9.0-12-gfc03096-dirty`. I use an Orin Nano 8GB.


@z0rb commented on GitHub (Jun 15, 2025):

@driversti Thanks for the suggestion. I went with upgrading the dustynv/ollama image instead with:

```
# Start from dusty-nv's JetPack r36.4 image, which ships working CUDA support
FROM dustynv/ollama:0.6.8-r36.4
# Pin the release the install script should download
ENV OLLAMA_VERSION=0.9.0
# Overlay the newer Ollama on top of the existing install
RUN curl -fsSL https://ollama.com/install.sh | sh
```

This leaves both Ollama versions in the container layers. It's OK for me for now; I hope the default Ollama image gets fixed soon.
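For anyone copying this recipe, here is a minimal sketch of building and running the resulting image on a Jetson; the image tag and the `~/.ollama` volume mount are my own assumptions for illustration, not part of the original recipe:

```
# Build the overlay image from the Dockerfile above (the tag is an assumption)
docker build -t ollama-jetson:0.9.0 .
# Run with the NVIDIA runtime so the GPU is visible inside the container;
# the volume keeps pulled models across container restarts
docker run -d --runtime=nvidia --network=host \
    -v ~/.ollama:/root/.ollama \
    ollama-jetson:0.9.0
```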


@regmibijay commented on GitHub (Jun 15, 2025):

@z0rb

> This leaves both Ollama versions in the container layers. It's OK for me for now; I hope the default Ollama image gets fixed soon.

You can use the base PyTorch image from dusty instead of the ollama image to trim down the size, and it works perfectly fine.

All these workarounds are a hassle at the moment; I'd love a proper solution here too.
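For illustration, a sketch of what that could look like; the `dustynv/pytorch` tag below is an assumption, so pick one on Docker Hub that matches your JetPack (L4T) release:

```
# Sketch only: the base tag is assumed; choose one matching your JetPack version
FROM dustynv/pytorch:2.1-r36.2.0
ENV OLLAMA_VERSION=0.9.0
RUN curl -fsSL https://ollama.com/install.sh | sh
```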


@shimen commented on GitHub (Sep 30, 2025):

Here is how I build any version of Ollama with jetson-containers (**adjust the params for your setup**):

## Install the container tools

```
git clone https://github.com/dusty-nv/jetson-containers
bash jetson-containers/install.sh
```

## Build ollama once

```
# Resolve the latest Ollama release tag (e.g. "0.9.0") via the GitHub API
ollama_tag="$(curl -s https://api.github.com/repos/ollama/ollama/releases/latest | jq -r '.tag_name' | sed 's/^v//')"
sudo jetson-containers build --build-flags="--no-cache" --name=ollama:$ollama_tag ollama
# Tag the freshly built image as a reusable base image for later rebuilds
docker tag ollama:${ollama_tag}-r36.4.3-cuda ollama:r36.4.3-cuda
```
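The `ollama_tag` line just asks the GitHub API for the latest release tag and strips the leading `v`; you can sanity-check what it resolves to on its own:

```
curl -s https://api.github.com/repos/ollama/ollama/releases/latest | jq -r '.tag_name'
# prints e.g. "v0.9.0" (example output; depends on the latest release)
```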

## Rebuild with base image

```
ollama_tag="$(curl -s https://api.github.com/repos/ollama/ollama/releases/latest | jq -r '.tag_name' | sed 's/^v//')"
# Set OLLAMA_BRANCH to the release you actually want to build
# (v0.5.14 here; note it need not match $ollama_tag)
DOCKER_BUILDKIT=0 docker build --network=host --tag ollama:${ollama_tag}-r36.4.3 \
    --file ~/jetson-containers/packages/llm/ollama/Dockerfile \
    --build-arg BASE_IMAGE=ollama:r36.4.3-cuda \
    --build-arg OLLAMA_REPO="ollama/ollama" \
    --build-arg OLLAMA_BRANCH="v0.5.14" \
    --build-arg GOLANG_VERSION="1.22.8" \
    --build-arg CMAKE_VERSION="3.22.1" \
    --build-arg JETPACK_VERSION="6.2" \
    --build-arg CUDA_VERSION_MAJOR="12" \
    --build-arg CMAKE_CUDA_ARCHITECTURES="87" \
    --no-cache \
    ~/jetson-containers/packages/llm/ollama

echo "-- Testing container ollama:${ollama_tag}-r36.4.3"

# Smoke-test the built image with the jetson-containers CUDA test script
docker run -t --rm --runtime=nvidia --network=host \
    --volume ~/jetson-containers/packages/cuda/cuda:/test \
    --volume ~/jetson-containers/data:/data \
    --workdir /test \
    ollama:${ollama_tag}-r36.4.3 \
    /bin/bash -c '/bin/bash test.sh'
```
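Once the test passes, the image can presumably be run as a long-lived service along the same lines as the test invocation, assuming the image's default entrypoint starts `ollama serve` (the container name and data mount below mirror the test command but are otherwise my own choices):

```
docker run -d --name ollama --runtime=nvidia --network=host \
    --volume ~/jetson-containers/data:/data \
    ollama:${ollama_tag}-r36.4.3
```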

@paul-eroomgroup commented on GitHub (Nov 11, 2025):

I was having a problem with the GPU not being used. I didn't have the GLIBC error; however, anything I built would not use the GPU.

I was getting errors like: `"ollama skipping available library at users request" requested=cuda_ libDir=/usr/local/lib/ollama/cuda_jetpack6`

It turned out to be the `start_ollama` file.

It was changed in this commit: https://github.com/dusty-nv/jetson-containers/commit/e18584dfa62d57a031193b4206734dc7014bc502#diff-5a64c537e51455b1759637853fc9f06504692dad0f1a184e4a1396a6729401dc

By editing `start_ollama` and removing the `OLLAMA_LLM_LIBRARY` reference, I was able to rebuild, and the GPU was used in the container.

The following issue led me to the fix: https://github.com/ollama/ollama/issues/12797
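To illustrate the kind of edit meant here (the actual contents of `start_ollama` are not shown in this thread, so the "before" line is an assumption based on the log message above):

```
# start_ollama, before (sketch): pins the runner library, which newer
# Ollama builds refuse with "skipping available library at users request"
export OLLAMA_LLM_LIBRARY=cuda_jetpack6   # assumed value; this is the line to remove
ollama serve

# start_ollama, after: with no OLLAMA_LLM_LIBRARY set, Ollama discovers
# the CUDA library on its own
ollama serve
```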

Reference: github-starred/ollama#6191