[GH-ISSUE #13247] Qwen3-VL models (2B/4B) not utilizing GPU on Jetson Orin Nano Super (JetPack 6.2.1), while Qwen2.5-VL:3B works correctly #8756

Closed
opened 2026-04-12 21:31:28 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @HuaXiong-Liu on GitHub (Nov 26, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13247

What is the issue?

Issue Title: Qwen3-VL models (2B/4B) not utilizing GPU on Jetson Orin Nano Super (JetPack 6.2.1), while Qwen2.5-VL:3B works correctly
Description:
I'm running Ollama v0.13.0 on a Jetson Orin Nano Super with JetPack 6.2.1 (L4T R36.4). When I load the official quantized Qwen3-VL:2B or Qwen3-VL:4B models using:
ollama pull qwen3-vl:2b
ollama run qwen3-vl:2b
the model is entirely loaded onto the CPU, and zero layers are offloaded to the GPU (verified via tegrastats and nvtop). No GPU memory usage or compute activity is observed.
However, when I load Qwen2.5-VL:3B:
ollama pull qwen2.5vl:3b
ollama run qwen2.5vl:3b
it correctly utilizes the GPU: 36 layers are loaded on the GPU and only 1 layer remains on the CPU, as expected.
All models are official quantized versions from Ollama’s library, and I’m using the exact same hardware and software environment for all tests.
Questions:

  1. Why do Qwen3-VL:2B/4B fail to use the GPU on Jetson while Qwen2.5-VL:3B succeeds?
  2. Could this be due to architectural changes in Qwen3-VL (e.g., new operators, attention mechanisms, or vision encoder components) that are not yet supported by Ollama’s GPU offloading backend on ARM64/Jetson?
  3. Are there any known workarounds, environment variables (e.g., OLLAMA_NUM_GPU), or build flags to force GPU layer assignment for Qwen3-VL?
Environment:
• Device: NVIDIA Jetson Orin Nano Super
• OS: Ubuntu 22.04 (via JetPack 6.2.1 / L4T R36.4)
• JetPack version: 6.2.1+b38
• Ollama version: 0.13.0
• Models tested:
  ◦ qwen3-vl:2b → 0 GPU layers
  ◦ qwen3-vl:4b → 0 GPU layers
  ◦ qwen2.5-vl:3b → 36 GPU layers, 1 CPU layer
• GPU monitoring: tegrastats, nvtop (no GPU memory or compute activity observed for Qwen3-VL; see the verification sketch below)

Any insight into whether this is a compatibility issue, a bug, or a missing feature would be greatly appreciated!
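For reference, GPU offload can be verified with commands along these lines (a rough sketch; it assumes the standard systemd-based Linux install of Ollama):

ollama run qwen3-vl:2b "hello"
ollama ps                                 # PROCESSOR column shows the CPU/GPU split
journalctl -u ollama | grep offloaded     # e.g. "offloaded 0/29 layers to GPU"
sudo tegrastats                           # live GPU memory/utilization on Jetson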

Relevant log output

The following is a partial log for qwen2.5-vl:3b:

11月 26 14:07:47 jetson-desktop ollama[12920]: time=2025-11-26T14:07:47.515+08:00 level=INFO source=ggml.go:482 msg="offloading 36 repeating layers to GPU"
11月 26 14:07:47 jetson-desktop ollama[12920]: time=2025-11-26T14:07:47.515+08:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
11月 26 14:07:47 jetson-desktop ollama[12920]: time=2025-11-26T14:07:47.515+08:00 level=INFO source=ggml.go:494 msg="offloaded 36/37 layers to GPU"
11月 26 14:07:47 jetson-desktop ollama[12920]: time=2025-11-26T14:07:47.519+08:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="1.6 GiB"
11月 26 14:07:47 jetson-desktop ollama[12920]: time=2025-11-26T14:07:47.519+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="1.7 GiB"
11月 26 14:07:47 jetson-desktop ollama[12920]: time=2025-11-26T14:07:47.519+08:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="144.0 MiB"
11月 26 14:07:47 jetson-desktop ollama[12920]: time=2025-11-26T14:07:47.520+08:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="1.8 GiB"
11月 26 14:07:47 jetson-desktop ollama[12920]: time=2025-11-26T14:07:47.520+08:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="9.6 MiB"
11月 26 14:07:47 jetson-desktop ollama[12920]: time=2025-11-26T14:07:47.520+08:00 level=INFO source=device.go:272 msg="total memory" size="5.2 GiB"
11月 26 14:07:47 jetson-desktop ollama[12920]: time=2025-11-26T14:07:47.520+08:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
11月 26 14:07:47 jetson-desktop ollama[12920]: time=2025-11-26T14:07:47.520+08:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
11月 26 14:07:47 jetson-desktop ollama[12920]: time=2025-11-26T14:07:47.520+08:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
11月 26 14:07:50 jetson-desktop ollama[12920]: time=2025-11-26T14:07:50.071+08:00 level=INFO source=server.go:1332 msg="llama runner started in 8.08 seconds"

The following is a partial log for qwen3-vl:4b:

11月 26 14:31:34 jetson-desktop ollama[25462]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8.00 MiB on device 0: cudaMalloc failed: out of memory
11月 26 14:31:34 jetson-desktop ollama[25462]: time=2025-11-26T14:31:34.989+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:4[ID:GPU-585d214a-d92a-5d84-87ac-707da88ef760 Layers:4(24..27)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
11月 26 14:31:37 jetson-desktop ollama[25462]: NvMapMemAllocInternalTagged: 1075072515 error 12
11月 26 14:31:37 jetson-desktop ollama[25462]: NvMapMemHandleAlloc: error 0
11月 26 14:31:37 jetson-desktop ollama[25462]: NvMapMemAllocInternalTagged: 1075072515 error 12
11月 26 14:31:37 jetson-desktop ollama[25462]: NvMapMemHandleAlloc: error 0
11月 26 14:31:37 jetson-desktop ollama[25462]: NvMapMemAllocInternalTagged: 1075072515 error 12
11月 26 14:31:37 jetson-desktop ollama[25462]: NvMapMemHandleAlloc: error 0
11月 26 14:31:37 jetson-desktop ollama[25462]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8.00 MiB on device 0: cudaMalloc failed: out of memory
11月 26 14:31:37 jetson-desktop ollama[25462]: time=2025-11-26T14:31:37.970+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:3[ID:GPU-585d214a-d92a-5d84-87ac-707da88ef760 Layers:3(25..27)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
11月 26 14:31:40 jetson-desktop ollama[25462]: NvMapMemAllocInternalTagged: 1075072515 error 12
11月 26 14:31:40 jetson-desktop ollama[25462]: NvMapMemHandleAlloc: error 0
11月 26 14:31:40 jetson-desktop ollama[25462]: NvMapMemAllocInternalTagged: 1075072515 error 12
11月 26 14:31:40 jetson-desktop ollama[25462]: NvMapMemHandleAlloc: error 0
11月 26 14:31:40 jetson-desktop ollama[25462]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8.00 MiB on device 0: cudaMalloc failed: out of memory
11月 26 14:31:40 jetson-desktop ollama[25462]: time=2025-11-26T14:31:40.753+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:2[ID:GPU-585d214a-d92a-5d84-87ac-707da88ef760 Layers:2(26..27)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
11月 26 14:31:42 jetson-desktop ollama[25462]: NvMapMemAllocInternalTagged: 1075072515 error 12
11月 26 14:31:42 jetson-desktop ollama[25462]: NvMapMemHandleAlloc: error 0
11月 26 14:31:42 jetson-desktop ollama[25462]: NvMapMemAllocInternalTagged: 1075072515 error 12
11月 26 14:31:42 jetson-desktop ollama[25462]: NvMapMemHandleAlloc: error 0
11月 26 14:31:42 jetson-desktop ollama[25462]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8.00 MiB on device 0: cudaMalloc failed: out of memory
11月 26 14:31:43 jetson-desktop ollama[25462]: time=2025-11-26T14:31:43.018+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:1[ID:GPU-585d214a-d92a-5d84-87ac-707da88ef760 Layers:1(27..27)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
11月 26 14:31:45 jetson-desktop ollama[25462]: NvMapMemAllocInternalTagged: 1075072515 error 12
11月 26 14:31:45 jetson-desktop ollama[25462]: NvMapMemHandleAlloc: error 0
11月 26 14:31:45 jetson-desktop ollama[25462]: NvMapMemAllocInternalTagged: 1075072515 error 12
11月 26 14:31:45 jetson-desktop ollama[25462]: NvMapMemHandleAlloc: error 0
11月 26 14:31:45 jetson-desktop ollama[25462]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8.00 MiB on device 0: cudaMalloc failed: out of memory
11月 26 14:31:45 jetson-desktop ollama[25462]: time=2025-11-26T14:31:45.492+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
11月 26 14:31:46 jetson-desktop ollama[25462]: time=2025-11-26T14:31:46.409+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
11月 26 14:31:46 jetson-desktop ollama[25462]: time=2025-11-26T14:31:46.411+08:00 level=INFO source=ggml.go:482 msg="offloading 0 repeating layers to GPU"
11月 26 14:31:46 jetson-desktop ollama[25462]: time=2025-11-26T14:31:46.413+08:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
11月 26 14:31:46 jetson-desktop ollama[25462]: time=2025-11-26T14:31:46.413+08:00 level=INFO source=ggml.go:494 msg="offloaded 0/29 layers to GPU"
11月 26 14:31:46 jetson-desktop ollama[25462]: time=2025-11-26T14:31:46.412+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="2.0 GiB"
11月 26 14:31:46 jetson-desktop ollama[25462]: time=2025-11-26T14:31:46.413+08:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="448.0 MiB"
11月 26 14:31:46 jetson-desktop ollama[25462]: time=2025-11-26T14:31:46.413+08:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="4.2 GiB"
11月 26 14:31:46 jetson-desktop ollama[25462]: time=2025-11-26T14:31:46.413+08:00 level=INFO source=device.go:272 msg="total memory" size="6.6 GiB"
11月 26 14:31:46 jetson-desktop ollama[25462]: time=2025-11-26T14:31:46.415+08:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
11月 26 14:31:46 jetson-desktop ollama[25462]: time=2025-11-26T14:31:46.416+08:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
11月 26 14:31:46 jetson-desktop ollama[25462]: time=2025-11-26T14:31:46.423+08:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
11月 26 14:31:47 jetson-desktop ollama[25462]: time=2025-11-26T14:31:47.686+08:00 level=INFO source=server.go:1332 msg="llama runner started in 143.32 seconds"
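If more detail on the failed allocations is needed, verbose logging can be enabled; this is a sketch assuming the systemd service installed by the official Linux installer:

sudo systemctl edit ollama
# add under [Service]:
#   Environment="OLLAMA_DEBUG=1"
sudo systemctl restart ollama
journalctl -u ollama -f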

OS

No response

GPU

Nvidia

CPU

No response

Ollama version

0.13.0

GiteaMirror added the bug label 2026-04-12 21:31:28 -05:00
Author
Owner

@HuaXiong-Liu commented on GitHub (Nov 26, 2025):

I'm running Ollama v0.13.0 on a Jetson Orin Nano Super with JetPack 6.2.1 (L4T R36.4). I observe a stark difference in GPU utilization between Qwen-VL model versions:
• ✅ qwen2.5-vl:3b: Successfully offloads 36/37 layers to GPU (CUDA0), uses ~1.6 GiB GPU memory for weights.
• ❌ qwen3-vl:2b: Fails to offload any layers; ends up running 0/29 layers on GPU, all on CPU.
🔍 Key Log Evidence
For Qwen2.5-VL:3b, logs show successful GPU offloading:
offloading 36 repeating layers to GPU
offloaded 36/37 layers to GPU
model weights device=CUDA0 size="1.6 GiB"
But for Qwen3-VL:2b, the logs reveal repeated CUDA out-of-memory errors during layer allocation attempts:
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8.00 MiB on device 0: cudaMalloc failed: out of memory
NvMapMemAllocInternalTagged: 1075072515 error 12
Ollama appears to auto-retry with fewer GPU layers (29 →…→ 2 → 1 → 0), and ultimately gives up:
offloading 0 repeating layers to GPU
offloaded 0/29 layers to GPU
model weights device=CPU size="2.0 GiB"
compute graph device=CPU size="4.2 GiB"
This suggests that Qwen3-VL’s architecture requires more GPU memory per layer (possibly due to changes in vision encoder, attention mechanism, or intermediate tensor sizes), exceeding the available GPU memory budget during initialization—even though the total model is smaller (2B vs 3B).
💡 Note: The Jetson Orin Nano Super has 8 GB shared CPU/GPU memory, but GPU allocations are limited by carveout/reserved memory (typically ~6–7 GB usable). However, Qwen2.5-VL:3b works fine within this limit.
📌 Questions

  1. Has Qwen3-VL introduced architectural changes (e.g., larger context, multimodal projector, FlashAttention v2, etc.) that increase peak GPU memory demand during model load—even for quantized versions?
  2. Is there a way to reduce the initial GPU buffer size, or to enable memory pooling / fragmentation mitigation, on Jetson devices? (A context-length sketch follows after this list.)
  3. Could Ollama expose a flag like --gpu-memory-limit or --force-gpu-layers=N to allow manual control when auto-detection fails?
  4. Are the official qwen3-vl:2b/4b models built with CUDA ops that are less memory-efficient on ARM64/Jetson compared to x86_64?
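One knob worth noting for question 2, as a sketch only: the failing qwen3-vl load above runs with KvSize:4096 and a 4.2 GiB compute graph, and lowering num_ctx shrinks the KV cache (and may shrink some graph buffers). Whether that frees enough memory for layers to fit on this device is not confirmed here; the model name qwen3-vl-4b-2k below is just an example:

ollama run qwen3-vl:4b
>>> /set parameter num_ctx 2048

Or baked into a derived model via a Modelfile:

FROM qwen3-vl:4b
PARAMETER num_ctx 2048

ollama create qwen3-vl-4b-2k -f Modelfile
ollama run qwen3-vl-4b-2k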
Author
Owner

@rick-github commented on GitHub (Nov 26, 2025):

compute graph device=CPU size="4.2 GiB"

The graph for qwen3-vl is larger. A device must be able to hold a copy of the graph, at least one layer, and ancillary data structures before it can be used for inference. On Linux, layers can be forced onto the GPU by setting GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 in the server environment and specifying the number of layers to load with num_gpu (https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650). This will result in layers spilling to system RAM, with the GPU doing inference instead of the CPU, which can cause performance issues (https://github.com/ollama/ollama/issues/7584#issuecomment-2466715900).
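A minimal sketch of the workaround above, assuming the systemd-managed Linux install; 29 matches the "0/29 layers" reported for qwen3-vl:4b and should be adjusted as needed:

sudo systemctl edit ollama
# add under [Service]:
#   Environment="GGML_CUDA_ENABLE_UNIFIED_MEMORY=1"
sudo systemctl restart ollama

ollama run qwen3-vl:4b
>>> /set parameter num_gpu 29

The same option can also be sent per request, e.g. "options": {"num_gpu": 29} in an /api/generate call. As noted above, layers may then spill into system RAM over unified memory, so generation can end up slower rather than faster.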

Author
Owner

@Calcifer97 commented on GitHub (Nov 28, 2025):

Exactly the same problem here; I was even planning to reflash my Jetson. Has anyone tried this?

Reference: github-starred/ollama#8756