[GH-ISSUE #13618] Qwen3VL issue on AMD GPU / ROCm #8961

Closed
opened 2026-04-12 21:47:37 -05:00 by GiteaMirror · 1 comment

Originally created by @FR-Mister-T on GitHub (Jan 4, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13618

What is the issue?

I'm running Ubuntu 24.04 / Python 3.13.11 / the latest Ollama, with ROCm 7.0.2 (the most stable for my setup currently).

There is a bug in Ollama's Qwen3VL vision model implementation: the Conv3D layer dereferences a nil pointer when processing the vision model components.
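
For context, this panic pattern is the classic Go nil-receiver dereference: calling a method on a nil struct pointer succeeds until the method body touches a field of the receiver. A minimal sketch of the failure mode (the `Conv3D`/`Forward` names follow the stack trace below, but the field layout is an assumption for illustration, not Ollama's actual code):

```go
package main

import "fmt"

// Hypothetical stand-ins: Ollama's real Conv3D lives in ml/nn; the field
// layout here is an assumption for illustration only.
type Tensor struct{ name string }

type Conv3D struct {
	Weight *Tensor // populated from the GGUF; nil if the tensor never loads
}

// Go lets you call a method on a nil *Conv3D; the panic only happens when
// the body dereferences the receiver, which matches the 0x0 receiver seen
// in the stack trace below.
func (m *Conv3D) Forward(x *Tensor) *Tensor {
	fmt.Println("applying", m.Weight.name) // nil dereference when m == nil
	return x
}

func main() {
	var patchEmbed *Conv3D // simulates a vision layer that was never loaded
	patchEmbed.Forward(&Tensor{name: "pixel_values"})
	// panic: runtime error: invalid memory address or nil pointer dereference
}
```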

I tested both directly in the terminal and through the ComfyUI OllamaConnectivityV2 node, with the same issue.
Trying to remake the Modelfile with the mmproj also failed.

The model is available here: https://huggingface.co/Phr00t/Qwen3-VL-32B-Instruct-heretic-v2-iQ5KS-GGUF

Relevant log output

zeuss194@zeuss194-ThinkStation-P520:~$ journalctl -u ollama -n 50 --no-pager
Jan 04 06:02:35 zeuss194-ThinkStation-P520 ollama[352603]: llama_context: constructing llama_context
Jan 04 06:02:35 zeuss194-ThinkStation-P520 ollama[352603]: llama_context: n_seq_max     = 1
Jan 04 06:02:35 zeuss194-ThinkStation-P520 ollama[352603]: llama_context: n_ctx         = 4096
Jan 04 06:02:35 zeuss194-ThinkStation-P520 ollama[352603]: llama_context: n_ctx_seq     = 4096
Jan 04 06:02:35 zeuss194-ThinkStation-P520 ollama[352603]: llama_context: n_batch       = 512
Jan 04 06:02:35 zeuss194-ThinkStation-P520 ollama[352603]: llama_context: n_ubatch      = 512
Jan 04 06:02:35 zeuss194-ThinkStation-P520 ollama[352603]: llama_context: causal_attn   = 1
Jan 04 06:02:35 zeuss194-ThinkStation-P520 ollama[352603]: llama_context: flash_attn    = auto
Jan 04 06:02:35 zeuss194-ThinkStation-P520 ollama[352603]: llama_context: kv_unified    = false
Jan 04 06:02:35 zeuss194-ThinkStation-P520 ollama[352603]: llama_context: freq_base     = 100000000.0
Jan 04 06:02:35 zeuss194-ThinkStation-P520 ollama[352603]: llama_context: freq_scale    = 1
Jan 04 06:02:35 zeuss194-ThinkStation-P520 ollama[352603]: llama_context: n_ctx_seq (4096) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
Jan 04 06:02:36 zeuss194-ThinkStation-P520 ollama[352603]: llama_context:  ROCm_Host  output buffer size =     0.52 MiB
Jan 04 06:02:36 zeuss194-ThinkStation-P520 ollama[352603]: llama_kv_cache:      ROCm0 KV buffer size =   640.00 MiB
Jan 04 06:02:36 zeuss194-ThinkStation-P520 ollama[352603]: llama_kv_cache: size =  640.00 MiB (  4096 cells,  40 layers,  1/1 seqs), K (f16):  320.00 MiB, V (f16):  320.00 MiB
Jan 04 06:02:36 zeuss194-ThinkStation-P520 ollama[352603]: llama_context: Flash Attention was auto, set to enabled
Jan 04 06:02:36 zeuss194-ThinkStation-P520 ollama[352603]: llama_context:      ROCm0 compute buffer size =   266.00 MiB
Jan 04 06:02:36 zeuss194-ThinkStation-P520 ollama[352603]: llama_context:  ROCm_Host compute buffer size =    18.01 MiB
Jan 04 06:02:36 zeuss194-ThinkStation-P520 ollama[352603]: llama_context: graph nodes  = 1247
Jan 04 06:02:36 zeuss194-ThinkStation-P520 ollama[352603]: llama_context: graph splits = 2
Jan 04 06:02:36 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:02:36.449+01:00 level=INFO source=server.go:1376 msg="llama runner started in 6.05 seconds"
Jan 04 06:02:36 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:02:36.449+01:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
Jan 04 06:02:36 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:02:36.449+01:00 level=INFO source=server.go:1338 msg="waiting for llama runner to start responding"
Jan 04 06:02:36 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:02:36.450+01:00 level=INFO source=server.go:1376 msg="llama runner started in 6.05 seconds"
Jan 04 06:02:36 zeuss194-ThinkStation-P520 ollama[352603]: check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
Jan 04 06:03:47 zeuss194-ThinkStation-P520 ollama[352603]: [GIN] 2026/01/04 - 06:03:47 | 200 |         1m18s |       127.0.0.1 | POST     "/api/chat"
Jan 04 06:08:47 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:08:47.058+01:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 35401"
Jan 04 06:08:48 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:08:48.881+01:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 34445"
Jan 04 06:12:10 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:12:10.557+01:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 41917"
Jan 04 06:12:12 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:12:12.195+01:00 level=INFO source=server.go:245 msg="enabling flash attention"
Jan 04 06:12:12 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:12:12.195+01:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-7e8c3957c3ed7475b652ae1c66848fb98265e3c1c31d127c927eb99532e0bdfa --port 38935"
Jan 04 06:12:12 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:12:12.196+01:00 level=INFO source=sched.go:443 msg="system memory" total="62.4 GiB" free="54.8 GiB" free_swap="5.6 GiB"
Jan 04 06:12:12 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:12:12.196+01:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-5c5383ed20b3fc7d library=ROCm available="30.9 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
Jan 04 06:12:12 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:12:12.196+01:00 level=INFO source=server.go:746 msg="loading model" "model layers"=65 requested=-1
Jan 04 06:12:12 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:12:12.212+01:00 level=INFO source=runner.go:1405 msg="starting ollama engine"
Jan 04 06:12:12 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:12:12.212+01:00 level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:38935"
Jan 04 06:12:12 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:12:12.218+01:00 level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:8192 KvCacheType: NumThreads:8 GPULayers:65[ID:GPU-5c5383ed20b3fc7d Layers:65(0..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Jan 04 06:12:12 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:12:12.255+01:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3vl file_type=Q5_K_S name="Q3 Vl H2" description="" num_tensors=707 num_key_values=39
Jan 04 06:12:12 zeuss194-ThinkStation-P520 ollama[352603]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-skylakex.so
Jan 04 06:12:13 zeuss194-ThinkStation-P520 ollama[352603]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Jan 04 06:12:13 zeuss194-ThinkStation-P520 ollama[352603]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Jan 04 06:12:13 zeuss194-ThinkStation-P520 ollama[352603]: ggml_cuda_init: found 1 ROCm devices:
Jan 04 06:12:13 zeuss194-ThinkStation-P520 ollama[352603]:   Device 0: AMD Radeon AI PRO R9700, gfx1201 (0x1201), VMM: no, Wave Size: 32, ID: GPU-5c5383ed20b3fc7d
Jan 04 06:12:13 zeuss194-ThinkStation-P520 ollama[352603]: load_backend: loaded ROCm backend from /usr/local/lib/ollama/rocm/libggml-hip.so
Jan 04 06:12:13 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:12:13.798+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Jan 04 06:12:14 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:12:14.197+01:00 level=INFO source=server.go:3634 msg="http: panic serving 127.0.0.1:54742: runtime error: invalid memory address or nil pointer dereference\ngoroutine 13 [running]:\nnet/http.(*conn).serve.func1()\n\tnet/http/server.go:1947 +0xbe\npanic({0x583da72782c0?, 0x583da7c00410?})\n\truntime/panic.go:792 +0x132\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel.func1()\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1187 +0x11a\npanic({0x583da72782c0?, 0x583da7c00410?})\n\truntime/panic.go:792 +0x132\ngithub.com/ollama/ollama/ml/nn.(*Conv3D).Forward(0x0, {0x583da73f4250, 0xc000f57d80}, {0x583da73feb20?, 0xc000f68048?}, 0x10?, 0xc000600808?, 0xc000667840?, 0xc000047190?, 0x0, ...)\n\tgithub.com/ollama/ollama/ml/nn/convolution.go:25 +0x3a\ngithub.com/ollama/ollama/model/models/qwen3vl.(*VisionModel).Forward(0xc00013c0c0, {0x583da73f4250, 0xc000f57d80}, {0x583da73feb20, 0xc000f68030}, 0xc000f40000)\n\tgithub.com/ollama/ollama/model/models/qwen3vl/model_vision.go:224 +0x118\ngithub.com/ollama/ollama/model/models/qwen3vl.(*Model).EncodeMultimodal(0xc000d7e270, {0x583da73f4250, 0xc000f57d80}, {0xc001cc8000, 0x400436, 0x700000})\n\tgithub.com/ollama/ollama/model/models/qwen3vl/model.go:43 +0x14e\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).reserveWorstCaseGraph(0xc000236f00, 0x1)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1098 +0x34e\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel(0xc000236f00, {0x7fffc5285d04?, 0x583da612841a?}, {0x0, 0x8, {0xc000714140, 0x1, 0x1}, 0x1}, {0x0, ...}, ...)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1226 +0x391\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).load(0xc000236f00, {0x583da73e6fa0, 0xc00078e000}, 0xc000788000)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1305 +0x54b\nnet/http.HandlerFunc.ServeHTTP(0xc00013d5c0?, {0x583da73e6fa0?, 0xc00078e000?}, 0xc000555b60?)\n\tnet/http/server.go:2294 +0x29\nnet/http.(*ServeMux).ServeHTTP(0x583da5dd88c5?, {0x583da73e6fa0, 0xc00078e000}, 0xc000788000)\n\tnet/http/server.go:2822 +0x1c4\nnet/http.serverHandler.ServeHTTP({0x583da73e3590?}, {0x583da73e6fa0?, 0xc00078e000?}, 0x1?)\n\tnet/http/server.go:3301 +0x8e\nnet/http.(*conn).serve(0xc0005203f0, {0x583da73e93d8, 0xc000717140})\n\tnet/http/server.go:2102 +0x625\ncreated by net/http.(*Server).Serve in goroutine 1\n\tnet/http/server.go:3454 +0x485"
Jan 04 06:12:14 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:12:14.198+01:00 level=INFO source=runner.go:1278 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:Disabled KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Jan 04 06:12:14 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:12:14.198+01:00 level=INFO source=sched.go:470 msg="Load failed" model=/usr/share/ollama/.ollama/models/blobs/sha256-7e8c3957c3ed7475b652ae1c66848fb98265e3c1c31d127c927eb99532e0bdfa error="do load request: Post \"http://127.0.0.1:38935/load\": EOF"
Jan 04 06:12:14 zeuss194-ThinkStation-P520 ollama[352603]: time=2026-01-04T06:12:14.240+01:00 level=ERROR source=server.go:302 msg="llama runner terminated" error="signal: killed"
Jan 04 06:12:14 zeuss194-ThinkStation-P520 ollama[352603]: [GIN] 2026/01/04 - 06:12:14 | 500 |  3.787155736s |       127.0.0.1 | POST     "/api/chat"
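
Reading the panic: the trace shows `(*Conv3D).Forward` invoked with a nil receiver (first argument `0x0`) from `qwen3vl.(*VisionModel).Forward` during `reserveWorstCaseGraph`, i.e. the vision tower's patch-embedding layer was never populated when this particular GGUF loaded. A hedged sketch of one possible mitigation, turning the crash into a clean load error (type and field names are assumptions taken from the trace, not Ollama's actual code or fix):

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for the types named in the stack trace; the real
// ones live under ml/nn and model/models/qwen3vl in the Ollama repo.
type Tensor struct{}

type Conv3D struct{}

func (c *Conv3D) Forward(t *Tensor) *Tensor { return t }

type VisionModel struct {
	PatchEmbed *Conv3D // nil when the GGUF is missing the vision tensors
}

// A guard like this would surface a load error instead of letting the
// runner's HTTP handler panic, as happens in the journal above.
func (m *VisionModel) Forward(pixels *Tensor) (*Tensor, error) {
	if m.PatchEmbed == nil {
		return nil, errors.New("qwen3vl: vision patch embedding missing from GGUF (projector tensors not merged?)")
	}
	return m.PatchEmbed.Forward(pixels), nil
}

func main() {
	m := &VisionModel{} // simulates the GGUF from this report
	if _, err := m.Forward(&Tensor{}); err != nil {
		fmt.Println("load failed:", err)
	}
}
```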

OS

Linux

GPU

AMD

CPU

Intel

Ollama version

0.13.5

GiteaMirror added the needs more info and bug labels 2026-04-12 21:47:37 -05:00

@rick-github commented on GitHub (Jan 4, 2026):

Does it work with the original model (https://ollama.com/library/qwen3-vl:32b)?


Reference: github-starred/ollama#8961