[GH-ISSUE #13794] ROCm: Qwen3-VL / Vision models crash with nil Conv3D pointer (RX 6800M, gfx1030) #71096

Open
opened 2026-05-05 00:17:12 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @resynth on GitHub (Jan 20, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13794

Ollama consistently crashes with a null pointer dereference panic when attempting to load the Qwen3-VL vision model hf.co/mradermacher/Huihui-Qwen3-VL-30B-A3B-Thinking-abliterated-i1-GGUF:Q5_K_M

The same issue also occurs with the Mistral Small 2 24b with VL:
hf.co/unsloth/Devstral-Small-2-24B-Instruct-2512-GGUF:Q5_K_M

The panic occurs in convolution.go:25 when calling (*Conv3D).Forward(0x0, ...). The 0x0 receiver indicates the Conv3D object is nil/uninitialized. The crash happens during the vision model's forward pass in qwen3vl/model_vision.go:224, suggesting the Conv3D layer was never initialized when the vision model was constructed.

Possible Fixes

  • Initialize all Conv3D layers during model construction, or
  • Add nil checks before calling Forward() to provide a more helpful error message
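The second option could look roughly like the sketch below. This is a minimal, self-contained illustration: Conv3D, VisionModel, and the Forward signatures here are simplified stand-ins for the real types in ml/nn and model/models/qwen3vl, not the actual API.

```go
package main

import (
	"errors"
	"fmt"
)

// Conv3D stands in for ollama's ml/nn.Conv3D; the real struct holds weight tensors.
type Conv3D struct{}

// Forward stands in for the real convolution; it just passes the input through.
func (c *Conv3D) Forward(input []float32) []float32 {
	return input
}

// VisionModel stands in for qwen3vl.VisionModel.
type VisionModel struct {
	PatchEmbedding *Conv3D
}

// Forward guards against a nil layer so a missing vision backbone in the
// weights produces a descriptive error instead of a nil-pointer panic.
func (m *VisionModel) Forward(input []float32) ([]float32, error) {
	if m.PatchEmbedding == nil {
		return nil, errors.New("vision model is missing its patch embedding (Conv3D is nil); the weights may lack the vision backbone")
	}
	return m.PatchEmbedding.Forward(input), nil
}

func main() {
	m := &VisionModel{} // simulate a model whose vision tensors were never loaded
	if _, err := m.Forward([]float32{1, 2, 3}); err != nil {
		fmt.Println("error:", err)
	}
}
```

This keeps the crash path reachable only as a clean error that the server can report, rather than a panic caught deep inside the HTTP handler.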

Software

I have the same problem on an Ubuntu 24.04 machine with Ollama 0.13.5 and ROCm 7 using HIP,
AND on an Ubuntu 25.10 machine with Ollama 0.14.2 and ROCm 5.9 using OpenCL.
Both setups run text-only models very well.
Environment Variables:

HSA_OVERRIDE_GFX_VERSION=10.3.0
OLLAMA_FLASH_ATTENTION=true
OLLAMA_KV_CACHE_TYPE=Q8_0

Hardware

  • Asus ROG Strix AMD Advantage G513QY
  • CPU: AMD Ryzen 9 5900 HX
  • GPU: AMD Radeon RX 6800M
  • VRAM: 12 GB
  • RAM: 32GB

Relevant log output

time=2026-01-20T11:02:42.204Z level=INFO source=server.go:3634 msg="http: panic serving 127.0.0.1:57842: runtime error: invalid memory address or nil pointer dereference
goroutine 15 [running]:
net/http.(*conn).serve.func1()
	net/http/server.go:1947 +0xbe
panic({0x5cdba9faa120?, 0x5cdbaa96e430?})
	runtime/panic.go:792 +0x132
github.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel.func1()
	github.com/ollama/ollama/runner/ollamarunner/runner.go:1187 +0x11a
panic({0x5cdba9faa120?, 0x5cdbaa96e430?})
	runtime/panic.go:792 +0x132
github.com/ollama/ollama/ml/nn.(*Conv3D).Forward(0x0, {0x5cdbaa131110, 0xc000edab40}, {0x5cdbaa13c140?, 0xc000124030?}, 0x10?, 0xc000600008?, 0xc000d22000?, 0xc000049190?, 0x0, ...)
	github.com/ollama/ollama/ml/nn/convolution.go:25 +0x3a
github.com/ollama/ollama/model/models/qwen3vl.(*VisionModel).Forward(0xc00014c0c0, {0x5cdbaa131110, 0xc000edab40}, {0x5cdbaa13c140, 0xc000124018}, 0xc000ea0000)
	github.com/ollama/ollama/model/models/qwen3vl/model_vision.go:224 +0x118
github.com/ollama/ollama/model/models/qwen3vl.(*Model).EncodeMultimodal(0xc0004b9790, {0x5cdbaa131110, 0xc000edab40}, {0xc001c48000, 0x400436, 0x700000})
	github.com/ollama/ollama/model/models/qwen3vl/model.go:43 +0x14e
github.com/ollama/ollama/runner/ollamarunner.(*Server).reserveWorstCaseGraph(0xc000234d20, 0x1)
	github.com/ollama/ollama/runner/ollamarunner/runner.go:1098 +0x34e
github.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel(0xc000234d20, {0x7ffea1e99ccb?, 0x5cdba8df70da?}, {0x0, 0x8, {0xc0002f0080, 0x1, 0x1}, 0x1}, {0x0, ...}, ...)
	github.com/ollama/ollama/runner/ollamarunner/runner.go:1226 +0x391

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.14.2 & 0.13.5

GiteaMirror added the bug label 2026-05-05 00:17:12 -05:00

@FR-Mister-T commented on GitHub (Feb 9, 2026):

I can confirm the same kind of issue on gfx1201.
The specific error: panic: ... github.com/ollama/ollama/ml/nn.(*Conv3D).Forward(0x0, ...)


@urtzai commented on GitHub (Feb 19, 2026):

Same here on Linux Ubuntu 24.04.3 LTS with Ollama 0.16.2:

time=2026-02-19T17:48:02.779+01:00 level=INFO source=server.go:3634 msg="http: panic serving 127.0.0.1:54310: runtime error: invalid memory address or nil pointer dereference
goroutine 24 [running]:
net/http.(*conn).serve.func1()
	net/http/server.go:1947 +0xbe
panic({0x5c9621fb1900?, 0x5c9622a4bb80?})
	runtime/panic.go:792 +0x132
github.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel.func1()
	github.com/ollama/ollama/runner/ollamarunner/runner.go:1193 +0x11a
panic({0x5c9621fb1900?, 0x5c9622a4bb80?})
	runtime/panic.go:792 +0x132
github.com/ollama/ollama/ml/nn.(*Conv3D).Forward(0x0, {0x5c962215dbf0, 0xc000eea500}, {0x5c962216ac40?, 0xc000ede018?}, 0x10?, 0xc000095008?, 0xc000ca6000?, 0xc000049190?, 0x0, ...)
	github.com/ollama/ollama/ml/nn/convolution.go:25 +0x3a
github.com/ollama/ollama/model/models/qwen3vl.(*VisionModel).Forward(0xc0004f20c0, {0x5c962215dbf0, 0xc000eea500}, {0x5c962216ac40, 0xc000ede000}, 0xc000e9c000)
	github.com/ollama/ollama/model/models/qwen3vl/model_vision.go:224 +0x118
github.com/ollama/ollama/model/models/qwen3vl.(*Model).EncodeMultimodal(0xc000ca2c30, {0x5c962215dbf0, 0xc000eea500}, {0xc001bc8000, 0x400436, 0x700000})
	github.com/ollama/ollama/model/models/qwen3vl/model.go:44 +0x14e
github.com/ollama/ollama/runner/ollamarunner.(*Server).reserveWorstCaseGraph(0xc0002510e0, 0x1)
	github.com/ollama/ollama/runner/ollamarunner/runner.go:1104 +0x34e
github.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel(0xc0002510e0, {0x7ffdcc91cb3c?, 0x5c9620c29b3a?}, {0x0, 0x6, {0x5c9622c28440, 0x0, 0x0}, 0x1}, {0x0, ...}, ...)
	github.com/ollama/ollama/runner/ollamarunner/runner.go:1232 +0x391
github.com/ollama/ollama/runner/ollamarunner.(*Server).load(0xc0002510e0, {0x5c962214f3c0, 0xc0004db5e0}, 0xc000140500)
	github.com/ollama/ollama/runner/ollamarunner/runner.go:1311 +0x54b
net/http.HandlerFunc.ServeHTTP(0xc0004f2780?, {0x5c962214f3c0?, 0xc0004db5e0?}, 0xc00017db60?)
	net/http/server.go:2294 +0x29
net/http.(*ServeMux).ServeHTTP(0x5c96208d8b25?, {0x5c962214f3c0, 0xc0004db5e0}, 0xc000140500)
	net/http/server.go:2822 +0x1c4
net/http.serverHandler.ServeHTTP({0x5c962214b6f0?}, {0x5c962214f3c0?, 0xc0004db5e0?}, 0x1?)
	net/http/server.go:3301 +0x8e
net/http.(*conn).serve(0xc000142480, {0x5c9622151a48, 0xc00013f350})
	net/http/server.go:2102 +0x625
created by net/http.(*Server).Serve in goroutine 1
	net/http/server.go:3454 +0x485"


@Goekdeniz-Guelmez commented on GitHub (Feb 23, 2026):

Same here on macOS.

Edit:
Looking at the model's tensors, the vision backbone appears to be missing, just like in the issue mentioned above.


@jclab-joseph commented on GitHub (Mar 27, 2026):

The same goes for ollama run MedAIBase/Qwen3-VL-Embedding:2b.


@Goekdeniz-Guelmez commented on GitHub (Mar 27, 2026):

The main cause of my error is that the model doesn't include the vision adapters: when the model type is a vision model, e.g. qwen3.vl, it expects both language and vision parameters. But since some Huihui models don't have the vision adapters in their weights, it throws an error.
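A load-time check along the lines this comment describes could be sketched as follows. The tensor-name prefixes here are illustrative assumptions, not the exact keys Ollama looks up in the weights:

```go
package main

import (
	"fmt"
	"strings"
)

// requiredVisionPrefixes lists tensor-name prefixes a vision checkpoint would
// be expected to carry (illustrative names, not the real GGUF keys).
var requiredVisionPrefixes = []string{"v.patch_embd", "v.blk"}

// missingVisionTensors reports which required prefixes have no matching tensor,
// so a model advertised as vision-capable but shipped without the vision
// backbone can be rejected with a clear message instead of panicking later.
func missingVisionTensors(names []string) []string {
	var missing []string
	for _, prefix := range requiredVisionPrefixes {
		found := false
		for _, n := range names {
			if strings.HasPrefix(n, prefix) {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, prefix)
		}
	}
	return missing
}

func main() {
	// Text-only weights: no vision tensors at all.
	names := []string{"blk.0.attn_q.weight", "output.weight"}
	if m := missingVisionTensors(names); len(m) > 0 {
		fmt.Println("refusing to load: missing vision tensors:", m)
	}
}
```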


Reference: github-starred/ollama#71096