[GH-ISSUE #13150] "runtime error: invalid memory address or nil pointer dereference" with Qwen3 VL #34457

Open
opened 2026-04-22 18:03:37 -05:00 by GiteaMirror · 8 comments

Originally created by @fappaz on GitHub (Nov 19, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13150

What is the issue?

Getting the following error when interacting with Qwen3 Vl 8b Thinking Heretic (https://huggingface.co/Kizzington/Qwen3-VL-8B-Thinking-heretic-GGUF/blob/main/qwen3-vl-8B-thinking-heretic.Q4_K_M.gguf):

http: panic serving 127.0.0.1:56959: runtime error: invalid memory address or nil pointer dereference

The UI shows this error:

500 Internal Server Error: do load request: Post "http://127.0.0.1:57029/load": EOF

Tried restarting ollama and redownloading the model, but to no avail.

I've found similar issues reported on GitHub, but they mostly happened with other model architectures, so I'm not sure if they're the same:

https://github.com/ollama/ollama/issues/11280
https://github.com/ollama/ollama/issues/12426

Relevant log output

`server.log`:


time=2025-11-19T17:40:54.102+13:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\Lab\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2025-11-19T17:40:54.105+13:00 level=INFO source=images.go:522 msg="total blobs: 17"
time=2025-11-19T17:40:54.106+13:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-11-19T17:40:54.108+13:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.12.11)"
time=2025-11-19T17:40:54.109+13:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-19T17:40:54.109+13:00 level=INFO source=runner.go:98 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2025-11-19T17:40:54.119+13:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\Lab\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 56916"
time=2025-11-19T17:40:54.316+13:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\Lab\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 56922"
time=2025-11-19T17:40:54.463+13:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\Lab\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 56928"
time=2025-11-19T17:40:54.561+13:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-37b1c641-5754-e474-ca7d-ad35f2dac825 filter_id="" library=CUDA compute=8.6 name=CUDA0 description="NVIDIA GeForce RTX 3080 Laptop GPU" libdirs=ollama,cuda_v12 driver=12.7 pci_id=0000:01:00.0 type=discrete total="8.0 GiB" available="7.0 GiB"
time=2025-11-19T17:40:54.561+13:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="8.0 GiB" threshold="20.0 GiB"
[GIN] 2025/11/19 - 17:40:54 | 200 |       525.8µs |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/11/19 - 17:40:54 | 200 |       2.653ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/19 - 17:40:55 | 200 |      53.143ms |       127.0.0.1 | POST     "/api/show"
time=2025-11-19T17:41:04.842+13:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\Lab\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 56948"
time=2025-11-19T17:41:05.045+13:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-11-19T17:41:05.045+13:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=8 efficiency=0 threads=16
time=2025-11-19T17:41:05.103+13:00 level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-11-19T17:41:05.107+13:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\Lab\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\Lab\\.ollama\\models\\blobs\\sha256-61eba9f6307e5ea0f81608aae6725c9cb69b719efd60b04e479027485311d06f --port 56953"
time=2025-11-19T17:41:05.109+13:00 level=INFO source=sched.go:443 msg="system memory" total="31.4 GiB" free="17.7 GiB" free_swap="42.4 GiB"
time=2025-11-19T17:41:05.109+13:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-37b1c641-5754-e474-ca7d-ad35f2dac825 library=CUDA available="6.6 GiB" free="7.0 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-11-19T17:41:05.109+13:00 level=INFO source=server.go:702 msg="loading model" "model layers"=37 requested=-1
time=2025-11-19T17:41:05.151+13:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-11-19T17:41:05.152+13:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:56953"
time=2025-11-19T17:41:05.162+13:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:8 GPULayers:37[ID:GPU-37b1c641-5754-e474-ca7d-ad35f2dac825 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-19T17:41:05.186+13:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3vl file_type=Q4_K_M name="Qwen3 Vl 8b Thinking Heretic" description="" num_tensors=399 num_key_values=31
load_backend: loaded CPU backend from C:\Users\Lab\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3080 Laptop GPU, compute capability 8.6, VMM: yes, ID: GPU-37b1c641-5754-e474-ca7d-ad35f2dac825
load_backend: loaded CUDA backend from C:\Users\Lab\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-11-19T17:41:05.311+13:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-11-19T17:41:05.620+13:00 level=INFO source=server.go:3634 msg="http: panic serving 127.0.0.1:56959: runtime error: invalid memory address or nil pointer dereference\ngoroutine 15 [running]:\nnet/http.(*conn).serve.func1()\n\tnet/http/server.go:1947 +0xbe\npanic({0x7ff72ad66220?, 0x7ff72b9f83d0?})\n\truntime/panic.go:792 +0x132\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel.func1()\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1186 +0x11a\npanic({0x7ff72ad66220?, 0x7ff72b9f83d0?})\n\truntime/panic.go:792 +0x132\ngithub.com/ollama/ollama/ml/nn.(*Conv3D).Forward(0x0, {0x7ff72b0ad680, 0xc000e1a580}, {0x7ff72b0b9ee8?, 0xc000e0e048?}, 0x10101f72ba4e000?, 0x1f978441200?, 0x1f972eb0108?, 0x10?, 0x0, ...)\n\tgithub.com/ollama/ollama/ml/nn/convolution.go:25 +0x3a\ngithub.com/ollama/ollama/model/models/qwen3vl.(*VisionModel).Forward(0xc0004680c0, {0x7ff72b0ad680, 0xc000e1a580}, {0x7ff72b0b9ee8, 0xc000e0e030}, 0xc000cbd080)\n\tgithub.com/ollama/ollama/model/models/qwen3vl/model_vision.go:223 +0x118\ngithub.com/ollama/ollama/model/models/qwen3vl.(*Model).EncodeMultimodal(0xc000138680, {0x7ff72b0ad680, 0xc000e1a580}, {0xc001ac2000, 0x400436, 0x700000})\n\tgithub.com/ollama/ollama/model/models/qwen3vl/model.go:43 +0x14e\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).reserveWorstCaseGraph(0xc0002050e0, 0x1)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1097 +0x34e\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel(0xc0002050e0, {0xc0000980e0?, 0x7ff729eca3ba?}, {0x0, 0x8, {0xc00011c080, 0x1, 0x1}, 0x1}, {0x0, ...}, ...)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1219 +0x2b1\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).load(0xc0002050e0, {0x7ff72b09ff90, 0xc000122000}, 0xc0000c2000)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1298 +0x54d\nnet/http.HandlerFunc.ServeHTTP(0xc000469440?, {0x7ff72b09ff90?, 0xc000122000?}, 0xc000047b60?)\n\tnet/http/server.go:2294 +0x29\nnet/http.(*ServeMux).ServeHTTP(0x7ff729b6b785?, {0x7ff72b09ff90, 0xc000122000}, 0xc0000c2000)\n\tnet/http/server.go:2822 +0x1c4\nnet/http.serverHandler.ServeHTTP({0x7ff72b09c510?}, {0x7ff72b09ff90?, 0xc000122000?}, 0x1?)\n\tnet/http/server.go:3301 +0x8e\nnet/http.(*conn).serve(0xc0006aa3f0, {0x7ff72b0a2348, 0xc0006b05d0})\n\tnet/http/server.go:2102 +0x625\ncreated by net/http.(*Server).Serve in goroutine 1\n\tnet/http/server.go:3454 +0x485"
time=2025-11-19T17:41:05.621+13:00 level=INFO source=runner.go:1271 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-19T17:41:05.621+13:00 level=INFO source=sched.go:470 msg="Load failed" model=C:\Users\Lab\.ollama\models\blobs\sha256-61eba9f6307e5ea0f81608aae6725c9cb69b719efd60b04e479027485311d06f error="do load request: Post \"http://127.0.0.1:56953/load\": EOF"
time=2025-11-19T17:41:05.659+13:00 level=ERROR source=server.go:265 msg="llama runner terminated" error="exit status 1"
[GIN] 2025/11/19 - 17:41:05 | 500 |    904.2454ms |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/11/19 - 17:41:09 | 200 |      2.0956ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/19 - 17:41:09 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.12.11

GiteaMirror added the bug label 2026-04-22 18:03:37 -05:00

@coreintelligence commented on GitHub (Nov 23, 2025):

Hi, I can also confirm that Qwen3-VL based models do not load with any quantization (e.g. bf16, q8, etc.) and hit the same issue as reported here.

Examples:

https://huggingface.co/Kizzington/Qwen3-VL-8B-Thinking-heretic-GGUF/blob/main/qwen3-vl-8B-thinking-heretic.Q4_K_M.gguf
https://huggingface.co/collections/coder3101/qwen3-vl-instruct-heretic

However, these Qwen3-VL based models do load and work when running llama.cpp standalone via llama-cli; see the log below for comparison. It's not that the system is running out of memory; it's something to do with how ollama is handling it.

My configuration: Mac Studio - M4 Max, 128GB RAM

Ollama log output (run)

% ollama list
NAME                                                         ID              SIZE      MODIFIED       
Qwen3-VL-32B-Instruct-Heretic:latest    53cdda9c8e38    34 GB     41 minutes ago    


% ollama run Qwen3-VL-32B-Instruct-Heretic:latest 
Error: 500 Internal Server Error: do load request: Post "http://127.0.0.1:51161/load": EOF

ollama log output (serve)

ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M4 Max
ggml_metal_init: use bfloat         = true
ggml_metal_init: use fusion         = true
ggml_metal_init: use concurrency    = true
ggml_metal_init: use graph optimize = true
time=2025-11-23T11:35:07.667-08:00 level=INFO source=server.go:3634 msg="http: panic serving 127.0.0.1:51164: runtime error: invalid memory address or nil pointer dereference\ngoroutine 22 [running]:\nnet/http.(*conn).serve.func1()\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/http/server.go:1947 +0xb0\npanic({0x101df6000?, 0x10271ea40?})\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/runtime/panic.go:792 +0x124\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel.func1()\n\t/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:1186 +0x124\npanic({0x101df6000?, 0x10271ea40?})\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/runtime/panic.go:792 +0x124\ngithub.com/ollama/ollama/ml/nn.(*Conv3D).Forward(0x0, {0x101f6bd10, 0x14000f67b40}, {0x101f762a0?, 0x14000680018?}, 0x4?, 0x10010100047148?, 0x149940020?, 0x102bb8108?, 0x10?, ...)\n\t/Users/runner/work/ollama/ollama/ml/nn/convolution.go:25 +0x30\ngithub.com/ollama/ollama/model/models/qwen3vl.(*VisionModel).Forward(0x140005380c0, {0x101f6bd10, 0x14000f67b40}, {0x101f762a0, 0x14000680000}, 0x14000da4420)\n\t/Users/runner/work/ollama/ollama/model/models/qwen3vl/model_vision.go:224 +0xdc\ngithub.com/ollama/ollama/model/models/qwen3vl.(*Model).EncodeMultimodal(0x1400057fd40, {0x101f6bd10, 0x14000f67b40}, {0x14001c48000, 0x400436, 0x700000})\n\t/Users/runner/work/ollama/ollama/model/models/qwen3vl/model.go:43 +0xf8\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).reserveWorstCaseGraph(0x140005545a0, 0x1)\n\t/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:1097 +0x294\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel(0x140005545a0, {0x16f547922?, 0x0?}, {0x0, 0xc, {0x140002f0080, 0x1, 0x1}, 0x1}, {0x0?, ...}, ...)\n\t/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:1219 +0x230\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).load(0x140005545a0, {0x101f5ee60, 0x1400023a000}, 0x14000180000)\n\t/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:1298 +0x460\nnet/http.HandlerFunc.ServeHTTP(0x14000539500?, {0x101f5ee60?, 0x1400023a000?}, 0x14000123b10?)\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/http/server.go:2294 +0x38\nnet/http.(*ServeMux).ServeHTTP(0x10?, {0x101f5ee60, 0x1400023a000}, 0x14000180000)\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/http/server.go:2822 +0x1b4\nnet/http.serverHandler.ServeHTTP({0x101f5b450?}, {0x101f5ee60?, 0x1400023a000?}, 0x1?)\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/http/server.go:3301 +0xbc\nnet/http.(*conn).serve(0x14000572000, {0x101f61268, 0x1400022ce10})\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/http/server.go:2102 +0x52c\ncreated by net/http.(*Server).Serve in goroutine 1\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/http/server.go:3454 +0x3d8"
time=2025-11-23T11:35:07.668-08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-23T11:35:07.668-08:00 level=INFO source=sched.go:470 msg="Load failed" model=/Users/user/.ollama/models/blobs/sha256-805362ae475afc4dde090966a5283c56cb5e2eef96e8122155d702b1519090c2 error="do load request: Post \"http://127.0.0.1:51161/load\": EOF"
time=2025-11-23T11:35:07.673-08:00 level=ERROR source=server.go:265 msg="llama runner terminated" error="signal: killed"

llama-cli (to compare)

% llama-cli -m /Users/admin/Qwen3-VL-32B-Instruct-Heretic-BF16.gguf

main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device Metal (Apple M4 Max) (unknown id) - 98303 MiB free
llama_model_loader: loaded meta data with 32 key-value pairs and 707 tensors from /Users/admin/Qwen3-VL-32B-Instruct-Heretic-BF16.gguf (version GGUF V3 (latest))

...

system_info: n_threads = 12 (n_threads_batch = 12) / 16 | Metal : EMBED_LIBRARY = 1 | CPU : NEON = 1 | ARM_FMA = 1 | FP16_VA = 1 | DOTPROD = 1 | LLAMAFILE = 1 | ACCELERATE = 1 | REPACK = 1 | 

main: interactive mode on.
sampler seed: 918576610
sampler params: 
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 4096
	top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-n-sigma -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist 
generate: n_ctx = 4096, n_batch = 2048, n_predict = -1, n_keep = 0

== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to the AI.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.
 - Not using system message. To change it, set a different value via -sys PROMPT


>

@rick-github commented on GitHub (Nov 23, 2025):

The Kizzington GGUFs are vision LLMs from which the vision component has been removed. You might have a better experience downloading the safetensors (https://huggingface.co/Kizzington/Qwen3-VL-8B-Thinking-heretic) and importing those.
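
A rough sketch of that workflow (repository name taken from the link above; the local directory is illustrative):

% huggingface-cli download Kizzington/Qwen3-VL-8B-Thinking-heretic --local-dir ./Qwen3-VL-8B-Thinking-heretic

The resulting directory of safetensors can then be referenced from a Modelfile with FROM, as shown in the comments below.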


@coreintelligence commented on GitHub (Nov 23, 2025):

Hi @rick-github, thanks for the guidance!

I swapped over to making my own GGUF main model using the safetensors directly and llama.cpp's convert_hf_to_gguf.py script. I also have a copy of the mmproj for Qwen3-VL's vision encoder (per @jessegross' insight from the logs: https://github.com/ollama/ollama/issues/13187#issuecomment-3564486572) to make sure I have all the pieces.
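
(For reference, the conversion step presumably looked something like this; paths are illustrative and the exact flags depend on the llama.cpp checkout.)

% python convert_hf_to_gguf.py ./Qwen3-VL-32B-Instruct-Heretic --outfile Qwen3-VL-32B-Instruct-Heretic-BF16.gguf --outtype bf16

(The vision projector / mmproj GGUF is produced separately; newer llama.cpp checkouts expose an --mmproj flag for it.)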

However, after the GGUF is created successfully, I run into a different error when trying to run the model:

ollama log output (for run)

% ollama run Qwen3-VL-32B-Instruct-Heretic-BF16:latest

Error: 500 Internal Server Error: unable to load model: /Users/admin/.ollama/models/blobs/sha256-f4935d534a16a67ab12319c1d724358aabbcbd06bf18e159925b7b186a59e3cd

Modelfile

FROM ./Qwen3-VL-32B-Instruct-Heretic-BF16.gguf
ADAPTER ./mmproj-BF16.gguf

ollama log output (for run)

llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen3vl'
llama_model_load_from_file_impl: failed to load model

@rick-github commented on GitHub (Nov 23, 2025):

The qwen3-vl architecture is only supported on the ollama engine at the moment, which does not support split models (text and image weights in separate files). Importing from the safetensors (see https://github.com/ollama/ollama/blob/main/docs/import.mdx#importing-a-model-from-safetensors-weights) should result in a runnable model.


@rick-github commented on GitHub (Nov 23, 2025):

Also note that the modelfile should have a template or a renderer/parser configuration.

FROM /path/to/safetensors
RENDERER qwen3-vl-instruct
PARSER qwen3-vl-instruct
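
A sketch of how that Modelfile would then be used (the model name is illustrative):

% ollama create qwen3-vl-32b-instruct -f Modelfile
% ollama run qwen3-vl-32b-instruct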

@coreintelligence commented on GitHub (Nov 24, 2025):

Great! Importing from a directory of safetensors (rather than from a combined GGUF) worked, with both text and image prompting operational in the app. Thanks again @rick-github for your help.

As an aside, does it make sense to support split models in ollama going forward? I'm afraid others will run into this issue, so either the docs or the repo's source would need to accommodate this quirk in current support.


@rick-github commented on GitHub (Nov 24, 2025):

There are some models that are only available in split format, so for legacy reasons split support will be around for a while. At the next vendor sync, ollama will support split mode for the qwen3-vl architecture.


@xje96 commented on GitHub (Nov 27, 2025):

> There are some models that are only available in split format, so for legacy reasons split support will be around for a while. At the next vendor sync, ollama will support split mode for the qwen3-vl architecture.

That really makes sense, much thanks.
