[GH-ISSUE #12865] Issue with running qwen3-vl:2b locally and getting a 500 error from server #55038

Closed
opened 2026-04-29 08:13:47 -05:00 by GiteaMirror · 14 comments
Owner

Originally created by @magnusbonnevier on GitHub (Oct 30, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12865

What is the issue?

Hello, I am having an issue running qwen3-vl:2b locally; I get a 500 error from the server saying:
"Error: 500 Internal Server Error: model requires more system memory (31.9 GiB) than is available (31.4 GiB)"

The same happens with the 4b variant.
I have Ollama 0.12.7 installed.
The issue is the same from the command line and the UI.

Relevant log output

time=2025-10-30T20:31:48.068+01:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\Magnus\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 61027"
time=2025-10-30T20:31:49.118+01:00 level=INFO source=cpu_windows.go:139 msg=packages count=1
time=2025-10-30T20:31:49.118+01:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=6 efficiency=0 threads=12
time=2025-10-30T20:31:49.143+01:00 level=INFO source=sched.go:559 msg="updated VRAM based on existing loaded models" gpu=GPU-b64a3c75-1a2b-484d-0f16-cbe794f13336 library=CUDA total="8.0 GiB" available="7.1 GiB"
time=2025-10-30T20:31:49.207+01:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\Magnus\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model F:\\ollama_ai_models\\blobs\\sha256-ebabfa59b71a5b96e0281ec2994977e785284e0939807a99fc340dec3c6f10de --port 61037"
time=2025-10-30T20:31:49.211+01:00 level=INFO source=server.go:638 msg="loading model" "model layers"=29 requested=-1
time=2025-10-30T20:31:49.211+01:00 level=INFO source=server.go:643 msg="system memory" total="15.9 GiB" free="4.9 GiB" free_swap="7.1 GiB"
time=2025-10-30T20:31:49.211+01:00 level=INFO source=server.go:650 msg="gpu memory" id=GPU-b64a3c75-1a2b-484d-0f16-cbe794f13336 library=CUDA available="6.6 GiB" free="7.1 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-10-30T20:31:49.247+01:00 level=INFO source=runner.go:1337 msg="starting ollama engine"
time=2025-10-30T20:31:49.284+01:00 level=INFO source=runner.go:1372 msg="Server listening on 127.0.0.1:61037"
time=2025-10-30T20:31:49.287+01:00 level=INFO source=runner.go:1210 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:131072 KvCacheType: NumThreads:6 GPULayers:29[ID:GPU-b64a3c75-1a2b-484d-0f16-cbe794f13336 Layers:29(0..28)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-30T20:31:49.320+01:00 level=INFO source=ggml.go:135 msg="" architecture=qwen3vl file_type=Q4_K_M name="" description="" num_tensors=721 num_key_values=40
load_backend: loaded CPU backend from C:\Users\Magnus\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce GTX 1080, compute capability 6.1, VMM: yes, ID: GPU-b64a3c75-1a2b-484d-0f16-cbe794f13336
load_backend: loaded CUDA backend from C:\Users\Magnus\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-10-30T20:31:49.420+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-10-30T20:31:51.075+01:00 level=INFO source=runner.go:1210 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-30T20:31:51.075+01:00 level=INFO source=device.go:212 msg="model weights" device=CUDA0 size="1.8 GiB"
time=2025-10-30T20:31:51.075+01:00 level=INFO source=device.go:217 msg="model weights" device=CPU size="243.4 MiB"
time=2025-10-30T20:31:51.075+01:00 level=INFO source=device.go:223 msg="kv cache" device=CUDA0 size="14.0 GiB"
time=2025-10-30T20:31:51.075+01:00 level=INFO source=device.go:234 msg="compute graph" device=CUDA0 size="17.3 GiB"
time=2025-10-30T20:31:51.075+01:00 level=INFO source=device.go:239 msg="compute graph" device=CPU size="31.7 MiB"
time=2025-10-30T20:31:51.075+01:00 level=INFO source=device.go:244 msg="total memory" size="33.3 GiB"
time=2025-10-30T20:31:51.548+01:00 level=ERROR source=server.go:273 msg="llama runner terminated" error="exit status 1"
time=2025-10-30T20:31:51.549+01:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\Magnus\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 61046"
time=2025-10-30T20:31:51.768+01:00 level=INFO source=cpu_windows.go:139 msg=packages count=1
time=2025-10-30T20:31:51.768+01:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=6 efficiency=0 threads=12
time=2025-10-30T20:31:51.823+01:00 level=INFO source=server.go:638 msg="loading model" "model layers"=29 requested=-1
time=2025-10-30T20:31:51.823+01:00 level=INFO source=server.go:643 msg="system memory" total="15.9 GiB" free="8.4 GiB" free_swap="23.0 GiB"
time=2025-10-30T20:31:51.823+01:00 level=INFO source=server.go:650 msg="gpu memory" id=GPU-b64a3c75-1a2b-484d-0f16-cbe794f13336 library=CUDA available="6.5 GiB" free="7.0 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-10-30T20:31:51.823+01:00 level=INFO source=runner.go:1210 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:131072 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-30T20:31:53.155+01:00 level=INFO source=runner.go:1210 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:131072 KvCacheType: NumThreads:6 GPULayers:11[ID:GPU-b64a3c75-1a2b-484d-0f16-cbe794f13336 Layers:11(17..27)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-30T20:31:54.377+01:00 level=WARN source=server.go:943 msg="model request too large for system" requested="31.9 GiB" available="31.4 GiB" total="15.9 GiB" free="8.4 GiB" swap="23.0 GiB"
time=2025-10-30T20:31:54.377+01:00 level=INFO source=runner.go:1210 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-30T20:31:54.377+01:00 level=INFO source=device.go:217 msg="model weights" device=CPU size="2.0 GiB"
time=2025-10-30T20:31:54.377+01:00 level=INFO source=device.go:228 msg="kv cache" device=CPU size="14.0 GiB"
time=2025-10-30T20:31:54.377+01:00 level=INFO source=device.go:239 msg="compute graph" device=CPU size="15.9 GiB"
time=2025-10-30T20:31:54.377+01:00 level=INFO source=device.go:244 msg="total memory" size="31.9 GiB"
time=2025-10-30T20:31:54.377+01:00 level=INFO source=sched.go:446 msg="Load failed" model=F:\ollama_ai_models\blobs\sha256-ebabfa59b71a5b96e0281ec2994977e785284e0939807a99fc340dec3c6f10de error="model requires more system memory (31.9 GiB) than is available (31.4 GiB)"
[GIN] 2025/10/30 - 20:31:54 | 500 |    6.7276411s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/10/30 - 20:31:57 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/30 - 20:31:57 | 200 |     15.2256ms |       127.0.0.1 | GET      "/api/tags"

OS

Windows 10

GPU

NVIDIA GeForce GTX 1080

CPU

No response

Ollama version

0.12.7

GiteaMirror added the bug label 2026-04-29 08:13:47 -05:00

@mchiang0610 commented on GitHub (Oct 30, 2025):

Thank you for submitting this


@magnusbonnevier commented on GitHub (Oct 30, 2025):

> Thank you for submitting this

No problem, I love this software and want to help.


@magnusbonnevier commented on GitHub (Oct 30, 2025):

I should mention I have an Intel Core i7-8700K CPU @ 3.70 GHz
and 16 GB of RAM.


@magnusbonnevier commented on GitHub (Oct 30, 2025):

To clarify, I get the same issue with both the UI and command-line variants.


@jessegross commented on GitHub (Oct 30, 2025):

On the command line, you can try enabling flash attention by setting the environment variable OLLAMA_FLASH_ATTENTION=1. This should significantly reduce memory usage. In the next version it will be on by default.
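A sketch of one way to apply this on Windows: quit the Ollama tray app first so the server is not already running, then start the server from the same cmd session that sets the variable (`set` only affects the current shell and the processes it launches).

```shell
:: Quit the Ollama tray app first, then in a cmd window:
set OLLAMA_FLASH_ATTENTION=1
ollama serve
```

The log line `FlashAttention:true` in a subsequent load request confirms the server picked the variable up.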


@magnusbonnevier commented on GitHub (Oct 30, 2025):

> On the command line, you can try enabling flash attention by setting the environment variable OLLAMA_FLASH_ATTENTION=1. This should significantly reduce memory usage. In the next version it will be on by default.

This won't work if there is a bug with the model or something else is going on; I tried it just now and the issue still exists.


@jessegross commented on GitHub (Oct 30, 2025):

Please post your logs with flash attention enabled. This log shows that more than half the memory usage is coming from the compute graph:
time=2025-10-30T20:31:51.075+01:00 level=INFO source=device.go:234 msg="compute graph" device=CUDA0 size="17.3 GiB"

Flash attention will significantly reduce this.
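For context on where the 14.0 GiB KV cache figure in the log comes from: it scales linearly with the 131072-token context (`KvSize`) in the load request. A rough sketch of the arithmetic follows; the layer count, KV head count, and head dimension below are illustrative assumptions chosen to match the log, not confirmed qwen3-vl:2b parameters.

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   ctx_len: int, bytes_per_elem: int = 2) -> int:
    """Estimate KV cache size for an f16 cache (2 bytes per element).

    Keys and values are stored separately, hence the leading factor of 2.
    """
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Assumed (hypothetical) model shape: 28 layers, 8 KV heads, head dim 128,
# with the KvSize=131072 context seen in the log.
size = kv_cache_bytes(n_layers=28, n_kv_heads=8, head_dim=128, ctx_len=131072)
print(f"{size / 2**30:.1f} GiB")
```

Under these assumptions the estimate lands on 14.0 GiB, matching the log; a smaller `num_ctx` (e.g. 32768) would shrink the cache proportionally.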


@magnusbonnevier commented on GitHub (Oct 30, 2025):

time=2025-10-30T21:01:44.224+01:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\Users\Magnus\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 61397"
time=2025-10-30T21:01:44.459+01:00 level=INFO source=cpu_windows.go:139 msg=packages count=1
time=2025-10-30T21:01:44.459+01:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=6 efficiency=0 threads=12
time=2025-10-30T21:01:44.544+01:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\Users\Magnus\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model F:\ollama_ai_models\blobs\sha256-ebabfa59b71a5b96e0281ec2994977e785284e0939807a99fc340dec3c6f10de --port 61406"
time=2025-10-30T21:01:44.547+01:00 level=INFO source=server.go:638 msg="loading model" "model layers"=29 requested=-1
time=2025-10-30T21:01:44.547+01:00 level=INFO source=server.go:643 msg="system memory" total="15.9 GiB" free="8.1 GiB" free_swap="21.4 GiB"
time=2025-10-30T21:01:44.547+01:00 level=INFO source=server.go:650 msg="gpu memory" id=GPU-b64a3c75-1a2b-484d-0f16-cbe794f13336 library=CUDA available="6.5 GiB" free="6.9 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-10-30T21:01:44.584+01:00 level=INFO source=runner.go:1337 msg="starting ollama engine"
time=2025-10-30T21:01:44.621+01:00 level=INFO source=runner.go:1372 msg="Server listening on 127.0.0.1:61406"
time=2025-10-30T21:01:44.625+01:00 level=INFO source=runner.go:1210 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:131072 KvCacheType: NumThreads:6 GPULayers:29[ID:GPU-b64a3c75-1a2b-484d-0f16-cbe794f13336 Layers:29(0..28)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-30T21:01:44.659+01:00 level=INFO source=ggml.go:135 msg="" architecture=qwen3vl file_type=Q4_K_M name="" description="" num_tensors=721 num_key_values=40
load_backend: loaded CPU backend from C:\Users\Magnus\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce GTX 1080, compute capability 6.1, VMM: yes, ID: GPU-b64a3c75-1a2b-484d-0f16-cbe794f13336
load_backend: loaded CUDA backend from C:\Users\Magnus\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-10-30T21:01:44.770+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
[GIN] 2025/10/30 - 21:01:45 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/10/30 - 21:01:45 | 200 | 5.3556ms | 127.0.0.1 | GET "/api/tags"
time=2025-10-30T21:01:46.189+01:00 level=INFO source=runner.go:1210 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:131072 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-30T21:01:47.555+01:00 level=INFO source=runner.go:1210 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:131072 KvCacheType: NumThreads:6 GPULayers:11[ID:GPU-b64a3c75-1a2b-484d-0f16-cbe794f13336 Layers:11(17..27)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-30T21:01:48.863+01:00 level=WARN source=server.go:943 msg="model request too large for system" requested="31.9 GiB" available="29.5 GiB" total="15.9 GiB" free="8.1 GiB" swap="21.4 GiB"
time=2025-10-30T21:01:48.863+01:00 level=INFO source=runner.go:1210 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-30T21:01:48.863+01:00 level=INFO source=device.go:217 msg="model weights" device=CPU size="2.0 GiB"
time=2025-10-30T21:01:48.863+01:00 level=INFO source=device.go:228 msg="kv cache" device=CPU size="14.0 GiB"
time=2025-10-30T21:01:48.863+01:00 level=INFO source=device.go:239 msg="compute graph" device=CPU size="15.9 GiB"
time=2025-10-30T21:01:48.863+01:00 level=INFO source=device.go:244 msg="total memory" size="31.9 GiB"
time=2025-10-30T21:01:48.863+01:00 level=INFO source=sched.go:446 msg="Load failed" model=F:\ollama_ai_models\blobs\sha256-ebabfa59b71a5b96e0281ec2994977e785284e0939807a99fc340dec3c6f10de error="model requires more system memory (31.9 GiB) than is available (29.5 GiB)"


@magnusbonnevier commented on GitHub (Oct 30, 2025):

I hope that's correct.


@magnusbonnevier commented on GitHub (Oct 30, 2025):

I tried "set OLLAMA_FLASH_ATTENTION=1" in the Windows 10 cmd and it still says false in the logs,
so that did not work.


@jessegross commented on GitHub (Oct 30, 2025):

Flash attention is not enabled:
time=2025-10-30T21:01:46.189+01:00 level=INFO source=runner.go:1210 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:131072 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"

Make sure that you are setting it in the environment of the server process.
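One hypothetical way to do that on Windows: `set` only applies to the current cmd session, so a variable set there is invisible to a server that is already running (or started from the tray). Writing it into the user environment with `setx` and then restarting Ollama makes the server process inherit it.

```shell
:: Persist the variable for the current user. setx takes effect only in
:: newly started processes, so restart the Ollama app afterwards.
setx OLLAMA_FLASH_ATTENTION 1
```

This is equivalent to adding the variable via the Windows "Environment Variables" settings dialog.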


@magnusbonnevier commented on GitHub (Oct 30, 2025):

> Flash attention is not enabled: time=2025-10-30T21:01:46.189+01:00 level=INFO source=runner.go:1210 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:131072 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
>
> Make sure that you are setting it in the environment of the server process.

OK, I tried it from the command line and it did not work.

I had to set it in the Windows environment variables settings for my user account, and so far the model has loaded; I am trying to run an analysis on an image now in cmd.

It takes time on my hardware.

Will update.


@magnusbonnevier commented on GitHub (Oct 30, 2025):

OK, so this seems to work now; sorry for all the trouble.

Result:

C:\Users\Magnus>ollama run qwen3-vl:2b

analyze this image "F:#ai_image_test\boilerplate\barn-9616520_1920.jpg"
Added image 'F:#ai_image_test\boilerplate\barn-9616520_1920.jpg'
Thinking...
So, let's analyze this image. First, the main subject is two old wooden buildings, probably farm
sheds or barns, set in a rural landscape. The large structure in the foreground is a big barn with
a steep roof, dark wooden planks, and some windows. It looks weathered, maybe abandoned. The
smaller one in the background is also old, maybe a shed or another barn. The field around them is
dry, tall grass, suggesting it might be late fall or early spring. The sky is overcast with a mix
of gray and some lighter clouds, giving a somber or serene mood.

First, composition: the foreground barn is large, taking up most of the right side, the smaller
one is left. The horizon is low, so the buildings are prominent. The grass is golden, contrasting
with the dark wood and sky. Lighting: it's natural light, maybe early morning or late afternoon,
which creates soft shadows. The weathered look of the barns emphasizes age and abandonment.

Then, elements: the wooden planks, the roof texture, the dry grass. The sky has a gradient from
dark to light, giving a sense of depth. The overall mood is melancholic, quiet, maybe nostalgic or
evocative of the past. The scene feels rural, isolated, maybe part of a rural heritage or a farm
that's been left behind.

Possible details: the small structures are similar in style, maybe part of the same farm. The sky
has some clouds, but not too many, so not stormy. The field has some snow patches, maybe from a
cold season. The trees in the background are sparse, indicating a semi-arid or cold climate.

So, putting it all together: a rural scene with abandoned farm buildings, evoking a sense of
history, abandonment, and solitude. The composition emphasizes the contrast between the natural
landscape and the old man-made structures. The color palette is earthy with browns and grays,
creating a muted, calm atmosphere.

I should structure the analysis by key elements: subjects, setting, composition, mood, light,
color, and possible context. Need to be thorough but concise.
...done thinking.

This image presents a serene yet melancholic rural scene featuring two weathered wooden farm
structures (likely barns or sheds) set against an expansive, dry field under an overcast sky.
Here’s a detailed breakdown:

1. Subjects & Composition

  • Foreground: A large, aged barn dominates the right side of the frame. Its dark wooden
    planks, steep roof, and faded appearance convey a sense of abandonment or decay. A smaller,
    similarly aged structure sits in the background to the left.
  • Background: A sparse forest line (with some scattered trees) and a pale sky with wispy
    clouds form the horizon. The field is covered in dry, golden grass, suggesting late autumn or
    early winter.

2. Visual Elements

  • Color Palette: Earthy tones dominate—browns, grays, and muted golds—creating a somber,
    rustic mood. The overcast sky enhances this subdued tone, while the dry grass contrasts with the
    darker wood of the buildings.
  • Texture: The rough, weathered wood of the barns, the coarse texture of the grass, and the
    faint snow patches in the field (visible in the background) add tactile depth.
  • Lighting: Soft, diffused natural light (likely from early morning or late afternoon) casts
    gentle shadows, emphasizing the barns’ decay without harsh contrasts.

3. Mood & Atmosphere

  • The scene evokes solitude, nostalgia, and a sense of history. The abandoned structures
    suggest a bygone era of rural life, possibly hinting at the decline of agricultural practices or a
    forgotten farm.
  • The overcast sky and dry landscape amplify a feeling of quiet melancholy—implying isolation or
    the slow passage of time.

4. Contextual Clues

  • The presence of two modest farm buildings, their aged appearance, and the vast, open field
    suggest this is a rural farmland (possibly in a temperate climate with seasonal cycles like
    winter or early spring).
  • Subtle details (e.g., the cracked wood on the barns, the sparse trees) reinforce the idea of a
    site that has been left untouched for decades.

5. Symbolism & Interpretation

  • The image can be seen as a metaphor for the impermanence of human endeavors—the once-lively
    farmstead now relegated to quiet decay.
  • It evokes themes of rural heritage, loss, and the passage of time, inviting reflection on
    how landscapes and human structures reflect cultural shifts.

In summary, this photograph captures a tranquil yet poignant rural landscape, using composition,
color, and subject matter to convey a story of abandonment, history, and the quiet resilience of
nature.

>>> Send a message (/? for help)


@magnusbonnevier commented on GitHub (Oct 30, 2025):

Tried it in the UI as well with the environment variable set, and now it works there too.

And it's blazing fast, faster than in the cmd; maybe the model was cached?
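The speed difference is consistent with the model still being resident from the earlier cmd run: Ollama keeps a model loaded for a while after the last request, so the UI skipped the load step entirely. A sketch for widening that window on Windows (assuming the keep-alive duration is controlled by the `OLLAMA_KEEP_ALIVE` variable):

```bat
REM Keep models loaded for 30 minutes after the last request (user-level, persistent).
REM Restart the Ollama server for the change to take effect.
setx OLLAMA_KEEP_ALIVE 30m
```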

Reference: github-starred/ollama#55038