[GH-ISSUE #10218] Image recognition doesn't work with models downloaded from another site #53217

Closed
opened 2026-04-29 02:23:44 -05:00 by GiteaMirror · 9 comments

Originally created by @yukkuriTV on GitHub (Apr 10, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10218

What is the issue?

To troubleshoot, I tried using gemma-3-4b-it-Q8_0 downloaded from another site. It recognizes and handles text fine, but when I attach an image it doesn't generate an answer.

Even when a response is expected, nothing at all is generated.

Relevant log output

2025-04-10 23:51:34.615 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50141 - "POST /api/v1/chats/e001df23-806c-4e32-b5c2-c71ccd8d9a8d HTTP/1.1" 200 - {}
2025-04-10 23:51:34.632 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50141 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
2025-04-10 23:51:39.754 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50150 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
2025-04-10 23:51:44.792 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50152 - "GET /api/v1/chats/e001df23-806c-4e32-b5c2-c71ccd8d9a8d HTTP/1.1" 200 - {}
2025-04-10 23:51:44.801 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50152 - "GET /api/v1/chats/all/tags HTTP/1.1" 200 - {}
2025-04-10 23:52:08.090 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50153 - "GET /api/v1/chats/7e3f597a-f490-497c-ab9f-83cc5e72a4c5 HTTP/1.1" 200 - {}
2025-04-10 23:52:14.218 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50157 - "GET /static/favicon.png HTTP/1.1" 304 - {}
2025-04-10 23:52:14.232 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50161 - "GET /api/v1/chats/087c93bf-00a0-41f5-830f-8a563f521156 HTTP/1.1" 200 - {}
2025-04-10 23:52:14.234 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50162 - "GET /api/v1/chats/72d2437f-43b2-49c6-8c6a-408710f3dcb3 HTTP/1.1" 200 - {}
2025-04-10 23:52:14.234 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50163 - "GET /api/v1/chats/82efb2ba-9705-4905-b9ca-d7f1058e8d3f HTTP/1.1" 200 - {}
2025-04-10 23:52:14.274 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50153 - "GET /api/v1/chats/7e3f597a-f490-497c-ab9f-83cc5e72a4c5/tags HTTP/1.1" 200 - {}
2025-04-10 23:52:14.283 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50153 - "GET /api/v1/users/user/settings HTTP/1.1" 200 - {}
2025-04-10 23:52:14.567 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50153 - "GET /ollama/api/version HTTP/1.1" 200 - {}
2025-04-10 23:52:21.128 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50171 - "GET /api/v1/users/user/settings HTTP/1.1" 200 - {}
2025-04-10 23:52:21.298 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50169 - "GET /ollama/api/version HTTP/1.1" 200 - {}
2025-04-10 23:52:54.623 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50173 - "POST /api/v1/chats/new HTTP/1.1" 200 - {}
2025-04-10 23:52:58.774 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50173 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
2025-04-10 23:52:58.821 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50173 - "POST /api/v1/chats/4a8eda46-61ae-4a95-a89f-09be2513a31a HTTP/1.1" 200 - {}
2025-04-10 23:52:58.901 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50173 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
Batches: 100%|██████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 102.54it/s]
2025-04-10 23:52:58.926 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50173 - "POST /api/v1/memories/query HTTP/1.1" 200 - {}
2025-04-10 23:52:59.290 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50173 - "POST /api/chat/completions HTTP/1.1" 200 - {}
2025-04-10 23:52:59.297 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50173 - "POST /api/chat/completed HTTP/1.1" 200 - {}
2025-04-10 23:52:59.308 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50173 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
2025-04-10 23:52:59.330 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50173 - "POST /api/v1/chats/4a8eda46-61ae-4a95-a89f-09be2513a31a HTTP/1.1" 200 - {}
2025-04-10 23:52:59.342 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50173 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
2025-04-10 23:53:04.100 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50173 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.6.5

GiteaMirror added the bug label 2026-04-29 02:23:44 -05:00

@rick-github commented on GitHub (Apr 10, 2025):

This log is not from ollama. If the model is from a different site, it may not have preserved the vision component of the model when it was quantized.
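
One way to check this directly is to dump the file's metadata and look for the vision keys ollama expects. This is only a sketch: it assumes Python is available, that the `gguf` package published by the llama.cpp project is used, and that you know the path to the downloaded GGUF file (the path below is a placeholder; on Windows, pipe to findstr instead of grep):

pip install gguf
gguf-dump --no-tensors /path/to/gemma-3-4b-it-Q8_0.gguf | grep -i vision

A vision-capable gemma3 file should carry keys such as gemma3.vision.block_count and gemma3.vision.image_size (the same keys the ollama log reports as missing). If nothing matches, the vision encoder was stripped when the model was converted or quantized.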


@yukkuriTV commented on GitHub (Apr 10, 2025):

Sorry, here is the correct one.
2025/04/10 23:41:22 routes.go:1231: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\yukkuriTV\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-04-10T23:41:22.262+09:00 level=INFO source=images.go:458 msg="total blobs: 21"
time=2025-04-10T23:41:22.264+09:00 level=INFO source=images.go:465 msg="total unused blobs removed: 0"
time=2025-04-10T23:41:22.267+09:00 level=INFO source=routes.go:1298 msg="Listening on 127.0.0.1:11434 (version 0.6.5)"
time=2025-04-10T23:41:22.269+09:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-04-10T23:41:22.269+09:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-04-10T23:41:22.269+09:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=6 efficiency=0 threads=12
time=2025-04-10T23:41:22.535+09:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-23f2619b-2453-9ed8-f622-383f1a5428fb library=cuda compute=7.5 driver=12.6 name="NVIDIA GeForce GTX 1650" overhead="445.1 MiB"
time=2025-04-10T23:41:22.538+09:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-23f2619b-2453-9ed8-f622-383f1a5428fb library=cuda variant=v12 compute=7.5 driver=12.6 name="NVIDIA GeForce GTX 1650" total="4.0 GiB" available="3.2 GiB"
[GIN] 2025/04/10 - 23:43:48 | 200 | 77.7487ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:43:49 | 200 | 2.5884ms | 127.0.0.1 | GET "/api/version"
[GIN] 2025/04/10 - 23:47:31 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/04/10 - 23:48:16 | 200 | 4.151ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:48:16 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/04/10 - 23:48:31 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/04/10 - 23:48:37 | 200 | 7.7663ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:48:54 | 200 | 6.9112ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:48:55 | 200 | 2.0605ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:49:04 | 200 | 2.5982ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:49:05 | 200 | 12.318ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:49:13 | 200 | 1.5331ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:49:13 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/04/10 - 23:49:19 | 200 | 4.7153ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:49:29 | 200 | 1.5366ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:49:39 | 200 | 1.5306ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:49:40 | 200 | 1.531ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:49:47 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/04/10 - 23:49:53 | 200 | 1.5666ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:49:59 | 200 | 2.5731ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:50:00 | 200 | 2.0272ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:50:01 | 200 | 1.567ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:50:03 | 200 | 2.1286ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:50:07 | 200 | 0s | 127.0.0.1 | GET "/api/version"
time=2025-04-10T23:50:17.896+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-04-10T23:50:17.896+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-04-10T23:50:17.897+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-04-10T23:50:17.897+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-04-10T23:50:17.908+09:00 level=INFO source=server.go:105 msg="system memory" total="11.8 GiB" free="5.3 GiB" free_swap="35.1 GiB"
time=2025-04-10T23:50:17.908+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-04-10T23:50:17.908+09:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=35 layers.offload=15 layers.split="" memory.available="[3.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.6 GiB" memory.required.partial="3.1 GiB" memory.required.kv="214.0 MiB" memory.required.allocations="[3.1 GiB]" memory.weights.total="3.8 GiB" memory.weights.repeating="3.2 GiB" memory.weights.nonrepeating="680.2 MiB" memory.graph.full="517.1 MiB" memory.graph.partial="1.0 GiB"
time=2025-04-10T23:50:17.975+09:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-04-10T23:50:17.978+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-04-10T23:50:17.978+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-04-10T23:50:17.978+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.num_channels default=0
time=2025-04-10T23:50:17.978+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-04-10T23:50:17.978+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.embedding_length default=0
time=2025-04-10T23:50:17.979+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.attention.head_count default=0
time=2025-04-10T23:50:17.979+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-04-10T23:50:17.979+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-04-10T23:50:17.979+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0
time=2025-04-10T23:50:17.984+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-04-10T23:50:17.984+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-04-10T23:50:17.984+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-04-10T23:50:17.984+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-04-10T23:50:17.993+09:00 level=INFO source=server.go:405 msg="starting llama server" cmd="C:\Users\yukkuriTV\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model C:\Users\yukkuriTV\.ollama\models\blobs\sha256-3d3a5470ffe8a3b9d5dff58f3be2c87c184b53f0b7fe25887328d17e8fd32eb2 --ctx-size 2048 --batch-size 512 --n-gpu-layers 15 --threads 6 --parallel 1 --port 50104"
time=2025-04-10T23:50:18.037+09:00 level=INFO source=sched.go:451 msg="loaded runners" count=1
time=2025-04-10T23:50:18.037+09:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-10T23:50:18.042+09:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-10T23:50:18.068+09:00 level=INFO source=runner.go:816 msg="starting ollama engine"
time=2025-04-10T23:50:18.094+09:00 level=INFO source=runner.go:879 msg="Server listening on 127.0.0.1:50104"
time=2025-04-10T23:50:18.150+09:00 level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
time=2025-04-10T23:50:18.150+09:00 level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=Q8_0 name=Gemma-3-4B-It description="" num_tensors=444 num_key_values=35
time=2025-04-10T23:50:18.295+09:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce GTX 1650, compute capability 7.5, VMM: yes
load_backend: loaded CUDA backend from C:\Users\yukkuriTV\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\yukkuriTV\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-04-10T23:50:19.230+09:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-04-10T23:50:19.726+09:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CPU size="3.1 GiB"
time=2025-04-10T23:50:19.726+09:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CUDA0 size="1.4 GiB"
time=2025-04-10T23:50:22.780+09:00 level=INFO source=ggml.go:388 msg="compute graph" backend=CUDA0 buffer_type=CUDA0
time=2025-04-10T23:50:22.780+09:00 level=INFO source=ggml.go:388 msg="compute graph" backend=CPU buffer_type=CUDA_Host
time=2025-04-10T23:50:22.788+09:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-04-10T23:50:22.791+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-04-10T23:50:22.791+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-04-10T23:50:22.791+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.num_channels default=0
time=2025-04-10T23:50:22.791+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-04-10T23:50:22.791+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.embedding_length default=0
time=2025-04-10T23:50:22.791+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.attention.head_count default=0
time=2025-04-10T23:50:22.791+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-04-10T23:50:22.791+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-04-10T23:50:22.791+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0
time=2025-04-10T23:50:22.795+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-04-10T23:50:22.795+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-04-10T23:50:22.795+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-04-10T23:50:22.795+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-04-10T23:50:22.809+09:00 level=INFO source=server.go:619 msg="llama runner started in 4.77 seconds"
[GIN] 2025/04/10 - 23:50:26 | 200 | 8.7700477s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/04/10 - 23:50:31 | 200 | 4.3203821s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/04/10 - 23:50:36 | 200 | 5.3622921s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/04/10 - 23:50:47 | 200 | 2.0825ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:50:56 | 200 | 1.6732ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:50:57 | 200 | 1.5642ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:51:09 | 200 | 1.5384ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:51:17 | 200 | 0s | 127.0.0.1 | GET "/api/version"
time=2025-04-10T23:51:34.541+09:00 level=INFO source=server.go:789 msg="llm predict error: Failed to create new sequence: failed to process inputs: this model is missing data required for image input"
[GIN] 2025/04/10 - 23:51:34 | 200 | 40.6321ms | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/04/10 - 23:51:39 | 200 | 4.8736343s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/04/10 - 23:51:44 | 200 | 4.7278875s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/04/10 - 23:52:14 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/04/10 - 23:52:21 | 200 | 0s | 127.0.0.1 | GET "/api/version"
time=2025-04-10T23:52:59.251+09:00 level=INFO source=server.go:789 msg="llm predict error: Failed to create new sequence: failed to process inputs: this model is missing data required for image input"
[GIN] 2025/04/10 - 23:52:59 | 200 | 51.8141ms | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/04/10 - 23:53:04 | 200 | 4.5159211s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/04/10 - 23:53:25 | 200 | 4.763401s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/04/10 - 23:55:50 | 200 | 2.5738ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:55:50 | 200 | 6.3µs | 127.0.0.1 | GET "/api/version"
time=2025-04-10T23:55:54.508+09:00 level=INFO source=server.go:789 msg="llm predict error: Failed to create new sequence: failed to process inputs: this model is missing data required for image input"
[GIN] 2025/04/10 - 23:55:54 | 200 | 38.9413ms | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/04/10 - 23:55:59 | 200 | 4.3299013s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/04/10 - 23:56:03 | 200 | 4.4763269s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/04/10 - 23:56:57 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/04/10 - 23:58:36 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/04/10 - 23:58:43 | 200 | 3.0847ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:58:45 | 200 | 1.6163ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:58:59 | 200 | 1.5302ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:59:00 | 200 | 2.5908ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/10 - 23:59:05 | 200 | 0s | 127.0.0.1 | GET "/api/version"
time=2025-04-11T00:01:16.162+09:00 level=WARN source=sched.go:648 msg="gpu VRAM usage didn't recover within timeout" seconds=12.302142 model=C:\Users\yukkuriTV\.ollama\models\blobs\sha256-3d3a5470ffe8a3b9d5dff58f3be2c87c184b53f0b7fe25887328d17e8fd32eb2
time=2025-04-11T00:01:16.375+09:00 level=WARN source=sched.go:648 msg="gpu VRAM usage didn't recover within timeout" seconds=12.5157752 model=C:\Users\yukkuriTV\.ollama\models\blobs\sha256-3d3a5470ffe8a3b9d5dff58f3be2c87c184b53f0b7fe25887328d17e8fd32eb2
time=2025-04-11T00:01:16.630+09:00 level=WARN source=sched.go:648 msg="gpu VRAM usage didn't recover within timeout" seconds=12.7709461 model=C:\Users\yukkuriTV\.ollama\models\blobs\sha256-3d3a5470ffe8a3b9d5dff58f3be2c87c184b53f0b7fe25887328d17e8fd32eb2
[GIN] 2025/04/11 - 00:02:14 | 200 | 2.7871ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/04/11 - 00:02:28 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/04/11 - 00:03:12 | 200 | 2.4986ms | 127.0.0.1 | GET "/api/tags"


@rick-github commented on GitHub (Apr 10, 2025):

time=2025-04-10T23:50:17.908+09:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1
 layers.model=35 layers.offload=15 layers.split="" memory.available="[3.1 GiB]" memory.gpu_overhead="0 B"
 memory.required.full="5.6 GiB" memory.required.partial="3.1 GiB" memory.required.kv="214.0 MiB"
 memory.required.allocations="[3.1 GiB]" memory.weights.total="3.8 GiB" memory.weights.repeating="3.2 GiB"
 memory.weights.nonrepeating="680.2 MiB" memory.graph.full="517.1 MiB" memory.graph.partial="1.0 GiB"

There is no mention of a projector here, so this model is not vision capable. Where did you get the model from?
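
As a quick check on the ollama side, `ollama show` prints what the server knows about a model, and newer builds also list capabilities (such as vision) for multimodal models. Comparing the imported model against the official library build makes the difference visible; the tags below are examples, not the exact names from this report:

ollama show gemma3:4b
ollama show hf.co/unsloth/gemma-3-4b-it-GGUF:Q8_0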


@yukkuriTV commented on GitHub (Apr 10, 2025):

I got it from this site: https://huggingface.co/unsloth/gemma-3-4b-it-GGUF

Image recognition also doesn't work on the model I got from https://huggingface.co/soob3123/amoral-gemma3-4B-v1/tree/main. However, image recognition isn't mentioned in that model's tags, so it may simply not be included.


@rick-github commented on GitHub (Apr 10, 2025):

https://huggingface.co/unsloth/gemma-3-4b-it-GGUF/discussions/4


@megvadulthangya commented on GitHub (Apr 12, 2025):

I tried the same model, plus the 12B version, as well as hf.co/mradermacher/gemma3-4b-it-abliterated-GGUF:Q6_K and hf.co/mradermacher/Qwen2.5-VL-7B-Instruct-Vision-R1-GGUF:Q6_K.
Same results: the vision function does not work in ollama.

I'll just say that hf.co/openbmb/MiniCPM-o-2_6-gguf:Q6_K works, but that one is also available on ollama... :)


@rick-github commented on GitHub (Apr 12, 2025):

https://github.com/ollama/ollama/issues/10218#issuecomment-2794144223


@leporel commented on GitHub (Apr 14, 2025):

Same here. I want to use a Q6_K or Q5_K_M quant, but wherever I download it from, vision is unsupported.

ollama run hf.co/lmstudio-community/gemma-3-4b-it-GGUF:Q4_K_M
>>> G:\temp\test_1.png what you see?
Added image 'G:\temp\test_1.png'
Error: Failed to create new sequence: failed to process inputs: this model is missing data required for image input

ollama run hf.co/bartowski/soob3123_amoral-gemma3-12B-GGUF:Q5_K_M
>>> G:\temp\test_1.png what you see?
I am unable to access local files, so I cannot tell you what is in the image file "G:\temp\test_1.png".

@rick-github commented on GitHub (Apr 14, 2025):

If you want a q4_K_M quant with vision, use https://ollama.com/library/gemma3:4b-it-q4_K_M

If you want a q5 or q6 quant with vision, download the fp16 model and create the quant you want:

ollama pull gemma3:12b-it-fp16
echo FROM gemma3:12b-it-fp16 > Modelfile
ollama create -q q6_k gemma3:12b-it-q6_k
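
Once the create step finishes, a quick sanity check (assuming the quantization preserved the projector, and using a placeholder image path) is to attach an image in ollama run. The CLI should print "Added image ..." and describe the picture instead of returning the "missing data required for image input" error:

ollama run gemma3:12b-it-q6_k
>>> ./test.png what do you see?
Added image './test.png'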
Reference: github-starred/ollama#53217