[GH-ISSUE #9860] gemma-3-12b ignores stream=false parameter #6457

Closed
opened 2026-04-12 18:01:09 -05:00 by GiteaMirror · 12 comments

Originally created by @ALLMI78 on GitHub (Mar 18, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9860

### What is the issue?

Ollama 0.6.2 / Windows 10 / RTX 4060 Ti 16 GB

hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M ignores the `stream=false` parameter: the answer comes back as a stream, token by token, and very slowly.

32k context size.

### OS

Windows

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.6.2

GiteaMirror added the bug label 2026-04-12 18:01:09 -05:00

@rick-github commented on GitHub (Mar 18, 2025):

Can you provide an example?

```console
$ curl localhost:11434/api/generate -d '{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","prompt":"hello","stream":false}'
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:00.743860081Z","response":"Hello! 😊 \n\nIt's nice to hear from you. How are you doing today? \n\n\n\nWhat can I do for you?","done":true,"done_reason":"stop","context":[105,2364,107,23391,106,107,105,4368,107,9259,236888,103453,236743,108,1509,236789,236751,6290,531,6899,699,611,236761,2088,659,611,3490,3124,236881,236743,110,3689,740,564,776,573,611,236881],"total_duration":2499818265,"load_duration":1727018071,"prompt_eval_count":10,"prompt_eval_duration":158295723,"eval_count":30,"eval_duration":613025373}

$ curl localhost:11434/api/generate -d '{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","prompt":"hello","stream":true}'
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.065611505Z","response":"Hello","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.086808959Z","response":"!","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.107870975Z","response":" 😊","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.129933115Z","response":" ","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.151469834Z","response":"\n\n","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.172742006Z","response":"It","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.193716715Z","response":"'","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.214524339Z","response":"s","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.23530269Z","response":" nice","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.255759053Z","response":" to","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.276131882Z","response":" hear","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.296509321Z","response":" from","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.316833158Z","response":" you","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.337091125Z","response":".","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.357436876Z","response":" How","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.377756509Z","response":" are","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.398357958Z","response":" you","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.419019518Z","response":" doing","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.439524349Z","response":" today","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.45985912Z","response":"?","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.480178994Z","response":" ","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.500630693Z","response":"\n\n\n\n","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.521067128Z","response":"What","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.541364724Z","response":" can","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.561737035Z","response":" I","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.582315579Z","response":" do","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.602739069Z","response":" for","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.623115145Z","response":" you","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.64332402Z","response":"?","done":false}
{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","created_at":"2025-03-18T17:04:08.663815596Z","response":"","done":true,"done_reason":"stop","context":[105,2364,107,23391,106,107,105,4368,107,9259,236888,103453,236743,108,1509,236789,236751,6290,531,6899,699,611,236761,2088,659,611,3490,3124,236881,236743,110,3689,740,564,776,573,611,236881],"total_duration":881339847,"load_duration":258782885,"prompt_eval_count":10,"prompt_eval_duration":23164881,"eval_count":30,"eval_duration":598690997}
```

@ALLMI78 commented on GitHub (Mar 18, 2025):

```
2025/03/18 18:17:55 routes.go:1230: INFO server config env="map[CUDA_VISIBLE_DEVICES:0 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:32768 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:2 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:M:\OLLAMA\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:true OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
```

In my query:

```
"options":{ "seed":21858796, "num_predict":4096, "top_k":64, "top_p":0.95000000, "min_p":0.01000000, "temperature": 1.00000000, "num_ctx": 32768, "num_batch":128, "num_gpu":100, "main_gpu":0, "repeat_last_n":128, "repeat_penalty":1.00000000, "use_mmap":false, "use_mlock":true, "num_thread":8},"stream":false,"cache_prompt":false,"keep_alive":0}';
```

```console
NAME                                        ID              SIZE     PROCESSOR          UNTIL
hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M    f5e4bfca588c    22 GB    32%/68% CPU/GPU    Stopping...
```

CPU LOAD 30% | GPU LOAD 100%

I send my query, it runs for a long time (around 4 minutes), and then it starts to give out the answer token by token (very slowly).

```
[GIN] 2025/03/18 - 18:18:39 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/03/18 - 18:18:39 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
time=2025-03-18T18:22:38.158+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[10354] text=###
time=2025-03-18T18:22:38.731+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[105839] text=" Analyse"
time=2025-03-18T18:22:39.289+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[236787] text=:
time=2025-03-18T18:22:39.835+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[108] text="\n\n"
time=2025-03-18T18:22:40.400+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[1018] text=**
time=2025-03-18T18:22:40.945+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[21241] text=MN
time=2025-03-18T18:22:41.509+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[236772] text=-
time=2025-03-18T18:22:42.055+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[2267] text=An
time=2025-03-18T18:22:42.624+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[42833] text=alyse
time=2025-03-18T18:22:43.179+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[53121] text=:**
time=2025-03-18T18:22:43.734+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[108] text="\n\n"
time=2025-03-18T18:22:44.296+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded i
```

The log is full of `msg=candidate` and `msg=pair` entries and it is huge; tell me if something is missing.


@rick-github commented on GitHub (Mar 18, 2025):

How are you sending the query, i.e. what client? If you set `OLLAMA_DEBUG=0` the debug output won't be as huge but may still reveal pertinent details.
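
For reference, a minimal way to do that on Windows, assuming Ollama is launched manually from a terminal rather than via the tray app:

```powershell
# Turn off debug logging for this terminal session, then restart the server.
$env:OLLAMA_DEBUG = "0"
ollama serve
```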


@ALLMI78 commented on GitHub (Mar 18, 2025):

Which details from the logs do you need?

The client and setup are complicated; I'm sending the request from PowerShell via Invoke-WebRequest:

```csharp
string pscode = (
    "[Console]::OutputEncoding = [System.Text.Encoding]::UTF8;\n" +
    "$bdy = '" + payload + "';\n" +
    "$bdy = [System.Text.Encoding]::UTF8.GetString([System.Text.Encoding]::UTF8.GetBytes($bdy));\n" +
    "$res = Invoke-WebRequest -Uri '" + apiurl + "' -Method POST -Body $bdy -ContentType 'application/json; charset=utf-8' -TimeoutSec " + (string)SET.req_timeout + ";\n" +
    "$utf = if($res.Content -is [byte[]]) {[System.Text.Encoding]::UTF8.GetString($res.Content)} else {$res.Content};\n" +
    "$utf | Out-File -FilePath '" + outpath + "' -Encoding UTF8;\n");
```

In my setup it is the only way, and it works fine with all other models.
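
For comparison, a standalone sketch of the same non-streaming call; the URI, prompt, output path and timeout here are placeholder assumptions, not the real values from the setup above. Running it against the server directly would show whether the slow token-by-token behaviour comes from the model or from the surrounding client code.

```powershell
# Hypothetical standalone version of the embedded client above; adjust the
# URI, body and output path to match the real setup.
[Console]::OutputEncoding = [System.Text.Encoding]::UTF8

$body = '{"model":"hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M","prompt":"hello","stream":false}'

$res = Invoke-WebRequest -Uri 'http://127.0.0.1:11434/api/generate' `
    -Method POST -Body $body -ContentType 'application/json; charset=utf-8' `
    -TimeoutSec 600
$res.Content | Out-File -FilePath 'answer.json' -Encoding UTF8
```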


@rick-github commented on GitHub (Mar 18, 2025):

> which details from the logs do you need?

All of them.


@ALLMI78 commented on GitHub (Mar 18, 2025):

Maybe the problem is here: no stream parameter?

```
time=2025-03-18T18:40:10.393+01:00 level=INFO source=server.go:405 msg="starting llama server" cmd="C:\Users\admin\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model M:\OLLAMA\models\blobs\sha256-ecb6908345e7a10be94511eae715b6b6eadbc518b7c1dd0fd5ba8816b62b4dc9 --ctx-size 32768 --batch-size 128 --n-gpu-layers 100 --threads 8 --flash-attn --no-mmap --mlock --parallel 1 --port 51176"
```

It runs into my timeout. Does the output with `OLLAMA_DEBUG=0` still show the token generation?

```
PS C:\Users\admin> ollama serve
2025/03/18 18:39:25 routes.go:1230: INFO server config env="map[CUDA_VISIBLE_DEVICES:0 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:32768 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:2 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:M:\OLLAMA\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:true OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-03-18T18:39:25.222+01:00 level=INFO source=images.go:432 msg="total blobs: 39"
time=2025-03-18T18:39:25.227+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-18T18:39:25.230+01:00 level=INFO source=routes.go:1297 msg="Listening on 127.0.0.1:11434 (version 0.6.2)"
time=2025-03-18T18:39:25.230+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-18T18:39:25.230+01:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-03-18T18:39:25.230+01:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=4 efficiency=0 threads=8
time=2025-03-18T18:39:25.411+01:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-cf79912c-84b7-d47e-b92c-67fd3713592f library=cuda compute=8.9 driver=12.8 name="NVIDIA GeForce RTX 4060 Ti" overhead="599.8 MiB"
time=2025-03-18T18:39:25.413+01:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-cf79912c-84b7-d47e-b92c-67fd3713592f library=cuda variant=v12 compute=8.9 driver=12.8 name="NVIDIA GeForce RTX 4060 Ti" total="16.0 GiB" available="14.9 GiB"
time=2025-03-18T18:40:10.269+01:00 level=INFO source=server.go:105 msg="system memory" total="31.0 GiB" free="24.9 GiB" free_swap="26.5 GiB"
time=2025-03-18T18:40:10.286+01:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=100 layers.model=49 layers.offload=32 layers.split="" memory.available="[14.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="21.4 GiB" memory.required.partial="14.5 GiB" memory.required.kv="12.0 GiB" memory.required.allocations="[14.5 GiB]" memory.weights.total="6.0 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.7 MiB" memory.graph.full="279.8 MiB" memory.graph.partial="917.6 MiB" projector.weights="814.6 MiB" projector.graph="0 B"
time=2025-03-18T18:40:10.286+01:00 level=INFO source=server.go:185 msg="enabling flash attention"
time=2025-03-18T18:40:10.286+01:00 level=WARN source=server.go:193 msg="kv cache type not supported by model" type=""
time=2025-03-18T18:40:10.357+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"
time=2025-03-18T18:40:10.372+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-18T18:40:10.377+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-03-18T18:40:10.377+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-03-18T18:40:10.377+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.num_channels default=0
time=2025-03-18T18:40:10.377+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-03-18T18:40:10.377+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.embedding_length default=0
time=2025-03-18T18:40:10.377+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.head_count default=0
time=2025-03-18T18:40:10.377+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-03-18T18:40:10.377+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-03-18T18:40:10.377+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0
time=2025-03-18T18:40:10.377+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"
time=2025-03-18T18:40:10.387+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-18T18:40:10.387+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-18T18:40:10.387+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-18T18:40:10.387+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-18T18:40:10.393+01:00 level=INFO source=server.go:405 msg="starting llama server" cmd="C:\Users\admin\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model M:\OLLAMA\models\blobs\sha256-ecb6908345e7a10be94511eae715b6b6eadbc518b7c1dd0fd5ba8816b62b4dc9 --ctx-size 32768 --batch-size 128 --n-gpu-layers 100 --threads 8 --flash-attn --no-mmap --mlock --parallel 1 --port 51176"
time=2025-03-18T18:40:10.739+01:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-18T18:40:10.739+01:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-03-18T18:40:10.741+01:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-03-18T18:40:10.769+01:00 level=INFO source=runner.go:763 msg="starting ollama engine"
time=2025-03-18T18:40:10.819+01:00 level=INFO source=runner.go:823 msg="Server listening on 127.0.0.1:51176"
time=2025-03-18T18:40:10.897+01:00 level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
time=2025-03-18T18:40:10.897+01:00 level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=Q4_K_M name=Gemma-3-12B-It description="" num_tensors=626 num_key_values=35
time=2025-03-18T18:40:10.994+01:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-sandybridge.dll
time=2025-03-18T18:40:11.035+01:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-03-18T18:40:11.197+01:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CUDA0 size="6.8 GiB"
time=2025-03-18T18:40:11.203+01:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CPU size="787.7 MiB"
time=2025-03-18T18:40:14.585+01:00 level=INFO source=ggml.go:358 msg="compute graph" backend=CUDA0 buffer_type=CUDA0
time=2025-03-18T18:40:14.585+01:00 level=INFO source=ggml.go:358 msg="compute graph" backend=CPU buffer_type=CUDA_Host
time=2025-03-18T18:40:14.586+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"
time=2025-03-18T18:40:14.590+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-18T18:40:14.594+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-03-18T18:40:14.594+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-03-18T18:40:14.594+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.num_channels default=0
time=2025-03-18T18:40:14.594+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-03-18T18:40:14.594+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.embedding_length default=0
time=2025-03-18T18:40:14.594+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.head_count default=0
time=2025-03-18T18:40:14.594+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-03-18T18:40:14.594+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-03-18T18:40:14.594+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0
time=2025-03-18T18:40:14.594+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"
time=2025-03-18T18:40:14.600+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-18T18:40:14.600+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-18T18:40:14.600+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-18T18:40:14.600+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-18T18:40:14.751+01:00 level=INFO source=server.go:619 msg="llama runner started in 4.01 seconds"
[GIN] 2025/03/18 - 18:45:10 | 500 | 4m59s | 127.0.0.1 | POST "/api/chat"
time=2025-03-18T18:45:30.504+01:00 level=INFO source=server.go:105 msg="system memory" total="31.0 GiB" free="25.0 GiB" free_swap="26.5 GiB"
time=2025-03-18T18:45:30.520+01:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=100 layers.model=49 layers.offload=32 layers.split="" memory.available="[14.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="21.4 GiB" memory.required.partial="14.5 GiB" memory.required.kv="12.0 GiB" memory.required.allocations="[14.5 GiB]" memory.weights.total="6.0 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.7 MiB" memory.graph.full="279.8 MiB" memory.graph.partial="917.6 MiB" projector.weights="814.6 MiB" projector.graph="0 B"
time=2025-03-18T18:45:30.520+01:00 level=INFO source=server.go:185 msg="enabling flash attention"
time=2025-03-18T18:45:30.520+01:00 level=WARN source=server.go:193 msg="kv cache type not supported by model" type=""
time=2025-03-18T18:45:30.591+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"
time=2025-03-18T18:45:30.591+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-18T18:45:30.598+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-03-18T18:45:30.598+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-03-18T18:45:30.598+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.num_channels default=0
time=2025-03-18T18:45:30.598+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-03-18T18:45:30.602+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.embedding_length default=0
time=2025-03-18T18:45:30.602+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.head_count default=0
time=2025-03-18T18:45:30.602+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-03-18T18:45:30.602+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-03-18T18:45:30.602+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0
time=2025-03-18T18:45:30.602+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"
time=2025-03-18T18:45:30.612+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-18T18:45:30.612+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-18T18:45:30.612+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-18T18:45:30.612+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-18T18:45:30.612+01:00 level=INFO source=server.go:405 msg="starting llama server" cmd="C:\Users\admin\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model M:\OLLAMA\models\blobs\sha256-ecb6908345e7a10be94511eae715b6b6eadbc518b7c1dd0fd5ba8816b62b4dc9 --ctx-size 32768 --batch-size 128 --n-gpu-layers 100 --threads 8 --flash-attn --no-mmap --mlock --parallel 1 --port 51188"
time=2025-03-18T18:45:30.965+01:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-18T18:45:30.965+01:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-03-18T18:45:30.967+01:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-03-18T18:45:30.994+01:00 level=INFO source=runner.go:763 msg="starting ollama engine"
time=2025-03-18T18:45:31.045+01:00 level=INFO source=runner.go:823 msg="Server listening on 127.0.0.1:51188"
time=2025-03-18T18:45:31.111+01:00 level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
time=2025-03-18T18:45:31.111+01:00 level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=Q4_K_M name=Gemma-3-12B-It description="" num_tensors=626 num_key_values=35
time=2025-03-18T18:45:31.221+01:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-sandybridge.dll
time=2025-03-18T18:45:31.254+01:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-03-18T18:45:31.431+01:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CPU size="787.7 MiB"
time=2025-03-18T18:45:31.431+01:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CUDA0 size="6.8 GiB"
time=2025-03-18T18:45:34.969+01:00 level=INFO source=ggml.go:358 msg="compute graph" backend=CUDA0 buffer_type=CUDA0
time=2025-03-18T18:45:34.969+01:00 level=INFO source=ggml.go:358 msg="compute graph" backend=CPU buffer_type=CUDA_Host
time=2025-03-18T18:45:34.970+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"
time=2025-03-18T18:45:34.976+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-18T18:45:34.979+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-03-18T18:45:34.979+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-03-18T18:45:34.979+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.num_channels default=0
time=2025-03-18T18:45:34.979+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-03-18T18:45:34.979+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.embedding_length default=0
time=2025-03-18T18:45:34.979+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.head_count default=0
time=2025-03-18T18:45:34.979+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-03-18T18:45:34.979+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-03-18T18:45:34.979+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0
time=2025-03-18T18:45:34.979+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"
time=2025-03-18T18:45:34.987+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-18T18:45:34.987+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-18T18:45:34.987+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-18T18:45:34.987+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-18T18:45:35.230+01:00 level=INFO source=server.go:619 msg="llama runner started in 4.26 seconds"
```

In the last run before that, with DEBUG enabled, it was visible how it answered token by token until it reached my timeout...

Author
Owner

@ALLMI78 commented on GitHub (Mar 18, 2025):

I never get an answer...

[GIN] 2025/03/18 - 18:45:10 | 500 | 4m59s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/18 - 18:50:30 | 500 | 4m59s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/18 - 18:55:50 | 500 | 4m59s | 127.0.0.1 | POST "/api/chat"

A Qwen 14B answers the same queries in around 60s...

I'll try to use a higher timeout, but I still think something is wrong with Gemma 3 and larger or more complex contexts...
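
As a minimal sketch of testing with a longer client-side limit (the 600-second value here is an arbitrary assumption; -m/--max-time is curl's overall per-request timeout, and curl applies none by default, so those 4m59s 500s likely come from another client or proxy):

# Sketch: allow up to 10 minutes for the buffered (stream=false) response.
$ curl -m 600 localhost:11434/api/chat -d '{
  "model": "hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M",
  "messages": [{"role": "user", "content": "hello"}],
  "stream": false
}'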

Author
Owner

@pdevine commented on GitHub (Mar 18, 2025):

@ALLMI78 I'm a little confused by the issue. It looks like you're out of memory on your GPU, which would explain the slowness. When you set stream=false, are you getting a single JSON response or a collection of JSON responses similar to stream=true?
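
One quick way to tell the two apart from the command line (a sketch reusing the model from this thread): with stream set to false the body should arrive as a single JSON object on one line, while with streaming each token arrives as its own JSON line, so counting lines distinguishes them.

# Sketch: expect "1" with "stream": false; expect many lines (one per token) with "stream": true.
$ curl -s localhost:11434/api/chat -d '{
  "model": "hf.co/unsloth/gemma-3-12b-it-GGUF:Q4_K_M",
  "messages": [{"role": "user", "content": "hello"}],
  "stream": false
}' | wc -l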

Author
Owner

@ALLMI78 commented on GitHub (Mar 18, 2025):

The memory usage with Gemma 3 is another problem for me: I can run a 14B Qwen just fine, but gemma-3-12b consumes a lot more memory (I need a 32k context).

As you can see here, it answers token by token... and in my options I send stream=false:

time=2025-03-18T18:22:38.158+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[10354] text=###
time=2025-03-18T18:22:38.731+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[105839] text=" Analyse"
time=2025-03-18T18:22:39.289+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[236787] text=:
time=2025-03-18T18:22:39.835+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[108] text="\n\n"
time=2025-03-18T18:22:40.400+01:00 level=DEBUG source=process_text_spm.go:244 msg=decoded ids=[1018] text=**

I’m also confused. I’ve had the setting on stream=false for ages and have never seen this kind of behavior from Ollama before. It was purely by chance (I was running ollama serve with debug=1) that I even noticed that a response from Gemma 3 was coming token by token. Actually, I was searching for the other issues (high memory usage / high CPU load).
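
For anyone reproducing this: the process_text_spm.go lines above only appear when the server is started with debug logging enabled, e.g. (POSIX shell shown as a sketch; on Windows, set the OLLAMA_DEBUG environment variable before starting ollama serve):

# Sketch: start the server with verbose logging to see the per-token decode lines.
$ OLLAMA_DEBUG=1 ollama serve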

Author
Owner

@pdevine commented on GitHub (Mar 18, 2025):

That is just debugging information for the new SPM tokenizer and not the actual output (because your server is running with OLLAMA_DEBUG=1). LLMs still generate one token after another, even if you have stream set to false; we just buffer the generated tokens and then return them to you all at once. What's the actual output of the curl command?

The reason gemma-3-12b requires more memory is that the vision projector is unquantized, so it unfortunately takes up more room. This is also the first model we've released with the new ollama engine, and we've been working through some memory issues that unfortunately slipped through the cracks. 0.6.2 should require less memory than the previous two iterations, and there are more fixes coming which should hopefully reduce this further.
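
As a quick way to see where a loaded model's weights actually ended up (a sketch; ollama ps lists each loaded model with its size and a CPU/GPU split in its PROCESSOR column):

# Sketch: list loaded models with their memory footprint and CPU/GPU split.
$ ollama ps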

Author
Owner

@ALLMI78 commented on GitHub (Mar 18, 2025):

Ahhh... that's just debug output of the token generation or tokenizer process ;)?

My PowerShell client was buffering the answer in the same way it does with stream=true... it looked like the answer was coming token by token... I don’t fully understand why or how, but your explanation makes sense...

I double-checked just to be sure, and you're right, I misinterpreted it. The token generation starts at some point, and because of the debug output of the tokens, I assumed that they were being sent token by token, which was incorrect.

OK, closing... My bad. Thanks for your great work, guys, you're doing very well...

Author
Owner

@pdevine commented on GitHub (Mar 18, 2025):

> Ahhh... that's just debug output of the token generation or tokenizer process ;)?

Yep, it's just extra debugging for tokenizing. I was actually going to turn it off soon (by putting it at a new trace level). Sorry for the confusion!
