[GH-ISSUE #15388] gemma4 not using Nvidia L40 #71902

Closed
opened 2026-05-05 02:54:56 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @FinoVM on GitHub (Apr 7, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15388

What is the issue?

Although Gemma 4 is loaded onto the GPU (an NVIDIA L40), only the CPU is being utilized.
The ollama ps command shows the following:

NAME          ID              SIZE    PROCESSOR    CONTEXT    UNTIL
gemma4:26b    5571076f3d70    20 GB   100% GPU     32768      46 minutes from now

nvtop shows:

PID        USER      DEV    TYPE       GPU    GPU MEM     CPU      HOST MEM    Command
3665341    ollama    0      Compute    6%     19414MiB    3466%    1669MiB     /usr/local/bin/ollama

All other models, such as gemma3 or ministral, work as expected.
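For anyone trying to reproduce this, the GPU-vs-CPU split can be sampled alongside ollama's own view with standard tools (this is just a diagnostic sketch; it assumes a systemd install with the service unit named "ollama"):

```shell
# What ollama believes is loaded and where (GPU/CPU split, context size)
ollama ps

# Sample real GPU utilization and memory once per second (Ctrl-C to stop);
# on this issue the compute utilization stays near 0-6% during generation
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1

# Follow the server log (OLLAMA_DEBUG=1 for the debug lines shown below)
journalctl -u ollama -f
```

If nvidia-smi reports the model's weights resident in VRAM but near-zero compute utilization while nvtop shows a multi-thousand-percent CPU figure, inference is effectively running on the CPU despite the "100% GPU" processor column.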

Relevant log output

Apr 07 13:07:15 hbrki ollama[3235054]: time=2026-04-07T13:07:15.031+02:00 level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df
Apr 07 13:07:15 hbrki ollama[3235054]: time=2026-04-07T13:07:15.036+02:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=1089 format=""
Apr 07 13:07:15 hbrki ollama[3235054]: time=2026-04-07T13:07:15.041+02:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=223 prompt=264 used=5 remaining=259
Apr 07 13:08:32 hbrki ollama[3235054]: [GIN] 2026/04/07 - 13:08:32 | 200 |         1m17s |      172.17.0.6 | POST     "/api/chat"
Apr 07 13:08:32 hbrki ollama[3235054]: time=2026-04-07T13:08:32.238+02:00 level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c Library:CUDA}]" runner.size="19.3 GiB" runner.vram="19.3 GiB" runner.parallel=1 runner.pid=3236227 runner.model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=32768
Apr 07 13:08:32 hbrki ollama[3235054]: time=2026-04-07T13:08:32.238+02:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c Library:CUDA}]" runner.size="19.3 GiB" runner.vram="19.3 GiB" runner.parallel=1 runner.pid=3236227 runner.model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=32768 duration=1h0m0s
Apr 07 13:08:32 hbrki ollama[3235054]: time=2026-04-07T13:08:32.238+02:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c Library:CUDA}]" runner.size="19.3 GiB" runner.vram="19.3 GiB" runner.parallel=1 runner.pid=3236227 runner.model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=32768 refCount=0
Apr 07 13:08:32 hbrki ollama[3235054]: time=2026-04-07T13:08:32.496+02:00 level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df
Apr 07 13:08:32 hbrki ollama[3235054]: time=2026-04-07T13:08:32.498+02:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=1373 format=""
Apr 07 13:08:32 hbrki ollama[3235054]: time=2026-04-07T13:08:32.502+02:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=876 prompt=319 used=14 remaining=305
Apr 07 13:09:40 hbrki systemd[1]: Stopping ollama.service - Ollama Service...
Apr 07 13:09:40 hbrki ollama[3235054]: time=2026-04-07T13:09:40.158+02:00 level=DEBUG source=sched.go:287 msg="shutting down scheduler completed loop"
Apr 07 13:09:40 hbrki ollama[3235054]: time=2026-04-07T13:09:40.158+02:00 level=DEBUG source=sched.go:908 msg="shutting down runner" model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df
Apr 07 13:09:40 hbrki ollama[3235054]: time=2026-04-07T13:09:40.158+02:00 level=DEBUG source=sched.go:161 msg="shutting down scheduler pending loop"
Apr 07 13:09:40 hbrki ollama[3235054]: time=2026-04-07T13:09:40.158+02:00 level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c Library:CUDA}]" runner.size="19.3 GiB" runner.vram="19.3 GiB" runner.parallel=1 runner.pid=3236227 runner.model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=32768
Apr 07 13:09:40 hbrki ollama[3235054]: time=2026-04-07T13:09:40.158+02:00 level=DEBUG source=server.go:1832 msg="stopping llama server" pid=3236227
Apr 07 13:09:40 hbrki ollama[3235054]: [GIN] 2026/04/07 - 13:09:40 | 500 |          1m7s |      172.17.0.6 | POST     "/api/chat"
Apr 07 13:09:40 hbrki ollama[3235054]: time=2026-04-07T13:09:40.159+02:00 level=DEBUG source=server.go:1838 msg="waiting for llama server to exit" pid=3236227
Apr 07 13:09:40 hbrki ollama[3235054]: time=2026-04-07T13:09:40.293+02:00 level=DEBUG source=server.go:1842 msg="llama server stopped" pid=3236227
Apr 07 13:09:40 hbrki systemd[1]: ollama.service: Deactivated successfully.
Apr 07 13:09:40 hbrki systemd[1]: Stopped ollama.service - Ollama Service.
Apr 07 13:09:40 hbrki systemd[1]: ollama.service: Consumed 1h 39min 35.585s CPU time, 1.5G memory peak, 0B memory swap peak.
Apr 07 13:09:40 hbrki systemd[1]: Started ollama.service - Ollama Service.
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.366+02:00 level=INFO source=routes.go:1744 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:DEBUG OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://10.10.2.75:11434 OLLAMA_KEEP_ALIVE:1h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr1/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.366+02:00 level=INFO source=routes.go:1746 msg="Ollama cloud disabled: false"
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.368+02:00 level=INFO source=images.go:499 msg="total blobs: 35"
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.369+02:00 level=INFO source=images.go:506 msg="total unused blobs removed: 0"
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.369+02:00 level=INFO source=routes.go:1802 msg="Listening on 10.10.2.75:11434 (version 0.20.3)"
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.369+02:00 level=DEBUG source=sched.go:145 msg="starting llm scheduler"
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.370+02:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.370+02:00 level=INFO source=server.go:432 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 33895"
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.370+02:00 level=DEBUG source=server.go:433 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin OLLAMA_HOST=10.10.2.75 OLLAMA_MODELS=/usr1/ollama/.ollama/models OLLAMA_KEEP_ALIVE=60m OLLAMA_FLASH_ATTENTION=1 OLLAMA_NUM_THREADS=32 OLLAMA_CUDA=1 OLLAMA_DEBUG=true LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.479+02:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=108.942073ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs=map[]
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.479+02:00 level=INFO source=server.go:432 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 32995"
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.479+02:00 level=DEBUG source=server.go:433 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin OLLAMA_HOST=10.10.2.75 OLLAMA_MODELS=/usr1/ollama/.ollama/models OLLAMA_KEEP_ALIVE=60m OLLAMA_FLASH_ATTENTION=1 OLLAMA_NUM_THREADS=32 OLLAMA_CUDA=1 OLLAMA_DEBUG=true LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.559+02:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=80.39168ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extra_envs=map[]
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.559+02:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.559+02:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=1
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.559+02:00 level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=/usr/local/lib/ollama/cuda_v12 description="NVIDIA L40" compute=8.9 id=GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c pci_id=0000:b5:00.0
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.559+02:00 level=INFO source=server.go:432 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 34509"
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.559+02:00 level=DEBUG source=server.go:433 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin OLLAMA_HOST=10.10.2.75 OLLAMA_MODELS=/usr1/ollama/.ollama/models OLLAMA_KEEP_ALIVE=60m OLLAMA_FLASH_ATTENTION=1 OLLAMA_NUM_THREADS=32 OLLAMA_CUDA=1 OLLAMA_DEBUG=true LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 CUDA_VISIBLE_DEVICES=GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c GGML_CUDA_INIT=1
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.668+02:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=108.5764ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c GGML_CUDA_INIT:1]"
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.668+02:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=298.454662ms
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.668+02:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c filter_id="" library=CUDA compute=8.9 name=CUDA0 description="NVIDIA L40" libdirs=ollama,cuda_v12 driver=12.8 pci_id=0000:b5:00.0 type=discrete total="45.0 GiB" available="41.2 GiB"
Apr 07 13:09:40 hbrki ollama[3665251]: time=2026-04-07T13:09:40.668+02:00 level=INFO source=routes.go:1852 msg="vram-based default context" total_vram="45.0 GiB" default_num_ctx=32768
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.023+02:00 level=DEBUG source=runner.go:264 msg="refreshing free memory"
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.023+02:00 level=DEBUG source=runner.go:328 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery"
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.023+02:00 level=INFO source=server.go:432 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 38483"
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.023+02:00 level=DEBUG source=server.go:433 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin OLLAMA_HOST=10.10.2.75 OLLAMA_MODELS=/usr1/ollama/.ollama/models OLLAMA_KEEP_ALIVE=60m OLLAMA_FLASH_ATTENTION=1 OLLAMA_NUM_THREADS=32 OLLAMA_CUDA=1 OLLAMA_DEBUG=true LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.136+02:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=112.972623ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs=map[]
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.136+02:00 level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=113.132335ms
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.144+02:00 level=DEBUG source=sched.go:220 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.144+02:00 level=DEBUG source=sched.go:229 msg="loading first model" model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.349+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.497+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.499+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.499+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.500+02:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.500+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.500+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.block_count default=0
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.500+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.embedding_length default=0
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.500+02:00 level=INFO source=server.go:247 msg="enabling flash attention"
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.500+02:00 level=INFO source=server.go:432 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df --port 34573"
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.500+02:00 level=DEBUG source=server.go:433 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin OLLAMA_HOST=10.10.2.75 OLLAMA_MODELS=/usr1/ollama/.ollama/models OLLAMA_KEEP_ALIVE=60m OLLAMA_FLASH_ATTENTION=1 OLLAMA_NUM_THREADS=32 OLLAMA_CUDA=1 OLLAMA_DEBUG=true LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.501+02:00 level=INFO source=sched.go:484 msg="system memory" total="251.4 GiB" free="230.2 GiB" free_swap="8.0 GiB"
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.501+02:00 level=INFO source=sched.go:491 msg="gpu memory" id=GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c library=CUDA available="40.7 GiB" free="41.2 GiB" minimum="457.0 MiB" overhead="0 B"
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.501+02:00 level=INFO source=server.go:759 msg="loading model" "model layers"=31 requested=-1
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.517+02:00 level=INFO source=runner.go:1417 msg="starting ollama engine"
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.517+02:00 level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:34573"
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.523+02:00 level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType: NumThreads:64 GPULayers:31[ID:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c Layers:31(0..30)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.594+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.595+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.name default=""
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.595+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.description default=""
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.595+02:00 level=INFO source=ggml.go:136 msg="" architecture=gemma4 file_type=Q4_K_M name="" description="" num_tensors=1014 num_key_values=52
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.595+02:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Apr 07 13:09:48 hbrki ollama[3665251]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.600+02:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
Apr 07 13:09:48 hbrki ollama[3665251]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Apr 07 13:09:48 hbrki ollama[3665251]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Apr 07 13:09:48 hbrki ollama[3665251]: ggml_cuda_init: found 1 CUDA devices:
Apr 07 13:09:48 hbrki ollama[3665251]:   Device 0: NVIDIA L40, compute capability 8.9, VMM: yes, ID: GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c
Apr 07 13:09:48 hbrki ollama[3665251]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.664+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.669+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.669+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.670+02:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.670+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.670+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.block_count default=0
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.670+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.embedding_length default=0
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.695+02:00 level=INFO source=model.go:138 msg="vision: decode" elapsed=10.390409ms bounds=(0,0)-(2048,2048)
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.849+02:00 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=153.769947ms size="[768 768]"
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.849+02:00 level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.849+02:00 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
Apr 07 13:09:48 hbrki ollama[3665251]: time=2026-04-07T13:09:48.850+02:00 level=INFO source=model.go:156 msg="vision: encoded" elapsed=165.197362ms shape="[2816 256]"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.028+02:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1137 splits=1
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.265+02:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=2734 splits=12
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.272+02:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=2732 splits=12
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.273+02:00 level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="16.6 GiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.273+02:00 level=DEBUG source=device.go:245 msg="model weights" device=CPU size="667.5 MiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.273+02:00 level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="1.5 GiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.273+02:00 level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="323.9 MiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.273+02:00 level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="192.0 MiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.273+02:00 level=DEBUG source=device.go:272 msg="total memory" size="19.3 GiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.273+02:00 level=DEBUG source=server.go:784 msg=memory success=true required.InputWeights=699924480 required.CPU.Graph=201326592 required.CUDA0.ID=GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c required.CUDA0.Weights="[590588928 590588928 590588928 493200128 493200128 597213952 493200128 491713280 589102080 493200128 491713280 597213952 491713280 493200128 589102080 491713280 493200128 597213952 491713280 491713280 590588928 491713280 491713280 597213952 493200128 491713280 589102080 590588928 589102080 597213952 1704233472]" required.CUDA0.Cache="[37748736 37748736 37748736 37748736 37748736 134217728 37748736 37748736 37748736 37748736 37748736 134217728 37748736 37748736 37748736 37748736 37748736 134217728 37748736 37748736 37748736 37748736 37748736 134217728 37748736 37748736 37748736 37748736 37748736 134217728 0]" required.CUDA0.Graph=339613696
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.274+02:00 level=DEBUG source=server.go:978 msg="available gpu" id=GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c library=CUDA "available layer vram"="40.4 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="323.9 MiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.274+02:00 level=DEBUG source=server.go:795 msg="new layout created" layers="31[ID:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c Layers:31(0..30)]"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.274+02:00 level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType: NumThreads:64 GPULayers:31[ID:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c Layers:31(0..30)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.386+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.395+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.395+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.396+02:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.396+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.396+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.block_count default=0
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.396+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.embedding_length default=0
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.418+02:00 level=INFO source=model.go:138 msg="vision: decode" elapsed=1.994265ms bounds=(0,0)-(2048,2048)
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.557+02:00 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=138.91198ms size="[768 768]"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.560+02:00 level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.560+02:00 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.561+02:00 level=INFO source=model.go:156 msg="vision: encoded" elapsed=145.25369ms shape="[2816 256]"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.565+02:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1137 splits=1
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.765+02:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=2734 splits=12
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.775+02:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=2732 splits=12
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.775+02:00 level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="16.6 GiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.775+02:00 level=DEBUG source=device.go:245 msg="model weights" device=CPU size="667.5 MiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.775+02:00 level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="1.5 GiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.775+02:00 level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="323.9 MiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.775+02:00 level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="192.0 MiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.775+02:00 level=DEBUG source=device.go:272 msg="total memory" size="19.3 GiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.776+02:00 level=DEBUG source=server.go:784 msg=memory success=true required.InputWeights=699924480 required.CPU.Graph=201326592 required.CUDA0.ID=GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c required.CUDA0.Weights="[590588928 590588928 590588928 493200128 493200128 597213952 493200128 491713280 589102080 493200128 491713280 597213952 491713280 493200128 589102080 491713280 493200128 597213952 491713280 491713280 590588928 491713280 491713280 597213952 493200128 491713280 589102080 590588928 589102080 597213952 1704233472]" required.CUDA0.Cache="[37748736 37748736 37748736 37748736 37748736 134217728 37748736 37748736 37748736 37748736 37748736 134217728 37748736 37748736 37748736 37748736 37748736 134217728 37748736 37748736 37748736 37748736 37748736 134217728 37748736 37748736 37748736 37748736 37748736 134217728 0]" required.CUDA0.Graph=339613696
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.776+02:00 level=DEBUG source=server.go:978 msg="available gpu" id=GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c library=CUDA "available layer vram"="40.4 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="323.9 MiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.776+02:00 level=DEBUG source=server.go:795 msg="new layout created" layers="31[ID:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c Layers:31(0..30)]"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.776+02:00 level=INFO source=runner.go:1290 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType: NumThreads:64 GPULayers:31[ID:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c Layers:31(0..30)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.777+02:00 level=INFO source=ggml.go:482 msg="offloading 30 repeating layers to GPU"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.777+02:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.777+02:00 level=INFO source=ggml.go:494 msg="offloaded 31/31 layers to GPU"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.777+02:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="16.6 GiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.777+02:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="667.5 MiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.777+02:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="1.5 GiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.777+02:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="323.9 MiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.777+02:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="192.0 MiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.777+02:00 level=INFO source=device.go:272 msg="total memory" size="19.3 GiB"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.777+02:00 level=INFO source=sched.go:561 msg="loaded runners" count=1
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.777+02:00 level=INFO source=server.go:1352 msg="waiting for llama runner to start responding"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.778+02:00 level=INFO source=server.go:1386 msg="waiting for server to become available" status="llm server loading model"
Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.778+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.00"
Apr 07 13:09:50 hbrki ollama[3665251]: time=2026-04-07T13:09:50.029+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.08"
Apr 07 13:09:50 hbrki ollama[3665251]: time=2026-04-07T13:09:50.280+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.14"
Apr 07 13:09:50 hbrki ollama[3665251]: time=2026-04-07T13:09:50.531+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.19"
Apr 07 13:09:50 hbrki ollama[3665251]: time=2026-04-07T13:09:50.782+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.25"
Apr 07 13:09:51 hbrki ollama[3665251]: time=2026-04-07T13:09:51.033+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.31"
Apr 07 13:09:51 hbrki ollama[3665251]: time=2026-04-07T13:09:51.284+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.37"
Apr 07 13:09:51 hbrki ollama[3665251]: time=2026-04-07T13:09:51.535+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.43"
Apr 07 13:09:51 hbrki ollama[3665251]: time=2026-04-07T13:09:51.786+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.48"
Apr 07 13:09:52 hbrki ollama[3665251]: time=2026-04-07T13:09:52.037+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.54"
Apr 07 13:09:52 hbrki ollama[3665251]: time=2026-04-07T13:09:52.288+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.60"
Apr 07 13:09:52 hbrki ollama[3665251]: time=2026-04-07T13:09:52.539+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.66"
Apr 07 13:09:52 hbrki ollama[3665251]: time=2026-04-07T13:09:52.790+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.72"
Apr 07 13:09:53 hbrki ollama[3665251]: time=2026-04-07T13:09:53.041+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.78"
Apr 07 13:09:53 hbrki ollama[3665251]: time=2026-04-07T13:09:53.292+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.84"
Apr 07 13:09:53 hbrki ollama[3665251]: time=2026-04-07T13:09:53.543+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.90"
Apr 07 13:09:53 hbrki ollama[3665251]: time=2026-04-07T13:09:53.794+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.96"
Apr 07 13:09:54 hbrki ollama[3665251]: time=2026-04-07T13:09:54.044+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.99"
Apr 07 13:09:54 hbrki ollama[3665251]: time=2026-04-07T13:09:54.153+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
Apr 07 13:09:54 hbrki ollama[3665251]: time=2026-04-07T13:09:54.295+02:00 level=INFO source=server.go:1390 msg="llama runner started in 5.79 seconds"
Apr 07 13:09:54 hbrki ollama[3665251]: time=2026-04-07T13:09:54.295+02:00 level=DEBUG source=sched.go:573 msg="finished setting up" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c Library:CUDA}]" runner.size="19.3 GiB" runner.vram="19.3 GiB" runner.parallel=1 runner.pid=3665341 runner.model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=32768
Apr 07 13:09:54 hbrki ollama[3665251]: time=2026-04-07T13:09:54.415+02:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=87 format=""
Apr 07 13:09:54 hbrki ollama[3665251]: time=2026-04-07T13:09:54.513+02:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=0 prompt=19 used=0 remaining=19
Apr 07 13:10:25 hbrki ollama[3665251]: [GIN] 2026/04/07 - 13:10:25 | 200 | 38.113744124s |      172.17.0.6 | POST     "/api/chat"
Apr 07 13:10:25 hbrki ollama[3665251]: time=2026-04-07T13:10:25.731+02:00 level=DEBUG source=sched.go:581 msg="context for request finished"
Apr 07 13:10:25 hbrki ollama[3665251]: time=2026-04-07T13:10:25.731+02:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c Library:CUDA}]" runner.size="19.3 GiB" runner.vram="19.3 GiB" runner.parallel=1 runner.pid=3665341 runner.model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=32768 duration=1h0m0s
Apr 07 13:10:25 hbrki ollama[3665251]: time=2026-04-07T13:10:25.731+02:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c Library:CUDA}]" runner.size="19.3 GiB" runner.vram="19.3 GiB" runner.parallel=1 runner.pid=3665341 runner.model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=32768 refCount=0
Apr 07 13:10:26 hbrki ollama[3665251]: time=2026-04-07T13:10:26.057+02:00 level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df
Apr 07 13:10:26 hbrki ollama[3665251]: time=2026-04-07T13:10:26.061+02:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=1089 format=""
Apr 07 13:10:26 hbrki ollama[3665251]: time=2026-04-07T13:10:26.065+02:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=269 prompt=264 used=5 remaining=259

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.20.3

level=INFO source=server.go:1352 msg="waiting for llama runner to start responding" Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.778+02:00 level=INFO source=server.go:1386 msg="waiting for server to become available" status="llm server loading model" Apr 07 13:09:49 hbrki ollama[3665251]: time=2026-04-07T13:09:49.778+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.00" Apr 07 13:09:50 hbrki ollama[3665251]: time=2026-04-07T13:09:50.029+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.08" Apr 07 13:09:50 hbrki ollama[3665251]: time=2026-04-07T13:09:50.280+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.14" Apr 07 13:09:50 hbrki ollama[3665251]: time=2026-04-07T13:09:50.531+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.19" Apr 07 13:09:50 hbrki ollama[3665251]: time=2026-04-07T13:09:50.782+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.25" Apr 07 13:09:51 hbrki ollama[3665251]: time=2026-04-07T13:09:51.033+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.31" Apr 07 13:09:51 hbrki ollama[3665251]: time=2026-04-07T13:09:51.284+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.37" Apr 07 13:09:51 hbrki ollama[3665251]: time=2026-04-07T13:09:51.535+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.43" Apr 07 13:09:51 hbrki ollama[3665251]: time=2026-04-07T13:09:51.786+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.48" Apr 07 13:09:52 hbrki ollama[3665251]: time=2026-04-07T13:09:52.037+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.54" Apr 07 13:09:52 hbrki ollama[3665251]: time=2026-04-07T13:09:52.288+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.60" Apr 07 13:09:52 hbrki ollama[3665251]: time=2026-04-07T13:09:52.539+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.66" Apr 07 13:09:52 hbrki ollama[3665251]: 
time=2026-04-07T13:09:52.790+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.72" Apr 07 13:09:53 hbrki ollama[3665251]: time=2026-04-07T13:09:53.041+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.78" Apr 07 13:09:53 hbrki ollama[3665251]: time=2026-04-07T13:09:53.292+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.84" Apr 07 13:09:53 hbrki ollama[3665251]: time=2026-04-07T13:09:53.543+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.90" Apr 07 13:09:53 hbrki ollama[3665251]: time=2026-04-07T13:09:53.794+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.96" Apr 07 13:09:54 hbrki ollama[3665251]: time=2026-04-07T13:09:54.044+02:00 level=DEBUG source=server.go:1396 msg="model load progress 0.99" Apr 07 13:09:54 hbrki ollama[3665251]: time=2026-04-07T13:09:54.153+02:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0 Apr 07 13:09:54 hbrki ollama[3665251]: time=2026-04-07T13:09:54.295+02:00 level=INFO source=server.go:1390 msg="llama runner started in 5.79 seconds" Apr 07 13:09:54 hbrki ollama[3665251]: time=2026-04-07T13:09:54.295+02:00 level=DEBUG source=sched.go:573 msg="finished setting up" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c Library:CUDA}]" runner.size="19.3 GiB" runner.vram="19.3 GiB" runner.parallel=1 runner.pid=3665341 runner.model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=32768 Apr 07 13:09:54 hbrki ollama[3665251]: time=2026-04-07T13:09:54.415+02:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=87 format="" Apr 07 13:09:54 hbrki ollama[3665251]: time=2026-04-07T13:09:54.513+02:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=0 prompt=19 used=0 remaining=19 Apr 07 13:10:25 hbrki ollama[3665251]: [GIN] 2026/04/07 - 
13:10:25 | 200 | 38.113744124s | 172.17.0.6 | POST "/api/chat" Apr 07 13:10:25 hbrki ollama[3665251]: time=2026-04-07T13:10:25.731+02:00 level=DEBUG source=sched.go:581 msg="context for request finished" Apr 07 13:10:25 hbrki ollama[3665251]: time=2026-04-07T13:10:25.731+02:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c Library:CUDA}]" runner.size="19.3 GiB" runner.vram="19.3 GiB" runner.parallel=1 runner.pid=3665341 runner.model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=32768 duration=1h0m0s Apr 07 13:10:25 hbrki ollama[3665251]: time=2026-04-07T13:10:25.731+02:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-4ecb98ca-a1c7-06d4-91ba-03e8e763c11c Library:CUDA}]" runner.size="19.3 GiB" runner.vram="19.3 GiB" runner.parallel=1 runner.pid=3665341 runner.model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=32768 refCount=0 Apr 07 13:10:26 hbrki ollama[3665251]: time=2026-04-07T13:10:26.057+02:00 level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/usr1/ollama/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df Apr 07 13:10:26 hbrki ollama[3665251]: time=2026-04-07T13:10:26.061+02:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=1089 format="" Apr 07 13:10:26 hbrki ollama[3665251]: time=2026-04-07T13:10:26.065+02:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=269 prompt=264 used=5 remaining=259 ``` ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.20.3
GiteaMirror added the bug label 2026-05-05 02:54:56 -05:00
@rick-github commented on GitHub (Apr 7, 2026):

Disable flash attention. #15237
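
For anyone landing here, the log above confirms the runner was started with `FlashAttention:Enabled`. Ollama controls this via the `OLLAMA_FLASH_ATTENTION` environment variable; one way to turn it off for a systemd-managed install is sketched below (service name `ollama` assumed per the standard Linux install):

```shell
# Disable flash attention for the Ollama server.
# OLLAMA_FLASH_ATTENTION=0 turns the feature off.

# For a systemd-managed install, add an override:
sudo systemctl edit ollama
# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_FLASH_ATTENTION=0"
# then apply it:
sudo systemctl restart ollama

# Or, when running the server by hand:
OLLAMA_FLASH_ATTENTION=0 ollama serve
```

After restarting, `msg=load request` in the debug log should show `FlashAttention:Disabled` instead of `Enabled`.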

Reference: github-starred/ollama#71902