[GH-ISSUE #15379] 500 error on /v1/chat/completions with gemma4:26b on Jetson AGX Orin (Ollama 0.20.2, CUDA) #35596

Open
opened 2026-04-22 20:13:16 -05:00 by GiteaMirror · 1 comment

Originally created by @lity2k on GitHub (Apr 7, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15379

What is the issue?

Requests to:
`POST /v1/chat/completions`

sometimes return:
HTTP 500 Internal Server Error

Example log:
[GIN] 2026/04/07 - 09:57:06 | 500 | 16.147442882s | 127.0.0.1 | POST "/v1/chat/completions"

Environment

  • Hardware: NVIDIA Jetson AGX Orin Developer Kit (p3701-0005, 64GB RAM), MAXN power mode
  • Software:
    • JetPack: 6.2.1 (L4T R36.4.7)
    • Ubuntu 22.04.5 LTS
    • Ollama: 0.20.2
    • Model: gemma4:26b (ID: 5571076f3d70), loaded 100% on GPU (~27-29GB)
    • Context: OLLAMA_CONTEXT_LENGTH=131072 (tried reducing to 64k, no change)
    • OpenClaw context set to 64k
  • Ollama service config:
    • Runs as root
    • OLLAMA_HOST=0.0.0.0
    • OLLAMA_KEEP_ALIVE=-1
    • OLLAMA_DEBUG=1
  • Runtime Status
    • ollama ps:
      • gemma4:26b → 100% GPU
      • context: 131072
    • Memory:
      • Total: 62GB
      • Used: 30GB
      • Free: 14GB
      • Available: 31GB
      • Swap: unused
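For reference, the service configuration listed above is typically applied as a systemd drop-in. The file path and layout below are an assumption based on the settings reported in this issue, not the reporter's actual file:

```ini
# Hypothetical drop-in, e.g. /etc/systemd/system/ollama.service.d/override.conf
[Service]
User=root
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_KEEP_ALIVE=-1"
Environment="OLLAMA_DEBUG=1"
Environment="OLLAMA_CONTEXT_LENGTH=131072"
```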

Steps to Reproduce

  1. Start Ollama with the above config.
  2. Load gemma4:26b.
  3. Use OpenClaw to send a /new command (or other tasks with moderate-to-long context/tool use).
  4. The error occurs intermittently, with a higher probability on complex tasks.
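The steps above can be sketched as a minimal standalone reproduction against Ollama's OpenAI-compatible endpoint, removing OpenClaw from the loop. The host, port, and message content are placeholders; only the endpoint path and model name come from this report:

```python
import json
import urllib.request

# Default Ollama address; adjust if OLLAMA_HOST differs.
OLLAMA_URL = "http://127.0.0.1:11434/v1/chat/completions"


def build_payload(model="gemma4:26b", content="Hello"):
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "stream": False,
    }


def send(payload):
    """POST the payload; an HTTPError with code 500 reproduces the report."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(json.dumps(build_payload(), indent=2))
```

Repeating `send(build_payload(content=...))` with long prompts should surface the intermittent 500 if the server-side failure is request-dependent.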

Relevant log output

time=2026-04-07T09:56:20.175+08:00 level=DEBUG source=server.go:433 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin OLLAMA_MODELS=/data/.ollama/models OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=1 OLLAMA_CONTEXT_LENGTH=131072 OLLAMA_KEEP_ALIVE=-1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6
time=2026-04-07T09:56:20.314+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=138.765108ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_jetpack6]" extra_envs=map[]
time=2026-04-07T09:56:20.314+08:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=1
time=2026-04-07T09:56:20.314+08:00 level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=/usr/local/lib/ollama/cuda_jetpack6 description=Orin compute=8.7 id=GPU-6238ccc5-45b3-5519-9f32-831427956f94 pci_id=0000:00:00.0
time=2026-04-07T09:56:20.314+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 46429"
time=2026-04-07T09:56:20.314+08:00 level=DEBUG source=server.go:433 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin OLLAMA_MODELS=/data/.ollama/models OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=1 OLLAMA_CONTEXT_LENGTH=131072 OLLAMA_KEEP_ALIVE=-1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6 CUDA_VISIBLE_DEVICES=GPU-6238ccc5-45b3-5519-9f32-831427956f94 GGML_CUDA_INIT=1
time=2026-04-07T09:56:20.453+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=138.908541ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_jetpack6]" extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-6238ccc5-45b3-5519-9f32-831427956f94 GGML_CUDA_INIT:1]"
time=2026-04-07T09:56:20.453+08:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=278.279694ms
time=2026-04-07T09:56:20.453+08:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-6238ccc5-45b3-5519-9f32-831427956f94 filter_id="" library=CUDA compute=8.7 name=CUDA0 description=Orin libdirs=ollama,cuda_jetpack6 driver=12.6 pci_id=0000:00:00.0 type=iGPU total="61.4 GiB" available="59.3 GiB"
time=2026-04-07T09:56:20.453+08:00 level=INFO source=routes.go:1852 msg="vram-based default context" total_vram="61.4 GiB" default_num_ctx=262144
time=2026-04-07T09:56:50.693+08:00 level=DEBUG source=runner.go:264 msg="refreshing free memory"
time=2026-04-07T09:56:50.693+08:00 level=DEBUG source=runner.go:328 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery"
time=2026-04-07T09:56:50.693+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 35545"
time=2026-04-07T09:56:50.693+08:00 level=DEBUG source=server.go:433 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin OLLAMA_MODELS=/data/.ollama/models OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=1 OLLAMA_CONTEXT_LENGTH=131072 OLLAMA_KEEP_ALIVE=-1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6
time=2026-04-07T09:56:50.839+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=146.355654ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_jetpack6]" extra_envs=map[]
time=2026-04-07T09:56:50.840+08:00 level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=146.574949ms
time=2026-04-07T09:56:50.840+08:00 level=DEBUG source=sched.go:220 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2026-04-07T09:56:50.840+08:00 level=DEBUG source=sched.go:229 msg="loading first model" model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df
time=2026-04-07T09:56:50.999+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-07T09:56:51.085+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-07T09:56:51.086+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
time=2026-04-07T09:56:51.086+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106
time=2026-04-07T09:56:51.087+08:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-07T09:56:51.087+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0
time=2026-04-07T09:56:51.087+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.block_count default=0
time=2026-04-07T09:56:51.087+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.embedding_length default=0
time=2026-04-07T09:56:51.087+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df --port 40915"
time=2026-04-07T09:56:51.087+08:00 level=DEBUG source=server.go:433 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin OLLAMA_MODELS=/data/.ollama/models OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=1 OLLAMA_CONTEXT_LENGTH=131072 OLLAMA_KEEP_ALIVE=-1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6
time=2026-04-07T09:56:51.088+08:00 level=INFO source=sched.go:484 msg="system memory" total="61.4 GiB" free="59.2 GiB" free_swap="30.7 GiB"
time=2026-04-07T09:56:51.088+08:00 level=INFO source=sched.go:491 msg="gpu memory" id=GPU-6238ccc5-45b3-5519-9f32-831427956f94 library=CUDA available="58.8 GiB" free="59.2 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-04-07T09:56:51.088+08:00 level=INFO source=server.go:759 msg="loading model" "model layers"=31 requested=-1
time=2026-04-07T09:56:51.102+08:00 level=INFO source=runner.go:1417 msg="starting ollama engine"
time=2026-04-07T09:56:51.105+08:00 level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:40915"
time=2026-04-07T09:56:51.110+08:00 level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:131072 KvCacheType: NumThreads:12 GPULayers:31[ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Layers:31(0..30)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-07T09:56:51.199+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-07T09:56:51.200+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.name default=""
time=2026-04-07T09:56:51.200+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.description default=""
time=2026-04-07T09:56:51.200+08:00 level=INFO source=ggml.go:136 msg="" architecture=gemma4 file_type=Q4_K_M name="" description="" num_tensors=1014 num_key_values=52
time=2026-04-07T09:56:51.200+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu.so
time=2026-04-07T09:56:51.206+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_jetpack6
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: Orin, compute capability 8.7, VMM: yes, ID: GPU-6238ccc5-45b3-5519-9f32-831427956f94
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_jetpack6/libggml-cuda.so
time=2026-04-07T09:56:51.239+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.LLAMAFILE=1 CPU.1.NEON=1 CPU.1.ARM_FMA=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=870 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-04-07T09:56:51.247+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
time=2026-04-07T09:56:51.247+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106
time=2026-04-07T09:56:51.248+08:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-07T09:56:51.248+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0
time=2026-04-07T09:56:51.248+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.block_count default=0
time=2026-04-07T09:56:51.248+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.embedding_length default=0
time=2026-04-07T09:56:51.268+08:00 level=INFO source=model.go:138 msg="vision: decode" elapsed=1.235849ms bounds=(0,0)-(2048,2048)
time=2026-04-07T09:56:51.427+08:00 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=159.062754ms size="[768 768]"
time=2026-04-07T09:56:51.427+08:00 level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-07T09:56:51.427+08:00 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-07T09:56:51.428+08:00 level=INFO source=model.go:156 msg="vision: encoded" elapsed=161.552338ms shape="[2816 256]"
time=2026-04-07T09:56:51.609+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1272 splits=1
time=2026-04-07T09:56:52.074+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=2852 splits=2
time=2026-04-07T09:56:52.084+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=2850 splits=2
time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="16.6 GiB"
time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=device.go:245 msg="model weights" device=CPU size="667.5 MiB"
time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="3.4 GiB"
time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="6.3 GiB"
time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.5 MiB"
time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=device.go:272 msg="total memory" size="27.0 GiB"
time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=server.go:784 msg=memory success=true required.InputWeights=699924480 required.CPU.Graph=5767168 required.CUDA0.ID=GPU-6238ccc5-45b3-5519-9f32-831427956f94 required.CUDA0.Weights="[590588928 590588928 590588928 493200128 493200128 597213952 493200128 491713280 589102080 493200128 491713280 597213952 491713280 493200128 589102080 491713280 493200128 597213952 491713280 491713280 590588928 491713280 491713280 597213952 493200128 491713280 589102080 590588928 589102080 597213952 1704233472]" required.CUDA0.Cache="[37748736 37748736 37748736 37748736 37748736 536870912 37748736 37748736 37748736 37748736 37748736 536870912 37748736 37748736 37748736 37748736 37748736 536870912 37748736 37748736 37748736 37748736 37748736 536870912 37748736 37748736 37748736 37748736 37748736 536870912 0]" required.CUDA0.Graph=6788880512
time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=GPU-6238ccc5-45b3-5519-9f32-831427956f94 library=CUDA "available layer vram"="52.5 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="6.3 GiB"
time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="31[ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Layers:31(0..30)]"
time=2026-04-07T09:56:52.086+08:00 level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:131072 KvCacheType: NumThreads:12 GPULayers:31[ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Layers:31(0..30)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-07T09:56:52.169+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-07T09:56:55.578+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
time=2026-04-07T09:56:55.578+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106
time=2026-04-07T09:56:55.578+08:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-07T09:56:55.578+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0
time=2026-04-07T09:56:55.578+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.block_count default=0
time=2026-04-07T09:56:55.578+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.embedding_length default=0
time=2026-04-07T09:56:55.592+08:00 level=INFO source=model.go:138 msg="vision: decode" elapsed=3.61003ms bounds=(0,0)-(2048,2048)
time=2026-04-07T09:56:55.752+08:00 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=160.512666ms size="[768 768]"
time=2026-04-07T09:56:55.754+08:00 level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-07T09:56:55.754+08:00 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-07T09:56:55.755+08:00 level=INFO source=model.go:156 msg="vision: encoded" elapsed=167.276953ms shape="[2816 256]"
time=2026-04-07T09:56:55.856+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1272 splits=1
time=2026-04-07T09:56:58.379+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=2852 splits=2
time=2026-04-07T09:56:58.389+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=2850 splits=2
time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="16.6 GiB"
time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=device.go:245 msg="model weights" device=CPU size="667.5 MiB"
time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="3.4 GiB"
time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="6.3 GiB"
time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.5 MiB"
time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=device.go:272 msg="total memory" size="27.0 GiB"
time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=server.go:784 msg=memory success=true required.InputWeights=699924480 required.CPU.Graph=5767168 required.CUDA0.ID=GPU-6238ccc5-45b3-5519-9f32-831427956f94 required.CUDA0.Weights="[590588928 590588928 590588928 493200128 493200128 597213952 493200128 491713280 589102080 493200128 491713280 597213952 491713280 493200128 589102080 491713280 493200128 597213952 491713280 491713280 590588928 491713280 491713280 597213952 493200128 491713280 589102080 590588928 589102080 597213952 1704233472]" required.CUDA0.Cache="[37748736 37748736 37748736 37748736 37748736 536870912 37748736 37748736 37748736 37748736 37748736 536870912 37748736 37748736 37748736 37748736 37748736 536870912 37748736 37748736 37748736 37748736 37748736 536870912 37748736 37748736 37748736 37748736 37748736 536870912 0]" required.CUDA0.Graph=6788880512
time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=GPU-6238ccc5-45b3-5519-9f32-831427956f94 library=CUDA "available layer vram"="52.5 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="6.3 GiB"
time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="31[ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Layers:31(0..30)]"
time=2026-04-07T09:56:58.390+08:00 level=INFO source=runner.go:1290 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:131072 KvCacheType: NumThreads:12 GPULayers:31[ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Layers:31(0..30)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-07T09:56:58.390+08:00 level=INFO source=ggml.go:482 msg="offloading 30 repeating layers to GPU"
time=2026-04-07T09:56:58.390+08:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2026-04-07T09:56:58.390+08:00 level=INFO source=ggml.go:494 msg="offloaded 31/31 layers to GPU"
time=2026-04-07T09:56:58.390+08:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="16.6 GiB"
time=2026-04-07T09:56:58.391+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="667.5 MiB"
time=2026-04-07T09:56:58.391+08:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="3.4 GiB"
time=2026-04-07T09:56:58.391+08:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="6.3 GiB"
time=2026-04-07T09:56:58.391+08:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="5.5 MiB"
time=2026-04-07T09:56:58.391+08:00 level=INFO source=device.go:272 msg="total memory" size="27.0 GiB"
time=2026-04-07T09:56:58.391+08:00 level=INFO source=sched.go:561 msg="loaded runners" count=1
time=2026-04-07T09:56:58.391+08:00 level=INFO source=server.go:1352 msg="waiting for llama runner to start responding"
time=2026-04-07T09:56:58.391+08:00 level=INFO source=server.go:1386 msg="waiting for server to become available" status="llm server loading model"
time=2026-04-07T09:56:58.391+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.00"
time=2026-04-07T09:56:58.642+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.07"
time=2026-04-07T09:56:58.893+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.15"
time=2026-04-07T09:56:59.144+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.22"
time=2026-04-07T09:56:59.395+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.29"
time=2026-04-07T09:56:59.646+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.37"
time=2026-04-07T09:56:59.897+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.44"
time=2026-04-07T09:57:00.148+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.51"
time=2026-04-07T09:57:00.399+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.59"
time=2026-04-07T09:57:00.650+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.66"
time=2026-04-07T09:57:00.901+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.73"
time=2026-04-07T09:57:01.152+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.81"
time=2026-04-07T09:57:01.403+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.88"
time=2026-04-07T09:57:01.654+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.95"
time=2026-04-07T09:57:01.905+08:00 level=DEBUG source=server.go:1396 msg="model load progress 1.00"
time=2026-04-07T09:57:01.953+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
time=2026-04-07T09:57:02.156+08:00 level=INFO source=server.go:1390 msg="llama runner started in 11.07 seconds"
time=2026-04-07T09:57:02.156+08:00 level=DEBUG source=sched.go:573 msg="finished setting up" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072
time=2026-04-07T09:57:02.406+08:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=42320 format=""
time=2026-04-07T09:57:02.645+08:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=0 prompt=10512 used=0 remaining=10512
time=2026-04-07T09:57:06.506+08:00 level=DEBUG source=sched.go:581 msg="context for request finished"
[GIN] 2026/04/07 - 09:57:06 | 500 | 16.147442882s |       127.0.0.1 | POST     "/v1/chat/completions"
time=2026-04-07T09:57:06.507+08:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 duration=2562047h47m16.854775807s
time=2026-04-07T09:57:06.507+08:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 refCount=0
time=2026-04-07T09:57:07.672+08:00 level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df
time=2026-04-07T09:57:07.872+08:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=54360 format=""
time=2026-04-07T09:57:19.762+08:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=10512 prompt=13473 used=0 remaining=13473
[GIN] 2026/04/07 - 09:57:43 | 200 | 36.222069034s |       127.0.0.1 | POST     "/v1/chat/completions"
time=2026-04-07T09:57:43.564+08:00 level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072
time=2026-04-07T09:57:43.564+08:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 duration=2562047h47m16.854775807s
time=2026-04-07T09:57:43.564+08:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 refCount=0
time=2026-04-07T09:57:44.217+08:00 level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df
time=2026-04-07T09:57:44.417+08:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=56928 format=""
time=2026-04-07T09:57:44.624+08:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=13500 prompt=14093 used=13473 remaining=620
[GIN] 2026/04/07 - 09:57:47 | 200 |  3.686131389s |       127.0.0.1 | POST     "/v1/chat/completions"
time=2026-04-07T09:57:47.527+08:00 level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072
time=2026-04-07T09:57:47.528+08:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 duration=2562047h47m16.854775807s
time=2026-04-07T09:57:47.528+08:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 refCount=0
time=2026-04-07T09:57:48.477+08:00 level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df
time=2026-04-07T09:57:48.682+08:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=57528 format=""
time=2026-04-07T09:57:48.895+08:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=14119 prompt=14252 used=14093 remaining=159
[GIN] 2026/04/07 - 09:57:51 | 200 |  3.388897966s |       127.0.0.1 | POST     "/v1/chat/completions"
time=2026-04-07T09:57:51.376+08:00 level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072
time=2026-04-07T09:57:51.376+08:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 duration=2562047h47m16.854775807s
time=2026-04-07T09:57:51.376+08:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 refCount=0
time=2026-04-07T09:57:52.039+08:00 level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df
time=2026-04-07T09:57:52.248+08:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=57811 format=""
time=2026-04-07T09:57:52.460+08:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=14289 prompt=14353 used=14252 remaining=101
[GIN] 2026/04/07 - 09:57:54 | 200 |  2.600673675s |       127.0.0.1 | POST     "/v1/chat/completions"
time=2026-04-07T09:57:54.241+08:00 level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072
time=2026-04-07T09:57:54.241+08:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 duration=2562047h47m16.854775807s
time=2026-04-07T09:57:54.241+08:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 refCount=0
time=2026-04-07T09:57:55.420+08:00 level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df
time=2026-04-07T09:57:55.637+08:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=58719 format=""
time=2026-04-07T09:57:55.852+08:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=14379 prompt=14605 used=14353 remaining=252
[GIN] 2026/04/07 - 09:57:57 | 200 |  3.081346526s |       127.0.0.1 | POST     "/v1/chat/completions"
time=2026-04-07T09:57:57.744+08:00 level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072
time=2026-04-07T09:57:57.744+08:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 duration=2562047h47m16.854775807s
time=2026-04-07T09:57:57.744+08:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 refCount=0

OS

Linux

GPU

Nvidia

CPU

Other

Ollama version

0.20.2
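Given the setup above, a minimal reproduction sketch (stdlib only, no third-party client) that surfaces the 500's response body rather than swallowing it. This assumes Ollama's OpenAI-compatible endpoint at `http://localhost:11434/v1/chat/completions`; the helper names (`build_chat_request`, `post_chat`) are illustrative, not part of any Ollama API.

```python
import json
import urllib.error
import urllib.request


def build_chat_request(model: str, messages: list) -> dict:
    """Assemble an OpenAI-style chat completion payload (non-streaming)."""
    return {"model": model, "messages": messages, "stream": False}


def post_chat(url: str, payload: dict) -> tuple:
    """POST the payload; return (status, body) even when the server errors."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=300) as resp:
            return resp.status, resp.read().decode("utf-8")
    except urllib.error.HTTPError as e:
        # A 500 raises HTTPError; read its body to capture the server's
        # error message, which the Gin access log above does not include.
        return e.code, e.read().decode("utf-8")


# Usage against a live Ollama server (hypothetical host/port):
#   status, body = post_chat(
#       "http://localhost:11434/v1/chat/completions",
#       build_chat_request("gemma4:26b",
#                          [{"role": "user", "content": "hello"}]),
#   )
#   print(status, body)
```

Running this in a loop with the same long-context prompts that trigger the failure should capture the JSON error body accompanying the intermittent 500, which would narrow down where the runner is failing.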

Originally created by @lity2k on GitHub (Apr 7, 2026). Original GitHub issue: https://github.com/ollama/ollama/issues/15379 ### What is the issue? Requests to: `POST` /v1/chat/completions` sometimes return: `HTTP 500 Internal Server Error` Example log: `[GIN] 2026/04/07 - 09:57:06 | 500 | 16.147442882s | 127.0.0.1 | POST "/v1/chat/completions"` ### Environment - **Hardware**: NVIDIA Jetson AGX Orin Developer Kit (p3701-0005, 64GB RAM), MAXN power mode - **Software**: - JetPack: 6.2.1 (L4T R36.4.7) - Ubuntu 22.04.5 LTS - Ollama: 0.20.2 - Model: `gemma4:26b` (ID: 5571076f3d70), loaded 100% on GPU (~27-29GB) - Context: `OLLAMA_CONTEXT_LENGTH=131072` (tried reducing to 64k, no change) - OpenClaw context set to 64k - **Ollama service config**: - Runs as root - `OLLAMA_HOST=0.0.0.0` - `OLLAMA_KEEP_ALIVE=-1` - `OLLAMA_DEBUG=1` - **Runtime Status** - **ollama ps**: - gemma4:26b → 100% GPU - context: 131072 - **Memory**: - Total: 62GB - Used: 30GB - Free: 14GB - Available: 31GB - Swap: unused ### Steps to Reproduce 1. Start Ollama with the above config. 2. Load `gemma4:26b`. 3. Use OpenClaw to send a `/new` command (or other tasks with moderate-to-long context/tool use). 4. Error occurs with ~certain probability (higher on complex tasks). 
### Relevant log output ```shell time=2026-04-07T09:56:20.175+08:00 level=DEBUG source=server.go:433 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin OLLAMA_MODELS=/data/.ollama/models OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=1 OLLAMA_CONTEXT_LENGTH=131072 OLLAMA_KEEP_ALIVE=-1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6 time=2026-04-07T09:56:20.314+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=138.765108ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_jetpack6]" extra_envs=map[] time=2026-04-07T09:56:20.314+08:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=1 time=2026-04-07T09:56:20.314+08:00 level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=/usr/local/lib/ollama/cuda_jetpack6 description=Orin compute=8.7 id=GPU-6238ccc5-45b3-5519-9f32-831427956f94 pci_id=0000:00:00.0 time=2026-04-07T09:56:20.314+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 46429" time=2026-04-07T09:56:20.314+08:00 level=DEBUG source=server.go:433 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin OLLAMA_MODELS=/data/.ollama/models OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=1 OLLAMA_CONTEXT_LENGTH=131072 OLLAMA_KEEP_ALIVE=-1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6 CUDA_VISIBLE_DEVICES=GPU-6238ccc5-45b3-5519-9f32-831427956f94 GGML_CUDA_INIT=1 time=2026-04-07T09:56:20.453+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=138.908541ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_jetpack6]" 
extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-6238ccc5-45b3-5519-9f32-831427956f94 GGML_CUDA_INIT:1]" time=2026-04-07T09:56:20.453+08:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=278.279694ms time=2026-04-07T09:56:20.453+08:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-6238ccc5-45b3-5519-9f32-831427956f94 filter_id="" library=CUDA compute=8.7 name=CUDA0 description=Orin libdirs=ollama,cuda_jetpack6 driver=12.6 pci_id=0000:00:00.0 type=iGPU total="61.4 GiB" available="59.3 GiB" time=2026-04-07T09:56:20.453+08:00 level=INFO source=routes.go:1852 msg="vram-based default context" total_vram="61.4 GiB" default_num_ctx=262144 time=2026-04-07T09:56:50.693+08:00 level=DEBUG source=runner.go:264 msg="refreshing free memory" time=2026-04-07T09:56:50.693+08:00 level=DEBUG source=runner.go:328 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery" time=2026-04-07T09:56:50.693+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 35545" time=2026-04-07T09:56:50.693+08:00 level=DEBUG source=server.go:433 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin OLLAMA_MODELS=/data/.ollama/models OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=1 OLLAMA_CONTEXT_LENGTH=131072 OLLAMA_KEEP_ALIVE=-1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6 time=2026-04-07T09:56:50.839+08:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=146.355654ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_jetpack6]" extra_envs=map[] time=2026-04-07T09:56:50.840+08:00 level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=146.574949ms time=2026-04-07T09:56:50.840+08:00 level=DEBUG source=sched.go:220 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 
gpu_count=1 time=2026-04-07T09:56:50.840+08:00 level=DEBUG source=sched.go:229 msg="loading first model" model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df time=2026-04-07T09:56:50.999+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 time=2026-04-07T09:56:51.085+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 time=2026-04-07T09:56:51.086+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0 time=2026-04-07T09:56:51.086+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106 time=2026-04-07T09:56:51.087+08:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883 time=2026-04-07T09:56:51.087+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0 time=2026-04-07T09:56:51.087+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.block_count default=0 time=2026-04-07T09:56:51.087+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.embedding_length default=0 time=2026-04-07T09:56:51.087+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df --port 40915" time=2026-04-07T09:56:51.087+08:00 level=DEBUG source=server.go:433 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin OLLAMA_MODELS=/data/.ollama/models OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=1 OLLAMA_CONTEXT_LENGTH=131072 OLLAMA_KEEP_ALIVE=-1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6 
OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_jetpack6 time=2026-04-07T09:56:51.088+08:00 level=INFO source=sched.go:484 msg="system memory" total="61.4 GiB" free="59.2 GiB" free_swap="30.7 GiB" time=2026-04-07T09:56:51.088+08:00 level=INFO source=sched.go:491 msg="gpu memory" id=GPU-6238ccc5-45b3-5519-9f32-831427956f94 library=CUDA available="58.8 GiB" free="59.2 GiB" minimum="457.0 MiB" overhead="0 B" time=2026-04-07T09:56:51.088+08:00 level=INFO source=server.go:759 msg="loading model" "model layers"=31 requested=-1 time=2026-04-07T09:56:51.102+08:00 level=INFO source=runner.go:1417 msg="starting ollama engine" time=2026-04-07T09:56:51.105+08:00 level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:40915" time=2026-04-07T09:56:51.110+08:00 level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:131072 KvCacheType: NumThreads:12 GPULayers:31[ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Layers:31(0..30)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-07T09:56:51.199+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 time=2026-04-07T09:56:51.200+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.name default="" time=2026-04-07T09:56:51.200+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.description default="" time=2026-04-07T09:56:51.200+08:00 level=INFO source=ggml.go:136 msg="" architecture=gemma4 file_type=Q4_K_M name="" description="" num_tensors=1014 num_key_values=52 time=2026-04-07T09:56:51.200+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu.so time=2026-04-07T09:56:51.206+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_jetpack6 
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: Orin, compute capability 8.7, VMM: yes, ID: GPU-6238ccc5-45b3-5519-9f32-831427956f94 load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_jetpack6/libggml-cuda.so time=2026-04-07T09:56:51.239+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.LLAMAFILE=1 CPU.1.NEON=1 CPU.1.ARM_FMA=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=870 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang) time=2026-04-07T09:56:51.247+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0 time=2026-04-07T09:56:51.247+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106 time=2026-04-07T09:56:51.248+08:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883 time=2026-04-07T09:56:51.248+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0 time=2026-04-07T09:56:51.248+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.block_count default=0 time=2026-04-07T09:56:51.248+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.embedding_length default=0 time=2026-04-07T09:56:51.268+08:00 level=INFO source=model.go:138 msg="vision: decode" elapsed=1.235849ms bounds=(0,0)-(2048,2048) time=2026-04-07T09:56:51.427+08:00 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=159.062754ms size="[768 768]" time=2026-04-07T09:56:51.427+08:00 level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3 time=2026-04-07T09:56:51.427+08:00 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16 time=2026-04-07T09:56:51.428+08:00 level=INFO 
source=model.go:156 msg="vision: encoded" elapsed=161.552338ms shape="[2816 256]" time=2026-04-07T09:56:51.609+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1272 splits=1 time=2026-04-07T09:56:52.074+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=2852 splits=2 time=2026-04-07T09:56:52.084+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=2850 splits=2 time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="16.6 GiB" time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=device.go:245 msg="model weights" device=CPU size="667.5 MiB" time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="3.4 GiB" time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="6.3 GiB" time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.5 MiB" time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=device.go:272 msg="total memory" size="27.0 GiB" time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=server.go:784 msg=memory success=true required.InputWeights=699924480 required.CPU.Graph=5767168 required.CUDA0.ID=GPU-6238ccc5-45b3-5519-9f32-831427956f94 required.CUDA0.Weights="[590588928 590588928 590588928 493200128 493200128 597213952 493200128 491713280 589102080 493200128 491713280 597213952 491713280 493200128 589102080 491713280 493200128 597213952 491713280 491713280 590588928 491713280 491713280 597213952 493200128 491713280 589102080 590588928 589102080 597213952 1704233472]" required.CUDA0.Cache="[37748736 37748736 37748736 37748736 37748736 536870912 37748736 37748736 37748736 37748736 37748736 536870912 37748736 37748736 37748736 37748736 37748736 536870912 37748736 37748736 37748736 37748736 37748736 536870912 37748736 37748736 37748736 37748736 37748736 536870912 0]" required.CUDA0.Graph=6788880512 time=2026-04-07T09:56:52.085+08:00 
level=DEBUG source=server.go:978 msg="available gpu" id=GPU-6238ccc5-45b3-5519-9f32-831427956f94 library=CUDA "available layer vram"="52.5 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="6.3 GiB" time=2026-04-07T09:56:52.085+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="31[ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Layers:31(0..30)]" time=2026-04-07T09:56:52.086+08:00 level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:131072 KvCacheType: NumThreads:12 GPULayers:31[ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Layers:31(0..30)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-07T09:56:52.169+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 time=2026-04-07T09:56:55.578+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0 time=2026-04-07T09:56:55.578+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106 time=2026-04-07T09:56:55.578+08:00 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883 time=2026-04-07T09:56:55.578+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0 time=2026-04-07T09:56:55.578+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.block_count default=0 time=2026-04-07T09:56:55.578+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.embedding_length default=0 time=2026-04-07T09:56:55.592+08:00 level=INFO source=model.go:138 msg="vision: decode" elapsed=3.61003ms bounds=(0,0)-(2048,2048) time=2026-04-07T09:56:55.752+08:00 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=160.512666ms size="[768 768]" time=2026-04-07T09:56:55.754+08:00 level=INFO 
source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3 time=2026-04-07T09:56:55.754+08:00 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16 time=2026-04-07T09:56:55.755+08:00 level=INFO source=model.go:156 msg="vision: encoded" elapsed=167.276953ms shape="[2816 256]" time=2026-04-07T09:56:55.856+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1272 splits=1 time=2026-04-07T09:56:58.379+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=2852 splits=2 time=2026-04-07T09:56:58.389+08:00 level=DEBUG source=ggml.go:852 msg="compute graph" nodes=2850 splits=2 time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="16.6 GiB" time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=device.go:245 msg="model weights" device=CPU size="667.5 MiB" time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="3.4 GiB" time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="6.3 GiB" time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.5 MiB" time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=device.go:272 msg="total memory" size="27.0 GiB" time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=server.go:784 msg=memory success=true required.InputWeights=699924480 required.CPU.Graph=5767168 required.CUDA0.ID=GPU-6238ccc5-45b3-5519-9f32-831427956f94 required.CUDA0.Weights="[590588928 590588928 590588928 493200128 493200128 597213952 493200128 491713280 589102080 493200128 491713280 597213952 491713280 493200128 589102080 491713280 493200128 597213952 491713280 491713280 590588928 491713280 491713280 597213952 493200128 491713280 589102080 590588928 589102080 597213952 1704233472]" required.CUDA0.Cache="[37748736 37748736 37748736 37748736 37748736 536870912 37748736 37748736 
37748736 37748736 37748736 536870912 37748736 37748736 37748736 37748736 37748736 536870912 37748736 37748736 37748736 37748736 37748736 536870912 37748736 37748736 37748736 37748736 37748736 536870912 0]" required.CUDA0.Graph=6788880512 time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=server.go:978 msg="available gpu" id=GPU-6238ccc5-45b3-5519-9f32-831427956f94 library=CUDA "available layer vram"="52.5 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="6.3 GiB" time=2026-04-07T09:56:58.390+08:00 level=DEBUG source=server.go:795 msg="new layout created" layers="31[ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Layers:31(0..30)]" time=2026-04-07T09:56:58.390+08:00 level=INFO source=runner.go:1290 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:131072 KvCacheType: NumThreads:12 GPULayers:31[ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Layers:31(0..30)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-07T09:56:58.390+08:00 level=INFO source=ggml.go:482 msg="offloading 30 repeating layers to GPU" time=2026-04-07T09:56:58.390+08:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU" time=2026-04-07T09:56:58.390+08:00 level=INFO source=ggml.go:494 msg="offloaded 31/31 layers to GPU" time=2026-04-07T09:56:58.390+08:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="16.6 GiB" time=2026-04-07T09:56:58.391+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="667.5 MiB" time=2026-04-07T09:56:58.391+08:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="3.4 GiB" time=2026-04-07T09:56:58.391+08:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="6.3 GiB" time=2026-04-07T09:56:58.391+08:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="5.5 MiB" time=2026-04-07T09:56:58.391+08:00 level=INFO source=device.go:272 msg="total memory" size="27.0 GiB" 
time=2026-04-07T09:56:58.391+08:00 level=INFO source=sched.go:561 msg="loaded runners" count=1 time=2026-04-07T09:56:58.391+08:00 level=INFO source=server.go:1352 msg="waiting for llama runner to start responding" time=2026-04-07T09:56:58.391+08:00 level=INFO source=server.go:1386 msg="waiting for server to become available" status="llm server loading model" time=2026-04-07T09:56:58.391+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.00" time=2026-04-07T09:56:58.642+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.07" time=2026-04-07T09:56:58.893+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.15" time=2026-04-07T09:56:59.144+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.22" time=2026-04-07T09:56:59.395+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.29" time=2026-04-07T09:56:59.646+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.37" time=2026-04-07T09:56:59.897+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.44" time=2026-04-07T09:57:00.148+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.51" time=2026-04-07T09:57:00.399+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.59" time=2026-04-07T09:57:00.650+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.66" time=2026-04-07T09:57:00.901+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.73" time=2026-04-07T09:57:01.152+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.81" time=2026-04-07T09:57:01.403+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.88" time=2026-04-07T09:57:01.654+08:00 level=DEBUG source=server.go:1396 msg="model load progress 0.95" time=2026-04-07T09:57:01.905+08:00 level=DEBUG source=server.go:1396 msg="model load progress 1.00" time=2026-04-07T09:57:01.953+08:00 level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0 
time=2026-04-07T09:57:02.156+08:00 level=INFO source=server.go:1390 msg="llama runner started in 11.07 seconds" time=2026-04-07T09:57:02.156+08:00 level=DEBUG source=sched.go:573 msg="finished setting up" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 time=2026-04-07T09:57:02.406+08:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=42320 format="" time=2026-04-07T09:57:02.645+08:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=0 prompt=10512 used=0 remaining=10512 time=2026-04-07T09:57:06.506+08:00 level=DEBUG source=sched.go:581 msg="context for request finished" [GIN] 2026/04/07 - 09:57:06 | 500 | 16.147442882s | 127.0.0.1 | POST "/v1/chat/completions" time=2026-04-07T09:57:06.507+08:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 duration=2562047h47m16.854775807s time=2026-04-07T09:57:06.507+08:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 refCount=0 
time=2026-04-07T09:57:07.672+08:00 level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df time=2026-04-07T09:57:07.872+08:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=54360 format="" time=2026-04-07T09:57:19.762+08:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=10512 prompt=13473 used=0 remaining=13473 [GIN] 2026/04/07 - 09:57:43 | 200 | 36.222069034s | 127.0.0.1 | POST "/v1/chat/completions" time=2026-04-07T09:57:43.564+08:00 level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 time=2026-04-07T09:57:43.564+08:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 duration=2562047h47m16.854775807s time=2026-04-07T09:57:43.564+08:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 refCount=0 
time=2026-04-07T09:57:44.217+08:00 level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df time=2026-04-07T09:57:44.417+08:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=56928 format="" time=2026-04-07T09:57:44.624+08:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=13500 prompt=14093 used=13473 remaining=620 [GIN] 2026/04/07 - 09:57:47 | 200 | 3.686131389s | 127.0.0.1 | POST "/v1/chat/completions" time=2026-04-07T09:57:47.527+08:00 level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 time=2026-04-07T09:57:47.528+08:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 duration=2562047h47m16.854775807s time=2026-04-07T09:57:47.528+08:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 refCount=0 
time=2026-04-07T09:57:48.477+08:00 level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df time=2026-04-07T09:57:48.682+08:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=57528 format="" time=2026-04-07T09:57:48.895+08:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=14119 prompt=14252 used=14093 remaining=159 [GIN] 2026/04/07 - 09:57:51 | 200 | 3.388897966s | 127.0.0.1 | POST "/v1/chat/completions" time=2026-04-07T09:57:51.376+08:00 level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 time=2026-04-07T09:57:51.376+08:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 duration=2562047h47m16.854775807s time=2026-04-07T09:57:51.376+08:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 refCount=0 
time=2026-04-07T09:57:52.039+08:00 level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df time=2026-04-07T09:57:52.248+08:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=57811 format="" time=2026-04-07T09:57:52.460+08:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=14289 prompt=14353 used=14252 remaining=101 [GIN] 2026/04/07 - 09:57:54 | 200 | 2.600673675s | 127.0.0.1 | POST "/v1/chat/completions" time=2026-04-07T09:57:54.241+08:00 level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 time=2026-04-07T09:57:54.241+08:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 duration=2562047h47m16.854775807s time=2026-04-07T09:57:54.241+08:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 refCount=0 
time=2026-04-07T09:57:55.420+08:00 level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df
time=2026-04-07T09:57:55.637+08:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=58719 format=""
time=2026-04-07T09:57:55.852+08:00 level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=14379 prompt=14605 used=14353 remaining=252
[GIN] 2026/04/07 - 09:57:57 | 200 | 3.081346526s | 127.0.0.1 | POST "/v1/chat/completions"
time=2026-04-07T09:57:57.744+08:00 level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072
time=2026-04-07T09:57:57.744+08:00 level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 duration=2562047h47m16.854775807s
time=2026-04-07T09:57:57.744+08:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:26b runner.inference="[{ID:GPU-6238ccc5-45b3-5519-9f32-831427956f94 Library:CUDA}]" runner.size="27.0 GiB" runner.vram="27.0 GiB" runner.parallel=1 runner.pid=2700 runner.model=/data/.ollama/models/blobs/sha256-7121486771cbfe218851513210c40b35dbdee93ab1ef43fe36283c883980f0df runner.num_ctx=131072 refCount=0
```

### OS

Linux

### GPU

Nvidia

### CPU

Other

### Ollama version

0.20.2
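For anyone trying to reproduce this, the endpoint in the logs is Ollama's OpenAI-compatible `/v1/chat/completions`. The sketch below sends a batch of identical requests and tallies HTTP status codes, which makes the intermittent-500 rate measurable rather than anecdotal. It assumes the default Ollama port `11434` on localhost and reuses the model name `gemma4:26b` from this report; a real repro of this issue would need a long prompt with tool use, as the reporter describes, so treat the short prompt here as a placeholder.

```python
# Minimal probe for intermittent 500s on /v1/chat/completions.
# Assumptions (adjust for your setup): Ollama on localhost:11434,
# model "gemma4:26b" as in this report. Stdlib only.
import json
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434/v1/chat/completions"


def build_payload(model: str, content: str) -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "stream": False,
    }


def probe(n: int = 20) -> list:
    """Send n identical requests and collect HTTP status codes,
    so the failure rate can be estimated (500s land in HTTPError)."""
    codes = []
    for _ in range(n):
        req = urllib.request.Request(
            OLLAMA_URL,
            data=json.dumps(build_payload("gemma4:26b", "hello")).encode(),
            headers={"Content-Type": "application/json"},
        )
        try:
            with urllib.request.urlopen(req) as resp:
                codes.append(resp.status)
        except urllib.error.HTTPError as e:
            codes.append(e.code)
    return codes


if __name__ == "__main__":
    codes = probe()
    # Histogram of status codes, e.g. {200: 18, 500: 2}
    print({c: codes.count(c) for c in set(codes)})
```

Running this against an idle server versus mid-session (after OpenClaw has built up a long cached context) could help narrow down whether the 500s correlate with the cache-slot reuse visible in the `loading cache slot` log lines.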
GiteaMirror added the bug label 2026-04-22 20:13:16 -05:00

@PureBlissAK commented on GitHub (Apr 18, 2026):

## 🤖 Automated Triage & Analysis Report

**Issue**: #15379
**Analyzed**: 2026-04-18T18:22:26.464340

### Analysis

- **Type**: unknown
- **Severity**: medium
- **Components**: unknown

### Implementation Plan

- **Effort**: medium
- **Steps**:

*This issue has been triaged and marked for implementation.*


Reference: github-starred/ollama#35596