[GH-ISSUE #12470] GPT-OSS JSON Structure Not Being Applied #34046

Closed · opened 2026-04-22 17:16:22 -05:00 by GiteaMirror (Owner) · 0 comments

Originally created by @infosechoudini on GitHub (Oct 1, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12470

### What is the issue?

When using gpt-oss:20b and gpt-oss:120b, the models do not follow the structured-output (`format`) request: the response contains no JSON at all. The same request with Llama models returns JSON as expected. Downgrading to other versions that support the gpt-oss models does not help either; I've tested back to 0.11.4.

```shell
curl -X POST http://localhost:11434/api/chat -H "Content-Type: application/json" -d '{
  "model": "gpt-oss:20b",
  "messages": [{"role": "user", "content": "Ollama is 22 years old and busy saving the world. Return a JSON object with the age and availability."}],
  "stream": false,
  "think": true,
  "format": {
    "type": "object",
    "properties": {
      "age": {
        "type": "integer"
      },
      "available": {
        "type": "boolean"
      }
    },
    "required": [
      "age",
      "available"
    ]
  },
  "options": {
    "temperature": 0
  }
}'
{"model":"gpt-oss:20b","created_at":"2025-10-01T15:20:15.818711466Z","message":{"role":"assistant","content":""},"done_reason":"stop","done":true,"total_duration":1287079845,"load_duration":99614630,"prompt_eval_count":91,"prompt_eval_duration":87573584,"eval_count":12,"eval_duration":919269467}
```
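To make the failure easy to check programmatically, here is a small stdlib-only Python sketch (the `check_response` helper is my own, not part of Ollama) that validates the assistant content against the same schema passed in the request's `format` field. With the empty `content` returned by gpt-oss:20b above it reports the structured output as missing, while a conforming reply passes:

```python
import json

# The same schema passed in the "format" field of the request above.
SCHEMA = {
    "type": "object",
    "properties": {
        "age": {"type": "integer"},
        "available": {"type": "boolean"},
    },
    "required": ["age", "available"],
}

def check_response(body: str) -> bool:
    """Return True if the assistant message content is a JSON object
    containing every field the schema marks as required."""
    content = json.loads(body)["message"]["content"]
    try:
        obj = json.loads(content)
    except json.JSONDecodeError:
        return False  # empty or non-JSON content, as seen with gpt-oss
    return isinstance(obj, dict) and all(k in obj for k in SCHEMA["required"])

# Abbreviated gpt-oss:20b response from this report: content is empty.
bad = '{"message": {"role": "assistant", "content": ""}}'
# The shape a conforming model (e.g. a Llama model) returns.
good = '{"message": {"role": "assistant", "content": "{\\"age\\": 22, \\"available\\": false}"}}'

print(check_response(bad), check_response(good))  # False True
```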

### Relevant log output

```shell
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:20.626Z level=INFO source=server.go:686 msg="gpu memory" id=GPU-9bd26c2b-43f6-a171-c9fb-dcd21373f3b9 available="13.1 GiB" free="13.6 GiB" minimum="457.0 MiB" overhead="0 B"
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:20.628Z level=INFO source=runner.go:1171 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-9bd26c2b-43f6-a171-c9fb-dcd21373f3b9 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:20.710Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:20.712Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:20.712Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:20.712Z level=INFO source=ggml.go:131 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=315 num_key_values=30
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:20.712Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:20.720Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]: ggml_cuda_init: found 1 CUDA devices:
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]:   Device 0: Tesla T4, compute capability 7.5, VMM: yes, ID: GPU-9bd26c2b-43f6-a171-c9fb-dcd21373f3b9
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:20.841Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Oct 01 15:08:20 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:20.843Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.102Z level=DEBUG source=ggml.go:794 msg="compute graph" nodes=1325 splits=2
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.103Z level=DEBUG source=backend.go:310 msg="model weights" device=CUDA0 size="11.8 GiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.103Z level=DEBUG source=backend.go:315 msg="model weights" device=CPU size="1.1 GiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.103Z level=DEBUG source=backend.go:321 msg="kv cache" device=CUDA0 size="204.0 MiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.103Z level=DEBUG source=backend.go:332 msg="compute graph" device=CUDA0 size="117.8 MiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.103Z level=DEBUG source=backend.go:337 msg="compute graph" device=CPU size="5.6 MiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.103Z level=DEBUG source=backend.go:342 msg="total memory" size="13.2 GiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.103Z level=DEBUG source=server.go:717 msg=memory success=true required.InputWeights=1158266880U required.CPU.Graph=5898240U required.CUDA0.ID=GPU-9bd26c2b-43f6-a171-c9fb-dcd21373f3b9 required.CUDA0.Weights="[477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 477628800U 1158278400U]" required.CUDA0.Cache="[9437184U 8388608U 9437184U 8388608U 9437184U 8388608U 9437184U 8388608U 9437184U 8388608U 9437184U 8388608U 9437184U 8388608U 9437184U 8388608U 9437184U 8388608U 9437184U 8388608U 9437184U 8388608U 9437184U 8388608U 0U]" required.CUDA0.Graph=123472000U
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.103Z level=DEBUG source=server.go:894 msg="available gpu" id=GPU-9bd26c2b-43f6-a171-c9fb-dcd21373f3b9 "available layer vram"="13.0 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="117.8 MiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.103Z level=DEBUG source=server.go:728 msg="new layout created" layers="25[ID:GPU-9bd26c2b-43f6-a171-c9fb-dcd21373f3b9 Layers:25(0..24)]"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.104Z level=INFO source=runner.go:1171 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-9bd26c2b-43f6-a171-c9fb-dcd21373f3b9 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.183Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.210Z level=DEBUG source=ggml.go:794 msg="compute graph" nodes=1325 splits=2
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.211Z level=DEBUG source=backend.go:310 msg="model weights" device=CUDA0 size="11.8 GiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.211Z level=DEBUG source=backend.go:315 msg="model weights" device=CPU size="1.1 GiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.211Z level=DEBUG source=backend.go:321 msg="kv cache" device=CUDA0 size="204.0 MiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.211Z level=DEBUG source=backend.go:332 msg="compute graph" device=CUDA0 size="117.8 MiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.211Z level=DEBUG source=backend.go:337 msg="compute graph" device=CPU size="5.6 MiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.211Z level=DEBUG source=backend.go:342 msg="total memory" size="13.2 GiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.211Z level=DEBUG source=server.go:717 msg=memory success=true required.InputWeights=1158266880A required.CPU.Graph=5898240A required.CUDA0.ID=GPU-9bd26c2b-43f6-a171-c9fb-dcd21373f3b9 required.CUDA0.Weights="[477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 477628800A 1158278400A]" required.CUDA0.Cache="[9437184A 8388608A 9437184A 8388608A 9437184A 8388608A 9437184A 8388608A 9437184A 8388608A 9437184A 8388608A 9437184A 8388608A 9437184A 8388608A 9437184A 8388608A 9437184A 8388608A 9437184A 8388608A 9437184A 8388608A 0U]" required.CUDA0.Graph=123472000A
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.211Z level=DEBUG source=server.go:894 msg="available gpu" id=GPU-9bd26c2b-43f6-a171-c9fb-dcd21373f3b9 "available layer vram"="13.0 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="117.8 MiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.211Z level=DEBUG source=server.go:728 msg="new layout created" layers="25[ID:GPU-9bd26c2b-43f6-a171-c9fb-dcd21373f3b9 Layers:25(0..24)]"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.211Z level=INFO source=runner.go:1171 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-9bd26c2b-43f6-a171-c9fb-dcd21373f3b9 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.211Z level=INFO source=ggml.go:487 msg="offloading 24 repeating layers to GPU"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.212Z level=INFO source=ggml.go:493 msg="offloading output layer to GPU"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.212Z level=INFO source=ggml.go:498 msg="offloaded 25/25 layers to GPU"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.212Z level=INFO source=backend.go:310 msg="model weights" device=CUDA0 size="11.8 GiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.212Z level=INFO source=backend.go:315 msg="model weights" device=CPU size="1.1 GiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.212Z level=INFO source=backend.go:321 msg="kv cache" device=CUDA0 size="204.0 MiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.212Z level=INFO source=backend.go:332 msg="compute graph" device=CUDA0 size="117.8 MiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.212Z level=INFO source=backend.go:337 msg="compute graph" device=CPU size="5.6 MiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.212Z level=INFO source=backend.go:342 msg="total memory" size="13.2 GiB"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.212Z level=INFO source=sched.go:470 msg="loaded runners" count=1
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.212Z level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.212Z level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.212Z level=DEBUG source=server.go:1295 msg="model load progress 0.00"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.463Z level=DEBUG source=server.go:1295 msg="model load progress 0.04"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.714Z level=DEBUG source=server.go:1295 msg="model load progress 0.09"
Oct 01 15:08:21 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:21.966Z level=DEBUG source=server.go:1295 msg="model load progress 0.13"
Oct 01 15:08:22 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:22.217Z level=DEBUG source=server.go:1295 msg="model load progress 0.18"
Oct 01 15:08:22 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:22.468Z level=DEBUG source=server.go:1295 msg="model load progress 0.22"
Oct 01 15:08:22 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:22.719Z level=DEBUG source=server.go:1295 msg="model load progress 0.26"
Oct 01 15:08:22 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:22.970Z level=DEBUG source=server.go:1295 msg="model load progress 0.31"
Oct 01 15:08:23 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:23.221Z level=DEBUG source=server.go:1295 msg="model load progress 0.35"
Oct 01 15:08:23 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:23.472Z level=DEBUG source=server.go:1295 msg="model load progress 0.39"
Oct 01 15:08:23 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:23.723Z level=DEBUG source=server.go:1295 msg="model load progress 0.44"
Oct 01 15:08:23 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:23.974Z level=DEBUG source=server.go:1295 msg="model load progress 0.48"
Oct 01 15:08:24 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:24.225Z level=DEBUG source=server.go:1295 msg="model load progress 0.53"
Oct 01 15:08:24 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:24.476Z level=DEBUG source=server.go:1295 msg="model load progress 0.57"
Oct 01 15:08:24 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:24.727Z level=DEBUG source=server.go:1295 msg="model load progress 0.62"
Oct 01 15:08:24 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:24.978Z level=DEBUG source=server.go:1295 msg="model load progress 0.66"
Oct 01 15:08:25 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:25.229Z level=DEBUG source=server.go:1295 msg="model load progress 0.71"
Oct 01 15:08:25 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:25.480Z level=DEBUG source=server.go:1295 msg="model load progress 0.76"
Oct 01 15:08:25 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:25.732Z level=DEBUG source=server.go:1295 msg="model load progress 0.83"
Oct 01 15:08:25 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:25.983Z level=DEBUG source=server.go:1295 msg="model load progress 0.90"
Oct 01 15:08:26 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:26.234Z level=DEBUG source=server.go:1295 msg="model load progress 0.96"
Oct 01 15:08:26 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:26.485Z level=DEBUG source=server.go:1295 msg="model load progress 0.99"
Oct 01 15:08:26 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:26.583Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
Oct 01 15:08:26 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:26.737Z level=INFO source=server.go:1289 msg="llama runner started in 6.45 seconds"
Oct 01 15:08:26 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:26.737Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096
Oct 01 15:08:26 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:26.737Z level=DEBUG source=server.go:1388 msg="completion request" images=0 prompt=403 format="{\n    \"type\": \"object\",\n    \"properties\": {\n      \"age\": {\n        \"type\": \"integer\"\n      },\n      \"available\": {\n        \"type\": \"boolean\"\n      }\n    },\n    \"required\": [\n      \"age\",\n      \"available\"\n    ]\n  }"
Oct 01 15:08:26 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:26.993Z level=DEBUG source=cache.go:142 msg="loading cache slot" id=0 cache=0 prompt=91 used=0 remaining=91
Oct 01 15:08:28 gcp-ai-development-1 ollama[2866468]: [GIN] 2025/10/01 - 15:08:28 | 200 |  8.911892757s |       127.0.0.1 | POST     "/api/chat"
Oct 01 15:08:28 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:28.051Z level=DEBUG source=sched.go:490 msg="context for request finished"
Oct 01 15:08:28 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:28.051Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096 duration=5m0s
Oct 01 15:08:28 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:28.051Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096 refCount=0
Oct 01 15:09:10 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:10.297Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583
Oct 01 15:09:10 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:10.298Z level=DEBUG source=server.go:1388 msg="completion request" images=0 prompt=403 format="{\n    \"type\": \"object\",\n    \"properties\": {\n      \"age\": {\n        \"type\": \"integer\"\n      },\n      \"available\": {\n        \"type\": \"boolean\"\n      }\n    },\n    \"required\": [\n      \"age\",\n      \"available\"\n    ]\n  }"
Oct 01 15:09:10 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:10.487Z level=DEBUG source=cache.go:142 msg="loading cache slot" id=0 cache=102 prompt=91 used=90 remaining=1
Oct 01 15:09:11 gcp-ai-development-1 ollama[2866468]: [GIN] 2025/10/01 - 15:09:11 | 200 |  1.023167203s |       127.0.0.1 | POST     "/api/chat"
Oct 01 15:09:11 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:11.113Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096
Oct 01 15:09:11 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:11.113Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096 duration=5m0s
Oct 01 15:09:11 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:11.113Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096 refCount=0
Oct 01 15:09:51 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:51.922Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583
Oct 01 15:09:51 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:51.923Z level=DEBUG source=server.go:1388 msg="completion request" images=0 prompt=403 format="{\n    \"type\": \"object\",\n    \"properties\": {\n      \"age\": {\n        \"type\": \"integer\"\n      },\n      \"available\": {\n        \"type\": \"boolean\"\n      }\n    },\n    \"required\": [\n      \"age\",\n      \"available\"\n    ]\n  }"
Oct 01 15:09:52 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:52.103Z level=DEBUG source=cache.go:142 msg="loading cache slot" id=0 cache=102 prompt=91 used=90 remaining=1
Oct 01 15:09:52 gcp-ai-development-1 ollama[2866468]: [GIN] 2025/10/01 - 15:09:52 | 200 |  968.111574ms |       127.0.0.1 | POST     "/api/chat"
Oct 01 15:09:52 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:52.705Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096
Oct 01 15:09:52 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:52.705Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096 duration=5m0s
Oct 01 15:09:52 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:52.705Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096 refCount=0
```

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.12.3

source=server.go:1295 msg="model load progress 0.44" Oct 01 15:08:23 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:23.974Z level=DEBUG source=server.go:1295 msg="model load progress 0.48" Oct 01 15:08:24 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:24.225Z level=DEBUG source=server.go:1295 msg="model load progress 0.53" Oct 01 15:08:24 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:24.476Z level=DEBUG source=server.go:1295 msg="model load progress 0.57" Oct 01 15:08:24 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:24.727Z level=DEBUG source=server.go:1295 msg="model load progress 0.62" Oct 01 15:08:24 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:24.978Z level=DEBUG source=server.go:1295 msg="model load progress 0.66" Oct 01 15:08:25 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:25.229Z level=DEBUG source=server.go:1295 msg="model load progress 0.71" Oct 01 15:08:25 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:25.480Z level=DEBUG source=server.go:1295 msg="model load progress 0.76" Oct 01 15:08:25 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:25.732Z level=DEBUG source=server.go:1295 msg="model load progress 0.83" Oct 01 15:08:25 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:25.983Z level=DEBUG source=server.go:1295 msg="model load progress 0.90" Oct 01 15:08:26 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:26.234Z level=DEBUG source=server.go:1295 msg="model load progress 0.96" Oct 01 15:08:26 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:26.485Z level=DEBUG source=server.go:1295 msg="model load progress 0.99" Oct 01 15:08:26 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:26.583Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0 Oct 01 15:08:26 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:26.737Z level=INFO source=server.go:1289 msg="llama 
runner started in 6.45 seconds" Oct 01 15:08:26 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:26.737Z level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096 Oct 01 15:08:26 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:26.737Z level=DEBUG source=server.go:1388 msg="completion request" images=0 prompt=403 format="{\n \"type\": \"object\",\n \"properties\": {\n \"age\": {\n \"type\": \"integer\"\n },\n \"available\": {\n \"type\": \"boolean\"\n }\n },\n \"required\": [\n \"age\",\n \"available\"\n ]\n }" Oct 01 15:08:26 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:26.993Z level=DEBUG source=cache.go:142 msg="loading cache slot" id=0 cache=0 prompt=91 used=0 remaining=91 Oct 01 15:08:28 gcp-ai-development-1 ollama[2866468]: [GIN] 2025/10/01 - 15:08:28 | 200 | 8.911892757s | 127.0.0.1 | POST "/api/chat" Oct 01 15:08:28 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:28.051Z level=DEBUG source=sched.go:490 msg="context for request finished" Oct 01 15:08:28 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:28.051Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096 duration=5m0s Oct 01 15:08:28 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:08:28.051Z level=DEBUG source=sched.go:304 msg="after processing request finished event" 
runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096 refCount=0 Oct 01 15:09:10 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:10.297Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 Oct 01 15:09:10 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:10.298Z level=DEBUG source=server.go:1388 msg="completion request" images=0 prompt=403 format="{\n \"type\": \"object\",\n \"properties\": {\n \"age\": {\n \"type\": \"integer\"\n },\n \"available\": {\n \"type\": \"boolean\"\n }\n },\n \"required\": [\n \"age\",\n \"available\"\n ]\n }" Oct 01 15:09:10 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:10.487Z level=DEBUG source=cache.go:142 msg="loading cache slot" id=0 cache=102 prompt=91 used=90 remaining=1 Oct 01 15:09:11 gcp-ai-development-1 ollama[2866468]: [GIN] 2025/10/01 - 15:09:11 | 200 | 1.023167203s | 127.0.0.1 | POST "/api/chat" Oct 01 15:09:11 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:11.113Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096 Oct 01 15:09:11 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:11.113Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda 
runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096 duration=5m0s Oct 01 15:09:11 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:11.113Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096 refCount=0 Oct 01 15:09:51 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:51.922Z level=DEBUG source=sched.go:580 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 Oct 01 15:09:51 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:51.923Z level=DEBUG source=server.go:1388 msg="completion request" images=0 prompt=403 format="{\n \"type\": \"object\",\n \"properties\": {\n \"age\": {\n \"type\": \"integer\"\n },\n \"available\": {\n \"type\": \"boolean\"\n }\n },\n \"required\": [\n \"age\",\n \"available\"\n ]\n }" Oct 01 15:09:52 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:52.103Z level=DEBUG source=cache.go:142 msg="loading cache slot" id=0 cache=102 prompt=91 used=90 remaining=1 Oct 01 15:09:52 gcp-ai-development-1 ollama[2866468]: [GIN] 2025/10/01 - 15:09:52 | 200 | 968.111574ms | 127.0.0.1 | POST "/api/chat" Oct 01 15:09:52 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:52.705Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" 
runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096 Oct 01 15:09:52 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:52.705Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096 duration=5m0s Oct 01 15:09:52 gcp-ai-development-1 ollama[2866468]: time=2025-10-01T15:09:52.705Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="13.2 GiB" runner.vram="13.2 GiB" runner.parallel=1 runner.pid=2868333 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=4096 refCount=0 ``` ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.12.3
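### Repro script

For convenience, here is a minimal Python sketch of the same request made with curl above. It assumes only Python's standard library and an Ollama instance on the default `localhost:11434`; `build_payload` and `send` are local helpers written for this report, not part of any Ollama client API.

```python
# Repro sketch: POST the structured-output request to a local Ollama instance
# and check whether message.content comes back empty (the bug symptom).
import json
import urllib.request

# JSON schema passed via the "format" field, identical to the curl command.
SCHEMA = {
    "type": "object",
    "properties": {
        "age": {"type": "integer"},
        "available": {"type": "boolean"},
    },
    "required": ["age", "available"],
}


def build_payload(prompt: str) -> dict:
    """Assemble the same /api/chat request body as in the report."""
    return {
        "model": "gpt-oss:20b",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
        "think": True,
        "format": SCHEMA,
        "options": {"temperature": 0},
    }


def send(payload: dict) -> dict:
    """POST the payload to a local Ollama instance and decode the reply."""
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    body = send(build_payload(
        "Ollama is 22 years old and busy saving the world. "
        "Return a JSON object with the age and availability."
    ))
    content = body["message"]["content"]
    # Expected: a JSON object like {"age": 22, "available": false}.
    # Observed with gpt-oss:20b / gpt-oss:120b: empty content.
    print("EMPTY RESPONSE" if not content.strip() else json.loads(content))
```

With a LLAMA-family model substituted for `gpt-oss:20b`, the same script prints a populated JSON object, which matches the behavior described above.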
GiteaMirror added the bug label 2026-04-22 17:16:22 -05:00