[GH-ISSUE #15390] Claude Code & Ollama Integration - Invalid tool parameters & CPU Fallback #35603

Open
opened 2026-04-22 20:14:23 -05:00 by GiteaMirror · 12 comments

Originally created by @bmetallica on GitHub (Apr 7, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15390

What is the issue?

Description

When using Claude Code (CLI) with a local Ollama instance, the agent consistently fails during tool execution (e.g., entering "Plan Mode" or reading files). The model generates invalid JSON for the tool calls, leading to a loop of "Invalid tool parameters" errors.

Additionally, specific configurations cause extreme CPU spikes (100%+) and slow response times (50s+), which seem to be related to unintended vision-processing overhead and a Flash Attention fallback.

System Environment

  • OS: Linux (Docker Deployment)
  • GPU: 2x NVIDIA GeForce RTX 3060 (12GB VRAM each)
  • Ollama Version: 0.20.3
  • Model: gemma4 (Local blob: sha256-4c27e0f5...)
  • Claude Code Command: ollama launch cloude

Docker Configuration (docker-compose.yml)

    environment:
      - OLLAMA_SCHED_SPREAD=true
      - OLLAMA_NUM_CTX=32768
      - OLLAMA_FLASH_ATTENTION=0  # Setting to 1 causes 100% CPU load instead of GPU boost
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

Steps to Reproduce

1. Connect Claude Code to the local Ollama instance.
2. Provide a complex coding task (e.g., "Fix my project architecture and MQTT connection").
3. The agent attempts to initialize its internal "Plan Mode" tool.
4. The CLI returns: ⎿ Invalid tool parameters.
5. The model enters a loop: it apologizes for the wrong parameters and retries with the same (or similarly broken) JSON schema until the process is aborted.

Suspected Causes

Tool Parameter Formatting: The gemma4 model (likely due to its template or architecture) does not produce the exact JSON schema required by Claude Code's tool definitions.

Vision Encoder Overhead: The runner executes vision-related code (vision: encoded) for code-only prompts, which increases latency significantly and might interfere with the attention mechanism.

Flash Attention Regression: OLLAMA_FLASH_ATTENTION=1 results in a massive CPU spike. This suggests that the presence of the vision projector forces a fallback to a CPU-based attention implementation that is not optimized for large contexts.

Context Management: There is a discrepancy between the requested NUM_CTX=32768 and the actual prompt processing speed/stability when tools are involved.

Additional Context

The CPU load remains normal only when OLLAMA_FLASH_ATTENTION is disabled. However, the tool-calling issue persists regardless of this setting, preventing the agent from completing multi-step tasks.
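
One way to isolate the tool-calling issue from Claude Code itself is to send a tool-enabled request straight to the endpoint Claude Code uses (the logs below show POST /v1/messages) and inspect the raw tool call the model emits. A minimal sketch, assuming the server accepts Anthropic-style tool definitions on that endpoint and is reachable on localhost:11434; the EnterPlanMode schema below is an invented placeholder, not Claude Code's actual definition:

    curl http://localhost:11434/v1/messages \
      -H "content-type: application/json" \
      -d '{
        "model": "gemma4",
        "max_tokens": 512,
        "tools": [{
          "name": "EnterPlanMode",
          "description": "Switch into planning mode before editing files",
          "input_schema": {
            "type": "object",
            "properties": { "plan": { "type": "string" } },
            "required": ["plan"]
          }
        }],
        "messages": [{ "role": "user", "content": "Plan a fix for the MQTT connection, then call EnterPlanMode." }]
      }'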

Relevant log output

time=2026-04-07T12:23:41.133Z level=INFO source=runner.go:1290 msg=load request="{Operation:alloc ... FlashAttention:Disabled KvSize:32768 ...}"
time=2026-04-07T12:23:41.392Z level=INFO source=model.go:156 msg="vision: encoded" elapsed=147.336691ms shape="[2560 256]"
time=2026-04-07T12:23:41.579Z level=INFO source=ggml.go:494 msg="offloaded 43/43 layers to GPU"
[GIN] 2026/04/07 - 12:24:32 | 200 | 53.364869091s | 192.168.66.36 | POST "/v1/messages?beta=true"

OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-22 20:14:23 -05:00

@rick-github commented on GitHub (Apr 7, 2026):

Context size is configured with OLLAMA_CONTEXT_LENGTH, not OLLAMA_NUM_CTX.

Server logs (https://docs.ollama.com/troubleshooting) with OLLAMA_DEBUG=1 will aid in debugging.
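
Applied to the compose file above, the environment block would look something like this (a minimal sketch; only the context and debug variables differ from the original report):

    environment:
      - OLLAMA_SCHED_SPREAD=true
      - OLLAMA_CONTEXT_LENGTH=32768   # takes the place of OLLAMA_NUM_CTX
      - OLLAMA_FLASH_ATTENTION=0
      - OLLAMA_DEBUG=1                # verbose server logs for debugging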


@wperrin commented on GitHub (Apr 7, 2026):

I will say, some tool calling seems fine. Websearch worked great with gemma4; code execution and plan mode are the ones I know don't work right now.


@rick-github commented on GitHub (Apr 7, 2026):

Plan mode works fine here. Asked it to summarize ollama, made multiple tool calls, read multiple files, summarized project structure.

$ ollama launch claude --model gemma4:31b -- --permission-mode plan 'summarize this project'
╭─── Claude Code v2.1.92 ────────────────────────────────────────────────────────────────────────────────╮
│                                    │ Tips for getting started                                          │
│            Welcome back!           │ Run /init to create a CLAUDE.md file with instructions for Claude │
│                                    │ ───────────────────────────────────────────────────────────────── │
│              ▐▛███▜▌              │ Recent activity                                                   │
│           ▝▜█████▛▘             │ No recent activity                                                │
│             ▘▘ ▝▝               │                                                                   │
│                                    │                                                                   │
│   gemma4:31b · API Usage Billing   │                                                                   │
│     ~/ollama                       │                                                                   │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────╯

  ↑ Opus now defaults to 1M context · 5x more room, same pricing

❯ summarize this project                                                                                                                                                                                           

● Explore(Summarize the project purpose and architecture)
  ⎿  Prompt:
       Please provide a comprehensive summary of this project.
       Explore the codebase to determine:
       1. What is the primary purpose of this project?
       2. What are the core components and their responsibilities?
       3. What are the main technologies and libraries used (e.g., Go, C++, llama.cpp, ggml)?
       4. How do the different parts (like the server, the llama.cpp backend, and ggml) interact?
       5. Look for README files, documentation, or entry points (like main.go or similar) to gather this information.
       6. Summarize the project structure and the role of key directories.

       Please be concise but thorough.
  ⎿  Bash(ls -F /home/rick/ollama)
  ⎿  Read(/home/rick/ollama/README.md)
  ⎿  Bash(cat /home/rick/ollama/go.mod && cat /home/rick/ollama/main.go)
  ⎿  Bash(ls -F /home/rick/ollama/cmd && ls -F /home/rick/ollama/server && ls -F /home/rick/ollama/api)
  ⎿  Bash(ls -F /home/rick/ollama/llama && ls -F /home/rick/ollama/runner && ls -F /home/rick/ollama/llm)
  ⎿  Read(/home/rick/ollama/llama/README.md)
  ⎿  Response:
...
● This project is Ollama, a tool designed to simplify running, managing, and deploying large language models (LLMs) locally. It essentially acts as a user-friendly wrapper around high-performance inference
  engines, providing a CLI and REST API so users don't have to manually configure complex C++ backends.

@wperrin commented on GitHub (Apr 7, 2026):

plan a script ....
⎿  Invalid tool parameters
⎿  Invalid tool parameters
⎿  Invalid tool parameters

I was running on WSL but had issues, so I switched to Windows, which could be the problem.


@rick-github commented on GitHub (Apr 7, 2026):

What context size is configured in the ollama server?

Server logs (https://docs.ollama.com/troubleshooting) with OLLAMA_DEBUG=1 will aid in debugging.
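
For a Docker deployment like the one in the original report, the effective value also shows up in the server's "server config" startup log line; a quick sketch, assuming the container name ai-ollama-1 used later in this thread:

    docker logs ai-ollama-1 2>&1 | grep -o "OLLAMA_CONTEXT_LENGTH:[^ ]*"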


@wperrin commented on GitHub (Apr 7, 2026):

I think you are correct. I'm getting the "white dot" in front of everything that would be code execution. I am re-testing at 32k; I have a 12GB 4070 to run gemma4.


@wperrin commented on GitHub (Apr 7, 2026):

Windows runs the server from the system-tray app, so I adjusted the settings there and now I think we're good. Using this to launch:

# launch-claude.ps1
# Launch Claude Code via Ollama with clean state
# Usage: .\launch-claude.ps1 [optional working directory]

param(
    [string]$WorkDir = (Get-Location).Path
)

# Set working directory
Set-Location $WorkDir

# Raise context length - gemma4:e4b supports 128K
# Claude Code needs at least 20K to function reliably
$env:OLLAMA_CONTEXT_LENGTH = "32768"

# Launch
ollama launch claude --model gemma4:e4b

@rick-github commented on GitHub (Apr 7, 2026):

$env:OLLAMA_CONTEXT_LENGTH = "32768"

This needs to be set in the server environment, not the client environment. If the server is already running, then setting it here won't change it in the server. If the server is not running, then the client will launch the server and this will set the size of the context.
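
On Windows, where the server typically runs as the tray app, a minimal sketch of doing that (assumption: the variable has to be set for the user account and the server restarted so it starts with the new value):

    [System.Environment]::SetEnvironmentVariable("OLLAMA_CONTEXT_LENGTH", "32768", "User")
    # Quit the Ollama tray app and start it again (or run `ollama serve` in a fresh
    # terminal) so the server process picks up the variable.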


@bmetallica commented on GitHub (Apr 8, 2026):

Description

Despite updating to the latest pre-release (0.20.4-rc2) and correctly configuring OLLAMA_CONTEXT_LENGTH=65536, the agent still fails during tool execution (specifically EnterPlanMode). The model acknowledges the task in German, reads the README successfully, but then fails to produce valid JSON parameters for the next tool call, leading to a loop or sudden halt.

System Environment

  • Ollama Server/Client: 0.20.4-rc2 (Docker on Debian)

  • GPU: 2x NVIDIA GeForce RTX 3060 (12GB each)

  • Model: gemma4 (latest pull)

  • Claude Code: v2.1.92

Ollama Server (debian):

docker exec -i ai-ollama-1 ollama --version
ollama version is 0.20.4-rc2

Ollama Client (debian):

ollama --version
ollama version is 0.20.4-rc2
Warning: client version is 0.20.3

docker-compose.yml:

services:
  ollama:
    image: ollama/ollama:0.20.4-rc2
    ports:
      - "11434:11434"
    # Remove 'gpus: all' at the top; the 'deploy' section below is sufficient and more precise
    environment:
      - OLLAMA_DEBUG=1
      - OLLAMA_KEEP_ALIVE=-1
      - OLLAMA_FLASH_ATTENTION=1  # CRITICAL: speeds up computation enormously (the RTX 3060 supports this)
#      - OLLAMA_KV_CACHE_TYPE=q4_k # Saves VRAM in the cache, prevents it from filling up with long code
      - OLLAMA_SCHED_SPREAD=true
      - OLLAMA_CONTEXT_LENGTH=65536
    volumes:
      - ollama_data:/root/.ollama
      - ollama_create:/create
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all  # Ensures both 3060s are addressed cleanly
              capabilities: [gpu]

ollama launch cloud:

Launching Claude Code with gemma4...
╭─── Claude Code v2.1.92 ────────────────────────────────────────────────────────────────────────────╮
│                                │ Tips for getting started                                          │                                                              
│          Welcome back!         │ Run /init to create a CLAUDE.md file with instructions for Claude │
│                                │ ───────────────────────────────────────────────────────────────── │
│             ▐▛███▜▌            │ Recent activity                                                   │
│            ▝▜█████▛▘           │ No recent activity                                                │
│              ▘▘ ▝▝             │                                                                   │                                                             
│                                │                                                                   │
│   gemma4 · API Usage Billing   │                                                                   │
│          /opt/ha-dom_          │                                                                   │
╰────────────────────────────────────────────────────────────────────────────────────────────────────╯
               
❯ speak German with me, but for all tool calls and their parameters use only the exact English syntax as defined
look at the current project here. you will find everything you need in the README.md. the project currently does not work:
no devices are fetched from Domoticz or Home Assistant and the mqtt connection status is not connected. at the moment the
application does not run at all. first plan how you want to fix this, then ask me whether that is ok, then fix it and test the application.

● I understand. The project has several critical errors: it cannot fetch any devices from Domoticz or Home Assistant,
  the MQTT connection status is not connected, and the application is currently not running at all.

  Following your instructions, I will first create a plan. I will start by reading the README.md to get an initial understanding of the project.

  I am now reading the README.md.

  Read 1 file (ctrl+o to expand)

● Reading the README.md shows that the project is a complex system for MQTT synchronization between Home Assistant and
  Domoticz that runs via API endpoints and a web UI.

  The errors (no device polling, no MQTT connection, application not running) point to problems in the initialization,
  the MQTT client implementation, or the discovery mechanisms.

  I will now switch to plan mode to structure this complex debugging and testing process.
  ⎿  Invalid tool parameters

● I apologize for the faulty call to plan mode. I will create the plan again and adhere strictly to the tool definition.

  I am now using EnterPlanMode to create the plan and obtain your confirmation before changing any code.
  ⎿  Invalid tool parameters
  ⎿  Invalid tool parameters

● I understand the critical nature of the situation and the necessary approach: plan, confirm, fix, test.

  I will call plan mode again, this time adhering strictly to the tool's argument structure, to ensure the plan is
  processed correctly.

  I am starting plan mode.
                                                                                                                                                                    
✻ Churned for 1m 56s                                                                                                                                                
                  
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
❯  
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
  ? for shortcuts       

Claude stops working at this point.

docker logs ai-ollama-1:

time=2026-04-08T07:46:56.896Z level=INFO source=routes.go:1744 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:65536 OLLAMA_DEBUG:DEBUG OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:true OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-04-08T07:46:56.896Z level=INFO source=routes.go:1746 msg="Ollama cloud disabled: false"
time=2026-04-08T07:46:56.903Z level=INFO source=images.go:499 msg="total blobs: 67"
time=2026-04-08T07:46:56.904Z level=INFO source=images.go:506 msg="total unused blobs removed: 0"
time=2026-04-08T07:46:56.905Z level=INFO source=routes.go:1802 msg="Listening on [::]:11434 (version 0.20.4-rc2)"
time=2026-04-08T07:46:56.906Z level=DEBUG source=sched.go:145 msg="starting llm scheduler"
time=2026-04-08T07:46:56.907Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-08T07:46:56.909Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45519"
time=2026-04-08T07:46:56.909Z level=DEBUG source=server.go:445 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_SCHED_SPREAD=true OLLAMA_CONTEXT_LENGTH=65536 OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_FLASH_ATTENTION=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12
time=2026-04-08T07:46:57.518Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=611.257226ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[]
time=2026-04-08T07:46:57.518Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43385"
time=2026-04-08T07:46:57.518Z level=DEBUG source=server.go:445 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_SCHED_SPREAD=true OLLAMA_CONTEXT_LENGTH=65536 OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_FLASH_ATTENTION=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13
time=2026-04-08T07:46:57.589Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=71.164816ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[]
time=2026-04-08T07:46:57.589Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-04-08T07:46:57.589Z level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=2
time=2026-04-08T07:46:57.589Z level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=/usr/lib/ollama/cuda_v12 description="NVIDIA GeForce RTX 3060" compute=8.6 id=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 pci_id=0000:06:10.0
time=2026-04-08T07:46:57.589Z level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=/usr/lib/ollama/cuda_v12 description="NVIDIA GeForce RTX 3060" compute=8.6 id=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d pci_id=0000:06:11.0
time=2026-04-08T07:46:57.590Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39711"
time=2026-04-08T07:46:57.590Z level=DEBUG source=server.go:445 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_SCHED_SPREAD=true OLLAMA_CONTEXT_LENGTH=65536 OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_FLASH_ATTENTION=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12 CUDA_VISIBLE_DEVICES=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d GGML_CUDA_INIT=1
time=2026-04-08T07:46:57.590Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40723"
time=2026-04-08T07:46:57.590Z level=DEBUG source=server.go:445 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_SCHED_SPREAD=true OLLAMA_CONTEXT_LENGTH=65536 OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_FLASH_ATTENTION=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12 CUDA_VISIBLE_DEVICES=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 GGML_CUDA_INIT=1
time=2026-04-08T07:46:57.833Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=243.442349ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d GGML_CUDA_INIT:1]"
time=2026-04-08T07:46:57.859Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=268.70511ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 GGML_CUDA_INIT:1]"
time=2026-04-08T07:46:57.859Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=953.012941ms
time=2026-04-08T07:46:57.859Z level=INFO source=types.go:42 msg="inference compute" id=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 filter_id="" library=CUDA compute=8.6 name=CUDA0 description="NVIDIA GeForce RTX 3060" libdirs=ollama,cuda_v12 driver=12.6 pci_id=0000:06:10.0 type=discrete total="12.0 GiB" available="11.7 GiB"
time=2026-04-08T07:46:57.859Z level=INFO source=types.go:42 msg="inference compute" id=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d filter_id="" library=CUDA compute=8.6 name=CUDA1 description="NVIDIA GeForce RTX 3060" libdirs=ollama,cuda_v12 driver=12.6 pci_id=0000:06:11.0 type=discrete total="12.0 GiB" available="11.6 GiB"
time=2026-04-08T07:46:57.859Z level=INFO source=routes.go:1852 msg="vram-based default context" total_vram="24.0 GiB" default_num_ctx=32768
[GIN] 2026/04/08 - 07:47:00 | 200 |    1.182083ms |       127.0.0.1 | GET      "/api/version"
[GIN] 2026/04/08 - 07:47:06 | 200 |     331.989µs |       127.0.0.1 | HEAD     "/"
time=2026-04-08T07:47:06.504Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
[GIN] 2026/04/08 - 07:47:06 | 200 |  321.558511ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/04/08 - 07:47:06 | 200 |    1.869121ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/04/08 - 07:47:06 | 200 |    5.282272ms |       127.0.0.1 | DELETE   "/api/delete"
[GIN] 2026/04/08 - 07:47:14 | 200 |      34.219µs |       127.0.0.1 | HEAD     "/"
time=2026-04-08T07:47:14.887Z level=DEBUG source=images.go:678 msg="manifest written" path=/root/.ollama/models/manifests/registry.ollama.ai/library/gemma4/latest sha256=c6eb396dbd5992bbe3f5cdb947e8bbc0ee413d7c17e2beaae69f5d569cf982eb size=709
[GIN] 2026/04/08 - 07:47:14 | 200 |  582.689587ms |       127.0.0.1 | POST     "/api/pull"
[GIN] 2026/04/08 - 07:47:28 | 200 |      27.385µs |       127.0.0.1 | HEAD     "/"
time=2026-04-08T07:47:28.442Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
[GIN] 2026/04/08 - 07:47:28 | 200 |  320.795969ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/04/08 - 07:47:28 | 200 |    1.828096ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/04/08 - 07:47:28 | 200 |    3.818164ms |       127.0.0.1 | DELETE   "/api/delete"
[GIN] 2026/04/08 - 07:48:02 | 200 |       29.76µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/08 - 07:48:02 | 200 |    3.623567ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/08 - 07:48:20 | 200 |      41.887µs |       127.0.0.1 | HEAD     "/"
time=2026-04-08T07:48:20.376Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
[GIN] 2026/04/08 - 07:48:20 | 200 |  347.948534ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/04/08 - 07:48:20 | 200 |    1.721617ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/04/08 - 07:48:21 | 200 |  745.494233ms |       127.0.0.1 | DELETE   "/api/delete"
[GIN] 2026/04/08 - 07:48:37 | 200 |      29.877µs |       127.0.0.1 | HEAD     "/"
time=2026-04-08T07:48:38.147Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
[GIN] 2026/04/08 - 07:48:38 | 200 |  340.625503ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/04/08 - 07:48:38 | 200 |    1.663894ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/04/08 - 07:48:38 | 200 |   14.234391ms |       127.0.0.1 | DELETE   "/api/delete"
[GIN] 2026/04/08 - 07:48:55 | 200 |      26.847µs |       127.0.0.1 | HEAD     "/"
time=2026-04-08T07:48:55.878Z level=INFO source=download.go:179 msg="downloading 4c27e0f5b5ad in 16 600 MB part(s)"
time=2026-04-08T07:52:10.265Z level=INFO source=download.go:179 msg="downloading f0988ff50a24 in 1 473 B part(s)"
time=2026-04-08T07:52:58.530Z level=DEBUG source=images.go:678 msg="manifest written" path=/root/.ollama/models/manifests/registry.ollama.ai/library/gemma4/latest sha256=c6eb396dbd5992bbe3f5cdb947e8bbc0ee413d7c17e2beaae69f5d569cf982eb size=709
[GIN] 2026/04/08 - 07:52:58 | 200 |          4m3s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2026/04/08 - 09:04:36 | 200 |      82.325µs |   192.168.66.36 | HEAD     "/"
[GIN] 2026/04/08 - 09:04:36 | 200 |      2.7179ms |   192.168.66.36 | GET      "/api/tags"
[GIN] 2026/04/08 - 09:04:36 | 200 |      61.412µs |   192.168.66.36 | GET      "/api/status"
[GIN] 2026/04/08 - 09:04:36 | 200 |       9.547µs |   192.168.66.36 | GET      "/api/status"
time=2026-04-08T09:04:38.626Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
[GIN] 2026/04/08 - 09:04:38 | 200 |  316.088652ms |   192.168.66.36 | POST     "/api/show"
[GIN] 2026/04/08 - 09:04:40 | 200 |      26.094µs |   192.168.66.36 | HEAD     "/"
time=2026-04-08T09:05:05.892Z level=DEBUG source=runner.go:264 msg="refreshing free memory"
time=2026-04-08T09:05:05.892Z level=DEBUG source=runner.go:328 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery"
time=2026-04-08T09:05:05.892Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45317"
time=2026-04-08T09:05:05.893Z level=DEBUG source=server.go:445 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_SCHED_SPREAD=true OLLAMA_CONTEXT_LENGTH=65536 OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_FLASH_ATTENTION=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12
time=2026-04-08T09:05:06.382Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=489.407808ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[]
time=2026-04-08T09:05:06.382Z level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=489.580248ms
time=2026-04-08T09:05:06.382Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2026-04-08T09:05:06.382Z level=DEBUG source=sched.go:220 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=6 gpu_count=2
time=2026-04-08T09:05:06.383Z level=DEBUG source=sched.go:229 msg="loading first model" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-08T09:05:06.518Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-08T09:05:06.593Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-08T09:05:06.599Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
time=2026-04-08T09:05:06.599Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106
time=2026-04-08T09:05:06.601Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-08T09:05:06.602Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0
time=2026-04-08T09:05:06.602Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.head_count_kv default="&{size:0 values:[]}"
time=2026-04-08T09:05:06.602Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_count default=0
time=2026-04-08T09:05:06.602Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_used_count default=0
time=2026-04-08T09:05:06.602Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.num_mel_bins default=128
time=2026-04-08T09:05:06.602Z level=INFO source=server.go:259 msg="enabling flash attention"
time=2026-04-08T09:05:06.602Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a --port 40321"
time=2026-04-08T09:05:06.602Z level=DEBUG source=server.go:445 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_SCHED_SPREAD=true OLLAMA_CONTEXT_LENGTH=65536 OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_FLASH_ATTENTION=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12
time=2026-04-08T09:05:06.602Z level=INFO source=sched.go:484 msg="system memory" total="15.6 GiB" free="6.8 GiB" free_swap="807.7 MiB"
time=2026-04-08T09:05:06.603Z level=INFO source=sched.go:491 msg="gpu memory" id=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 library=CUDA available="11.2 GiB" free="11.7 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-04-08T09:05:06.603Z level=INFO source=sched.go:491 msg="gpu memory" id=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d library=CUDA available="11.1 GiB" free="11.6 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-04-08T09:05:06.603Z level=INFO source=server.go:771 msg="loading model" "model layers"=43 requested=-1
time=2026-04-08T09:05:06.618Z level=INFO source=runner.go:1417 msg="starting ollama engine"
time=2026-04-08T09:05:06.618Z level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:40321"
time=2026-04-08T09:05:06.625Z level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:4 GPULayers:43[ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Layers:43(0..42)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-08T09:05:06.697Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-08T09:05:06.704Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.name default=""
time=2026-04-08T09:05:06.704Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.description default=""
time=2026-04-08T09:05:06.704Z level=INFO source=ggml.go:136 msg="" architecture=gemma4 file_type=Q4_K_M name="" description="" num_tensors=2131 num_key_values=55
time=2026-04-08T09:05:06.704Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-sse42.so
time=2026-04-08T09:05:06.711Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v12
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4
  Device 1: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-8e13d698-9b6b-a377-44bd-31550f42e79d
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2026-04-08T09:05:06.909Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-04-08T09:05:06.939Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
time=2026-04-08T09:05:06.939Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106
time=2026-04-08T09:05:06.940Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-08T09:05:06.940Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0
time=2026-04-08T09:05:06.940Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.head_count_kv default="&{size:0 values:[]}"
time=2026-04-08T09:05:06.940Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_count default=0
time=2026-04-08T09:05:06.940Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_used_count default=0
time=2026-04-08T09:05:06.940Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.num_mel_bins default=128
time=2026-04-08T09:05:06.968Z level=INFO source=model.go:138 msg="vision: decode" elapsed=1.075109ms bounds=(0,0)-(2048,2048)
time=2026-04-08T09:05:07.119Z level=INFO source=model.go:145 msg="vision: preprocess" elapsed=150.142486ms size="[768 768]"
time=2026-04-08T09:05:07.119Z level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-08T09:05:07.119Z level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-08T09:05:07.120Z level=INFO source=model.go:156 msg="vision: encoded" elapsed=153.105474ms shape="[2560 256]"
time=2026-04-08T09:05:07.412Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=684 splits=1
time=2026-04-08T09:05:07.971Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1831 splits=2
time=2026-04-08T09:05:08.015Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1829 splits=2
time=2026-04-08T09:05:08.017Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA1 size="8.9 GiB"
time=2026-04-08T09:05:08.017Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="587.0 MiB"
time=2026-04-08T09:05:08.017Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA1 size="1.2 GiB"
time=2026-04-08T09:05:08.017Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA1 size="331.0 MiB"
time=2026-04-08T09:05:08.017Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.0 MiB"
time=2026-04-08T09:05:08.017Z level=DEBUG source=device.go:272 msg="total memory" size="11.0 GiB"
time=2026-04-08T09:05:08.017Z level=DEBUG source=server.go:796 msg=memory success=true required.InputWeights=615514112 required.CPU.Graph=5242880 required.CUDA1.ID=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d required.CUDA1.Weights="[109892480 110045184 109892480 109892480 109892480 110846848 103134080 109892480 102796160 102796160 109892480 110171008 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 59159936 59497856 59159936 59159936 59497856 66534784 6521521152]" required.CUDA1.Cache="[9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" required.CUDA1.Graph=347086976
time=2026-04-08T09:05:08.017Z level=DEBUG source=server.go:990 msg="available gpu" id=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 library=CUDA "available layer vram"="11.2 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="0 B"
time=2026-04-08T09:05:08.017Z level=DEBUG source=server.go:990 msg="available gpu" id=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d library=CUDA "available layer vram"="10.8 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="331.0 MiB"
time=2026-04-08T09:05:08.017Z level=DEBUG source=server.go:807 msg="new layout created" layers="43[ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Layers:42(0..41) ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Layers:1(42..42)]"
time=2026-04-08T09:05:08.017Z level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:4 GPULayers:43[ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Layers:42(0..41) ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Layers:1(42..42)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-08T09:05:08.094Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-08T09:05:08.122Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
time=2026-04-08T09:05:08.122Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106
time=2026-04-08T09:05:08.123Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-08T09:05:08.123Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0
time=2026-04-08T09:05:08.123Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.head_count_kv default="&{size:0 values:[]}"
time=2026-04-08T09:05:08.123Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_count default=0
time=2026-04-08T09:05:08.123Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_used_count default=0
time=2026-04-08T09:05:08.123Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.num_mel_bins default=128
time=2026-04-08T09:05:08.152Z level=INFO source=model.go:138 msg="vision: decode" elapsed=1.282165ms bounds=(0,0)-(2048,2048)
time=2026-04-08T09:05:08.306Z level=INFO source=model.go:145 msg="vision: preprocess" elapsed=154.372367ms size="[768 768]"
time=2026-04-08T09:05:08.306Z level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-08T09:05:08.306Z level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-08T09:05:08.307Z level=INFO source=model.go:156 msg="vision: encoded" elapsed=156.595868ms shape="[2560 256]"
time=2026-04-08T09:05:08.316Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=684 splits=1
time=2026-04-08T09:05:08.678Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1831 splits=6
time=2026-04-08T09:05:08.685Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1829 splits=6
time=2026-04-08T09:05:08.685Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="2.8 GiB"
time=2026-04-08T09:05:08.686Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA1 size="6.1 GiB"
time=2026-04-08T09:05:08.686Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="587.0 MiB"
time=2026-04-08T09:05:08.686Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="1.2 GiB"
time=2026-04-08T09:05:08.686Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="845.9 MiB"
time=2026-04-08T09:05:08.686Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA1 size="186.8 MiB"
time=2026-04-08T09:05:08.686Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.0 MiB"
time=2026-04-08T09:05:08.686Z level=DEBUG source=device.go:272 msg="total memory" size="11.6 GiB"
time=2026-04-08T09:05:08.686Z level=DEBUG source=server.go:796 msg=memory success=true required.InputWeights=615514112 required.CPU.Graph=5242880 required.CUDA0.ID=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 required.CUDA0.Weights="[109892480 110045184 109892480 109892480 109892480 110846848 103134080 109892480 102796160 102796160 109892480 110171008 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 59159936 59497856 59159936 59159936 59497856 66534784 0]" required.CUDA0.Cache="[9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" required.CUDA0.Graph=887014400 required.CUDA1.ID=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d required.CUDA1.Weights="[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6521521152]" required.CUDA1.Graph=195858432
time=2026-04-08T09:05:08.686Z level=DEBUG source=server.go:990 msg="available gpu" id=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 library=CUDA "available layer vram"="10.4 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="845.9 MiB"
time=2026-04-08T09:05:08.686Z level=DEBUG source=server.go:990 msg="available gpu" id=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d library=CUDA "available layer vram"="10.9 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="186.8 MiB"
time=2026-04-08T09:05:08.686Z level=DEBUG source=server.go:807 msg="new layout created" layers="43[ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Layers:42(0..41) ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Layers:1(42..42)]"
time=2026-04-08T09:05:08.686Z level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:4 GPULayers:43[ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Layers:42(0..41) ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Layers:1(42..42)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-08T09:05:08.753Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-08T09:05:08.779Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
time=2026-04-08T09:05:08.779Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106
time=2026-04-08T09:05:08.780Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-08T09:05:08.780Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0
time=2026-04-08T09:05:08.780Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.head_count_kv default="&{size:0 values:[]}"
time=2026-04-08T09:05:08.780Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_count default=0
time=2026-04-08T09:05:08.780Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_used_count default=0
time=2026-04-08T09:05:08.780Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.num_mel_bins default=128
time=2026-04-08T09:05:08.806Z level=INFO source=model.go:138 msg="vision: decode" elapsed=936.07µs bounds=(0,0)-(2048,2048)
time=2026-04-08T09:05:08.970Z level=INFO source=model.go:145 msg="vision: preprocess" elapsed=163.943375ms size="[768 768]"
time=2026-04-08T09:05:08.973Z level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-08T09:05:08.973Z level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-08T09:05:08.974Z level=INFO source=model.go:156 msg="vision: encoded" elapsed=168.886045ms shape="[2560 256]"
time=2026-04-08T09:05:08.983Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=684 splits=1
time=2026-04-08T09:05:09.404Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1831 splits=6
time=2026-04-08T09:05:09.419Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1829 splits=6
time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="2.8 GiB"
time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA1 size="6.1 GiB"
time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="587.0 MiB"
time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="1.2 GiB"
time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="845.9 MiB"
time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA1 size="186.8 MiB"
time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.0 MiB"
time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:272 msg="total memory" size="11.6 GiB"
time=2026-04-08T09:05:09.419Z level=DEBUG source=server.go:796 msg=memory success=true required.InputWeights=615514112 required.CPU.Graph=5242880 required.CUDA0.ID=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 required.CUDA0.Weights="[109892480 110045184 109892480 109892480 109892480 110846848 103134080 109892480 102796160 102796160 109892480 110171008 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 59159936 59497856 59159936 59159936 59497856 66534784 0]" required.CUDA0.Cache="[9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" required.CUDA0.Graph=887014400 required.CUDA1.ID=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d required.CUDA1.Weights="[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6521521152]" required.CUDA1.Graph=195858432
time=2026-04-08T09:05:09.419Z level=DEBUG source=server.go:990 msg="available gpu" id=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 library=CUDA "available layer vram"="10.4 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="845.9 MiB"
time=2026-04-08T09:05:09.420Z level=DEBUG source=server.go:990 msg="available gpu" id=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d library=CUDA "available layer vram"="10.9 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="186.8 MiB"
time=2026-04-08T09:05:09.420Z level=DEBUG source=server.go:807 msg="new layout created" layers="43[ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Layers:42(0..41) ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Layers:1(42..42)]"
time=2026-04-08T09:05:09.420Z level=INFO source=runner.go:1290 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:4 GPULayers:43[ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Layers:42(0..41) ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Layers:1(42..42)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-08T09:05:09.420Z level=INFO source=ggml.go:482 msg="offloading 42 repeating layers to GPU"
time=2026-04-08T09:05:09.420Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2026-04-08T09:05:09.420Z level=INFO source=ggml.go:494 msg="offloaded 43/43 layers to GPU"
time=2026-04-08T09:05:09.420Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="2.8 GiB"
time=2026-04-08T09:05:09.420Z level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="6.1 GiB"
time=2026-04-08T09:05:09.420Z level=INFO source=device.go:245 msg="model weights" device=CPU size="587.0 MiB"
time=2026-04-08T09:05:09.420Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="1.2 GiB"
time=2026-04-08T09:05:09.420Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="845.9 MiB"
time=2026-04-08T09:05:09.420Z level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="186.8 MiB"
time=2026-04-08T09:05:09.420Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="5.0 MiB"
time=2026-04-08T09:05:09.420Z level=INFO source=device.go:272 msg="total memory" size="11.6 GiB"
time=2026-04-08T09:05:09.420Z level=INFO source=sched.go:561 msg="loaded runners" count=1
time=2026-04-08T09:05:09.420Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-08T09:05:09.420Z level=INFO source=server.go:1364 msg="waiting for llama runner to start responding"
time=2026-04-08T09:05:09.420Z level=INFO source=server.go:1398 msg="waiting for server to become available" status="llm server loading model"
time=2026-04-08T09:05:09.421Z level=DEBUG source=server.go:1408 msg="model load progress 0.00"
time=2026-04-08T09:05:09.673Z level=DEBUG source=server.go:1408 msg="model load progress 0.11"
time=2026-04-08T09:05:09.924Z level=DEBUG source=server.go:1408 msg="model load progress 0.22"
time=2026-04-08T09:05:10.176Z level=DEBUG source=server.go:1408 msg="model load progress 0.32"
time=2026-04-08T09:05:10.426Z level=DEBUG source=server.go:1408 msg="model load progress 0.38"
time=2026-04-08T09:05:10.677Z level=DEBUG source=server.go:1408 msg="model load progress 0.44"
time=2026-04-08T09:05:10.928Z level=DEBUG source=server.go:1408 msg="model load progress 0.49"
time=2026-04-08T09:05:11.179Z level=DEBUG source=server.go:1408 msg="model load progress 0.54"
time=2026-04-08T09:05:11.430Z level=DEBUG source=server.go:1408 msg="model load progress 0.57"
time=2026-04-08T09:05:11.681Z level=DEBUG source=server.go:1408 msg="model load progress 0.60"
time=2026-04-08T09:05:11.932Z level=DEBUG source=server.go:1408 msg="model load progress 0.64"
time=2026-04-08T09:05:12.182Z level=DEBUG source=server.go:1408 msg="model load progress 0.67"
time=2026-04-08T09:05:12.433Z level=DEBUG source=server.go:1408 msg="model load progress 0.71"
time=2026-04-08T09:05:12.684Z level=DEBUG source=server.go:1408 msg="model load progress 0.74"
time=2026-04-08T09:05:12.935Z level=DEBUG source=server.go:1408 msg="model load progress 0.78"
time=2026-04-08T09:05:13.186Z level=DEBUG source=server.go:1408 msg="model load progress 0.82"
time=2026-04-08T09:05:13.437Z level=DEBUG source=server.go:1408 msg="model load progress 0.86"
time=2026-04-08T09:05:13.687Z level=DEBUG source=server.go:1408 msg="model load progress 0.89"
time=2026-04-08T09:05:13.938Z level=DEBUG source=server.go:1408 msg="model load progress 0.93"
time=2026-04-08T09:05:14.189Z level=DEBUG source=server.go:1408 msg="model load progress 0.95"
time=2026-04-08T09:05:14.440Z level=DEBUG source=server.go:1408 msg="model load progress 0.98"
time=2026-04-08T09:05:14.599Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0
time=2026-04-08T09:05:14.691Z level=INFO source=server.go:1402 msg="llama runner started in 8.09 seconds"
time=2026-04-08T09:05:14.691Z level=DEBUG source=sched.go:573 msg="finished setting up" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-08T09:05:14.816Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=1236 format=""
time=2026-04-08T09:05:14.921Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=0 prompt=287 used=0 remaining=287
time=2026-04-08T09:05:14.978Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=94494 format=""
[GIN] 2026/04/08 - 09:05:22 | 200 |  17.06285347s |   192.168.66.36 | POST     "/v1/messages?beta=true"
time=2026-04-08T09:05:22.670Z level=DEBUG source=sched.go:581 msg="context for request finished"
time=2026-04-08T09:05:22.670Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=1
time=2026-04-08T09:05:22.848Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=747 prompt=21639 used=19 remaining=21620
[GIN] 2026/04/08 - 09:05:58 | 200 | 52.741240222s |   192.168.66.36 | POST     "/v1/messages?beta=true"
time=2026-04-08T09:05:58.583Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-08T09:05:58.584Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-08T09:05:58.584Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-08T09:05:58.923Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-08T09:05:59.104Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=98626 format=""
time=2026-04-08T09:05:59.301Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=22091 prompt=23250 used=21639 remaining=1611
[GIN] 2026/04/08 - 09:06:17 | 200 | 18.382182076s |   192.168.66.36 | POST     "/v1/messages?beta=true"
time=2026-04-08T09:06:17.012Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-08T09:06:17.012Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-08T09:06:17.012Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-08T09:06:17.297Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-08T09:06:17.503Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=101207 format=""
time=2026-04-08T09:06:17.708Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=24071 prompt=23792 used=23250 remaining=542
[GIN] 2026/04/08 - 09:06:33 | 200 | 16.399025927s |   192.168.66.36 | POST     "/v1/messages?beta=true"
time=2026-04-08T09:06:33.420Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-08T09:06:33.420Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-08T09:06:33.420Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-08T09:06:33.733Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-08T09:06:33.956Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=103384 format=""
time=2026-04-08T09:06:34.185Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=24611 prompt=24320 used=23792 remaining=528
[GIN] 2026/04/08 - 09:06:51 | 200 | 18.348473083s |   192.168.66.36 | POST     "/v1/messages?beta=true"
time=2026-04-08T09:06:51.779Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-08T09:06:51.779Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-08T09:06:51.779Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-08T09:06:52.078Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-08T09:06:52.285Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=105908 format=""
time=2026-04-08T09:06:52.512Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=25235 prompt=24879 used=24320 remaining=559
[GIN] 2026/04/08 - 09:07:02 | 200 | 10.406733795s |   192.168.66.36 | POST     "/v1/messages?beta=true"
time=2026-04-08T09:07:02.194Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-08T09:07:02.194Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-08T09:07:02.195Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
    
<!-- gh-comment-id:4205225236 --> @bmetallica commented on GitHub (Apr 8, 2026):

## Description

Despite updating to the latest pre-release (0.20.4-rc2) and correctly configuring OLLAMA_CONTEXT_LENGTH=65536, the agent still fails during tool execution (specifically EnterPlanMode). The model acknowledges the task in German, reads the README successfully, but then fails to produce valid JSON parameters for the next tool call, leading to a loop or a sudden halt.

## System Environment

- Ollama Server/Client: 0.20.4-rc2 (Docker on Debian)
- GPU: 2x NVIDIA GeForce RTX 3060 (12GB each)
- Model: gemma4 (latest pull)
- Claude Code: v2.1.92

## Ollama Server (Debian):

docker exec -i ai-ollama-1 ollama --version
ollama version is 0.20.4-rc2

## Ollama Client (Debian):

ollama --version
ollama version is 0.20.4-rc2
Warning: client version is 0.20.3

## docker-compose.yml:

```
services:
  ollama:
    image: ollama/ollama:0.20.4-rc2
    ports:
      - "11434:11434"
    # Remove 'gpus: all' above; the 'deploy' section below is sufficient and more precise
    environment:
      - OLLAMA_DEBUG=1
      - OLLAMA_KEEP_ALIVE=-1
      - OLLAMA_FLASH_ATTENTION=1  # CRITICAL: speeds up computation enormously (the RTX 3060 supports this)
      # - OLLAMA_KV_CACHE_TYPE=q4_k  # Saves VRAM in the cache, prevents it from filling up on long code
      - OLLAMA_SCHED_SPREAD=true
      - OLLAMA_CONTEXT_LENGTH=65536
    volumes:
      - ollama_data:/root/.ollama
      - ollama_create:/create
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all  # Makes sure both 3060s are addressed cleanly
              capabilities: [gpu]
```
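To help separate a model/template problem from a Claude Code problem, the same Anthropic-style endpoint that Claude Code calls (`POST /v1/messages?beta=true`, visible in the logs below) can be exercised directly with one trivial tool definition, and the returned `tool_use` block inspected. The sketch below is an untested illustration: it assumes the local endpoint accepts an Anthropic Messages-shaped payload, and the `EnterPlanMode` schema here is a simplified stand-in rather than Claude Code's real definition.

```python
# Hedged diagnostic sketch: probe the local /v1/messages endpoint with a single
# tool and print what the model emits for the tool call. Payload shape and
# response handling assume Anthropic Messages semantics; adjust as needed.
import json
import requests

payload = {
    "model": "gemma4",
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Plan how to fix the MQTT connection, then enter plan mode."}
    ],
    "tools": [
        {
            # Simplified stand-in for Claude Code's EnterPlanMode tool definition.
            "name": "EnterPlanMode",
            "description": "Switch the agent into planning mode.",
            "input_schema": {
                "type": "object",
                "properties": {"plan": {"type": "string"}},
                "required": ["plan"],
            },
        }
    ],
}

resp = requests.post("http://localhost:11434/v1/messages?beta=true", json=payload, timeout=300)
resp.raise_for_status()

for block in resp.json().get("content", []):
    if block.get("type") == "tool_use":
        # If this input does not validate against the schema above, the malformed
        # parameters originate in the model output, not in Claude Code.
        print("tool:", block.get("name"))
        print(json.dumps(block.get("input"), indent=2))
    elif block.get("type") == "text":
        print(block.get("text"))
```

If the printed `input` already violates the declared schema, that would point at the gemma4 template/tool-call formatting rather than at Claude Code or at scheduler settings such as flash attention or context length.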
## ollama launch cloud:

```
Launching Claude Code with gemma4...

╭─── Claude Code v2.1.92 ────────────────────────────────────────────────────╮
│ Welcome back!   Tips for getting started                                    │
│                 Run /init to create a CLAUDE.md file with                   │
│                 instructions for Claude                                     │
│                 Recent activity: No recent activity                         │
│ gemma4 · API Usage Billing                                                  │
│ /opt/ha-dom_                                                                │
╰─────────────────────────────────────────────────────────────────────────────╯

❯ Speak German with me, but for all tool calls and their parameters use only the exact English
  syntax as defined. Take a look at the current project here; you will find everything you need in
  the README.md. The project currently does not work: no devices are fetched from Domoticz or
  Home Assistant, and the MQTT connection status is not connected. At the moment the application
  does not run at all. First plan how you want to fix this, then ask me whether that is OK, then
  fix it and test the application.

● I understand. The project has several critical errors: it cannot fetch devices from Domoticz or
  Home Assistant, the MQTT connection status is not connected, and the application is currently not
  running at all. Following your instructions, I will first create a plan. I will start by reading
  the README.md to gain an initial understanding of the project. I am now reading the README.md.

  Read 1 file (ctrl+o to expand)

● Reading the README.md shows that the project is a complex system for MQTT synchronization between
  Home Assistant and Domoticz that runs via API endpoints and a web UI. The errors (no device
  polling, no MQTT connection, application not running) point to problems in the initialization,
  the MQTT client implementation, or the discovery mechanisms.

  I will now switch to plan mode in order to structure this complex debugging and testing process.

  ⎿ Invalid tool parameters

● I apologize for the faulty call of plan mode. I will create the plan again and adhere strictly to
  the tool definition. I am now using EnterPlanMode to create the plan and obtain your confirmation
  before I change any code.

  ⎿ Invalid tool parameters
  ⎿ Invalid tool parameters

● I understand the critical nature of the situation and the required procedure: plan, confirm, fix,
  test. I will call plan mode again, this time strictly adhering to the tool's argument structure,
  to make sure the plan is processed correctly. I am starting plan mode.

✻ Churned for 1m 56s

────────────────────────────────────────────────────────────────────────────────────────────────────
❯
────────────────────────────────────────────────────────────────────────────────────────────────────
                                                                                       ? for shortcuts
```

**Claude Code stops working at this point.**

## docker logs ai-ollama-1:

```
time=2026-04-08T07:46:56.896Z level=INFO source=routes.go:1744 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:65536 OLLAMA_DEBUG:DEBUG OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:true OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-04-08T07:46:56.896Z level=INFO source=routes.go:1746 msg="Ollama cloud disabled: false"
time=2026-04-08T07:46:56.903Z level=INFO source=images.go:499 msg="total blobs: 67"
time=2026-04-08T07:46:56.904Z level=INFO source=images.go:506 msg="total unused blobs removed: 0"
time=2026-04-08T07:46:56.905Z level=INFO source=routes.go:1802 msg="Listening on [::]:11434 (version 0.20.4-rc2)"
time=2026-04-08T07:46:56.906Z level=DEBUG source=sched.go:145 msg="starting llm scheduler"
time=2026-04-08T07:46:56.907Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-08T07:46:56.909Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45519" time=2026-04-08T07:46:56.909Z level=DEBUG source=server.go:445 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_SCHED_SPREAD=true OLLAMA_CONTEXT_LENGTH=65536 OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_FLASH_ATTENTION=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12 time=2026-04-08T07:46:57.518Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=611.257226ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[] time=2026-04-08T07:46:57.518Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43385" time=2026-04-08T07:46:57.518Z level=DEBUG source=server.go:445 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_SCHED_SPREAD=true OLLAMA_CONTEXT_LENGTH=65536 OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_FLASH_ATTENTION=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13 time=2026-04-08T07:46:57.589Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=71.164816ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[] time=2026-04-08T07:46:57.589Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1" time=2026-04-08T07:46:57.589Z level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=2 time=2026-04-08T07:46:57.589Z level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=/usr/lib/ollama/cuda_v12 description="NVIDIA GeForce RTX 3060" compute=8.6 id=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 pci_id=0000:06:10.0 time=2026-04-08T07:46:57.589Z level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=/usr/lib/ollama/cuda_v12 description="NVIDIA GeForce RTX 3060" compute=8.6 id=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d pci_id=0000:06:11.0 time=2026-04-08T07:46:57.590Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39711" time=2026-04-08T07:46:57.590Z level=DEBUG source=server.go:445 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_SCHED_SPREAD=true OLLAMA_CONTEXT_LENGTH=65536 OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_FLASH_ATTENTION=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12 CUDA_VISIBLE_DEVICES=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d GGML_CUDA_INIT=1 time=2026-04-08T07:46:57.590Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40723" time=2026-04-08T07:46:57.590Z level=DEBUG source=server.go:445 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_SCHED_SPREAD=true OLLAMA_CONTEXT_LENGTH=65536 OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_FLASH_ATTENTION=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 
OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12 CUDA_VISIBLE_DEVICES=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 GGML_CUDA_INIT=1 time=2026-04-08T07:46:57.833Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=243.442349ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d GGML_CUDA_INIT:1]" time=2026-04-08T07:46:57.859Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=268.70511ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 GGML_CUDA_INIT:1]" time=2026-04-08T07:46:57.859Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=953.012941ms time=2026-04-08T07:46:57.859Z level=INFO source=types.go:42 msg="inference compute" id=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 filter_id="" library=CUDA compute=8.6 name=CUDA0 description="NVIDIA GeForce RTX 3060" libdirs=ollama,cuda_v12 driver=12.6 pci_id=0000:06:10.0 type=discrete total="12.0 GiB" available="11.7 GiB" time=2026-04-08T07:46:57.859Z level=INFO source=types.go:42 msg="inference compute" id=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d filter_id="" library=CUDA compute=8.6 name=CUDA1 description="NVIDIA GeForce RTX 3060" libdirs=ollama,cuda_v12 driver=12.6 pci_id=0000:06:11.0 type=discrete total="12.0 GiB" available="11.6 GiB" time=2026-04-08T07:46:57.859Z level=INFO source=routes.go:1852 msg="vram-based default context" total_vram="24.0 GiB" default_num_ctx=32768 [GIN] 2026/04/08 - 07:47:00 | 200 | 1.182083ms | 127.0.0.1 | GET "/api/version" [GIN] 2026/04/08 - 07:47:06 | 200 | 331.989µs | 127.0.0.1 | HEAD "/" time=2026-04-08T07:47:06.504Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 [GIN] 2026/04/08 - 07:47:06 | 200 | 321.558511ms | 127.0.0.1 | POST "/api/show" [GIN] 2026/04/08 - 07:47:06 | 200 | 1.869121ms | 127.0.0.1 | POST "/api/generate" [GIN] 2026/04/08 - 07:47:06 | 200 | 5.282272ms | 127.0.0.1 | DELETE "/api/delete" [GIN] 2026/04/08 - 07:47:14 | 200 | 34.219µs | 127.0.0.1 | HEAD "/" time=2026-04-08T07:47:14.887Z level=DEBUG source=images.go:678 msg="manifest written" path=/root/.ollama/models/manifests/registry.ollama.ai/library/gemma4/latest sha256=c6eb396dbd5992bbe3f5cdb947e8bbc0ee413d7c17e2beaae69f5d569cf982eb size=709 [GIN] 2026/04/08 - 07:47:14 | 200 | 582.689587ms | 127.0.0.1 | POST "/api/pull" [GIN] 2026/04/08 - 07:47:28 | 200 | 27.385µs | 127.0.0.1 | HEAD "/" time=2026-04-08T07:47:28.442Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 [GIN] 2026/04/08 - 07:47:28 | 200 | 320.795969ms | 127.0.0.1 | POST "/api/show" [GIN] 2026/04/08 - 07:47:28 | 200 | 1.828096ms | 127.0.0.1 | POST "/api/generate" [GIN] 2026/04/08 - 07:47:28 | 200 | 3.818164ms | 127.0.0.1 | DELETE "/api/delete" [GIN] 2026/04/08 - 07:48:02 | 200 | 29.76µs | 127.0.0.1 | HEAD "/" [GIN] 2026/04/08 - 07:48:02 | 200 | 3.623567ms | 127.0.0.1 | GET "/api/tags" [GIN] 2026/04/08 - 07:48:20 | 200 | 41.887µs | 127.0.0.1 | HEAD "/" time=2026-04-08T07:48:20.376Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 [GIN] 2026/04/08 - 07:48:20 | 200 | 347.948534ms | 127.0.0.1 | POST "/api/show" [GIN] 2026/04/08 - 07:48:20 | 200 | 1.721617ms | 127.0.0.1 | POST "/api/generate" [GIN] 2026/04/08 - 07:48:21 | 200 | 745.494233ms | 127.0.0.1 | DELETE "/api/delete" [GIN] 2026/04/08 - 07:48:37 
| 200 | 29.877µs | 127.0.0.1 | HEAD "/" time=2026-04-08T07:48:38.147Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 [GIN] 2026/04/08 - 07:48:38 | 200 | 340.625503ms | 127.0.0.1 | POST "/api/show" [GIN] 2026/04/08 - 07:48:38 | 200 | 1.663894ms | 127.0.0.1 | POST "/api/generate" [GIN] 2026/04/08 - 07:48:38 | 200 | 14.234391ms | 127.0.0.1 | DELETE "/api/delete" [GIN] 2026/04/08 - 07:48:55 | 200 | 26.847µs | 127.0.0.1 | HEAD "/" time=2026-04-08T07:48:55.878Z level=INFO source=download.go:179 msg="downloading 4c27e0f5b5ad in 16 600 MB part(s)" time=2026-04-08T07:52:10.265Z level=INFO source=download.go:179 msg="downloading f0988ff50a24 in 1 473 B part(s)" time=2026-04-08T07:52:58.530Z level=DEBUG source=images.go:678 msg="manifest written" path=/root/.ollama/models/manifests/registry.ollama.ai/library/gemma4/latest sha256=c6eb396dbd5992bbe3f5cdb947e8bbc0ee413d7c17e2beaae69f5d569cf982eb size=709 [GIN] 2026/04/08 - 07:52:58 | 200 | 4m3s | 127.0.0.1 | POST "/api/pull" [GIN] 2026/04/08 - 09:04:36 | 200 | 82.325µs | 192.168.66.36 | HEAD "/" [GIN] 2026/04/08 - 09:04:36 | 200 | 2.7179ms | 192.168.66.36 | GET "/api/tags" [GIN] 2026/04/08 - 09:04:36 | 200 | 61.412µs | 192.168.66.36 | GET "/api/status" [GIN] 2026/04/08 - 09:04:36 | 200 | 9.547µs | 192.168.66.36 | GET "/api/status" time=2026-04-08T09:04:38.626Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 [GIN] 2026/04/08 - 09:04:38 | 200 | 316.088652ms | 192.168.66.36 | POST "/api/show" [GIN] 2026/04/08 - 09:04:40 | 200 | 26.094µs | 192.168.66.36 | HEAD "/" time=2026-04-08T09:05:05.892Z level=DEBUG source=runner.go:264 msg="refreshing free memory" time=2026-04-08T09:05:05.892Z level=DEBUG source=runner.go:328 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery" time=2026-04-08T09:05:05.892Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45317" time=2026-04-08T09:05:05.893Z level=DEBUG source=server.go:445 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_SCHED_SPREAD=true OLLAMA_CONTEXT_LENGTH=65536 OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_FLASH_ATTENTION=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12 time=2026-04-08T09:05:06.382Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=489.407808ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[] time=2026-04-08T09:05:06.382Z level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=489.580248ms time=2026-04-08T09:05:06.382Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax" time=2026-04-08T09:05:06.382Z level=DEBUG source=sched.go:220 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=6 gpu_count=2 time=2026-04-08T09:05:06.383Z level=DEBUG source=sched.go:229 msg="loading first model" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a time=2026-04-08T09:05:06.518Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 time=2026-04-08T09:05:06.593Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 time=2026-04-08T09:05:06.599Z 
level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0 time=2026-04-08T09:05:06.599Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106 time=2026-04-08T09:05:06.601Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883 time=2026-04-08T09:05:06.602Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0 time=2026-04-08T09:05:06.602Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.head_count_kv default="&{size:0 values:[]}" time=2026-04-08T09:05:06.602Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_count default=0 time=2026-04-08T09:05:06.602Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_used_count default=0 time=2026-04-08T09:05:06.602Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.num_mel_bins default=128 time=2026-04-08T09:05:06.602Z level=INFO source=server.go:259 msg="enabling flash attention" time=2026-04-08T09:05:06.602Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a --port 40321" time=2026-04-08T09:05:06.602Z level=DEBUG source=server.go:445 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_SCHED_SPREAD=true OLLAMA_CONTEXT_LENGTH=65536 OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_FLASH_ATTENTION=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12 time=2026-04-08T09:05:06.602Z level=INFO source=sched.go:484 msg="system memory" total="15.6 GiB" free="6.8 GiB" free_swap="807.7 MiB" time=2026-04-08T09:05:06.603Z level=INFO source=sched.go:491 msg="gpu memory" id=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 library=CUDA available="11.2 GiB" free="11.7 GiB" minimum="457.0 MiB" overhead="0 B" time=2026-04-08T09:05:06.603Z level=INFO source=sched.go:491 msg="gpu memory" id=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d library=CUDA available="11.1 GiB" free="11.6 GiB" minimum="457.0 MiB" overhead="0 B" time=2026-04-08T09:05:06.603Z level=INFO source=server.go:771 msg="loading model" "model layers"=43 requested=-1 time=2026-04-08T09:05:06.618Z level=INFO source=runner.go:1417 msg="starting ollama engine" time=2026-04-08T09:05:06.618Z level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:40321" time=2026-04-08T09:05:06.625Z level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:4 GPULayers:43[ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Layers:43(0..42)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-08T09:05:06.697Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 time=2026-04-08T09:05:06.704Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.name default="" time=2026-04-08T09:05:06.704Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.description default="" time=2026-04-08T09:05:06.704Z level=INFO source=ggml.go:136 msg="" architecture=gemma4 file_type=Q4_K_M name="" description="" num_tensors=2131 
num_key_values=55 time=2026-04-08T09:05:06.704Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-sse42.so time=2026-04-08T09:05:06.711Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v12 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 2 CUDA devices: Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Device 1: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes, ID: GPU-8e13d698-9b6b-a377-44bd-31550f42e79d load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so time=2026-04-08T09:05:06.909Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) time=2026-04-08T09:05:06.939Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0 time=2026-04-08T09:05:06.939Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106 time=2026-04-08T09:05:06.940Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883 time=2026-04-08T09:05:06.940Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0 time=2026-04-08T09:05:06.940Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.head_count_kv default="&{size:0 values:[]}" time=2026-04-08T09:05:06.940Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_count default=0 time=2026-04-08T09:05:06.940Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_used_count default=0 time=2026-04-08T09:05:06.940Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.num_mel_bins default=128 time=2026-04-08T09:05:06.968Z level=INFO source=model.go:138 msg="vision: decode" elapsed=1.075109ms bounds=(0,0)-(2048,2048) time=2026-04-08T09:05:07.119Z level=INFO source=model.go:145 msg="vision: preprocess" elapsed=150.142486ms size="[768 768]" time=2026-04-08T09:05:07.119Z level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3 time=2026-04-08T09:05:07.119Z level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16 time=2026-04-08T09:05:07.120Z level=INFO source=model.go:156 msg="vision: encoded" elapsed=153.105474ms shape="[2560 256]" time=2026-04-08T09:05:07.412Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=684 splits=1 time=2026-04-08T09:05:07.971Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1831 splits=2 time=2026-04-08T09:05:08.015Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1829 splits=2 time=2026-04-08T09:05:08.017Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA1 size="8.9 GiB" time=2026-04-08T09:05:08.017Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="587.0 MiB" time=2026-04-08T09:05:08.017Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA1 size="1.2 GiB" time=2026-04-08T09:05:08.017Z level=DEBUG 
source=device.go:262 msg="compute graph" device=CUDA1 size="331.0 MiB" time=2026-04-08T09:05:08.017Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.0 MiB" time=2026-04-08T09:05:08.017Z level=DEBUG source=device.go:272 msg="total memory" size="11.0 GiB" time=2026-04-08T09:05:08.017Z level=DEBUG source=server.go:796 msg=memory success=true required.InputWeights=615514112 required.CPU.Graph=5242880 required.CUDA1.ID=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d required.CUDA1.Weights="[109892480 110045184 109892480 109892480 109892480 110846848 103134080 109892480 102796160 102796160 109892480 110171008 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 59159936 59497856 59159936 59159936 59497856 66534784 6521521152]" required.CUDA1.Cache="[9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" required.CUDA1.Graph=347086976 time=2026-04-08T09:05:08.017Z level=DEBUG source=server.go:990 msg="available gpu" id=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 library=CUDA "available layer vram"="11.2 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="0 B" time=2026-04-08T09:05:08.017Z level=DEBUG source=server.go:990 msg="available gpu" id=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d library=CUDA "available layer vram"="10.8 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="331.0 MiB" time=2026-04-08T09:05:08.017Z level=DEBUG source=server.go:807 msg="new layout created" layers="43[ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Layers:42(0..41) ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Layers:1(42..42)]" time=2026-04-08T09:05:08.017Z level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:4 GPULayers:43[ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Layers:42(0..41) ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Layers:1(42..42)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-08T09:05:08.094Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 time=2026-04-08T09:05:08.122Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0 time=2026-04-08T09:05:08.122Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106 time=2026-04-08T09:05:08.123Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883 time=2026-04-08T09:05:08.123Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0 time=2026-04-08T09:05:08.123Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.head_count_kv default="&{size:0 values:[]}" time=2026-04-08T09:05:08.123Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_count default=0 time=2026-04-08T09:05:08.123Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_used_count default=0 time=2026-04-08T09:05:08.123Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.num_mel_bins default=128 time=2026-04-08T09:05:08.152Z level=INFO 
source=model.go:138 msg="vision: decode" elapsed=1.282165ms bounds=(0,0)-(2048,2048) time=2026-04-08T09:05:08.306Z level=INFO source=model.go:145 msg="vision: preprocess" elapsed=154.372367ms size="[768 768]" time=2026-04-08T09:05:08.306Z level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3 time=2026-04-08T09:05:08.306Z level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16 time=2026-04-08T09:05:08.307Z level=INFO source=model.go:156 msg="vision: encoded" elapsed=156.595868ms shape="[2560 256]" time=2026-04-08T09:05:08.316Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=684 splits=1 time=2026-04-08T09:05:08.678Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1831 splits=6 time=2026-04-08T09:05:08.685Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1829 splits=6 time=2026-04-08T09:05:08.685Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="2.8 GiB" time=2026-04-08T09:05:08.686Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA1 size="6.1 GiB" time=2026-04-08T09:05:08.686Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="587.0 MiB" time=2026-04-08T09:05:08.686Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="1.2 GiB" time=2026-04-08T09:05:08.686Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="845.9 MiB" time=2026-04-08T09:05:08.686Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA1 size="186.8 MiB" time=2026-04-08T09:05:08.686Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.0 MiB" time=2026-04-08T09:05:08.686Z level=DEBUG source=device.go:272 msg="total memory" size="11.6 GiB" time=2026-04-08T09:05:08.686Z level=DEBUG source=server.go:796 msg=memory success=true required.InputWeights=615514112 required.CPU.Graph=5242880 required.CUDA0.ID=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 required.CUDA0.Weights="[109892480 110045184 109892480 109892480 109892480 110846848 103134080 109892480 102796160 102796160 109892480 110171008 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 59159936 59497856 59159936 59159936 59497856 66534784 0]" required.CUDA0.Cache="[9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" required.CUDA0.Graph=887014400 required.CUDA1.ID=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d required.CUDA1.Weights="[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6521521152]" required.CUDA1.Graph=195858432 time=2026-04-08T09:05:08.686Z level=DEBUG source=server.go:990 msg="available gpu" id=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 library=CUDA "available layer vram"="10.4 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="845.9 MiB" time=2026-04-08T09:05:08.686Z level=DEBUG source=server.go:990 msg="available gpu" id=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d library=CUDA "available layer vram"="10.9 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="186.8 MiB" time=2026-04-08T09:05:08.686Z level=DEBUG source=server.go:807 msg="new layout created" layers="43[ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Layers:42(0..41) 
ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Layers:1(42..42)]" time=2026-04-08T09:05:08.686Z level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:4 GPULayers:43[ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Layers:42(0..41) ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Layers:1(42..42)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-08T09:05:08.753Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32 time=2026-04-08T09:05:08.779Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0 time=2026-04-08T09:05:08.779Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eot_token_id default=106 time=2026-04-08T09:05:08.780Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883 time=2026-04-08T09:05:08.780Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.global_head_count_kv default=0 time=2026-04-08T09:05:08.780Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.attention.head_count_kv default="&{size:0 values:[]}" time=2026-04-08T09:05:08.780Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_count default=0 time=2026-04-08T09:05:08.780Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.expert_used_count default=0 time=2026-04-08T09:05:08.780Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.audio.num_mel_bins default=128 time=2026-04-08T09:05:08.806Z level=INFO source=model.go:138 msg="vision: decode" elapsed=936.07µs bounds=(0,0)-(2048,2048) time=2026-04-08T09:05:08.970Z level=INFO source=model.go:145 msg="vision: preprocess" elapsed=163.943375ms size="[768 768]" time=2026-04-08T09:05:08.973Z level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3 time=2026-04-08T09:05:08.973Z level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16 time=2026-04-08T09:05:08.974Z level=INFO source=model.go:156 msg="vision: encoded" elapsed=168.886045ms shape="[2560 256]" time=2026-04-08T09:05:08.983Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=684 splits=1 time=2026-04-08T09:05:09.404Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1831 splits=6 time=2026-04-08T09:05:09.419Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1829 splits=6 time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="2.8 GiB" time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA1 size="6.1 GiB" time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="587.0 MiB" time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="1.2 GiB" time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="845.9 MiB" time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA1 size="186.8 MiB" time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.0 MiB" time=2026-04-08T09:05:09.419Z level=DEBUG source=device.go:272 msg="total memory" size="11.6 GiB" time=2026-04-08T09:05:09.419Z level=DEBUG source=server.go:796 msg=memory 
success=true required.InputWeights=615514112 required.CPU.Graph=5242880 required.CUDA0.ID=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 required.CUDA0.Weights="[109892480 110045184 109892480 109892480 109892480 110846848 103134080 109892480 102796160 102796160 109892480 110171008 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 52401536 59497856 52401536 52401536 59497856 59776384 59159936 59497856 59159936 59159936 59497856 66534784 0]" required.CUDA0.Cache="[9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 9437184 9437184 9437184 9437184 9437184 268435456 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" required.CUDA0.Graph=887014400 required.CUDA1.ID=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d required.CUDA1.Weights="[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6521521152]" required.CUDA1.Graph=195858432 time=2026-04-08T09:05:09.419Z level=DEBUG source=server.go:990 msg="available gpu" id=GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 library=CUDA "available layer vram"="10.4 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="845.9 MiB" time=2026-04-08T09:05:09.420Z level=DEBUG source=server.go:990 msg="available gpu" id=GPU-8e13d698-9b6b-a377-44bd-31550f42e79d library=CUDA "available layer vram"="10.9 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="186.8 MiB" time=2026-04-08T09:05:09.420Z level=DEBUG source=server.go:807 msg="new layout created" layers="43[ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Layers:42(0..41) ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Layers:1(42..42)]" time=2026-04-08T09:05:09.420Z level=INFO source=runner.go:1290 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:4 GPULayers:43[ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Layers:42(0..41) ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Layers:1(42..42)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-08T09:05:09.420Z level=INFO source=ggml.go:482 msg="offloading 42 repeating layers to GPU" time=2026-04-08T09:05:09.420Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU" time=2026-04-08T09:05:09.420Z level=INFO source=ggml.go:494 msg="offloaded 43/43 layers to GPU" time=2026-04-08T09:05:09.420Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="2.8 GiB" time=2026-04-08T09:05:09.420Z level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="6.1 GiB" time=2026-04-08T09:05:09.420Z level=INFO source=device.go:245 msg="model weights" device=CPU size="587.0 MiB" time=2026-04-08T09:05:09.420Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="1.2 GiB" time=2026-04-08T09:05:09.420Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="845.9 MiB" time=2026-04-08T09:05:09.420Z level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="186.8 MiB" time=2026-04-08T09:05:09.420Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="5.0 MiB" time=2026-04-08T09:05:09.420Z level=INFO source=device.go:272 msg="total memory" size="11.6 GiB" time=2026-04-08T09:05:09.420Z level=INFO source=sched.go:561 msg="loaded runners" count=1 time=2026-04-08T09:05:09.420Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" 
model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a time=2026-04-08T09:05:09.420Z level=INFO source=server.go:1364 msg="waiting for llama runner to start responding" time=2026-04-08T09:05:09.420Z level=INFO source=server.go:1398 msg="waiting for server to become available" status="llm server loading model" time=2026-04-08T09:05:09.421Z level=DEBUG source=server.go:1408 msg="model load progress 0.00" time=2026-04-08T09:05:09.673Z level=DEBUG source=server.go:1408 msg="model load progress 0.11" time=2026-04-08T09:05:09.924Z level=DEBUG source=server.go:1408 msg="model load progress 0.22" time=2026-04-08T09:05:10.176Z level=DEBUG source=server.go:1408 msg="model load progress 0.32" time=2026-04-08T09:05:10.426Z level=DEBUG source=server.go:1408 msg="model load progress 0.38" time=2026-04-08T09:05:10.677Z level=DEBUG source=server.go:1408 msg="model load progress 0.44" time=2026-04-08T09:05:10.928Z level=DEBUG source=server.go:1408 msg="model load progress 0.49" time=2026-04-08T09:05:11.179Z level=DEBUG source=server.go:1408 msg="model load progress 0.54" time=2026-04-08T09:05:11.430Z level=DEBUG source=server.go:1408 msg="model load progress 0.57" time=2026-04-08T09:05:11.681Z level=DEBUG source=server.go:1408 msg="model load progress 0.60" time=2026-04-08T09:05:11.932Z level=DEBUG source=server.go:1408 msg="model load progress 0.64" time=2026-04-08T09:05:12.182Z level=DEBUG source=server.go:1408 msg="model load progress 0.67" time=2026-04-08T09:05:12.433Z level=DEBUG source=server.go:1408 msg="model load progress 0.71" time=2026-04-08T09:05:12.684Z level=DEBUG source=server.go:1408 msg="model load progress 0.74" time=2026-04-08T09:05:12.935Z level=DEBUG source=server.go:1408 msg="model load progress 0.78" time=2026-04-08T09:05:13.186Z level=DEBUG source=server.go:1408 msg="model load progress 0.82" time=2026-04-08T09:05:13.437Z level=DEBUG source=server.go:1408 msg="model load progress 0.86" time=2026-04-08T09:05:13.687Z level=DEBUG source=server.go:1408 msg="model load progress 0.89" time=2026-04-08T09:05:13.938Z level=DEBUG source=server.go:1408 msg="model load progress 0.93" time=2026-04-08T09:05:14.189Z level=DEBUG source=server.go:1408 msg="model load progress 0.95" time=2026-04-08T09:05:14.440Z level=DEBUG source=server.go:1408 msg="model load progress 0.98" time=2026-04-08T09:05:14.599Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=gemma4.pooling_type default=0 time=2026-04-08T09:05:14.691Z level=INFO source=server.go:1402 msg="llama runner started in 8.09 seconds" time=2026-04-08T09:05:14.691Z level=DEBUG source=sched.go:573 msg="finished setting up" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 time=2026-04-08T09:05:14.816Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=1236 format="" time=2026-04-08T09:05:14.921Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=0 prompt=287 used=0 remaining=287 time=2026-04-08T09:05:14.978Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=94494 format="" [GIN] 2026/04/08 - 09:05:22 | 200 | 17.06285347s | 192.168.66.36 | POST "/v1/messages?beta=true" 
time=2026-04-08T09:05:22.670Z level=DEBUG source=sched.go:581 msg="context for request finished" time=2026-04-08T09:05:22.670Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=1 time=2026-04-08T09:05:22.848Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=747 prompt=21639 used=19 remaining=21620 [GIN] 2026/04/08 - 09:05:58 | 200 | 52.741240222s | 192.168.66.36 | POST "/v1/messages?beta=true" time=2026-04-08T09:05:58.583Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 time=2026-04-08T09:05:58.584Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s time=2026-04-08T09:05:58.584Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0 time=2026-04-08T09:05:58.923Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a time=2026-04-08T09:05:59.104Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=98626 format="" time=2026-04-08T09:05:59.301Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=22091 prompt=23250 used=21639 remaining=1611 [GIN] 2026/04/08 - 09:06:17 | 200 | 18.382182076s | 192.168.66.36 | POST "/v1/messages?beta=true" time=2026-04-08T09:06:17.012Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 time=2026-04-08T09:06:17.012Z level=DEBUG 
source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s time=2026-04-08T09:06:17.012Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0 time=2026-04-08T09:06:17.297Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a time=2026-04-08T09:06:17.503Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=101207 format="" time=2026-04-08T09:06:17.708Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=24071 prompt=23792 used=23250 remaining=542 [GIN] 2026/04/08 - 09:06:33 | 200 | 16.399025927s | 192.168.66.36 | POST "/v1/messages?beta=true" time=2026-04-08T09:06:33.420Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 time=2026-04-08T09:06:33.420Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s time=2026-04-08T09:06:33.420Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0 time=2026-04-08T09:06:33.733Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a time=2026-04-08T09:06:33.956Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=103384 format="" time=2026-04-08T09:06:34.185Z 
level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=24611 prompt=24320 used=23792 remaining=528 [GIN] 2026/04/08 - 09:06:51 | 200 | 18.348473083s | 192.168.66.36 | POST "/v1/messages?beta=true" time=2026-04-08T09:06:51.779Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 time=2026-04-08T09:06:51.779Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s time=2026-04-08T09:06:51.779Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0 time=2026-04-08T09:06:52.078Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a time=2026-04-08T09:06:52.285Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=105908 format="" time=2026-04-08T09:06:52.512Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=25235 prompt=24879 used=24320 remaining=559 [GIN] 2026/04/08 - 09:07:02 | 200 | 10.406733795s | 192.168.66.36 | POST "/v1/messages?beta=true" time=2026-04-08T09:07:02.194Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 time=2026-04-08T09:07:02.194Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s time=2026-04-08T09:07:02.195Z level=DEBUG source=sched.go:327 msg="after processing request finished event" 
runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA} {ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=161 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0 ```

@ParthSareen commented on GitHub (Apr 15, 2026):

Hi @bmetallica, there have been some recent improvements around gemma4's tool calling so trying again may help. Let me know how it goes!
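If it still misbehaves, a quick way to isolate whether the model emits well-formed tool calls at all (independent of Claude Code) is to send a single tool directly to `/api/chat`. A minimal sketch, assuming the default port 11434 and the `gemma4` model from this issue; the weather tool below is only an example schema:

```python
# Minimal tool-calling sanity check against a local Ollama instance.
# Assumes the default endpoint http://localhost:11434 and the gemma4 model
# from this issue; "get_weather" is only a placeholder tool schema.
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "gemma4",
        "stream": False,
        "messages": [
            {"role": "user", "content": "What is the weather in Berlin?"}
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get the current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    },
    timeout=300,
)
msg = resp.json()["message"]
# If tool calling works, tool_calls should contain well-formed JSON arguments.
print(json.dumps(msg.get("tool_calls"), indent=2))
```

If `tool_calls` comes back as valid JSON here but Claude Code still reports "Invalid tool parameters", the problem is more likely in the `/v1/messages` translation layer than in the model itself.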


@bmetallica commented on GitHub (Apr 16, 2026):

Hi @ParthSareen,
I am currently using version 0.20.8-rc0 (on both the Claude Code client and the Ollama Docker container). Many tool calls are still failing. Additionally, the context seems to fill up very quickly, causing Claude Code to lose track of the task or abruptly cancel ongoing processes because it incorrectly assumes the task is finished.
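For reference, the per-turn token usage can be read from `prompt_eval_count` / `eval_count` in a direct, non-streaming request. A rough sketch, assuming the default port and the same `gemma4` model:

```python
# Rough sketch: read Ollama's reported token counts for a single request.
# Assumes the default port 11434; "gemma4" is the model from this issue.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "gemma4",
        "stream": False,
        "messages": [{"role": "user", "content": "Reply with a single word."}],
        "options": {"num_ctx": 65536},
    },
    timeout=600,
).json()

# prompt_eval_count = tokens fed into the context window,
# eval_count = tokens the model generated for this turn.
print("prompt tokens:", resp.get("prompt_eval_count"))
print("output tokens:", resp.get("eval_count"))
```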

my docker-compose.yml:

  - OLLAMA_DEBUG=1
  - OLLAMA_KEEP_ALIVE=-1
  - OLLAMA_FLASH_ATTENTION=1 
  - OLLAMA_SCHED_SPREAD=true
  - OLLAMA_CONTEXT_LENGTH=65536
  - OLLAMA_NUM_PARALLEL=1

Claude Code output:

Now I have enough context to plan the approach. I must use EnterPlanMode as this is a complex, multi-file implementation task. I will structure the plan to cover
the core components: MQTT setup, API implementation, and the UI/Sync logic.<channel|>
⎿ Invalid tool parameters
⎿ Invalid tool parameters
⎿ Invalid tool parameters
⎿ Invalid tool parameters

● Searched for 1 pattern, read 2 files (ctrl+o to expand)
⎿ Invalid tool parameters

● Update(src/mqttClient.js)
⎿ Error editing file

● Update(src/mqttClient.js)
⎿ Error editing file

● Update(src/mqttClient.js)
⎿ Error editing file

● Write(src/syncService.js)
⎿ Wrote 80 lines to src/syncService.js

docker logs ollama:

time=2026-04-16T08:55:07.419Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:55:07.509Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=66198 format=""
time=2026-04-16T08:55:07.607Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=39054 prompt=16751 used=0 remaining=16751
[GIN] 2026/04/16 - 08:56:06 | 200 | 59.659571716s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:56:06.783Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:56:06.783Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:56:06.783Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-16T08:56:23.663Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:56:23.899Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=128033 format=""
time=2026-04-16T08:56:24.147Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=18870 prompt=30879 used=0 remaining=30879
[GIN] 2026/04/16 - 08:57:23 | 200 | 1m0s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:57:23.979Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:57:23.979Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:57:23.979Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-16T08:57:24.276Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:57:24.527Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=132121 format=""
time=2026-04-16T08:57:24.782Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=31848 prompt=31905 used=31770 remaining=135
[GIN] 2026/04/16 - 08:57:32 | 200 | 8.426352126s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:57:32.429Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:57:32.429Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:57:32.429Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-16T08:57:32.727Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:57:32.978Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=133863 format=""
time=2026-04-16T08:57:33.232Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=32282 prompt=32332 used=32195 remaining=137
[GIN] 2026/04/16 - 08:57:42 | 200 | 9.615121051s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:57:42.066Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:57:42.067Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:57:42.067Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-16T08:57:42.367Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:57:42.604Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=136085 format=""
time=2026-04-16T08:57:42.864Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=32784 prompt=32870 used=32661 remaining=209
[GIN] 2026/04/16 - 08:57:47 | 200 | 5.796241259s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:57:47.882Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:57:47.882Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:57:47.882Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-16T08:57:48.191Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:57:48.444Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=137370 format=""
time=2026-04-16T08:57:48.709Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=33107 prompt=33149 used=33107 remaining=42
[GIN] 2026/04/16 - 08:58:15 | 200 | 27.15564556s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:58:15.058Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:58:15.058Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:58:15.058Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-16T08:58:15.656Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:58:15.918Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=143160 format=""
time=2026-04-16T08:58:16.189Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=34522 prompt=34553 used=34522 remaining=31
[GIN] 2026/04/16 - 08:58:23 | 200 | 8.395744264s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:58:23.764Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:58:23.764Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:58:23.764Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-16T08:58:24.082Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:58:24.349Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=144811 format=""
time=2026-04-16T08:58:24.608Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=34949 prompt=34980 used=34949 remaining=31
[GIN] 2026/04/16 - 08:58:31 | 200 | 7.416646946s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:58:31.215Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:58:31.215Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:58:31.215Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-16T08:58:31.524Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:58:31.808Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=146230 format=""
time=2026-04-16T08:58:32.083Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=35323 prompt=35354 used=35323 remaining=31
[GIN] 2026/04/16 - 08:58:44 | 200 | 13.553868972s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:58:44.797Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:58:44.797Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:58:44.797Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-16T08:58:58.265Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:58:58.534Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=149669 format=""
time=2026-04-16T08:58:58.811Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=36006 prompt=36328 used=36006 remaining=322
[GIN] 2026/04/16 - 08:59:13 | 200 | 15.202230615s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:59:13.187Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:59:13.187Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:59:13.187Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-16T08:59:24.813Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:59:25.099Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=152829 format=""
time=2026-04-16T08:59:25.388Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=37014 prompt=37098 used=37014 remaining=84
[GIN] 2026/04/16 - 08:59:41 | 200 | 16.54454706s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:59:41.075Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:59:41.075Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:59:41.075Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
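Parsing the `cache=` / `prompt=` / `used=` fields from the log above makes the context churn visible. A rough sketch, assuming the DEBUG log format shown here:

```python
# Rough sketch: summarize prompt-cache reuse per request from the Ollama debug log.
# Assumes the "loading cache slot" DEBUG lines shown in this comment.
import re
import sys

pattern = re.compile(r'msg="loading cache slot".*?cache=(\d+) prompt=(\d+) used=(\d+)')

for line in sys.stdin:
    m = pattern.search(line)
    if not m:
        continue
    cache, prompt, used = map(int, m.groups())
    reused = used / prompt if prompt else 0.0
    flag = "  <- full re-process" if used == 0 else ""
    print(f"cache={cache:>6} prompt={prompt:>6} used={used:>6} reused={reused:4.0%}{flag}")
```

Piped through `docker logs ollama`, the `used=0` lines appear to correspond to the ~1 minute responses above, i.e. the turns where the whole prompt had to be re-processed.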

<!-- gh-comment-id:4258755050 --> @bmetallica commented on GitHub (Apr 16, 2026): Hi @ParthSareen , I am currently using version 0.20.8-rc0 (on both the Claude Code client and the Ollama Docker container). Many tool calls are still failing. Additionally, the context seems to fill up very quickly, causing Claude Code to lose track of the task or abruptly cancel ongoing processes because it incorrectly assumes the task is finished. ## my docker-compose.yml - OLLAMA_DEBUG=1 - OLLAMA_KEEP_ALIVE=-1 - OLLAMA_FLASH_ATTENTION=1 - OLLAMA_SCHED_SPREAD=true - OLLAMA_CONTEXT_LENGTH=65536 - OLLAMA_NUM_PARALLEL=1 ## claude code: Now I have enough context to plan the approach. I must use EnterPlanMode as this is a complex, multi-file implementation task. I will structure the plan to cover the core components: MQTT setup, API implementation, and the UI/Sync logic.<channel|> ⎿ Invalid tool parameters ⎿ Invalid tool parameters ⎿ Invalid tool parameters ⎿ Invalid tool parameters ● Searched for 1 pattern, read 2 files (ctrl+o to expand) ⎿ Invalid tool parameters ● Update(src/mqttClient.js) ⎿ Error editing file ● Update(src/mqttClient.js) ⎿ Error editing file ● Update(src/mqttClient.js) ⎿ Error editing file ● Write(src/syncService.js) ⎿ Wrote 80 lines to src/syncService.js ## docker logs ollama: time=2026-04-16T08:55:07.419Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a time=2026-04-16T08:55:07.509Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=66198 format="" time=2026-04-16T08:55:07.607Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=39054 prompt=16751 used=0 remaining=16751 [GIN] 2026/04/16 - 08:56:06 | 200 | 59.659571716s | 192.168.66.36 | POST "/v1/messages?beta=true" time=2026-04-16T08:56:06.783Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 time=2026-04-16T08:56:06.783Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s time=2026-04-16T08:56:06.783Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0 time=2026-04-16T08:56:23.663Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" 
model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a time=2026-04-16T08:56:23.899Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=128033 format="" time=2026-04-16T08:56:24.147Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=18870 prompt=30879 used=0 remaining=30879 [GIN] 2026/04/16 - 08:57:23 | 200 | 1m0s | 192.168.66.36 | POST "/v1/messages?beta=true" time=2026-04-16T08:57:23.979Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 time=2026-04-16T08:57:23.979Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s time=2026-04-16T08:57:23.979Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0 time=2026-04-16T08:57:24.276Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a time=2026-04-16T08:57:24.527Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=132121 format="" time=2026-04-16T08:57:24.782Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=31848 prompt=31905 used=31770 remaining=135 [GIN] 2026/04/16 - 08:57:32 | 200 | 8.426352126s | 192.168.66.36 | POST "/v1/messages?beta=true" time=2026-04-16T08:57:32.429Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 time=2026-04-16T08:57:32.429Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 
runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s time=2026-04-16T08:57:32.429Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0 time=2026-04-16T08:57:32.727Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a time=2026-04-16T08:57:32.978Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=133863 format="" time=2026-04-16T08:57:33.232Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=32282 prompt=32332 used=32195 remaining=137 [GIN] 2026/04/16 - 08:57:42 | 200 | 9.615121051s | 192.168.66.36 | POST "/v1/messages?beta=true" time=2026-04-16T08:57:42.066Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 time=2026-04-16T08:57:42.067Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s time=2026-04-16T08:57:42.067Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0 time=2026-04-16T08:57:42.367Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a time=2026-04-16T08:57:42.604Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=136085 format="" time=2026-04-16T08:57:42.864Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=32784 prompt=32870 used=32661 remaining=209 [GIN] 2026/04/16 - 08:57:47 | 200 | 5.796241259s | 192.168.66.36 | POST "/v1/messages?beta=true" time=2026-04-16T08:57:47.882Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest 
runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 time=2026-04-16T08:57:47.882Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s time=2026-04-16T08:57:47.882Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0 time=2026-04-16T08:57:48.191Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a time=2026-04-16T08:57:48.444Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=137370 format="" time=2026-04-16T08:57:48.709Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=33107 prompt=33149 used=33107 remaining=42 [GIN] 2026/04/16 - 08:58:15 | 200 | 27.15564556s | 192.168.66.36 | POST "/v1/messages?beta=true" time=2026-04-16T08:58:15.058Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 time=2026-04-16T08:58:15.058Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s time=2026-04-16T08:58:15.058Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a 
runner.num_ctx=65536 refCount=0
time=2026-04-16T08:58:15.656Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:58:15.918Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=143160 format=""
time=2026-04-16T08:58:16.189Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=34522 prompt=34553 used=34522 remaining=31
[GIN] 2026/04/16 - 08:58:23 | 200 | 8.395744264s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:58:23.764Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:58:23.764Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:58:23.764Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-16T08:58:24.082Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:58:24.349Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=144811 format=""
time=2026-04-16T08:58:24.608Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=34949 prompt=34980 used=34949 remaining=31
[GIN] 2026/04/16 - 08:58:31 | 200 | 7.416646946s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:58:31.215Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:58:31.215Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:58:31.215Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-16T08:58:31.524Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:58:31.808Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=146230 format=""
time=2026-04-16T08:58:32.083Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=35323 prompt=35354 used=35323 remaining=31
[GIN] 2026/04/16 - 08:58:44 | 200 | 13.553868972s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:58:44.797Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:58:44.797Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:58:44.797Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-16T08:58:58.265Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:58:58.534Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=149669 format=""
time=2026-04-16T08:58:58.811Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=36006 prompt=36328 used=36006 remaining=322
[GIN] 2026/04/16 - 08:59:13 | 200 | 15.202230615s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:59:13.187Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:59:13.187Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:59:13.187Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
time=2026-04-16T08:59:24.813Z level=DEBUG source=sched.go:672 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a
time=2026-04-16T08:59:25.099Z level=DEBUG source=server.go:1550 msg="completion request" images=0 prompt=152829 format=""
time=2026-04-16T08:59:25.388Z level=DEBUG source=cache.go:151 msg="loading cache slot" id=0 cache=37014 prompt=37098 used=37014 remaining=84
[GIN] 2026/04/16 - 08:59:41 | 200 | 16.54454706s | 192.168.66.36 | POST "/v1/messages?beta=true"
time=2026-04-16T08:59:41.075Z level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536
time=2026-04-16T08:59:41.075Z level=DEBUG source=sched.go:309 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 duration=2562047h47m16.854775807s
time=2026-04-16T08:59:41.075Z level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma4:latest runner.inference="[{ID:GPU-8e13d698-9b6b-a377-44bd-31550f42e79d Library:CUDA} {ID:GPU-4cd68947-f8c5-70c4-9bfe-a5916292a0c4 Library:CUDA}]" runner.size="11.6 GiB" runner.vram="11.6 GiB" runner.parallel=1 runner.pid=59 runner.model=/root/.ollama/models/blobs/sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a runner.num_ctx=65536 refCount=0
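
To narrow down whether the broken tool-call JSON comes from the model/template rather than from Claude Code, it may help to send one hand-built request to the same `/v1/messages?beta=true` endpoint that shows up in the log above. The sketch below is only illustrative: the host/port assume Ollama's default `11434` (adjust for your Docker port mapping), the `get_current_weather` tool and its schema are made up for this test, and it assumes the compatibility endpoint accepts an Anthropic-style `tools` array with `input_schema`; only the request path and the model name `gemma4` are taken from this issue.

```shell
# Hypothetical reproduction request: the endpoint path and model name come from
# the logs in this issue; host/port and the example tool definition are assumptions.
curl -s "http://localhost:11434/v1/messages?beta=true" \
  -H "content-type: application/json" \
  -d '{
    "model": "gemma4",
    "max_tokens": 512,
    "messages": [
      {"role": "user", "content": "What is the weather in Berlin? Use the weather tool."}
    ],
    "tools": [
      {
        "name": "get_current_weather",
        "description": "Dummy tool used only to inspect the tool-call JSON the model emits",
        "input_schema": {
          "type": "object",
          "properties": {
            "location": { "type": "string", "description": "City name" }
          },
          "required": ["location"]
        }
      }
    ]
  }'
```

If the returned `tool_use` block already carries an `input` object that violates this simple schema, the malformed parameters originate in the gemma4 template rather than in Claude Code's tool definitions; if this minimal call looks fine, the failure is more likely tied to how the much larger Claude Code tool set (Plan Mode, file reads, etc.) is rendered into the prompt.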

@PureBlissAK commented on GitHub (Apr 18, 2026):

🤖 Automated Triage & Analysis Report

Issue: #15390
Analyzed: 2026-04-18T18:22:22.949037

Analysis

  • Type: unknown
  • Severity: medium
  • Components: unknown

Implementation Plan

  • Effort: medium
  • Steps:

This issue has been triaged and marked for implementation.


Reference: github-starred/ollama#35603