[GH-ISSUE #11079] "unexpected end of JSON input" when providing tools for phi4-mini #69367

Closed
opened 2026-05-04 17:55:16 -05:00 by GiteaMirror · 1 comment

Originally created by @icefairy64 on GitHub (Jun 15, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11079

### What is the issue?

When performing a simple chat request with tools for `phi4-mini` specifically, I get an error response: `{"error":"unexpected end of JSON input"}`.

An example of a request triggering this:

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "phi4-mini",
  "messages": [
    {
      "role": "user",
      "content": "What is the weather today in Paris?"
    }
  ],
  "stream": false,
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The location to get the weather for, e.g. San Francisco, CA"
            },
            "format": {
              "type": "string",
              "description": "The format to return the weather in, e.g. 'celsius' or 'fahrenheit'",
              "enum": ["celsius", "fahrenheit"]
            }
          },
          "required": ["location", "format"]
        }
      }
    }
  ]
}'
```

Tool calling works fine with Qwen3.
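
For convenience, here is the same request as a self-contained Go program (a sketch equivalent to the curl command above; it assumes nothing beyond the documented `/api/chat` endpoint and additionally prints the HTTP status alongside the body):

```go
// repro.go - sends the same tools-enabled chat request as the curl
// example above and prints the raw response. Against phi4-mini this
// prints a 500 status with {"error":"unexpected end of JSON input"}.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	payload := []byte(`{
	  "model": "phi4-mini",
	  "messages": [{"role": "user", "content": "What is the weather today in Paris?"}],
	  "stream": false,
	  "tools": [{
	    "type": "function",
	    "function": {
	      "name": "get_current_weather",
	      "description": "Get the current weather for a location",
	      "parameters": {
	        "type": "object",
	        "properties": {
	          "location": {"type": "string", "description": "The location to get the weather for, e.g. San Francisco, CA"},
	          "format":   {"type": "string", "enum": ["celsius", "fahrenheit"]}
	        },
	        "required": ["location", "format"]
	      }
	    }
	  }]
	}`)

	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // e.g. "500 Internal Server Error {...}"
}
```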

### Relevant log output

```shell
time=2025-06-15T07:09:02.948Z level=INFO source=routes.go:1234 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q4_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-06-15T07:09:02.950Z level=INFO source=images.go:479 msg="total blobs: 45"
time=2025-06-15T07:09:02.950Z level=INFO source=images.go:486 msg="total unused blobs removed: 0"
time=2025-06-15T07:09:02.950Z level=INFO source=routes.go:1287 msg="Listening on [::]:11434 (version 0.9.0)"
time=2025-06-15T07:09:02.950Z level=DEBUG source=sched.go:108 msg="starting llm scheduler"
time=2025-06-15T07:09:02.950Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-06-15T07:09:02.950Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-06-15T07:09:02.950Z level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-06-15T07:09:02.950Z level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-06-15T07:09:02.951Z level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[]
time=2025-06-15T07:09:02.951Z level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcudart.so*
time=2025-06-15T07:09:02.951Z level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/local/nvidia/lib/libcudart.so* /usr/local/nvidia/lib64/libcudart.so* /usr/lib/ollama/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
time=2025-06-15T07:09:02.951Z level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[]
time=2025-06-15T07:09:02.951Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-06-15T07:09:02.951Z level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/0/properties"
time=2025-06-15T07:09:02.951Z level=DEBUG source=amd_linux.go:121 msg="detected CPU /sys/class/kfd/kfd/topology/nodes/0/properties"
time=2025-06-15T07:09:02.951Z level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/1/properties"
time=2025-06-15T07:09:02.951Z level=DEBUG source=amd_linux.go:206 msg="mapping amdgpu to drm sysfs nodes" amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties vendor=4098 device=29772 unique_id=3545464219138192957
time=2025-06-15T07:09:02.951Z level=DEBUG source=amd_linux.go:240 msg=matched amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties drm=/sys/class/drm/card0/device
time=2025-06-15T07:09:02.951Z level=DEBUG source=amd_linux.go:318 msg="amdgpu memory" gpu=0 total="20.0 GiB"
time=2025-06-15T07:09:02.951Z level=DEBUG source=amd_linux.go:319 msg="amdgpu memory" gpu=0 available="18.0 GiB"
time=2025-06-15T07:09:02.951Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/lib/ollama/rocm"
time=2025-06-15T07:09:02.951Z level=DEBUG source=amd_common.go:44 msg="detected ROCM next to ollama executable /usr/lib/ollama/rocm"
time=2025-06-15T07:09:02.951Z level=DEBUG source=amd_linux.go:371 msg="rocm supported GPUs" types="[gfx1010 gfx1012 gfx1030 gfx1100 gfx1101 gfx1102 gfx1151 gfx1200 gfx1201 gfx900 gfx906 gfx908 gfx90a gfx942]"
time=2025-06-15T07:09:02.951Z level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=GPU-313404ec3196f63d gpu_type=gfx1100
time=2025-06-15T07:09:02.951Z level=INFO source=types.go:130 msg="inference compute" id=GPU-313404ec3196f63d library=rocm variant="" compute=gfx1100 driver=0.0 name=1002:744c total="20.0 GiB" available="18.0 GiB"
time=2025-06-15T07:09:05.478Z level=DEBUG source=ggml.go:155 msg="key not found" key=general.alignment default=32
time=2025-06-15T07:09:05.478Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="94.3 GiB" before.free="86.5 GiB" before.free_swap="8.0 GiB" now.total="94.3 GiB" now.free="86.5 GiB" now.free_swap="8.0 GiB"
time=2025-06-15T07:09:05.478Z level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-313404ec3196f63d name=1002:744c before="18.0 GiB" now="18.0 GiB"
time=2025-06-15T07:09:05.478Z level=DEBUG source=sched.go:185 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-06-15T07:09:05.488Z level=DEBUG source=ggml.go:155 msg="key not found" key=general.alignment default=32
time=2025-06-15T07:09:05.498Z level=DEBUG source=ggml.go:155 msg="key not found" key=general.alignment default=32
time=2025-06-15T07:09:05.498Z level=DEBUG source=sched.go:228 msg="loading first model" model=/root/.ollama/models/blobs/sha256-3c168af1dea0a414299c7d9077e100ac763370e5a98b3c53801a958a47f0a5db
time=2025-06-15T07:09:05.498Z level=DEBUG source=memory.go:111 msg=evaluating library=rocm gpu_count=1 available="[18.0 GiB]"
time=2025-06-15T07:09:05.498Z level=DEBUG source=ggml.go:155 msg="key not found" key=phi3.vision.block_count default=0
time=2025-06-15T07:09:05.498Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="94.3 GiB" before.free="86.5 GiB" before.free_swap="8.0 GiB" now.total="94.3 GiB" now.free="86.5 GiB" now.free_swap="8.0 GiB"
time=2025-06-15T07:09:05.498Z level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-313404ec3196f63d name=1002:744c before="18.0 GiB" now="18.0 GiB"
time=2025-06-15T07:09:05.498Z level=DEBUG source=ggml.go:155 msg="key not found" key=phi3.attention.key_length default=128
time=2025-06-15T07:09:05.498Z level=DEBUG source=ggml.go:155 msg="key not found" key=phi3.attention.value_length default=128
time=2025-06-15T07:09:05.498Z level=DEBUG source=ggml.go:155 msg="key not found" key=phi3.attention.key_length default=128
time=2025-06-15T07:09:05.498Z level=DEBUG source=ggml.go:155 msg="key not found" key=phi3.attention.value_length default=128
time=2025-06-15T07:09:05.498Z level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-3c168af1dea0a414299c7d9077e100ac763370e5a98b3c53801a958a47f0a5db gpu=GPU-313404ec3196f63d parallel=2 available=19373862912 required="3.2 GiB"
time=2025-06-15T07:09:05.499Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="94.3 GiB" before.free="86.5 GiB" before.free_swap="8.0 GiB" now.total="94.3 GiB" now.free="86.5 GiB" now.free_swap="8.0 GiB"
time=2025-06-15T07:09:05.499Z level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-313404ec3196f63d name=1002:744c before="18.0 GiB" now="18.0 GiB"
time=2025-06-15T07:09:05.499Z level=INFO source=server.go:135 msg="system memory" total="94.3 GiB" free="86.5 GiB" free_swap="8.0 GiB"
time=2025-06-15T07:09:05.499Z level=DEBUG source=memory.go:111 msg=evaluating library=rocm gpu_count=1 available="[18.0 GiB]"
time=2025-06-15T07:09:05.499Z level=DEBUG source=ggml.go:155 msg="key not found" key=phi3.vision.block_count default=0
time=2025-06-15T07:09:05.499Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="94.3 GiB" before.free="86.5 GiB" before.free_swap="8.0 GiB" now.total="94.3 GiB" now.free="86.5 GiB" now.free_swap="8.0 GiB"
time=2025-06-15T07:09:05.499Z level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-313404ec3196f63d name=1002:744c before="18.0 GiB" now="18.0 GiB"
time=2025-06-15T07:09:05.499Z level=DEBUG source=ggml.go:155 msg="key not found" key=phi3.attention.key_length default=128
time=2025-06-15T07:09:05.499Z level=DEBUG source=ggml.go:155 msg="key not found" key=phi3.attention.value_length default=128
time=2025-06-15T07:09:05.499Z level=DEBUG source=ggml.go:155 msg="key not found" key=phi3.attention.key_length default=128
time=2025-06-15T07:09:05.499Z level=DEBUG source=ggml.go:155 msg="key not found" key=phi3.attention.value_length default=128
time=2025-06-15T07:09:05.499Z level=INFO source=server.go:168 msg=offload library=rocm layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[18.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.2 GiB" memory.required.partial="3.2 GiB" memory.required.kv="256.0 MiB" memory.required.allocations="[3.2 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="480.8 MiB" memory.graph.full="128.0 MiB" memory.graph.partial="128.0 MiB"
time=2025-06-15T07:09:05.499Z level=DEBUG source=ggml.go:155 msg="key not found" key=phi3.attention.key_length default=128
time=2025-06-15T07:09:05.499Z level=DEBUG source=ggml.go:155 msg="key not found" key=phi3.attention.value_length default=128
time=2025-06-15T07:09:05.499Z level=INFO source=server.go:211 msg="enabling flash attention"
time=2025-06-15T07:09:05.499Z level=DEBUG source=server.go:284 msg="compatible gpu libraries" compatible=[rocm]
llama_model_loader: loaded meta data with 36 key-value pairs and 196 tensors from /root/.ollama/models/blobs/sha256-3c168af1dea0a414299c7d9077e100ac763370e5a98b3c53801a958a47f0a5db (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = phi3
llama_model_loader: - kv   1:              phi3.rope.scaling.attn_factor f32              = 1.190238
llama_model_loader: - kv   2:                               general.type str              = model
llama_model_loader: - kv   3:                               general.name str              = Phi 4 Mini Instruct
llama_model_loader: - kv   4:                           general.finetune str              = instruct
llama_model_loader: - kv   5:                           general.basename str              = Phi-4
llama_model_loader: - kv   6:                         general.size_label str              = mini
llama_model_loader: - kv   7:                            general.license str              = mit
llama_model_loader: - kv   8:                       general.license.link str              = https://huggingface.co/microsoft/Phi-...
llama_model_loader: - kv   9:                               general.tags arr[str,3]       = ["nlp", "code", "text-generation"]
llama_model_loader: - kv  10:                          general.languages arr[str,24]      = ["multilingual", "ar", "zh", "cs", "d...
llama_model_loader: - kv  11:                        phi3.context_length u32              = 131072
llama_model_loader: - kv  12:  phi3.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  13:                      phi3.embedding_length u32              = 3072
llama_model_loader: - kv  14:                   phi3.feed_forward_length u32              = 8192
llama_model_loader: - kv  15:                           phi3.block_count u32              = 32
llama_model_loader: - kv  16:                  phi3.attention.head_count u32              = 24
llama_model_loader: - kv  17:               phi3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  18:      phi3.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  19:                  phi3.rope.dimension_count u32              = 96
llama_model_loader: - kv  20:                        phi3.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  21:              phi3.attention.sliding_window u32              = 262144
llama_model_loader: - kv  22:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  23:                         tokenizer.ggml.pre str              = gpt-4o
llama_model_loader: - kv  24:                      tokenizer.ggml.tokens arr[str,200064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  25:                  tokenizer.ggml.token_type arr[i32,200064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  26:                      tokenizer.ggml.merges arr[str,199742]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "e r", ...
llama_model_loader: - kv  27:                tokenizer.ggml.bos_token_id u32              = 199999
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 199999
llama_model_loader: - kv  29:            tokenizer.ggml.unknown_token_id u32              = 199999
llama_model_loader: - kv  30:            tokenizer.ggml.padding_token_id u32              = 199999
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  32:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  33:                    tokenizer.chat_template str              = {% for message in messages %}{% if me...
llama_model_loader: - kv  34:               general.quantization_version u32              = 2
llama_model_loader: - kv  35:                          general.file_type u32              = 15
llama_model_loader: - type  f32:   67 tensors
llama_model_loader: - type q4_K:   80 tensors
llama_model_loader: - type q5_K:   32 tensors
llama_model_loader: - type q6_K:   17 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 2.31 GiB (5.18 BPW) 
init_tokenizer: initializing tokenizer for type 2
load: control token: 200024 '<|/tool|>' is not marked as EOG
load: control token: 200023 '<|tool|>' is not marked as EOG
load: control token: 200022 '<|system|>' is not marked as EOG
load: control token: 200021 '<|user|>' is not marked as EOG
load: control token: 200025 '<|tool_call|>' is not marked as EOG
load: control token: 200027 '<|tool_response|>' is not marked as EOG
load: control token: 200028 '<|tag|>' is not marked as EOG
load: control token: 200026 '<|/tool_call|>' is not marked as EOG
load: control token: 200018 '<|endofprompt|>' is not marked as EOG
load: control token: 200019 '<|assistant|>' is not marked as EOG
load: special tokens cache size = 12
load: token to piece cache size = 1.3333 MB
print_info: arch             = phi3
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 3.84 B
print_info: general.name     = Phi 4 Mini Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 200064
print_info: n_merges         = 199742
print_info: BOS token        = 199999 '<|endoftext|>'
print_info: EOS token        = 199999 '<|endoftext|>'
print_info: EOT token        = 199999 '<|endoftext|>'
print_info: UNK token        = 199999 '<|endoftext|>'
print_info: PAD token        = 199999 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 199999 '<|endoftext|>'
print_info: EOG token        = 200020 '<|end|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-06-15T07:09:05.685Z level=DEBUG source=server.go:360 msg="adding gpu library" path=/usr/lib/ollama/rocm
time=2025-06-15T07:09:05.685Z level=DEBUG source=server.go:367 msg="adding gpu dependency paths" paths=[/usr/lib/ollama/rocm]
time=2025-06-15T07:09:05.685Z level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-3c168af1dea0a414299c7d9077e100ac763370e5a98b3c53801a958a47f0a5db --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --threads 8 --flash-attn --kv-cache-type q4_0 --parallel 2 --port 36267"
time=2025-06-15T07:09:05.685Z level=DEBUG source=server.go:432 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/lib/ollama/rocm:/usr/lib/ollama/rocm:/usr/lib/ollama:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib/ollama OLLAMA_KV_CACHE_TYPE=q4_0 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_DEBUG=1 OLLAMA_FLASH_ATTENTION=1 OLLAMA_MAX_LOADED_MODELS=3 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm ROCR_VISIBLE_DEVICES=GPU-313404ec3196f63d
time=2025-06-15T07:09:05.686Z level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-15T07:09:05.686Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-15T07:09:05.686Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
time=2025-06-15T07:09:05.692Z level=INFO source=runner.go:815 msg="starting go runner"
time=2025-06-15T07:09:05.692Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-skylakex.so
time=2025-06-15T07:09:05.694Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/rocm
/opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon Graphics, gfx1100 (0x1100), VMM: no, Wave Size: 32
load_backend: loaded ROCm backend from /usr/lib/ollama/rocm/libggml-hip.so
time=2025-06-15T07:09:06.406Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon Graphics) - 20394 MiB free
time=2025-06-15T07:09:06.406Z level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:36267"
llama_model_loader: loaded meta data with 36 key-value pairs and 196 tensors from /root/.ollama/models/blobs/sha256-3c168af1dea0a414299c7d9077e100ac763370e5a98b3c53801a958a47f0a5db (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = phi3
llama_model_loader: - kv   1:              phi3.rope.scaling.attn_factor f32              = 1.190238
llama_model_loader: - kv   2:                               general.type str              = model
llama_model_loader: - kv   3:                               general.name str              = Phi 4 Mini Instruct
llama_model_loader: - kv   4:                           general.finetune str              = instruct
llama_model_loader: - kv   5:                           general.basename str              = Phi-4
llama_model_loader: - kv   6:                         general.size_label str              = mini
llama_model_loader: - kv   7:                            general.license str              = mit
llama_model_loader: - kv   8:                       general.license.link str              = https://huggingface.co/microsoft/Phi-...
llama_model_loader: - kv   9:                               general.tags arr[str,3]       = ["nlp", "code", "text-generation"]
llama_model_loader: - kv  10:                          general.languages arr[str,24]      = ["multilingual", "ar", "zh", "cs", "d...
llama_model_loader: - kv  11:                        phi3.context_length u32              = 131072
llama_model_loader: - kv  12:  phi3.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  13:                      phi3.embedding_length u32              = 3072
llama_model_loader: - kv  14:                   phi3.feed_forward_length u32              = 8192
llama_model_loader: - kv  15:                           phi3.block_count u32              = 32
llama_model_loader: - kv  16:                  phi3.attention.head_count u32              = 24
llama_model_loader: - kv  17:               phi3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  18:      phi3.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  19:                  phi3.rope.dimension_count u32              = 96
llama_model_loader: - kv  20:                        phi3.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  21:              phi3.attention.sliding_window u32              = 262144
llama_model_loader: - kv  22:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  23:                         tokenizer.ggml.pre str              = gpt-4o
time=2025-06-15T07:09:06.439Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  24:                      tokenizer.ggml.tokens arr[str,200064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  25:                  tokenizer.ggml.token_type arr[i32,200064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  26:                      tokenizer.ggml.merges arr[str,199742]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "e r", ...
llama_model_loader: - kv  27:                tokenizer.ggml.bos_token_id u32              = 199999
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 199999
llama_model_loader: - kv  29:            tokenizer.ggml.unknown_token_id u32              = 199999
llama_model_loader: - kv  30:            tokenizer.ggml.padding_token_id u32              = 199999
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  32:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  33:                    tokenizer.chat_template str              = {% for message in messages %}{% if me...
llama_model_loader: - kv  34:               general.quantization_version u32              = 2
llama_model_loader: - kv  35:                          general.file_type u32              = 15
llama_model_loader: - type  f32:   67 tensors
llama_model_loader: - type q4_K:   80 tensors
llama_model_loader: - type q5_K:   32 tensors
llama_model_loader: - type q6_K:   17 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 2.31 GiB (5.18 BPW) 
init_tokenizer: initializing tokenizer for type 2
load: control token: 200024 '<|/tool|>' is not marked as EOG
load: control token: 200023 '<|tool|>' is not marked as EOG
load: control token: 200022 '<|system|>' is not marked as EOG
load: control token: 200021 '<|user|>' is not marked as EOG
load: control token: 200025 '<|tool_call|>' is not marked as EOG
load: control token: 200027 '<|tool_response|>' is not marked as EOG
load: control token: 200028 '<|tag|>' is not marked as EOG
load: control token: 200026 '<|/tool_call|>' is not marked as EOG
load: control token: 200018 '<|endofprompt|>' is not marked as EOG
load: control token: 200019 '<|assistant|>' is not marked as EOG
load: special tokens cache size = 12
load: token to piece cache size = 1.3333 MB
print_info: arch             = phi3
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 3072
print_info: n_layer          = 32
print_info: n_head           = 24
print_info: n_head_kv        = 8
print_info: n_rot            = 96
print_info: n_swa            = 262144
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 3
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 8192
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 4096
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 3B
print_info: model params     = 3.84 B
print_info: general.name     = Phi 4 Mini Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 200064
print_info: n_merges         = 199742
print_info: BOS token        = 199999 '<|endoftext|>'
print_info: EOS token        = 199999 '<|endoftext|>'
print_info: EOT token        = 199999 '<|endoftext|>'
print_info: UNK token        = 199999 '<|endoftext|>'
print_info: PAD token        = 199999 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 199999 '<|endoftext|>'
print_info: EOG token        = 200020 '<|end|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: layer   0 assigned to device ROCm0, is_swa = 0
load_tensors: layer   1 assigned to device ROCm0, is_swa = 0
load_tensors: layer   2 assigned to device ROCm0, is_swa = 0
load_tensors: layer   3 assigned to device ROCm0, is_swa = 0
load_tensors: layer   4 assigned to device ROCm0, is_swa = 0
load_tensors: layer   5 assigned to device ROCm0, is_swa = 0
load_tensors: layer   6 assigned to device ROCm0, is_swa = 0
load_tensors: layer   7 assigned to device ROCm0, is_swa = 0
load_tensors: layer   8 assigned to device ROCm0, is_swa = 0
load_tensors: layer   9 assigned to device ROCm0, is_swa = 0
load_tensors: layer  10 assigned to device ROCm0, is_swa = 0
load_tensors: layer  11 assigned to device ROCm0, is_swa = 0
load_tensors: layer  12 assigned to device ROCm0, is_swa = 0
load_tensors: layer  13 assigned to device ROCm0, is_swa = 0
load_tensors: layer  14 assigned to device ROCm0, is_swa = 0
load_tensors: layer  15 assigned to device ROCm0, is_swa = 0
load_tensors: layer  16 assigned to device ROCm0, is_swa = 0
load_tensors: layer  17 assigned to device ROCm0, is_swa = 0
load_tensors: layer  18 assigned to device ROCm0, is_swa = 0
load_tensors: layer  19 assigned to device ROCm0, is_swa = 0
load_tensors: layer  20 assigned to device ROCm0, is_swa = 0
load_tensors: layer  21 assigned to device ROCm0, is_swa = 0
load_tensors: layer  22 assigned to device ROCm0, is_swa = 0
load_tensors: layer  23 assigned to device ROCm0, is_swa = 0
load_tensors: layer  24 assigned to device ROCm0, is_swa = 0
load_tensors: layer  25 assigned to device ROCm0, is_swa = 0
load_tensors: layer  26 assigned to device ROCm0, is_swa = 0
load_tensors: layer  27 assigned to device ROCm0, is_swa = 0
load_tensors: layer  28 assigned to device ROCm0, is_swa = 0
load_tensors: layer  29 assigned to device ROCm0, is_swa = 0
load_tensors: layer  30 assigned to device ROCm0, is_swa = 0
load_tensors: layer  31 assigned to device ROCm0, is_swa = 0
load_tensors: layer  32 assigned to device ROCm0, is_swa = 0
load_tensors: tensor 'token_embd.weight' (q6_K) (and 0 others) cannot be used with preferred buffer type ROCm_Host, using CPU instead
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors:        ROCm0 model buffer size =  2368.57 MiB
load_tensors:   CPU_Mapped model buffer size =   480.81 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 2
llama_context: n_ctx         = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 1024
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 1
llama_context: freq_base     = 10000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
set_abort_callback: call
llama_context:  ROCm_Host  output buffer size =     1.55 MiB
create_memory: n_ctx = 8192 (padded)
llama_kv_cache_unified: kv_size = 8192, type_k = 'q4_0', type_v = 'q4_0', n_layer = 32, can_shift = 1, padding = 256
llama_kv_cache_unified: layer   0: dev = ROCm0
llama_kv_cache_unified: layer   1: dev = ROCm0
llama_kv_cache_unified: layer   2: dev = ROCm0
llama_kv_cache_unified: layer   3: dev = ROCm0
llama_kv_cache_unified: layer   4: dev = ROCm0
llama_kv_cache_unified: layer   5: dev = ROCm0
llama_kv_cache_unified: layer   6: dev = ROCm0
llama_kv_cache_unified: layer   7: dev = ROCm0
llama_kv_cache_unified: layer   8: dev = ROCm0
llama_kv_cache_unified: layer   9: dev = ROCm0
llama_kv_cache_unified: layer  10: dev = ROCm0
llama_kv_cache_unified: layer  11: dev = ROCm0
llama_kv_cache_unified: layer  12: dev = ROCm0
llama_kv_cache_unified: layer  13: dev = ROCm0
llama_kv_cache_unified: layer  14: dev = ROCm0
llama_kv_cache_unified: layer  15: dev = ROCm0
llama_kv_cache_unified: layer  16: dev = ROCm0
llama_kv_cache_unified: layer  17: dev = ROCm0
llama_kv_cache_unified: layer  18: dev = ROCm0
llama_kv_cache_unified: layer  19: dev = ROCm0
llama_kv_cache_unified: layer  20: dev = ROCm0
llama_kv_cache_unified: layer  21: dev = ROCm0
llama_kv_cache_unified: layer  22: dev = ROCm0
llama_kv_cache_unified: layer  23: dev = ROCm0
llama_kv_cache_unified: layer  24: dev = ROCm0
llama_kv_cache_unified: layer  25: dev = ROCm0
llama_kv_cache_unified: layer  26: dev = ROCm0
llama_kv_cache_unified: layer  27: dev = ROCm0
llama_kv_cache_unified: layer  28: dev = ROCm0
llama_kv_cache_unified: layer  29: dev = ROCm0
llama_kv_cache_unified: layer  30: dev = ROCm0
llama_kv_cache_unified: layer  31: dev = ROCm0
llama_kv_cache_unified:      ROCm0 KV buffer size =   288.00 MiB
llama_kv_cache_unified: KV self size  =  288.00 MiB, K (q4_0):  144.00 MiB, V (q4_0):  144.00 MiB
llama_context: enumerating backends
llama_context: backend_ptrs.size() = 2
llama_context: max_nodes = 65536
llama_context: worst-case: n_tokens = 512, n_seqs = 1, n_outputs = 0
llama_context: reserving graph for n_tokens = 512, n_seqs = 1
llama_context: reserving graph for n_tokens = 1, n_seqs = 1
llama_context: reserving graph for n_tokens = 512, n_seqs = 1
llama_context:      ROCm0 compute buffer size =   402.75 MiB
llama_context:  ROCm_Host compute buffer size =    22.01 MiB
llama_context: graph nodes  = 1223
llama_context: graph splits = 2
time=2025-06-15T07:09:06.940Z level=INFO source=server.go:630 msg="llama runner started in 1.25 seconds"
time=2025-06-15T07:09:06.940Z level=DEBUG source=sched.go:495 msg="finished setting up" runner.name=registry.ollama.ai/library/phi4-mini:latest runner.inference=rocm runner.devices=1 runner.size="3.2 GiB" runner.vram="3.2 GiB" runner.parallel=2 runner.pid=14 runner.model=/root/.ollama/models/blobs/sha256-3c168af1dea0a414299c7d9077e100ac763370e5a98b3c53801a958a47f0a5db runner.num_ctx=8192
time=2025-06-15T07:09:06.941Z level=ERROR source=routes.go:1530 msg="failed to create tool parser" error="unexpected end of JSON input"
[GIN] 2025/06/15 - 07:09:06 | 500 |  1.474349537s |   192.168.28.79 | POST     "/api/chat"
time=2025-06-15T07:09:06.941Z level=DEBUG source=sched.go:503 msg="context for request finished"
time=2025-06-15T07:09:06.941Z level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/phi4-mini:latest runner.inference=rocm runner.devices=1 runner.size="3.2 GiB" runner.vram="3.2 GiB" runner.parallel=2 runner.pid=14 runner.model=/root/.ollama/models/blobs/sha256-3c168af1dea0a414299c7d9077e100ac763370e5a98b3c53801a958a47f0a5db runner.num_ctx=8192 duration=5m0s
time=2025-06-15T07:09:06.941Z level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/phi4-mini:latest runner.inference=rocm runner.devices=1 runner.size="3.2 GiB" runner.vram="3.2 GiB" runner.parallel=2 runner.pid=14 runner.model=/root/.ollama/models/blobs/sha256-3c168af1dea0a414299c7d9077e100ac763370e5a98b3c53801a958a47f0a5db runner.num_ctx=8192 refCount=0
```
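
As a side note on the message itself: `unexpected end of JSON input` is the exact error Go's `encoding/json` returns when asked to decode empty input, so the `failed to create tool parser` line from `routes.go:1530` above looks like an empty string being handed to a JSON decoder somewhere in the tool-parser setup (a guess from the error text alone, not verified against the ollama source):

```go
// Minimal demonstration that decoding an empty input reproduces the
// exact error string from the log (a hypothetical stand-in for
// whatever the tool parser actually decodes).
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var v any
	err := json.Unmarshal([]byte(""), &v)
	fmt.Println(err) // prints: unexpected end of JSON input
}
```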

### OS

Docker

### GPU

AMD

### CPU

AMD

### Ollama version

0.9.0

llama_kv_cache_unified: layer 11: dev = ROCm0 llama_kv_cache_unified: layer 12: dev = ROCm0 llama_kv_cache_unified: layer 13: dev = ROCm0 llama_kv_cache_unified: layer 14: dev = ROCm0 llama_kv_cache_unified: layer 15: dev = ROCm0 llama_kv_cache_unified: layer 16: dev = ROCm0 llama_kv_cache_unified: layer 17: dev = ROCm0 llama_kv_cache_unified: layer 18: dev = ROCm0 llama_kv_cache_unified: layer 19: dev = ROCm0 llama_kv_cache_unified: layer 20: dev = ROCm0 llama_kv_cache_unified: layer 21: dev = ROCm0 llama_kv_cache_unified: layer 22: dev = ROCm0 llama_kv_cache_unified: layer 23: dev = ROCm0 llama_kv_cache_unified: layer 24: dev = ROCm0 llama_kv_cache_unified: layer 25: dev = ROCm0 llama_kv_cache_unified: layer 26: dev = ROCm0 llama_kv_cache_unified: layer 27: dev = ROCm0 llama_kv_cache_unified: layer 28: dev = ROCm0 llama_kv_cache_unified: layer 29: dev = ROCm0 llama_kv_cache_unified: layer 30: dev = ROCm0 llama_kv_cache_unified: layer 31: dev = ROCm0 llama_kv_cache_unified: ROCm0 KV buffer size = 288.00 MiB llama_kv_cache_unified: KV self size = 288.00 MiB, K (q4_0): 144.00 MiB, V (q4_0): 144.00 MiB llama_context: enumerating backends llama_context: backend_ptrs.size() = 2 llama_context: max_nodes = 65536 llama_context: worst-case: n_tokens = 512, n_seqs = 1, n_outputs = 0 llama_context: reserving graph for n_tokens = 512, n_seqs = 1 llama_context: reserving graph for n_tokens = 1, n_seqs = 1 llama_context: reserving graph for n_tokens = 512, n_seqs = 1 llama_context: ROCm0 compute buffer size = 402.75 MiB llama_context: ROCm_Host compute buffer size = 22.01 MiB llama_context: graph nodes = 1223 llama_context: graph splits = 2 time=2025-06-15T07:09:06.940Z level=INFO source=server.go:630 msg="llama runner started in 1.25 seconds" time=2025-06-15T07:09:06.940Z level=DEBUG source=sched.go:495 msg="finished setting up" runner.name=registry.ollama.ai/library/phi4-mini:latest runner.inference=rocm runner.devices=1 runner.size="3.2 GiB" runner.vram="3.2 GiB" runner.parallel=2 runner.pid=14 runner.model=/root/.ollama/models/blobs/sha256-3c168af1dea0a414299c7d9077e100ac763370e5a98b3c53801a958a47f0a5db runner.num_ctx=8192 time=2025-06-15T07:09:06.941Z level=ERROR source=routes.go:1530 msg="failed to create tool parser" error="unexpected end of JSON input" [GIN] 2025/06/15 - 07:09:06 | 500 | 1.474349537s | 192.168.28.79 | POST "/api/chat" time=2025-06-15T07:09:06.941Z level=DEBUG source=sched.go:503 msg="context for request finished" time=2025-06-15T07:09:06.941Z level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/phi4-mini:latest runner.inference=rocm runner.devices=1 runner.size="3.2 GiB" runner.vram="3.2 GiB" runner.parallel=2 runner.pid=14 runner.model=/root/.ollama/models/blobs/sha256-3c168af1dea0a414299c7d9077e100ac763370e5a98b3c53801a958a47f0a5db runner.num_ctx=8192 duration=5m0s time=2025-06-15T07:09:06.941Z level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/phi4-mini:latest runner.inference=rocm runner.devices=1 runner.size="3.2 GiB" runner.vram="3.2 GiB" runner.parallel=2 runner.pid=14 runner.model=/root/.ollama/models/blobs/sha256-3c168af1dea0a414299c7d9077e100ac763370e5a98b3c53801a958a47f0a5db runner.num_ctx=8192 refCount=0 ``` ### OS Docker ### GPU AMD ### CPU AMD ### Ollama version 0.9.0
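For context on the error text itself: "unexpected end of JSON input" is the message Go's standard `encoding/json` package returns when asked to unmarshal empty input, which is consistent with the tool-parser setup in routes.go receiving an empty JSON blob for this model. A minimal standalone sketch reproducing the message (illustrative only, not Ollama's actual parser code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Unmarshalling an empty byte slice yields the exact error text
	// seen in the log above ("unexpected end of JSON input").
	var v map[string]any
	err := json.Unmarshal([]byte{}, &v)
	fmt.Println(err) // unexpected end of JSON input
}
```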
GiteaMirror added the bug label 2026-05-04 17:55:16 -05:00
@icefairy64 commented on GitHub (Jun 15, 2025):

Closing as duplicate of #9437.

Reference: github-starred/ollama#69367