[GH-ISSUE #10805] Ollama fails to load model vocabulary with minified JSON #69155

Closed
opened 2026-05-04 17:18:03 -05:00 by GiteaMirror · 2 comments

Originally created by @matc1294 on GitHub (May 21, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10805

What is the issue?

I'm having an issue while attempting a chat completion with a minified JSON request body.

When running a curl like this:
curl -X POST http://localhost:11434/api/generate -H "Content-Type: application/json" -d @test.json
with this request body: test.json (https://github.com/user-attachments/files/20373378/test.json)

I'm getting this response:

{"error":"failed to load model vocabulary required for format\n"}

But when the same request is sent with the pretty-printed version of the JSON, it works with no problem.
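
For illustration, here is a stand-in for the attached test.json of roughly the shape that triggers the error (the attachment itself is not reproduced here, so the prompt and the stream flag are placeholders; the model name and the "format" schema are taken from the server log below). The key detail is that the whole body, including the format schema, is minified onto a single line:

# Hypothetical stand-in for the attached test.json: the entire request body
# is minified onto one line. The prompt is a placeholder; the model name and
# "format" schema match the completion-request line in the log below.
cat > test.json <<'EOF'
{"model":"gemma3:12b-it-qat","prompt":"...","stream":false,"format":{"type":"object","properties":{"is_bio":{"type":"boolean"},"confidence":{"type":"integer"},"bio_type":{"type":"string"},"key_elements":{"type":"object","properties":{"has_experience":{"type":"boolean"},"has_education":{"type":"boolean"},"has_skills":{"type":"boolean"},"has_achievements":{"type":"boolean"}}}},"required":["is_bio","confidence","bio_type"]}}
EOF
curl -X POST http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d @test.json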

Relevant log output

time=2025-05-21T17:50:02.697Z level=INFO source=routes.go:1205 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:1m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:2 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:true OLLAMA_NOPRUNE:true OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-05-21T17:50:02.698Z level=INFO source=routes.go:1258 msg="Listening on [::]:11434 (version 0.7.0)"
time=2025-05-21T17:50:02.698Z level=DEBUG source=sched.go:108 msg="starting llm scheduler"
time=2025-05-21T17:50:02.698Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-05-21T17:50:02.698Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-05-21T17:50:02.698Z level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-05-21T17:50:02.698Z level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-05-21T17:50:02.699Z level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.570.133.20]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.133.20
dlsym: cuInit - 0x7c1c93d0fe70
dlsym: cuDriverGetVersion - 0x7c1c93d0fe90
dlsym: cuDeviceGetCount - 0x7c1c93d0fed0
dlsym: cuDeviceGet - 0x7c1c93d0feb0
dlsym: cuDeviceGetAttribute - 0x7c1c93d0ffb0
dlsym: cuDeviceGetUuid - 0x7c1c93d0ff10
dlsym: cuDeviceGetName - 0x7c1c93d0fef0
dlsym: cuCtxCreate_v3 - 0x7c1c93d10190
dlsym: cuMemGetInfo_v2 - 0x7c1c93d10910
dlsym: cuCtxDestroy - 0x7c1c93d6eab0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
time=2025-05-21T17:50:02.710Z level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.570.133.20
[GPU-9adf377f-3cf1-90a2-6738-a7f99afdd19c] CUDA totalMem 22574mb
[GPU-9adf377f-3cf1-90a2-6738-a7f99afdd19c] CUDA freeMem 21491mb
[GPU-9adf377f-3cf1-90a2-6738-a7f99afdd19c] Compute Capability 8.9
time=2025-05-21T17:50:02.866Z level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2025-05-21T17:50:02.866Z level=INFO source=types.go:130 msg="inference compute" id=GPU-9adf377f-3cf1-90a2-6738-a7f99afdd19c library=cuda variant=v12 compute=8.9 driver=12.8 name="NVIDIA L4" total="22.0 GiB" available="21.0 GiB"
time=2025-05-21T17:59:56.668Z level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-21T17:59:56.669Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="15.0 GiB" before.free="12.7 GiB" before.free_swap="0 B" now.total="15.0 GiB" now.free="12.8 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.133.20
dlsym: cuInit - 0x7c1c93d0fe70
dlsym: cuDriverGetVersion - 0x7c1c93d0fe90
dlsym: cuDeviceGetCount - 0x7c1c93d0fed0
dlsym: cuDeviceGet - 0x7c1c93d0feb0
dlsym: cuDeviceGetAttribute - 0x7c1c93d0ffb0
dlsym: cuDeviceGetUuid - 0x7c1c93d0ff10
dlsym: cuDeviceGetName - 0x7c1c93d0fef0
dlsym: cuCtxCreate_v3 - 0x7c1c93d10190
dlsym: cuMemGetInfo_v2 - 0x7c1c93d10910
dlsym: cuCtxDestroy - 0x7c1c93d6eab0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
time=2025-05-21T17:59:56.823Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-9adf377f-3cf1-90a2-6738-a7f99afdd19c name="NVIDIA L4" overhead="0 B" before.total="22.0 GiB" before.free="21.0 GiB" now.total="22.0 GiB" now.free="21.0 GiB" now.used="1.1 GiB"
releasing cuda driver library
time=2025-05-21T17:59:56.847Z level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-21T17:59:56.871Z level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-21T17:59:56.873Z level=DEBUG source=sched.go:228 msg="loading first model" model=/root/.ollama/models/blobs/sha256-1fb99eda86dc48a736567406253769fdc75f01e65cde7c65fa5563e4bdf156e0
time=2025-05-21T17:59:56.873Z level=DEBUG source=memory.go:111 msg=evaluating library=cuda gpu_count=1 available="[21.0 GiB]"
time=2025-05-21T17:59:56.874Z level=INFO source=sched.go:777 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-1fb99eda86dc48a736567406253769fdc75f01e65cde7c65fa5563e4bdf156e0 gpu=GPU-9adf377f-3cf1-90a2-6738-a7f99afdd19c parallel=2 available=22534946816 required="16.2 GiB"
time=2025-05-21T17:59:56.874Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="15.0 GiB" before.free="12.8 GiB" before.free_swap="0 B" now.total="15.0 GiB" now.free="12.8 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.133.20
dlsym: cuInit - 0x7c1c93d0fe70
dlsym: cuDriverGetVersion - 0x7c1c93d0fe90
dlsym: cuDeviceGetCount - 0x7c1c93d0fed0
dlsym: cuDeviceGet - 0x7c1c93d0feb0
dlsym: cuDeviceGetAttribute - 0x7c1c93d0ffb0
dlsym: cuDeviceGetUuid - 0x7c1c93d0ff10
dlsym: cuDeviceGetName - 0x7c1c93d0fef0
dlsym: cuCtxCreate_v3 - 0x7c1c93d10190
dlsym: cuMemGetInfo_v2 - 0x7c1c93d10910
dlsym: cuCtxDestroy - 0x7c1c93d6eab0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
time=2025-05-21T17:59:57.026Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-9adf377f-3cf1-90a2-6738-a7f99afdd19c name="NVIDIA L4" overhead="0 B" before.total="22.0 GiB" before.free="21.0 GiB" now.total="22.0 GiB" now.free="21.0 GiB" now.used="1.1 GiB"
releasing cuda driver library
time=2025-05-21T17:59:57.026Z level=INFO source=server.go:135 msg="system memory" total="15.0 GiB" free="12.8 GiB" free_swap="0 B"
time=2025-05-21T17:59:57.026Z level=DEBUG source=memory.go:111 msg=evaluating library=cuda gpu_count=1 available="[21.0 GiB]"
time=2025-05-21T17:59:57.029Z level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[21.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="16.2 GiB" memory.required.partial="16.2 GiB" memory.required.kv="3.2 GiB" memory.required.allocations="[16.2 GiB]" memory.weights.total="7.5 GiB" memory.weights.repeating="5.6 GiB" memory.weights.nonrepeating="1.9 GiB" memory.graph.full="1.3 GiB" memory.graph.partial="1.6 GiB" projector.weights="806.2 MiB" projector.graph="1.0 GiB"
time=2025-05-21T17:59:57.029Z level=DEBUG source=server.go:284 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
time=2025-05-21T17:59:57.063Z level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-21T17:59:57.064Z level=DEBUG source=ggml.go:154 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-21T17:59:57.068Z level=DEBUG source=ggml.go:154 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-21T17:59:57.068Z level=DEBUG source=ggml.go:154 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-21T17:59:57.068Z level=DEBUG source=ggml.go:154 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-21T17:59:57.068Z level=DEBUG source=ggml.go:154 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-21T17:59:57.068Z level=DEBUG source=server.go:360 msg="adding gpu library" path=/usr/lib/ollama/cuda_v12
time=2025-05-21T17:59:57.068Z level=DEBUG source=server.go:367 msg="adding gpu dependency paths" paths=[/usr/lib/ollama/cuda_v12]
time=2025-05-21T17:59:57.069Z level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-1fb99eda86dc48a736567406253769fdc75f01e65cde7c65fa5563e4bdf156e0 --ctx-size 40000 --batch-size 512 --n-gpu-layers 49 --threads 2 --no-mmap --parallel 2 --port 38949"
time=2025-05-21T17:59:57.069Z level=DEBUG source=server.go:432 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE="\"1m\"" OLLAMA_MAX_LOADED_MODELS=2 OLLAMA_NOHISTORY=1 OLLAMA_NOPRUNE=1 OLLAMA_NUM_PARALLEL=2 OLLAMA_ORIGINS=* LD_LIBRARY_PATH=/usr/lib/ollama/cuda_v12:/usr/lib/ollama/cuda_v12:/usr/lib/ollama:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib/ollama OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12 CUDA_VISIBLE_DEVICES=GPU-9adf377f-3cf1-90a2-6738-a7f99afdd19c
time=2025-05-21T17:59:57.069Z level=INFO source=sched.go:472 msg="loaded runners" count=1
time=2025-05-21T17:59:57.069Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-05-21T17:59:57.069Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
time=2025-05-21T17:59:57.081Z level=INFO source=runner.go:836 msg="starting ollama engine"
time=2025-05-21T17:59:57.082Z level=INFO source=runner.go:899 msg="Server listening on 127.0.0.1:38949"
time=2025-05-21T17:59:57.116Z level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-21T17:59:57.117Z level=DEBUG source=ggml.go:154 msg="key not found" key=general.name default=""
time=2025-05-21T17:59:57.117Z level=DEBUG source=ggml.go:154 msg="key not found" key=general.description default=""
time=2025-05-21T17:59:57.117Z level=INFO source=ggml.go:73 msg="" architecture=gemma3 file_type=Q4_0 name="" description="" num_tensors=1065 num_key_values=40
time=2025-05-21T17:59:57.117Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
time=2025-05-21T17:59:57.123Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v12
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA L4, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2025-05-21T17:59:57.197Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-05-21T17:59:57.297Z level=INFO source=ggml.go:299 msg="model weights" buffer=CPU size="1.9 GiB"
time=2025-05-21T17:59:57.297Z level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA0 size="8.3 GiB"
time=2025-05-21T17:59:57.321Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
time=2025-05-21T17:59:57.321Z level=DEBUG source=server.go:636 msg="model load progress 0.02"
time=2025-05-21T17:59:57.574Z level=DEBUG source=server.go:636 msg="model load progress 0.22"
time=2025-05-21T17:59:57.825Z level=DEBUG source=server.go:636 msg="model load progress 0.41"
time=2025-05-21T17:59:58.076Z level=DEBUG source=server.go:636 msg="model load progress 0.61"
time=2025-05-21T17:59:58.329Z level=DEBUG source=server.go:636 msg="model load progress 0.80"
time=2025-05-21T17:59:58.581Z level=DEBUG source=server.go:636 msg="model load progress 0.90"
time=2025-05-21T17:59:58.831Z level=DEBUG source=server.go:636 msg="model load progress 0.94"
time=2025-05-21T17:59:59.082Z level=DEBUG source=server.go:636 msg="model load progress 0.99"
time=2025-05-21T17:59:59.169Z level=DEBUG source=ggml.go:154 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-21T17:59:59.172Z level=DEBUG source=ggml.go:154 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-21T17:59:59.172Z level=DEBUG source=ggml.go:154 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-21T17:59:59.172Z level=DEBUG source=ggml.go:154 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-21T17:59:59.172Z level=DEBUG source=ggml.go:154 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-21T17:59:59.327Z level=DEBUG source=ggml.go:553 msg="compute graph" nodes=2119 splits=2
time=2025-05-21T17:59:59.327Z level=INFO source=ggml.go:556 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.3 GiB"
time=2025-05-21T17:59:59.327Z level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="7.5 MiB"
time=2025-05-21T17:59:59.333Z level=INFO source=server.go:630 msg="llama runner started in 2.26 seconds"
time=2025-05-21T17:59:59.333Z level=DEBUG source=sched.go:484 msg="finished setting up" runner.name=registry.ollama.ai/library/gemma3:12b-it-qat runner.inference=cuda runner.devices=1 runner.size="16.2 GiB" runner.vram="16.2 GiB" runner.parallel=2 runner.pid=20 runner.model=/root/.ollama/models/blobs/sha256-1fb99eda86dc48a736567406253769fdc75f01e65cde7c65fa5563e4bdf156e0 runner.num_ctx=40000
time=2025-05-21T17:59:59.334Z level=DEBUG source=server.go:729 msg="completion request" images=0 prompt=1234 format="{\"type\":\"object\",\"properties\":{\"is_bio\":{\"type\":\"boolean\"},\"confidence\":{\"type\":\"integer\"},\"bio_type\":{\"type\":\"string\"},\"key_elements\":{\"type\":\"object\",\"properties\":{\"has_experience\":{\"type\":\"boolean\"},\"has_education\":{\"type\":\"boolean\"},\"has_skills\":{\"type\":\"boolean\"},\"has_achievements\":{\"type\":\"boolean\"}}}},\"required\":[\"is_bio\",\"confidence\",\"bio_type\"]}"
parse: error parsing grammar: expecting ::= at 
root ::= "{" space is-bio-kv "," space confidence-kv "," space bio-type-kv ( "," space ( key-elements-kv ) )? "}" space
key-elements-kv ::= "\"key_elements\"" space ":" space key-elements
key-elements-has-experience-rest ::= ( "," space key-elements-has-education-kv )? key-elements-has-education-rest
boolean ::= ("true" | "false") space
integral-part ::= [0] | [1-9] [0-9]{0,15}
is-bio-kv ::= "\"is_bio\"" space ":" space boolean
integer ::= ("-"? integral-part) space
key-elements-has-education-kv ::= "\"has_education\"" space ":" space boolean
key-elements-has-experience-kv ::= "\"has_experience\"" space ":" space boolean
confidence-kv ::= "\"confidence\"" space ":" space integer
string ::= "\"" char* "\"" space
char ::= [^"\\\x7F\x00-\x1F] | [\\] (["\\bfnrt] | "u" [0-9a-fA-F]{4})
space ::= | " " | "\n"{1,2} [ \t]{0,20}
bio-type-kv ::= "\"bio_type\"" space ":" space string
key-elements ::= "{" space  (key-elements-has-experience-kv key-elements-has-experience-rest | key-elements-has-education-kv key-elements-has-education-rest | key-elements-has-skills-kv key-elements-has-skills-rest | key-elements-has-achievements-kv )? "}" space
key-elements-has-skills-kv ::= "\"has_skills\"" space ":" space boolean
key-elements-has-education-rest ::= ( "," space key-elements-has-skills-kv )? key-elements-has-skills-rest
key-elements-has-skills-rest ::= ( "," space key-elements-has-achievements-kv )?
key-elements-ha
llama_grammar_init_impl: failed to parse grammar
grammar_init: failed to initialize grammar
time=2025-05-21T17:59:59.462Z level=INFO source=server.go:809 msg="llm predict error: failed to load model vocabulary required for format"
[GIN] 2025/05/21 - 17:59:59 | 500 |  2.819479755s |   208.102.222.8 | POST     "/api/generate"
time=2025-05-21T17:59:59.462Z level=DEBUG source=sched.go:492 msg="context for request finished"
time=2025-05-21T17:59:59.462Z level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma3:12b-it-qat runner.inference=cuda runner.devices=1 runner.size="16.2 GiB" runner.vram="16.2 GiB" runner.parallel=2 runner.pid=20 runner.model=/root/.ollama/models/blobs/sha256-1fb99eda86dc48a736567406253769fdc75f01e65cde7c65fa5563e4bdf156e0 runner.num_ctx=40000 duration=1m0s
time=2025-05-21T17:59:59.462Z level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma3:12b-it-qat runner.inference=cuda runner.devices=1 runner.size="16.2 GiB" runner.vram="16.2 GiB" runner.parallel=2 runner.pid=20 runner.model=/root/.ollama/models/blobs/sha256-1fb99eda86dc48a736567406253769fdc75f01e65cde7c65fa5563e4bdf156e0 runner.num_ctx=40000 refCount=0
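
What the log shows: the "format" JSON schema from the completion request is converted into a llama.cpp GBNF grammar, and the grammar text above is cut off mid-rule ("key-elements-ha"), so llama_grammar_init_impl rejects it and the failure surfaces as the unrelated-sounding "failed to load model vocabulary required for format" error. Since the reporter notes the pretty-printed body works, a workaround sketch (assuming jq is available) is to re-serialize the body before sending:

# Workaround sketch: pretty-print the request body first, since the
# pretty-printed form of the same request is reported to work.
jq . test.json > test.pretty.json
curl -X POST http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d @test.pretty.json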

OS

Docker

GPU

Nvidia

CPU

AMD

Ollama version

0.7.0

GiteaMirror added the bug label 2026-05-04 17:18:03 -05:00

@rick-github commented on GitHub (May 21, 2025):

#10799


@rick-github commented on GitHub (May 23, 2025):

Fixed by #10820

Reference: github-starred/ollama#69155