[GH-ISSUE #11778] gptoss-20b at 0.11.3 has invalid ggml type 39 #69866

Closed
opened 2026-05-04 19:38:24 -05:00 by GiteaMirror · 5 comments

Originally created by @Kinardus on GitHub (Aug 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11778

What is the issue?

Good day.
I have been using ollama in a secure perimeter (without internet access) for a long time: I always updated according to the manual, downloaded GGUF models from hf.com on another machine and uploaded them via ollama create -f ./Modelfile. It worked always, but not with gpt-oss-20b. I get an error (log attached). I tried both GGUF from unsloth and from bartowski, and original weights safetensors. I see that it works for others - hence the question: what am I doing wrong?

Debian 12
ollama-linux-amd64.tgz
2x4060Ti
Models that I tried from hf.com:

  • bartowski/openai_gpt-oss-20b-GGUF (Q8_0, BF16)
  • lmstudio-community/gpt-oss-20b-GGUF (MXFP4)
  • openai/gpt-oss-20b (safetensors)
  • unsloth/gpt-oss-20b-GGUF (Q8_0, BF16, F16)

Another nuance: I have a separate Windows machine where `ollama pull` worked perfectly and the model starts. I even tried transferring the blob data to Linux - the error is unchanged.

Relevant log output

Aug 07 11:18:18 llms01 systemd[1]: Started ollama.service - Ollama Service.
Aug 07 11:18:18 llms01 ollama[1536]: time=2025-08-07T11:18:18.880+03:00 level=INFO source=routes.go:1297 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Aug 07 11:18:18 llms01 ollama[1536]: time=2025-08-07T11:18:18.881+03:00 level=INFO source=images.go:477 msg="total blobs: 14"
Aug 07 11:18:18 llms01 ollama[1536]: time=2025-08-07T11:18:18.882+03:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
Aug 07 11:18:18 llms01 ollama[1536]: time=2025-08-07T11:18:18.882+03:00 level=INFO source=routes.go:1350 msg="Listening on [::]:11434 (version 0.11.3)"
Aug 07 11:18:18 llms01 ollama[1536]: time=2025-08-07T11:18:18.883+03:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Aug 07 11:18:19 llms01 ollama[1536]: time=2025-08-07T11:18:19.211+03:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-22e769d8-4d23-ead0-425d-77bf7ef299cf library=cuda variant=v12 compute=8.9 driver=12.8 name="NVIDIA GeForce RTX 4060 Ti" total="15.6 GiB" available="15.5 GiB"
Aug 07 11:18:19 llms01 ollama[1536]: time=2025-08-07T11:18:19.211+03:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-ce93c9d2-77e5-5fbe-91ee-a28768fb9bef library=cuda variant=v12 compute=8.9 driver=12.8 name="NVIDIA GeForce RTX 4060 Ti" total="15.6 GiB" available="15.5 GiB"
Aug 07 11:18:21 llms01 ollama[1536]: [GIN] 2025/08/07 - 11:18:21 | 200 |      95.101µs |       127.0.0.1 | HEAD     "/"
Aug 07 11:18:21 llms01 ollama[1536]: [GIN] 2025/08/07 - 11:18:21 | 200 |   49.595413ms |       127.0.0.1 | POST     "/api/show"
Aug 07 11:18:22 llms01 ollama[1536]: time=2025-08-07T11:18:22.183+03:00 level=INFO source=sched.go:802 msg="new model will fit in available VRAM, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-db9d08d2105a0cd9a6b03556595de60656c95df47780a43d0d5e51a2d51f826c library=cuda parallel=1 required="2.5 GiB"
Aug 07 11:18:22 llms01 ollama[1536]: time=2025-08-07T11:18:22.357+03:00 level=INFO source=server.go:135 msg="system memory" total="62.5 GiB" free="61.1 GiB" free_swap="977.0 MiB"
Aug 07 11:18:22 llms01 ollama[1536]: time=2025-08-07T11:18:22.530+03:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=25 layers.split=13,12 memory.available="[15.5 GiB 15.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="2.5 GiB" memory.required.partial="2.5 GiB" memory.required.kv="96.0 MiB" memory.required.allocations="[1.6 GiB 1005.6 MiB]" memory.weights.total="1.2 GiB" memory.weights.repeating="680.5 MiB" memory.weights.nonrepeating="586.8 MiB" memory.graph.full="128.0 MiB" memory.graph.partial="128.0 MiB"
Aug 07 11:18:22 llms01 ollama[1536]: time=2025-08-07T11:18:22.530+03:00 level=INFO source=server.go:218 msg="enabling flash attention"
Aug 07 11:18:22 llms01 ollama[1536]: gguf_init_from_file_impl: tensor 'blk.0.ffn_down_exps.weight' has invalid ggml type 39 (NONE)
Aug 07 11:18:22 llms01 ollama[1536]: gguf_init_from_file_impl: failed to read tensor info
Aug 07 11:18:22 llms01 ollama[1536]: llama_model_load: error loading model: llama_model_loader: failed to load model from /usr/share/ollama/.ollama/models/blobs/sha256-db9d08d2105a0cd9a6b03556595de60656c95df47780a43d0d5e51a2d51f826c
Aug 07 11:18:22 llms01 ollama[1536]: llama_model_load_from_file_impl: failed to load model
Aug 07 11:18:22 llms01 ollama[1536]: time=2025-08-07T11:18:22.587+03:00 level=INFO source=sched.go:453 msg="NewLlamaServer failed" model=/usr/share/ollama/.ollama/models/blobs/sha256-db9d08d2105a0cd9a6b03556595de60656c95df47780a43d0d5e51a2d51f826c error="unable to load model: /usr/share/ollama/.ollama/models/blobs/sha256-db9d08d2105a0cd9a6b03556595de60656c95df47780a43d0d5e51a2d51f826c"
Aug 07 11:18:22 llms01 ollama[1536]: [GIN] 2025/08/07 - 11:18:22 | 500 |  832.548147ms |       127.0.0.1 | POST     "/api/generate"
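
The failing tensor points at a type-enumeration mismatch: in current upstream ggml, type 39 is GGML_TYPE_MXFP4, the native format of gpt-oss's expert weights, and a loader built before that type was added reports it as invalid (NONE). A quick way to check which types the blob actually uses (a sketch, assuming the gguf-py tools from `pip install gguf`):

```shell
# Dump the tensor table of the failing blob and look at the expert tensors.
# gguf-dump is installed by the gguf Python package (pip install gguf).
gguf-dump /usr/share/ollama/.ollama/models/blobs/sha256-db9d08d2105a0cd9a6b03556595de60656c95df47780a43d0d5e51a2d51f826c \
  | grep -i ffn_down_exps
```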

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.11.3

GiteaMirror added the bug label 2026-05-04 19:38:24 -05:00

@rick-github commented on GitHub (Aug 7, 2025):

Can you show the log of failure when you tried the blob data from the Windows machine?


@phonzia commented on GitHub (Aug 7, 2025):

Same here on macOS:

time=2025-08-07T17:39:56.175+08:00 level=INFO source=sched.go:786 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/dx2880/.ollama/models/blobs/sha256-fcbc7ec4c2d1527c3da84b7049e59dc5af065876169216ec518bceab841e73f7 gpu=0 parallel=1 available=52428800000 required="3.3 GiB"
time=2025-08-07T17:39:56.175+08:00 level=INFO source=server.go:135 msg="system memory" total="64.0 GiB" free="24.6 GiB" free_swap="0 B"
time=2025-08-07T17:39:56.175+08:00 level=INFO source=server.go:175 msg=offload library=metal layers.requested=-1 layers.model=25 layers.offload=25 layers.split="" memory.available="[48.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.3 GiB" memory.required.partial="3.3 GiB" memory.required.kv="192.0 MiB" memory.required.allocations="[3.3 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.2 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="256.0 MiB" memory.graph.partial="256.0 MiB"
llama_model_load_from_file_impl: using device Metal (Apple M1 Max) - 49999 MiB free
gguf_init_from_file_impl: tensor 'blk.0.ffn_down_exps.weight' has invalid ggml type 39 (NONE)
gguf_init_from_file_impl: failed to read tensor info
llama_model_load: error loading model: llama_model_loader: failed to load model from /Users/dx2880/.ollama/models/blobs/sha256-fcbc7ec4c2d1527c3da84b7049e59dc5af065876169216ec518bceab841e73f7

llama_model_load_from_file_impl: failed to load model
time=2025-08-07T17:39:56.233+08:00 level=INFO source=sched.go:453 msg="NewLlamaServer failed" model=/Users/dx2880/.ollama/models/blobs/sha256-fcbc7ec4c2d1527c3da84b7049e59dc5af065876169216ec518bceab841e73f7 error="unable to load model: /Users/dx2880/.ollama/models/blobs/sha256-fcbc7ec4c2d1527c3da84b7049e59dc5af065876169216ec518bceab841e73f7"
[GIN] 2025/08/07 - 17:39:56 | 500 | 147.638667ms | 127.0.0.1 | POST "/api/chat"


@Kinardus commented on GitHub (Aug 7, 2025):

Can you show the log of failure when you tried the blob data from the Windows machine?

No, I can't. I just transferred them again and the model started. Very strange, because yesterday I spent the whole day on this. BUT:

  1. Linux rebooted overnight.
  2. This morning I enabled flash attention in the ollama settings.

I don't know which of these helped. The blob from Windows works now; GGUF & safetensors still don't. Here is a fresh log from safetensors:

Aug 07 13:33:30 llms01 ollama[1536]: [GIN] 2025/08/07 - 13:33:30 | 200 | 32.961µs | 127.0.0.1 | HEAD "/"
Aug 07 13:33:30 llms01 ollama[1536]: [GIN] 2025/08/07 - 13:33:30 | 200 | 56.24965ms | 127.0.0.1 | POST "/api/show"
Aug 07 13:33:30 llms01 ollama[1536]: time=2025-08-07T13:33:30.466+03:00 level=WARN source=memory.go:129 msg="model missing blk.0 layer size"
Aug 07 13:33:30 llms01 ollama[1536]: time=2025-08-07T13:33:30.848+03:00 level=INFO source=server.go:135 msg="system memory" total="62.5 GiB" free="40.8 GiB" free_swap="801.0 MiB"
Aug 07 13:33:30 llms01 ollama[1536]: time=2025-08-07T13:33:30.848+03:00 level=WARN source=memory.go:129 msg="model missing blk.0 layer size"
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.042+03:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=24 layers.split=12,12 memory.available="[15.5 GiB 15.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.1 GiB" memory.required.partial="5.1 GiB" memory.required.kv="300.0 MiB" memory.required.allocations="[2.6 GiB 2.6 GiB]" memory.weights.total="0 B" memory.weights.repeating="0 B" memory.weights.nonrepeating="0 B" memory.graph.full="2.0 GiB" memory.graph.partial="2.0 GiB"
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.042+03:00 level=WARN source=server.go:211 msg="flash attention enabled but not supported by model"
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.042+03:00 level=WARN source=server.go:229 msg="quantized kv cache requested but flash attention disabled" type=q8_0
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.068+03:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-30e548d97202721de7d681ad630fcf9754cc7ae20623cce9c6bae8f063fc2edb --ctx-size 8192 --batch-size 512 --n-gpu-layers 24 --threads 8 --parallel 1 --tensor-split 12,12 --port 40159"
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.069+03:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.069+03:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.069+03:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.078+03:00 level=INFO source=runner.go:925 msg="starting ollama engine"
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.079+03:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:40159"
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.111+03:00 level=INFO source=ggml.go:92 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=459 num_key_values=32
Aug 07 13:33:31 llms01 ollama[1536]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Aug 07 13:33:31 llms01 ollama[1536]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Aug 07 13:33:31 llms01 ollama[1536]: ggml_cuda_init: found 2 CUDA devices:
Aug 07 13:33:31 llms01 ollama[1536]: Device 0: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
Aug 07 13:33:31 llms01 ollama[1536]: Device 1: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
Aug 07 13:33:31 llms01 ollama[1536]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/libggml-cuda.so
Aug 07 13:33:31 llms01 ollama[1536]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-alderlake.so
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.211+03:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.272+03:00 level=INFO source=ggml.go:367 msg="offloading 24 repeating layers to GPU"
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.272+03:00 level=INFO source=ggml.go:371 msg="offloading output layer to CPU"
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.272+03:00 level=INFO source=ggml.go:378 msg="offloaded 24/25 layers to GPU"
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.272+03:00 level=INFO source=ggml.go:381 msg="model weights" buffer=CUDA0 size="10.1 GiB"
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.272+03:00 level=INFO source=ggml.go:381 msg="model weights" buffer=CUDA1 size="10.1 GiB"
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.272+03:00 level=INFO source=ggml.go:381 msg="model weights" buffer=CPU size="2.2 GiB"
Aug 07 13:33:31 llms01 ollama[1536]: panic: runtime error: invalid memory address or nil pointer dereference
Aug 07 13:33:31 llms01 ollama[1536]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x55cd99aee726]
Aug 07 13:33:31 llms01 ollama[1536]: goroutine 42 [running]:
Aug 07 13:33:31 llms01 ollama[1536]: github.com/ollama/ollama/ml/nn.(*Embedding).Forward(...)
Aug 07 13:33:31 llms01 ollama[1536]: github.com/ollama/ollama/ml/nn/embedding.go:10
Aug 07 13:33:31 llms01 ollama[1536]: github.com/ollama/ollama/model/models/gptoss.(*Transformer).Forward(0xc0001936b0, {0x55cd9a9a8a90, 0xc0018305c0}, {{0x55cd9a9b36e8, 0xc00182f218}, {0x0, 0x0, 0x0}, {0xc001834800, 0x200, ...}, ...})
Aug 07 13:33:31 llms01 ollama[1536]: github.com/ollama/ollama/model/models/gptoss/model.go:32 +0x46
Aug 07 13:33:31 llms01 ollama[1536]: github.com/ollama/ollama/runner/ollamarunner.(*Server).reserveWorstCaseGraph(0xc000114a20)
Aug 07 13:33:31 llms01 ollama[1536]: github.com/ollama/ollama/runner/ollamarunner/runner.go:821 +0xac5
Aug 07 13:33:31 llms01 ollama[1536]: github.com/ollama/ollama/runner/ollamarunner.(*Server).initModel(0xc000114a20, {0x7ffec69d1c81?, 0x0?}, {0x8, 0x0, 0x18, {0xc0001fd828, 0x2, 0x2}, 0x0}, ...)
Aug 07 13:33:31 llms01 ollama[1536]: github.com/ollama/ollama/runner/ollamarunner/runner.go:865 +0x270
Aug 07 13:33:31 llms01 ollama[1536]: github.com/ollama/ollama/runner/ollamarunner.(*Server).load(0xc000114a20, {0x55cd9a9a0790, 0xc0004f03c0}, {0x7ffec69d1c81?, 0x0?}, {0x8, 0x0, 0x18, {0xc0001fd828, 0x2, ...}, ...}, ...)
Aug 07 13:33:31 llms01 ollama[1536]: github.com/ollama/ollama/runner/ollamarunner/runner.go:878 +0xb8
Aug 07 13:33:31 llms01 ollama[1536]: created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
Aug 07 13:33:31 llms01 ollama[1536]: github.com/ollama/ollama/runner/ollamarunner/runner.go:959 +0xa11
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.343+03:00 level=ERROR source=server.go:464 msg="llama runner terminated" error="exit status 2"
Aug 07 13:33:31 llms01 ollama[1536]: time=2025-08-07T13:33:31.569+03:00 level=ERROR source=sched.go:487 msg="error loading llama server" error="llama runner process has terminated: error:invalid memory address or nil pointer dereference\n[signal"
Aug 07 13:33:31 llms01 ollama[1536]: [GIN] 2025/08/07 - 13:33:31 | 500 | 1.40920778s | 127.0.0.1 | POST "/api/generate"
Aug 07 13:33:36 llms01 ollama[1536]: time=2025-08-07T13:33:36.770+03:00 level=WARN source=sched.go:685 msg="gpu VRAM usage didn't recover within timeout" seconds=5.200268118 runner.size="5.1 GiB" runner.vram="5.1 GiB" runner.parallel=1 runner.pid=85827 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-30e548d97202721de7d681ad630fcf9754cc7ae20623cce9c6bae8f063fc2edb
Aug 07 13:33:37 llms01 ollama[1536]: time=2025-08-07T13:33:37.020+03:00 level=WARN source=sched.go:685 msg="gpu VRAM usage didn't recover within timeout" seconds=5.450466782 runner.size="5.1 GiB" runner.vram="5.1 GiB" runner.parallel=1 runner.pid=85827 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-30e548d97202721de7d681ad630fcf9754cc7ae20623cce9c6bae8f063fc2edb
Aug 07 13:33:37 llms01 ollama[1536]: time=2025-08-07T13:33:37.270+03:00 level=WARN source=sched.go:685 msg="gpu VRAM usage didn't recover within timeout" seconds=5.7002688070000005 runner.size="5.1 GiB" runner.vram="5.1 GiB" runner.parallel=1 runner.pid=85827 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-30e548d97202721de7d681ad630fcf9754cc7ae20623cce9c6bae8f063fc2edb
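
The two WARN lines in this log match the server config from the first log: OLLAMA_FLASH_ATTENTION=true is ignored when the model doesn't support it, and OLLAMA_KV_CACHE_TYPE=q8_0 then falls back, since the quantized KV cache needs flash attention. A sketch of where those settings live on a systemd install like this one:

```shell
# Where the settings come from on a systemd install (a sketch; the values
# match the "server config" line in the first log).
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_FLASH_ATTENTION=true"  # ignored: "not supported by model"
#   Environment="OLLAMA_KV_CACHE_TYPE=q8_0"    # dropped when flash attention is off
sudo systemctl restart ollama.service
```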


@rick-github commented on GitHub (Aug 7, 2025):

@phonzia

model=/Users/dx2880/.ollama/models/blobs/sha256-fcbc7ec4c2d1527c3da84b7049e59dc5af065876169216ec518bceab841e73f7

This model is not from the ollama library.

@Kinardus

msg="flash attention enabled but not supported by model"

Flash attention settings wouldn't change anything.

The likely problem is that the model is not from the ollama library, so it may have incompatibilities. #11714
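
For an air-gapped machine, one workaround is to pull the library model on a connected machine and copy the model store across, which matches what worked with the Windows blobs above (a sketch; paths assume the default stores):

```shell
# On the machine with internet access:
ollama pull gpt-oss:20b
# Then copy both the manifest and the blobs it references from that
# machine's model store into the offline host's store, e.g.:
#   <online store>/manifests/registry.ollama.ai/library/gpt-oss/20b
#   <online store>/blobs/sha256-*
# into /usr/share/ollama/.ollama/models/{manifests,blobs}/ on the Linux host
```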


@Kinardus commented on GitHub (Aug 7, 2025):

> @phonzia
>
> model=/Users/dx2880/.ollama/models/blobs/sha256-fcbc7ec4c2d1527c3da84b7049e59dc5af065876169216ec518bceab841e73f7
>
> This model is not from the ollama library.
>
> @Kinardus
>
> msg="flash attention enabled but not supported by model"
>
> Flash attention settings wouldn't change anything.
>
> The likely problem is that the model is not from the ollama library, so it may have incompatibilities. #11714

Thank you very much for your work!
