[GH-ISSUE #15508] 0.20.5: unknown model architecture: 'gemma4' #9911

Open
opened 2026-04-12 22:45:37 -05:00 by GiteaMirror · 1 comment

Originally created by @NeilPandya on GitHub (Apr 11, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15508

What is the issue?

gemma4 Architecture is Unrecognized

The Problem

Ollama 0.20.5 is running in a containerized instance on my server. It can load gemma4 models pulled from Ollama Cloud, and gemma4 GGUFs pulled directly from Hugging Face, but a model I assemble with ollama create fails with an unrecognized gemma4 architecture.

This model works.

root@ollama:/# ollama show hf.co/mradermacher/gemma-4-31B-it-uncensored-heretic-i1-GGUF:IQ4_XS
  Model
    architecture        gemma4     
    parameters          30.7B      
    context length      262144     
    embedding length    5376       
    quantization        unknown    

  Capabilities
    completion    

  Parameters
    stop    "<bos>"          
    stop    "<|turn>"        
    stop    "<turn|>"        
    stop    "<|turn>user"

This model I created doesn't.

root@ollama:/# ollama show NeilPandya/gemma4-31b-heretic-i1-vision:IQ4_XS
  Model
    architecture        gemma4     
    parameters          30.7B      
    context length      262144     
    embedding length    5376       
    quantization        unknown    

  Capabilities
    completion    
    tools         
    vision        
    thinking      

  Projector
    architecture        clip       
    parameters          575.74M    
    embedding length    1152       
    dimensions          5376       

  Parameters
    min_p          0.05             
    num_ctx        16384            
    seed           42               
    stop           "<bos>"          
    stop           "<|turn>"        
    stop           "<turn|>"        
    stop           "<|turn>user"    
    stop           "<|thought|>"    
    temperature    1                
    top_k          64               
    top_p          0.95             

  System
    You're a helpful assistant.
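
For reference, the failing model was built with a Modelfile along these lines. This is a sketch reconstructed from the ollama show output above: the GGUF filename is a placeholder, and the projector/vision step is omitted.

  # Modelfile (sketch; parameters copied from `ollama show` above,
  # the GGUF path is a placeholder)
  FROM ./gemma-4-31B-it-uncensored-heretic-i1-IQ4_XS.gguf
  PARAMETER min_p 0.05
  PARAMETER num_ctx 16384
  PARAMETER seed 42
  PARAMETER stop "<bos>"
  PARAMETER stop "<|turn>"
  PARAMETER stop "<turn|>"
  PARAMETER stop "<|turn>user"
  PARAMETER stop "<|thought|>"
  PARAMETER temperature 1
  PARAMETER top_k 64
  PARAMETER top_p 0.95
  SYSTEM You're a helpful assistant.

  root@ollama:/# ollama create NeilPandya/gemma4-31b-heretic-i1-vision:IQ4_XS -f Modelfile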

My Suspicion

  • Ollama 0.20.5 has the GGUF metadata support to recognize the name gemma4, which is why ollama show works for both models.

  • However, the runner library (libollama_llama.so) inside my container was likely compiled against a version of llama.cpp from just before the gemma4 architecture was fully merged.

  • Since I'm creating a custom model from a GGUF, Ollama tries to initialize a fresh runner, fails to find the gemma4 logic in the C++ code, and the load fails ("unknown model architecture", surfaced as a 500 on /api/generate; see the check sketched below).
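
One rough way to test this suspicion from inside the container: llama.cpp compiles its architecture names in as plain string literals, so dumping the strings of the runner library should reveal whether this build knows gemma4 at all. (This assumes the library lives under /usr/lib/ollama as the log's OLLAMA_LIBRARY_PATH suggests, and that strings from binutils is installed.)

  # list the gemma architectures this runner build was compiled with
  ollama --version
  strings /usr/lib/ollama/libollama_llama.so | grep -o 'gemma[0-9]*' | sort -u

If gemma4 is absent from that list, the bundled llama.cpp predates the merge.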

A Possible Solution

A similar issue was reported with LM Studio and Gemma 4 (https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1728). According to this comment (https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1728#issuecomment-4186038053), upgrading llama.cpp did the trick:

  The same issue occurred on my Apple M3 Max (64GB). It was fixed after upgrading llama.cpp to v2.11.0.

  BTW, v2.10.0 and v2.10.1 did not work for me.

Therefore, I posit that Ollama needs to be compiled against llama.cpp v2.11.0.
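
In the meantime, a possible workaround (an untested sketch; the model name is made up, and because it layers on top of the already-working pulled model it probably won't pick up the clip projector) is to base the custom model on the pulled model instead of the raw GGUF:

  # Modelfile.workaround (untested sketch)
  FROM hf.co/mradermacher/gemma-4-31B-it-uncensored-heretic-i1-GGUF:IQ4_XS
  PARAMETER num_ctx 16384
  PARAMETER temperature 1
  SYSTEM You're a helpful assistant.

  root@ollama:/# ollama create gemma4-heretic-workaround -f Modelfile.workaround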

Please let me know if I should provide further details!

Relevant log output

time=2026-04-11T20:32:14.923Z level=INFO source=routes.go:1752 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:true OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-04-11T20:32:14.923Z level=INFO source=routes.go:1754 msg="Ollama cloud disabled: false"
time=2026-04-11T20:32:14.926Z level=INFO source=images.go:499 msg="total blobs: 43"
time=2026-04-11T20:32:14.927Z level=INFO source=images.go:506 msg="total unused blobs removed: 0"
time=2026-04-11T20:32:14.927Z level=INFO source=routes.go:1810 msg="Listening on [::]:11434 (version 0.20.5)"
time=2026-04-11T20:32:14.928Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-11T20:32:14.928Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34775"
time=2026-04-11T20:32:18.071Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 33971"
time=2026-04-11T20:32:21.537Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-04-11T20:32:21.538Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36111"
time=2026-04-11T20:32:21.538Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34841"
time=2026-04-11T20:32:21.538Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 38645"
time=2026-04-11T20:32:21.538Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43425"
time=2026-04-11T20:32:21.538Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45023"
time=2026-04-11T20:32:21.538Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34441"
time=2026-04-11T20:32:25.138Z level=INFO source=types.go:42 msg="inference compute" id=GPU-6af317c0-d264-c8fa-8499-2b653b160248 filter_id="" library=CUDA compute=8.6 name=CUDA0 description="NVIDIA GeForce RTX 3080 Ti" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:05:00.0 type=discrete total="12.0 GiB" available="11.4 GiB"
time=2026-04-11T20:32:25.138Z level=INFO source=types.go:42 msg="inference compute" id=GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c filter_id="" library=CUDA compute=8.9 name=CUDA1 description="NVIDIA GeForce RTX 4060" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:01:00.0 type=discrete total="8.0 GiB" available="7.5 GiB"
time=2026-04-11T20:32:25.138Z level=INFO source=types.go:42 msg="inference compute" id=GPU-1c7cc182-1257-32bd-7877-4239ee27746a filter_id="" library=CUDA compute=8.9 name=CUDA2 description="NVIDIA GeForce RTX 4060" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:0a:00.0 type=discrete total="8.0 GiB" available="7.5 GiB"
time=2026-04-11T20:32:25.138Z level=INFO source=routes.go:1860 msg="vram-based default context" total_vram="28.0 GiB" default_num_ctx=32768
[GIN] 2026/04/11 - 20:32:44 | 200 |       51.83µs |       127.0.0.1 | GET      "/api/version"
[GIN] 2026/04/11 - 20:32:47 | 200 |       31.58µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/11 - 20:32:47 | 200 |    1.491551ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/11 - 20:33:16 | 200 |       21.74µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/11 - 20:33:17 | 200 |  220.902718ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/04/11 - 20:33:50 | 200 |      20.371µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/11 - 20:33:50 | 200 |  212.460896ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/04/11 - 20:33:50 | 200 |  211.757621ms |       127.0.0.1 | POST     "/api/show"
time=2026-04-11T20:33:51.210Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 33151"
time=2026-04-11T20:33:54.212Z level=INFO source=runner.go:464 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[] error="failed to finish discovery before timeout"
time=2026-04-11T20:33:54.212Z level=WARN source=runner.go:356 msg="unable to refresh free memory, using old values"
time=2026-04-11T20:33:54.213Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
llama_model_loader: loaded meta data with 62 key-value pairs and 833 tensors from /root/.ollama/models/blobs/sha256-b837c457224cd2e8df38c3a67255bf466d2d8b20a8fd13befbcc4261f728a77c (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma4
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                     general.sampling.top_k i32              = 64
llama_model_loader: - kv   3:                     general.sampling.top_p f32              = 0.950000
llama_model_loader: - kv   4:                      general.sampling.temp f32              = 1.000000
llama_model_loader: - kv   5:                               general.name str              = Gemma 4 31B It Uncensored Heretic
llama_model_loader: - kv   6:                           general.finetune str              = it-uncensored-heretic
llama_model_loader: - kv   7:                           general.basename str              = gemma-4
llama_model_loader: - kv   8:                         general.size_label str              = 31B
llama_model_loader: - kv   9:                            general.license str              = apache-2.0
llama_model_loader: - kv  10:                       general.license.link str              = https://ai.google.dev/gemma/docs/gemm...
llama_model_loader: - kv  11:                   general.base_model.count u32              = 1
llama_model_loader: - kv  12:                  general.base_model.0.name str              = Gemma 4 31B It
llama_model_loader: - kv  13:          general.base_model.0.organization str              = Google
llama_model_loader: - kv  14:              general.base_model.0.repo_url str              = https://huggingface.co/google/gemma-4...
llama_model_loader: - kv  15:                               general.tags arr[str,6]       = ["heretic", "uncensored", "decensored...
llama_model_loader: - kv  16:                         gemma4.block_count u32              = 60
llama_model_loader: - kv  17:                      gemma4.context_length u32              = 262144
llama_model_loader: - kv  18:                    gemma4.embedding_length u32              = 5376
llama_model_loader: - kv  19:                 gemma4.feed_forward_length u32              = 21504
llama_model_loader: - kv  20:                gemma4.attention.head_count u32              = 32
llama_model_loader: - kv  21:             gemma4.attention.head_count_kv arr[i32,60]      = [16, 16, 16, 16, 16, 4, 16, 16, 16, 1...
llama_model_loader: - kv  22:                      gemma4.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  23:                  gemma4.rope.freq_base_swa f32              = 10000.000000
llama_model_loader: - kv  24:    gemma4.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  25:                gemma4.attention.key_length u32              = 512
llama_model_loader: - kv  26:              gemma4.attention.value_length u32              = 512
llama_model_loader: - kv  27:             gemma4.final_logit_softcapping f32              = 30.000000
llama_model_loader: - kv  28:            gemma4.attention.sliding_window u32              = 1024
llama_model_loader: - kv  29:          gemma4.attention.shared_kv_layers u32              = 0
llama_model_loader: - kv  30:    gemma4.embedding_length_per_layer_input u32              = 0
llama_model_loader: - kv  31:    gemma4.attention.sliding_window_pattern arr[bool,60]     = [true, true, true, true, true, false,...
llama_model_loader: - kv  32:            gemma4.attention.key_length_swa u32              = 256
llama_model_loader: - kv  33:          gemma4.attention.value_length_swa u32              = 256
llama_model_loader: - kv  34:                gemma4.rope.dimension_count u32              = 512
llama_model_loader: - kv  35:            gemma4.rope.dimension_count_swa u32              = 256
llama_model_loader: - kv  36:                       tokenizer.ggml.model str              = gemma4
llama_model_loader: - kv  37:                      tokenizer.ggml.tokens arr[str,262144]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  38:                      tokenizer.ggml.scores arr[f32,262144]  = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  39:                  tokenizer.ggml.token_type arr[i32,262144]  = [3, 3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  40:                      tokenizer.ggml.merges arr[str,514906]  = ["\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n", ...
llama_model_loader: - kv  41:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  42:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  43:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  44:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  45:               tokenizer.ggml.mask_token_id u32              = 4
llama_model_loader: - kv  46:                    tokenizer.chat_template str              = {%- macro format_parameters(propertie...
llama_model_loader: - kv  47:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  48:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  49:               general.quantization_version u32              = 2
llama_model_loader: - kv  50:                          general.file_type u32              = 30
llama_model_loader: - kv  51:                                general.url str              = https://huggingface.co/mradermacher/g...
llama_model_loader: - kv  52:              mradermacher.quantize_version str              = 2
llama_model_loader: - kv  53:                  mradermacher.quantized_by str              = mradermacher
llama_model_loader: - kv  54:                  mradermacher.quantized_at str              = 2026-04-07T14:54:11+02:00
llama_model_loader: - kv  55:                  mradermacher.quantized_on str              = nico1
llama_model_loader: - kv  56:                         general.source.url str              = https://huggingface.co/llmfan46/gemma...
llama_model_loader: - kv  57:                  mradermacher.convert_type str              = hf
llama_model_loader: - kv  58:                      quantize.imatrix.file str              = gemma-4-31B-it-uncensored-heretic-i1-...
llama_model_loader: - kv  59:                   quantize.imatrix.dataset str              = imatrix-training-full-3
llama_model_loader: - kv  60:             quantize.imatrix.entries_count u32              = 410
llama_model_loader: - kv  61:              quantize.imatrix.chunks_count u32              = 320
llama_model_loader: - type  f32:  422 tensors
llama_model_loader: - type q6_K:    1 tensors
llama_model_loader: - type iq4_xs:  410 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = IQ4_XS - 4.25 bpw
print_info: file size   = 15.57 GiB (4.36 BPW) 
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma4'
llama_model_load_from_file_impl: failed to load model
time=2026-04-11T20:33:54.489Z level=INFO source=sched.go:462 msg="failed to create server" model=NeilPandya/gemma4-31b-heretic-i1-vision:IQ4_XS error="unable to load model: /root/.ollama/models/blobs/sha256-b837c457224cd2e8df38c3a67255bf466d2d8b20a8fd13befbcc4261f728a77c"
[GIN] 2026/04/11 - 20:33:54 | 500 |  3.522448584s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/04/11 - 20:43:23 | 200 |       21.98µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/11 - 20:43:23 | 200 |       57.83µs |       127.0.0.1 | POST     "/api/blobs/sha256:869b5f54bd81167ad184272cfc27f706ba0687c75de1dc70921499e34d53d99d"
[GIN] 2026/04/11 - 20:43:23 | 200 |   62.586503ms |       127.0.0.1 | POST     "/api/create"
[GIN] 2026/04/11 - 20:43:29 | 200 |       31.36µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/11 - 20:43:29 | 200 |    1.815441ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/11 - 20:43:37 | 200 |       21.53µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/11 - 20:43:38 | 200 |  213.348481ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/04/11 - 20:43:52 | 200 |       20.56µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/11 - 20:43:52 | 200 |  210.734002ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/04/11 - 20:43:52 | 200 |  209.298746ms |       127.0.0.1 | POST     "/api/show"
time=2026-04-11T20:43:52.708Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42715"
time=2026-04-11T20:43:55.710Z level=INFO source=runner.go:464 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[] error="failed to finish discovery before timeout"
time=2026-04-11T20:43:55.710Z level=WARN source=runner.go:356 msg="unable to refresh free memory, using old values"
time=2026-04-11T20:43:55.711Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
llama_model_loader: loaded meta data with 62 key-value pairs and 833 tensors from /root/.ollama/models/blobs/sha256-b837c457224cd2e8df38c3a67255bf466d2d8b20a8fd13befbcc4261f728a77c (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma4
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                     general.sampling.top_k i32              = 64
llama_model_loader: - kv   3:                     general.sampling.top_p f32              = 0.950000
llama_model_loader: - kv   4:                      general.sampling.temp f32              = 1.000000
llama_model_loader: - kv   5:                               general.name str              = Gemma 4 31B It Uncensored Heretic
llama_model_loader: - kv   6:                           general.finetune str              = it-uncensored-heretic
llama_model_loader: - kv   7:                           general.basename str              = gemma-4
llama_model_loader: - kv   8:                         general.size_label str              = 31B
llama_model_loader: - kv   9:                            general.license str              = apache-2.0
llama_model_loader: - kv  10:                       general.license.link str              = https://ai.google.dev/gemma/docs/gemm...
llama_model_loader: - kv  11:                   general.base_model.count u32              = 1
llama_model_loader: - kv  12:                  general.base_model.0.name str              = Gemma 4 31B It
llama_model_loader: - kv  13:          general.base_model.0.organization str              = Google
llama_model_loader: - kv  14:              general.base_model.0.repo_url str              = https://huggingface.co/google/gemma-4...
llama_model_loader: - kv  15:                               general.tags arr[str,6]       = ["heretic", "uncensored", "decensored...
llama_model_loader: - kv  16:                         gemma4.block_count u32              = 60
llama_model_loader: - kv  17:                      gemma4.context_length u32              = 262144
llama_model_loader: - kv  18:                    gemma4.embedding_length u32              = 5376
llama_model_loader: - kv  19:                 gemma4.feed_forward_length u32              = 21504
llama_model_loader: - kv  20:                gemma4.attention.head_count u32              = 32
llama_model_loader: - kv  21:             gemma4.attention.head_count_kv arr[i32,60]      = [16, 16, 16, 16, 16, 4, 16, 16, 16, 1...
llama_model_loader: - kv  22:                      gemma4.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  23:                  gemma4.rope.freq_base_swa f32              = 10000.000000
llama_model_loader: - kv  24:    gemma4.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  25:                gemma4.attention.key_length u32              = 512
llama_model_loader: - kv  26:              gemma4.attention.value_length u32              = 512
llama_model_loader: - kv  27:             gemma4.final_logit_softcapping f32              = 30.000000
llama_model_loader: - kv  28:            gemma4.attention.sliding_window u32              = 1024
llama_model_loader: - kv  29:          gemma4.attention.shared_kv_layers u32              = 0
llama_model_loader: - kv  30:    gemma4.embedding_length_per_layer_input u32              = 0
llama_model_loader: - kv  31:    gemma4.attention.sliding_window_pattern arr[bool,60]     = [true, true, true, true, true, false,...
llama_model_loader: - kv  32:            gemma4.attention.key_length_swa u32              = 256
llama_model_loader: - kv  33:          gemma4.attention.value_length_swa u32              = 256
llama_model_loader: - kv  34:                gemma4.rope.dimension_count u32              = 512
llama_model_loader: - kv  35:            gemma4.rope.dimension_count_swa u32              = 256
llama_model_loader: - kv  36:                       tokenizer.ggml.model str              = gemma4
llama_model_loader: - kv  37:                      tokenizer.ggml.tokens arr[str,262144]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  38:                      tokenizer.ggml.scores arr[f32,262144]  = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  39:                  tokenizer.ggml.token_type arr[i32,262144]  = [3, 3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  40:                      tokenizer.ggml.merges arr[str,514906]  = ["\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n", ...
llama_model_loader: - kv  41:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  42:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  43:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  44:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  45:               tokenizer.ggml.mask_token_id u32              = 4
llama_model_loader: - kv  46:                    tokenizer.chat_template str              = {%- macro format_parameters(propertie...
llama_model_loader: - kv  47:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  48:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  49:               general.quantization_version u32              = 2
llama_model_loader: - kv  50:                          general.file_type u32              = 30
llama_model_loader: - kv  51:                                general.url str              = https://huggingface.co/mradermacher/g...
llama_model_loader: - kv  52:              mradermacher.quantize_version str              = 2
llama_model_loader: - kv  53:                  mradermacher.quantized_by str              = mradermacher
llama_model_loader: - kv  54:                  mradermacher.quantized_at str              = 2026-04-07T14:54:11+02:00
llama_model_loader: - kv  55:                  mradermacher.quantized_on str              = nico1
llama_model_loader: - kv  56:                         general.source.url str              = https://huggingface.co/llmfan46/gemma...
llama_model_loader: - kv  57:                  mradermacher.convert_type str              = hf
llama_model_loader: - kv  58:                      quantize.imatrix.file str              = gemma-4-31B-it-uncensored-heretic-i1-...
llama_model_loader: - kv  59:                   quantize.imatrix.dataset str              = imatrix-training-full-3
llama_model_loader: - kv  60:             quantize.imatrix.entries_count u32              = 410
llama_model_loader: - kv  61:              quantize.imatrix.chunks_count u32              = 320
llama_model_loader: - type  f32:  422 tensors
llama_model_loader: - type q6_K:    1 tensors
llama_model_loader: - type iq4_xs:  410 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = IQ4_XS - 4.25 bpw
print_info: file size   = 15.57 GiB (4.36 BPW) 
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma4'
llama_model_load_from_file_impl: failed to load model
time=2026-04-11T20:43:55.980Z level=INFO source=sched.go:462 msg="failed to create server" model=NeilPandya/gemma4-31b-heretic-i1-vision:IQ4_XS error="unable to load model: /root/.ollama/models/blobs/sha256-b837c457224cd2e8df38c3a67255bf466d2d8b20a8fd13befbcc4261f728a77c"
[GIN] 2026/04/11 - 20:43:55 | 500 |  3.515513682s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/04/11 - 20:57:48 | 200 |      21.991µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/11 - 20:57:48 | 200 |    1.481879ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/11 - 20:58:00 | 200 |       24.94µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/11 - 20:58:01 | 200 |  220.435324ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/04/11 - 20:58:16 | 200 |       19.34µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/11 - 20:58:16 | 200 |  211.707363ms |       127.0.0.1 | POST     "/api/show"
time=2026-04-11T21:01:14.047Z level=INFO source=routes.go:1752 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:true OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-04-11T21:01:14.047Z level=INFO source=routes.go:1754 msg="Ollama cloud disabled: false"
time=2026-04-11T21:01:14.049Z level=INFO source=images.go:499 msg="total blobs: 43"
time=2026-04-11T21:01:14.050Z level=INFO source=images.go:506 msg="total unused blobs removed: 0"
time=2026-04-11T21:01:14.051Z level=INFO source=routes.go:1810 msg="Listening on [::]:11434 (version 0.20.5)"
time=2026-04-11T21:01:14.051Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-11T21:01:14.051Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-04-11T21:01:14.051Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39839"
time=2026-04-11T21:01:17.198Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36653"
time=2026-04-11T21:01:17.713Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42299"
time=2026-04-11T21:01:17.713Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37763"
time=2026-04-11T21:01:17.713Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44171"
time=2026-04-11T21:01:17.713Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36429"
time=2026-04-11T21:01:17.713Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45231"
time=2026-04-11T21:01:17.713Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44855"
time=2026-04-11T21:01:21.336Z level=INFO source=types.go:42 msg="inference compute" id=GPU-6af317c0-d264-c8fa-8499-2b653b160248 filter_id="" library=CUDA compute=8.6 name=CUDA0 description="NVIDIA GeForce RTX 3080 Ti" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:05:00.0 type=discrete total="12.0 GiB" available="11.4 GiB"
time=2026-04-11T21:01:21.336Z level=INFO source=types.go:42 msg="inference compute" id=GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c filter_id="" library=CUDA compute=8.9 name=CUDA1 description="NVIDIA GeForce RTX 4060" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:01:00.0 type=discrete total="8.0 GiB" available="7.5 GiB"
time=2026-04-11T21:01:21.336Z level=INFO source=types.go:42 msg="inference compute" id=GPU-1c7cc182-1257-32bd-7877-4239ee27746a filter_id="" library=CUDA compute=8.9 name=CUDA2 description="NVIDIA GeForce RTX 4060" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:0a:00.0 type=discrete total="8.0 GiB" available="7.5 GiB"
time=2026-04-11T21:01:21.336Z level=INFO source=routes.go:1860 msg="vram-based default context" total_vram="28.0 GiB" default_num_ctx=32768
[GIN] 2026/04/11 - 21:01:39 | 200 |      52.901µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/11 - 21:01:39 | 200 |    1.473729ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/11 - 21:02:26 | 200 |      21.651µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/11 - 21:02:26 | 200 |  221.603542ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/04/11 - 21:02:26 | 200 |  208.075131ms |       127.0.0.1 | POST     "/api/show"
time=2026-04-11T21:02:27.030Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40187"
time=2026-04-11T21:02:30.032Z level=INFO source=runner.go:464 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[] error="failed to finish discovery before timeout"
time=2026-04-11T21:02:30.032Z level=WARN source=runner.go:356 msg="unable to refresh free memory, using old values"
time=2026-04-11T21:02:30.033Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2026-04-11T21:02:30.171Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-11T21:02:30.171Z level=INFO source=server.go:259 msg="enabling flash attention"
time=2026-04-11T21:02:30.171Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-b837c457224cd2e8df38c3a67255bf466d2d8b20a8fd13befbcc4261f728a77c --port 33489"
time=2026-04-11T21:02:30.172Z level=INFO source=sched.go:484 msg="system memory" total="62.6 GiB" free="62.4 GiB" free_swap="4.0 GiB"
time=2026-04-11T21:02:30.172Z level=INFO source=sched.go:491 msg="gpu memory" id=GPU-6af317c0-d264-c8fa-8499-2b653b160248 library=CUDA available="10.9 GiB" free="11.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-04-11T21:02:30.172Z level=INFO source=sched.go:491 msg="gpu memory" id=GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c library=CUDA available="7.1 GiB" free="7.5 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-04-11T21:02:30.172Z level=INFO source=sched.go:491 msg="gpu memory" id=GPU-1c7cc182-1257-32bd-7877-4239ee27746a library=CUDA available="7.1 GiB" free="7.5 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-04-11T21:02:30.172Z level=INFO source=server.go:771 msg="loading model" "model layers"=61 requested=-1
time=2026-04-11T21:02:30.183Z level=INFO source=runner.go:1417 msg="starting ollama engine"
time=2026-04-11T21:02:30.183Z level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:33489"
time=2026-04-11T21:02:30.194Z level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:8 GPULayers:61[ID:GPU-1c7cc182-1257-32bd-7877-4239ee27746a Layers:61(0..60)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-11T21:02:30.251Z level=INFO source=ggml.go:136 msg="" architecture=gemma4 file_type=unknown name="Gemma 4 31B It Uncensored Heretic" description="" num_tensors=833 num_key_values=63
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3080 Ti, compute capability 8.6, VMM: yes, ID: GPU-6af317c0-d264-c8fa-8499-2b653b160248
  Device 1: NVIDIA GeForce RTX 4060, compute capability 8.9, VMM: yes, ID: GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c
  Device 2: NVIDIA GeForce RTX 4060, compute capability 8.9, VMM: yes, ID: GPU-1c7cc182-1257-32bd-7877-4239ee27746a
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so
time=2026-04-11T21:02:33.185Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-04-11T21:02:33.190Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-11T21:03:37.605Z level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:8 GPULayers:54[ID:GPU-6af317c0-d264-c8fa-8499-2b653b160248 Layers:24(6..29) ID:GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c Layers:16(30..45) ID:GPU-1c7cc182-1257-32bd-7877-4239ee27746a Layers:14(46..59)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-11T21:03:37.655Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-11T21:04:03.240Z level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:8 GPULayers:52[ID:GPU-6af317c0-d264-c8fa-8499-2b653b160248 Layers:22(8..29) ID:GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c Layers:16(30..45) ID:GPU-1c7cc182-1257-32bd-7877-4239ee27746a Layers:14(46..59)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-11T21:04:03.287Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-11T21:04:03.419Z level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:8 GPULayers:52[ID:GPU-6af317c0-d264-c8fa-8499-2b653b160248 Layers:22(8..29) ID:GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c Layers:16(30..45) ID:GPU-1c7cc182-1257-32bd-7877-4239ee27746a Layers:14(46..59)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-11T21:04:03.480Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-11T21:04:04.349Z level=INFO source=runner.go:1290 msg=load request="{Operation:commit LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:8 GPULayers:52[ID:GPU-6af317c0-d264-c8fa-8499-2b653b160248 Layers:22(8..29) ID:GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c Layers:16(30..45) ID:GPU-1c7cc182-1257-32bd-7877-4239ee27746a Layers:14(46..59)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-11T21:04:04.349Z level=INFO source=ggml.go:482 msg="offloading 52 repeating layers to GPU"
time=2026-04-11T21:04:04.349Z level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-04-11T21:04:04.349Z level=INFO source=ggml.go:494 msg="offloaded 52/61 layers to GPU"
time=2026-04-11T21:04:04.349Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="5.3 GiB"
time=2026-04-11T21:04:04.349Z level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="3.8 GiB"
time=2026-04-11T21:04:04.349Z level=INFO source=device.go:240 msg="model weights" device=CUDA2 size="3.4 GiB"
time=2026-04-11T21:04:04.349Z level=INFO source=device.go:245 msg="model weights" device=CPU size="4.1 GiB"
time=2026-04-11T21:04:04.349Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="4.4 GiB"
time=2026-04-11T21:04:04.349Z level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="2.9 GiB"
time=2026-04-11T21:04:04.349Z level=INFO source=device.go:251 msg="kv cache" device=CUDA2 size="3.0 GiB"
time=2026-04-11T21:04:04.349Z level=INFO source=device.go:256 msg="kv cache" device=CPU size="1.4 GiB"
time=2026-04-11T21:04:04.349Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="1.0 GiB"
time=2026-04-11T21:04:04.349Z level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="245.5 MiB"
time=2026-04-11T21:04:04.349Z level=INFO source=device.go:262 msg="compute graph" device=CUDA2 size="245.5 MiB"
time=2026-04-11T21:04:04.349Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="16.0 MiB"
time=2026-04-11T21:04:04.349Z level=INFO source=device.go:272 msg="total memory" size="29.8 GiB"
time=2026-04-11T21:04:04.349Z level=INFO source=sched.go:561 msg="loaded runners" count=1
time=2026-04-11T21:04:04.349Z level=INFO source=server.go:1364 msg="waiting for llama runner to start responding"
time=2026-04-11T21:04:04.350Z level=INFO source=server.go:1398 msg="waiting for server to become available" status="llm server loading model"
time=2026-04-11T21:04:09.116Z level=INFO source=server.go:1402 msg="llama runner started in 98.94 seconds"
[GIN] 2026/04/11 - 21:04:09 | 200 |         1m42s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/04/11 - 21:05:15 | 200 |  9.045587989s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2026/04/11 - 21:05:30 | 200 |  4.822663939s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2026/04/11 - 21:05:57 | 200 |       18.84µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/04/11 - 21:05:58 | 200 |  227.040449ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/04/11 - 21:05:58 | 200 |   202.17918ms |       127.0.0.1 | POST     "/api/show"
ggml_backend_cuda_device_get_memory device GPU-6af317c0-d264-c8fa-8499-2b653b160248 utilizing NVML memory reporting free: 395444224 total: 12884901888
ggml_backend_cuda_device_get_memory device GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c utilizing NVML memory reporting free: 596901888 total: 8585740288
ggml_backend_cuda_device_get_memory device GPU-1c7cc182-1257-32bd-7877-4239ee27746a utilizing NVML memory reporting free: 968097792 total: 8585740288
time=2026-04-11T21:06:01.912Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42715"
time=2026-04-11T21:06:03.661Z level=INFO source=runner.go:464 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[] error="failed to finish discovery before timeout"
time=2026-04-11T21:06:03.661Z level=WARN source=runner.go:356 msg="unable to refresh free memory, using old values"
time=2026-04-11T21:06:03.661Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 32769"
time=2026-04-11T21:06:06.561Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
llama_model_loader: loaded meta data with 62 key-value pairs and 833 tensors from /root/.ollama/models/blobs/sha256-b837c457224cd2e8df38c3a67255bf466d2d8b20a8fd13befbcc4261f728a77c (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma4
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                     general.sampling.top_k i32              = 64
llama_model_loader: - kv   3:                     general.sampling.top_p f32              = 0.950000
llama_model_loader: - kv   4:                      general.sampling.temp f32              = 1.000000
llama_model_loader: - kv   5:                               general.name str              = Gemma 4 31B It Uncensored Heretic
llama_model_loader: - kv   6:                           general.finetune str              = it-uncensored-heretic
llama_model_loader: - kv   7:                           general.basename str              = gemma-4
llama_model_loader: - kv   8:                         general.size_label str              = 31B
llama_model_loader: - kv   9:                            general.license str              = apache-2.0
llama_model_loader: - kv  10:                       general.license.link str              = https://ai.google.dev/gemma/docs/gemm...
llama_model_loader: - kv  11:                   general.base_model.count u32              = 1
llama_model_loader: - kv  12:                  general.base_model.0.name str              = Gemma 4 31B It
llama_model_loader: - kv  13:          general.base_model.0.organization str              = Google
llama_model_loader: - kv  14:              general.base_model.0.repo_url str              = https://huggingface.co/google/gemma-4...
llama_model_loader: - kv  15:                               general.tags arr[str,6]       = ["heretic", "uncensored", "decensored...
llama_model_loader: - kv  16:                         gemma4.block_count u32              = 60
llama_model_loader: - kv  17:                      gemma4.context_length u32              = 262144
llama_model_loader: - kv  18:                    gemma4.embedding_length u32              = 5376
llama_model_loader: - kv  19:                 gemma4.feed_forward_length u32              = 21504
llama_model_loader: - kv  20:                gemma4.attention.head_count u32              = 32
llama_model_loader: - kv  21:             gemma4.attention.head_count_kv arr[i32,60]      = [16, 16, 16, 16, 16, 4, 16, 16, 16, 1...
llama_model_loader: - kv  22:                      gemma4.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  23:                  gemma4.rope.freq_base_swa f32              = 10000.000000
llama_model_loader: - kv  24:    gemma4.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  25:                gemma4.attention.key_length u32              = 512
llama_model_loader: - kv  26:              gemma4.attention.value_length u32              = 512
llama_model_loader: - kv  27:             gemma4.final_logit_softcapping f32              = 30.000000
llama_model_loader: - kv  28:            gemma4.attention.sliding_window u32              = 1024
llama_model_loader: - kv  29:          gemma4.attention.shared_kv_layers u32              = 0
llama_model_loader: - kv  30:    gemma4.embedding_length_per_layer_input u32              = 0
llama_model_loader: - kv  31:    gemma4.attention.sliding_window_pattern arr[bool,60]     = [true, true, true, true, true, false,...
llama_model_loader: - kv  32:            gemma4.attention.key_length_swa u32              = 256
llama_model_loader: - kv  33:          gemma4.attention.value_length_swa u32              = 256
llama_model_loader: - kv  34:                gemma4.rope.dimension_count u32              = 512
llama_model_loader: - kv  35:            gemma4.rope.dimension_count_swa u32              = 256
llama_model_loader: - kv  36:                       tokenizer.ggml.model str              = gemma4
llama_model_loader: - kv  37:                      tokenizer.ggml.tokens arr[str,262144]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  38:                      tokenizer.ggml.scores arr[f32,262144]  = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  39:                  tokenizer.ggml.token_type arr[i32,262144]  = [3, 3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  40:                      tokenizer.ggml.merges arr[str,514906]  = ["\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n", ...
llama_model_loader: - kv  41:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  42:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  43:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  44:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  45:               tokenizer.ggml.mask_token_id u32              = 4
llama_model_loader: - kv  46:                    tokenizer.chat_template str              = {%- macro format_parameters(propertie...
llama_model_loader: - kv  47:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  48:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  49:               general.quantization_version u32              = 2
llama_model_loader: - kv  50:                          general.file_type u32              = 30
llama_model_loader: - kv  51:                                general.url str              = https://huggingface.co/mradermacher/g...
llama_model_loader: - kv  52:              mradermacher.quantize_version str              = 2
llama_model_loader: - kv  53:                  mradermacher.quantized_by str              = mradermacher
llama_model_loader: - kv  54:                  mradermacher.quantized_at str              = 2026-04-07T14:54:11+02:00
llama_model_loader: - kv  55:                  mradermacher.quantized_on str              = nico1
llama_model_loader: - kv  56:                         general.source.url str              = https://huggingface.co/llmfan46/gemma...
llama_model_loader: - kv  57:                  mradermacher.convert_type str              = hf
llama_model_loader: - kv  58:                      quantize.imatrix.file str              = gemma-4-31B-it-uncensored-heretic-i1-...
llama_model_loader: - kv  59:                   quantize.imatrix.dataset str              = imatrix-training-full-3
llama_model_loader: - kv  60:             quantize.imatrix.entries_count u32              = 410
llama_model_loader: - kv  61:              quantize.imatrix.chunks_count u32              = 320
llama_model_loader: - type  f32:  422 tensors
llama_model_loader: - type q6_K:    1 tensors
llama_model_loader: - type iq4_xs:  410 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = IQ4_XS - 4.25 bpw
print_info: file size   = 15.57 GiB (4.36 BPW) 
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma4'
llama_model_load_from_file_impl: failed to load model
time=2026-04-11T21:06:06.835Z level=INFO source=sched.go:462 msg="failed to create server" model=NeilPandya/gemma4-31b-heretic-i1-vision:IQ4_XS error="unable to load model: /root/.ollama/models/blobs/sha256-b837c457224cd2e8df38c3a67255bf466d2d8b20a8fd13befbcc4261f728a77c"
[GIN] 2026/04/11 - 21:06:06 | 500 |  8.526107782s |       127.0.0.1 | POST     "/api/generate"

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.20.5

llama_model_loader: - kv 39: tokenizer.ggml.token_type arr[i32,262144] = [3, 3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 40: tokenizer.ggml.merges arr[str,514906] = ["\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n", ... llama_model_loader: - kv 41: tokenizer.ggml.bos_token_id u32 = 2 llama_model_loader: - kv 42: tokenizer.ggml.eos_token_id u32 = 1 llama_model_loader: - kv 43: tokenizer.ggml.unknown_token_id u32 = 3 llama_model_loader: - kv 44: tokenizer.ggml.padding_token_id u32 = 0 llama_model_loader: - kv 45: tokenizer.ggml.mask_token_id u32 = 4 llama_model_loader: - kv 46: tokenizer.chat_template str = {%- macro format_parameters(propertie... llama_model_loader: - kv 47: tokenizer.ggml.add_space_prefix bool = false llama_model_loader: - kv 48: tokenizer.ggml.add_bos_token bool = false llama_model_loader: - kv 49: general.quantization_version u32 = 2 llama_model_loader: - kv 50: general.file_type u32 = 30 llama_model_loader: - kv 51: general.url str = https://huggingface.co/mradermacher/g... llama_model_loader: - kv 52: mradermacher.quantize_version str = 2 llama_model_loader: - kv 53: mradermacher.quantized_by str = mradermacher llama_model_loader: - kv 54: mradermacher.quantized_at str = 2026-04-07T14:54:11+02:00 llama_model_loader: - kv 55: mradermacher.quantized_on str = nico1 llama_model_loader: - kv 56: general.source.url str = https://huggingface.co/llmfan46/gemma... llama_model_loader: - kv 57: mradermacher.convert_type str = hf llama_model_loader: - kv 58: quantize.imatrix.file str = gemma-4-31B-it-uncensored-heretic-i1-... llama_model_loader: - kv 59: quantize.imatrix.dataset str = imatrix-training-full-3 llama_model_loader: - kv 60: quantize.imatrix.entries_count u32 = 410 llama_model_loader: - kv 61: quantize.imatrix.chunks_count u32 = 320 llama_model_loader: - type f32: 422 tensors llama_model_loader: - type q6_K: 1 tensors llama_model_loader: - type iq4_xs: 410 tensors print_info: file format = GGUF V3 (latest) print_info: file type = IQ4_XS - 4.25 bpw print_info: file size = 15.57 GiB (4.36 BPW) llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma4' llama_model_load_from_file_impl: failed to load model time=2026-04-11T20:43:55.980Z level=INFO source=sched.go:462 msg="failed to create server" model=NeilPandya/gemma4-31b-heretic-i1-vision:IQ4_XS error="unable to load model: /root/.ollama/models/blobs/sha256-b837c457224cd2e8df38c3a67255bf466d2d8b20a8fd13befbcc4261f728a77c" [GIN] 2026/04/11 - 20:43:55 | 500 | 3.515513682s | 127.0.0.1 | POST "/api/generate" [GIN] 2026/04/11 - 20:57:48 | 200 | 21.991µs | 127.0.0.1 | HEAD "/" [GIN] 2026/04/11 - 20:57:48 | 200 | 1.481879ms | 127.0.0.1 | GET "/api/tags" [GIN] 2026/04/11 - 20:58:00 | 200 | 24.94µs | 127.0.0.1 | HEAD "/" [GIN] 2026/04/11 - 20:58:01 | 200 | 220.435324ms | 127.0.0.1 | POST "/api/show" [GIN] 2026/04/11 - 20:58:16 | 200 | 19.34µs | 127.0.0.1 | HEAD "/" [GIN] 2026/04/11 - 20:58:16 | 200 | 211.707363ms | 127.0.0.1 | POST "/api/show" time=2026-04-11T21:01:14.047Z level=INFO source=routes.go:1752 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s 
OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:true OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" time=2026-04-11T21:01:14.047Z level=INFO source=routes.go:1754 msg="Ollama cloud disabled: false" time=2026-04-11T21:01:14.049Z level=INFO source=images.go:499 msg="total blobs: 43" time=2026-04-11T21:01:14.050Z level=INFO source=images.go:506 msg="total unused blobs removed: 0" time=2026-04-11T21:01:14.051Z level=INFO source=routes.go:1810 msg="Listening on [::]:11434 (version 0.20.5)" time=2026-04-11T21:01:14.051Z level=INFO source=runner.go:67 msg="discovering available GPUs..." time=2026-04-11T21:01:14.051Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1" time=2026-04-11T21:01:14.051Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39839" time=2026-04-11T21:01:17.198Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36653" time=2026-04-11T21:01:17.713Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42299" time=2026-04-11T21:01:17.713Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37763" time=2026-04-11T21:01:17.713Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44171" time=2026-04-11T21:01:17.713Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36429" time=2026-04-11T21:01:17.713Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45231" time=2026-04-11T21:01:17.713Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44855" time=2026-04-11T21:01:21.336Z level=INFO source=types.go:42 msg="inference compute" id=GPU-6af317c0-d264-c8fa-8499-2b653b160248 filter_id="" library=CUDA compute=8.6 name=CUDA0 description="NVIDIA GeForce RTX 3080 Ti" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:05:00.0 type=discrete total="12.0 GiB" available="11.4 GiB" time=2026-04-11T21:01:21.336Z level=INFO source=types.go:42 msg="inference compute" id=GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c filter_id="" library=CUDA compute=8.9 name=CUDA1 description="NVIDIA GeForce RTX 4060" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:01:00.0 type=discrete total="8.0 GiB" available="7.5 GiB" time=2026-04-11T21:01:21.336Z level=INFO source=types.go:42 msg="inference compute" id=GPU-1c7cc182-1257-32bd-7877-4239ee27746a filter_id="" library=CUDA compute=8.9 name=CUDA2 description="NVIDIA GeForce RTX 4060" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:0a:00.0 type=discrete total="8.0 GiB" available="7.5 GiB" time=2026-04-11T21:01:21.336Z level=INFO source=routes.go:1860 msg="vram-based default context" total_vram="28.0 GiB" default_num_ctx=32768 [GIN] 2026/04/11 - 21:01:39 | 200 | 
52.901µs | 127.0.0.1 | HEAD "/" [GIN] 2026/04/11 - 21:01:39 | 200 | 1.473729ms | 127.0.0.1 | GET "/api/tags" [GIN] 2026/04/11 - 21:02:26 | 200 | 21.651µs | 127.0.0.1 | HEAD "/" [GIN] 2026/04/11 - 21:02:26 | 200 | 221.603542ms | 127.0.0.1 | POST "/api/show" [GIN] 2026/04/11 - 21:02:26 | 200 | 208.075131ms | 127.0.0.1 | POST "/api/show" time=2026-04-11T21:02:27.030Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40187" time=2026-04-11T21:02:30.032Z level=INFO source=runner.go:464 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[] error="failed to finish discovery before timeout" time=2026-04-11T21:02:30.032Z level=WARN source=runner.go:356 msg="unable to refresh free memory, using old values" time=2026-04-11T21:02:30.033Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax" time=2026-04-11T21:02:30.171Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883 time=2026-04-11T21:02:30.171Z level=INFO source=server.go:259 msg="enabling flash attention" time=2026-04-11T21:02:30.171Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-b837c457224cd2e8df38c3a67255bf466d2d8b20a8fd13befbcc4261f728a77c --port 33489" time=2026-04-11T21:02:30.172Z level=INFO source=sched.go:484 msg="system memory" total="62.6 GiB" free="62.4 GiB" free_swap="4.0 GiB" time=2026-04-11T21:02:30.172Z level=INFO source=sched.go:491 msg="gpu memory" id=GPU-6af317c0-d264-c8fa-8499-2b653b160248 library=CUDA available="10.9 GiB" free="11.4 GiB" minimum="457.0 MiB" overhead="0 B" time=2026-04-11T21:02:30.172Z level=INFO source=sched.go:491 msg="gpu memory" id=GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c library=CUDA available="7.1 GiB" free="7.5 GiB" minimum="457.0 MiB" overhead="0 B" time=2026-04-11T21:02:30.172Z level=INFO source=sched.go:491 msg="gpu memory" id=GPU-1c7cc182-1257-32bd-7877-4239ee27746a library=CUDA available="7.1 GiB" free="7.5 GiB" minimum="457.0 MiB" overhead="0 B" time=2026-04-11T21:02:30.172Z level=INFO source=server.go:771 msg="loading model" "model layers"=61 requested=-1 time=2026-04-11T21:02:30.183Z level=INFO source=runner.go:1417 msg="starting ollama engine" time=2026-04-11T21:02:30.183Z level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:33489" time=2026-04-11T21:02:30.194Z level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:8 GPULayers:61[ID:GPU-1c7cc182-1257-32bd-7877-4239ee27746a Layers:61(0..60)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-11T21:02:30.251Z level=INFO source=ggml.go:136 msg="" architecture=gemma4 file_type=unknown name="Gemma 4 31B It Uncensored Heretic" description="" num_tensors=833 num_key_values=63 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 3 CUDA devices: Device 0: NVIDIA GeForce RTX 3080 Ti, compute capability 8.6, VMM: yes, ID: GPU-6af317c0-d264-c8fa-8499-2b653b160248 Device 1: NVIDIA GeForce RTX 4060, compute capability 8.9, VMM: yes, ID: GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c Device 2: NVIDIA GeForce RTX 4060, compute capability 8.9, 
VMM: yes, ID: GPU-1c7cc182-1257-32bd-7877-4239ee27746a load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so time=2026-04-11T21:02:33.185Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) time=2026-04-11T21:02:33.190Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883 time=2026-04-11T21:03:37.605Z level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:8 GPULayers:54[ID:GPU-6af317c0-d264-c8fa-8499-2b653b160248 Layers:24(6..29) ID:GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c Layers:16(30..45) ID:GPU-1c7cc182-1257-32bd-7877-4239ee27746a Layers:14(46..59)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-11T21:03:37.655Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883 time=2026-04-11T21:04:03.240Z level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:8 GPULayers:52[ID:GPU-6af317c0-d264-c8fa-8499-2b653b160248 Layers:22(8..29) ID:GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c Layers:16(30..45) ID:GPU-1c7cc182-1257-32bd-7877-4239ee27746a Layers:14(46..59)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-11T21:04:03.287Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883 time=2026-04-11T21:04:03.419Z level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:8 GPULayers:52[ID:GPU-6af317c0-d264-c8fa-8499-2b653b160248 Layers:22(8..29) ID:GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c Layers:16(30..45) ID:GPU-1c7cc182-1257-32bd-7877-4239ee27746a Layers:14(46..59)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-11T21:04:03.480Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883 time=2026-04-11T21:04:04.349Z level=INFO source=runner.go:1290 msg=load request="{Operation:commit LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType: NumThreads:8 GPULayers:52[ID:GPU-6af317c0-d264-c8fa-8499-2b653b160248 Layers:22(8..29) ID:GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c Layers:16(30..45) ID:GPU-1c7cc182-1257-32bd-7877-4239ee27746a Layers:14(46..59)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-04-11T21:04:04.349Z level=INFO source=ggml.go:482 msg="offloading 52 repeating layers to GPU" time=2026-04-11T21:04:04.349Z level=INFO source=ggml.go:486 msg="offloading output layer to CPU" time=2026-04-11T21:04:04.349Z level=INFO source=ggml.go:494 msg="offloaded 52/61 layers to GPU" time=2026-04-11T21:04:04.349Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="5.3 GiB" time=2026-04-11T21:04:04.349Z level=INFO 
source=device.go:240 msg="model weights" device=CUDA1 size="3.8 GiB" time=2026-04-11T21:04:04.349Z level=INFO source=device.go:240 msg="model weights" device=CUDA2 size="3.4 GiB" time=2026-04-11T21:04:04.349Z level=INFO source=device.go:245 msg="model weights" device=CPU size="4.1 GiB" time=2026-04-11T21:04:04.349Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="4.4 GiB" time=2026-04-11T21:04:04.349Z level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="2.9 GiB" time=2026-04-11T21:04:04.349Z level=INFO source=device.go:251 msg="kv cache" device=CUDA2 size="3.0 GiB" time=2026-04-11T21:04:04.349Z level=INFO source=device.go:256 msg="kv cache" device=CPU size="1.4 GiB" time=2026-04-11T21:04:04.349Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="1.0 GiB" time=2026-04-11T21:04:04.349Z level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="245.5 MiB" time=2026-04-11T21:04:04.349Z level=INFO source=device.go:262 msg="compute graph" device=CUDA2 size="245.5 MiB" time=2026-04-11T21:04:04.349Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="16.0 MiB" time=2026-04-11T21:04:04.349Z level=INFO source=device.go:272 msg="total memory" size="29.8 GiB" time=2026-04-11T21:04:04.349Z level=INFO source=sched.go:561 msg="loaded runners" count=1 time=2026-04-11T21:04:04.349Z level=INFO source=server.go:1364 msg="waiting for llama runner to start responding" time=2026-04-11T21:04:04.350Z level=INFO source=server.go:1398 msg="waiting for server to become available" status="llm server loading model" time=2026-04-11T21:04:09.116Z level=INFO source=server.go:1402 msg="llama runner started in 98.94 seconds" [GIN] 2026/04/11 - 21:04:09 | 200 | 1m42s | 127.0.0.1 | POST "/api/generate" [GIN] 2026/04/11 - 21:05:15 | 200 | 9.045587989s | 127.0.0.1 | POST "/api/chat" [GIN] 2026/04/11 - 21:05:30 | 200 | 4.822663939s | 127.0.0.1 | POST "/api/chat" [GIN] 2026/04/11 - 21:05:57 | 200 | 18.84µs | 127.0.0.1 | HEAD "/" [GIN] 2026/04/11 - 21:05:58 | 200 | 227.040449ms | 127.0.0.1 | POST "/api/show" [GIN] 2026/04/11 - 21:05:58 | 200 | 202.17918ms | 127.0.0.1 | POST "/api/show" ggml_backend_cuda_device_get_memory device GPU-6af317c0-d264-c8fa-8499-2b653b160248 utilizing NVML memory reporting free: 395444224 total: 12884901888 ggml_backend_cuda_device_get_memory device GPU-bfc5e972-7888-2c29-d5aa-1dce863f0c0c utilizing NVML memory reporting free: 596901888 total: 8585740288 ggml_backend_cuda_device_get_memory device GPU-1c7cc182-1257-32bd-7877-4239ee27746a utilizing NVML memory reporting free: 968097792 total: 8585740288 time=2026-04-11T21:06:01.912Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42715" time=2026-04-11T21:06:03.661Z level=INFO source=runner.go:464 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[] error="failed to finish discovery before timeout" time=2026-04-11T21:06:03.661Z level=WARN source=runner.go:356 msg="unable to refresh free memory, using old values" time=2026-04-11T21:06:03.661Z level=INFO source=server.go:444 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 32769" time=2026-04-11T21:06:06.561Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax" llama_model_loader: loaded meta data with 62 key-value pairs and 833 tensors from 
/root/.ollama/models/blobs/sha256-b837c457224cd2e8df38c3a67255bf466d2d8b20a8fd13befbcc4261f728a77c (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = gemma4 llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.sampling.top_k i32 = 64 llama_model_loader: - kv 3: general.sampling.top_p f32 = 0.950000 llama_model_loader: - kv 4: general.sampling.temp f32 = 1.000000 llama_model_loader: - kv 5: general.name str = Gemma 4 31B It Uncensored Heretic llama_model_loader: - kv 6: general.finetune str = it-uncensored-heretic llama_model_loader: - kv 7: general.basename str = gemma-4 llama_model_loader: - kv 8: general.size_label str = 31B llama_model_loader: - kv 9: general.license str = apache-2.0 llama_model_loader: - kv 10: general.license.link str = https://ai.google.dev/gemma/docs/gemm... llama_model_loader: - kv 11: general.base_model.count u32 = 1 llama_model_loader: - kv 12: general.base_model.0.name str = Gemma 4 31B It llama_model_loader: - kv 13: general.base_model.0.organization str = Google llama_model_loader: - kv 14: general.base_model.0.repo_url str = https://huggingface.co/google/gemma-4... llama_model_loader: - kv 15: general.tags arr[str,6] = ["heretic", "uncensored", "decensored... llama_model_loader: - kv 16: gemma4.block_count u32 = 60 llama_model_loader: - kv 17: gemma4.context_length u32 = 262144 llama_model_loader: - kv 18: gemma4.embedding_length u32 = 5376 llama_model_loader: - kv 19: gemma4.feed_forward_length u32 = 21504 llama_model_loader: - kv 20: gemma4.attention.head_count u32 = 32 llama_model_loader: - kv 21: gemma4.attention.head_count_kv arr[i32,60] = [16, 16, 16, 16, 16, 4, 16, 16, 16, 1... llama_model_loader: - kv 22: gemma4.rope.freq_base f32 = 1000000.000000 llama_model_loader: - kv 23: gemma4.rope.freq_base_swa f32 = 10000.000000 llama_model_loader: - kv 24: gemma4.attention.layer_norm_rms_epsilon f32 = 0.000001 llama_model_loader: - kv 25: gemma4.attention.key_length u32 = 512 llama_model_loader: - kv 26: gemma4.attention.value_length u32 = 512 llama_model_loader: - kv 27: gemma4.final_logit_softcapping f32 = 30.000000 llama_model_loader: - kv 28: gemma4.attention.sliding_window u32 = 1024 llama_model_loader: - kv 29: gemma4.attention.shared_kv_layers u32 = 0 llama_model_loader: - kv 30: gemma4.embedding_length_per_layer_input u32 = 0 llama_model_loader: - kv 31: gemma4.attention.sliding_window_pattern arr[bool,60] = [true, true, true, true, true, false,... llama_model_loader: - kv 32: gemma4.attention.key_length_swa u32 = 256 llama_model_loader: - kv 33: gemma4.attention.value_length_swa u32 = 256 llama_model_loader: - kv 34: gemma4.rope.dimension_count u32 = 512 llama_model_loader: - kv 35: gemma4.rope.dimension_count_swa u32 = 256 llama_model_loader: - kv 36: tokenizer.ggml.model str = gemma4 llama_model_loader: - kv 37: tokenizer.ggml.tokens arr[str,262144] = ["<pad>", "<eos>", "<bos>", "<unk>", ... llama_model_loader: - kv 38: tokenizer.ggml.scores arr[f32,262144] = [-1000.000000, -1000.000000, -1000.00... llama_model_loader: - kv 39: tokenizer.ggml.token_type arr[i32,262144] = [3, 3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 40: tokenizer.ggml.merges arr[str,514906] = ["\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n", ... 
llama_model_loader: - kv 41: tokenizer.ggml.bos_token_id u32 = 2 llama_model_loader: - kv 42: tokenizer.ggml.eos_token_id u32 = 1 llama_model_loader: - kv 43: tokenizer.ggml.unknown_token_id u32 = 3 llama_model_loader: - kv 44: tokenizer.ggml.padding_token_id u32 = 0 llama_model_loader: - kv 45: tokenizer.ggml.mask_token_id u32 = 4 llama_model_loader: - kv 46: tokenizer.chat_template str = {%- macro format_parameters(propertie... llama_model_loader: - kv 47: tokenizer.ggml.add_space_prefix bool = false llama_model_loader: - kv 48: tokenizer.ggml.add_bos_token bool = false llama_model_loader: - kv 49: general.quantization_version u32 = 2 llama_model_loader: - kv 50: general.file_type u32 = 30 llama_model_loader: - kv 51: general.url str = https://huggingface.co/mradermacher/g... llama_model_loader: - kv 52: mradermacher.quantize_version str = 2 llama_model_loader: - kv 53: mradermacher.quantized_by str = mradermacher llama_model_loader: - kv 54: mradermacher.quantized_at str = 2026-04-07T14:54:11+02:00 llama_model_loader: - kv 55: mradermacher.quantized_on str = nico1 llama_model_loader: - kv 56: general.source.url str = https://huggingface.co/llmfan46/gemma... llama_model_loader: - kv 57: mradermacher.convert_type str = hf llama_model_loader: - kv 58: quantize.imatrix.file str = gemma-4-31B-it-uncensored-heretic-i1-... llama_model_loader: - kv 59: quantize.imatrix.dataset str = imatrix-training-full-3 llama_model_loader: - kv 60: quantize.imatrix.entries_count u32 = 410 llama_model_loader: - kv 61: quantize.imatrix.chunks_count u32 = 320 llama_model_loader: - type f32: 422 tensors llama_model_loader: - type q6_K: 1 tensors llama_model_loader: - type iq4_xs: 410 tensors print_info: file format = GGUF V3 (latest) print_info: file type = IQ4_XS - 4.25 bpw print_info: file size = 15.57 GiB (4.36 BPW) llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma4' llama_model_load_from_file_impl: failed to load model time=2026-04-11T21:06:06.835Z level=INFO source=sched.go:462 msg="failed to create server" model=NeilPandya/gemma4-31b-heretic-i1-vision:IQ4_XS error="unable to load model: /root/.ollama/models/blobs/sha256-b837c457224cd2e8df38c3a67255bf466d2d8b20a8fd13befbcc4261f728a77c" [GIN] 2026/04/11 - 21:06:06 | 500 | 8.526107782s | 127.0.0.1 | POST "/api/generate" ``` ### OS Linux ### GPU Nvidia ### CPU AMD ### Ollama version 0.20.5
GiteaMirror added the bug label 2026-04-12 22:45:37 -05:00
Author
Owner

@rick-github commented on GitHub (Apr 11, 2026):

https://github.com/ollama/ollama/issues/14575#issuecomment-3989918451

<!-- gh-comment-id:4230241367 --> @rick-github commented on GitHub (Apr 11, 2026): https://github.com/ollama/ollama/issues/14575#issuecomment-3989918451
Sign in to join this conversation.
1 Participants
Notifications
Due Date
No due date set.
Dependencies

No dependencies set.

Reference: github-starred/ollama#9911