[GH-ISSUE #13763] Getting nonsense text responses using Intel ARC 750 8GB card (running on TrueNas) #71080

Open
opened 2026-05-04 23:56:36 -05:00 by GiteaMirror · 22 comments
Owner

Originally created by @the-bort-the on GitHub (Jan 17, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13763

What is the issue?

Any suggestions on how to run things smoothly on an Intel Arc A750 8GB card? I have tried a few models. I'm able to get Ollama to use the GPU; I can see it in intel_gpu_top. But when I prompt from the UI, I get nonsense output like the sample below. This happens across all three models. I am unable to upload images, but intel_gpu_top confirms the GPU is in use.

Originally came from this discussion: https://github.com/ollama/ollama/pull/11160#issuecomment-3764241349
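
A quick way to check whether the Vulkan backend itself is corrupting the output is to force the model fully onto the CPU and re-run the same prompt; `num_gpu` is a standard Ollama runtime parameter, and the model name below is simply the one from this report:

# If the CPU-only output is coherent, the problem is in the Vulkan path, not the model
./ollama run llama3.2:3b
>>> /set parameter num_gpu 0
>>> hey give me a joke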

Running this as a TrueNAS app:

Name: ollama
App Version: v0.14.2
Version: v1.1.51

As shown in Open WebUI:

Open WebUI Version: v0.7.2
Ollama Version: 0.14.2

Environment variables tried:

OLLAMA_VULKAN=1
DEVICE=Arc
OLLAMA_INTEL_GPU=true
OLLAMA_NUM_GPU=999
ZES_ENABLE_SYSMAN=1
# ./ollama ls
NAME             ID              SIZE      MODIFIED          
llama3.2:3b      a80c4f17acd5    2.0 GB    7 minutes ago        
mistral:7b       6577803aa9a0    4.4 GB    20 minutes ago       
gemma3:latest    a2af6cc3eb7f    3.3 GB    About an hour ago
# ./ollama ps
NAME           ID              SIZE      PROCESSOR    CONTEXT    UNTIL   
llama3.2:3b    a80c4f17acd5    2.8 GB    100% GPU     4096       Forever
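
For anyone trying to reproduce this outside TrueNAS, a roughly equivalent standalone Docker invocation with the same variables might look like the sketch below; the image tag, volume name, and /dev/dri passthrough are assumptions, not taken from the actual TrueNAS app config (the host port 30068 matches OLLAMA_HOST in the config dump further down):

# Pass the Intel GPU through via /dev/dri and enable the experimental Vulkan backend
docker run -d --name ollama \
  --device /dev/dri \
  -e OLLAMA_VULKAN=1 \
  -e ZES_ENABLE_SYSMAN=1 \
  -v ollama:/root/.ollama \
  -p 30068:11434 \
  ollama/ollama:0.14.2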

Example prompt:
"hey give me a joke"
Response:

llama3.2:3b
Today at 1:25 PM
AGAIN Bonnielausunnable Binder evacuated evac binderuginlausroneig binder Consortlaus-Fi thức لیگ旋elsorna evaclaus Thur Rule decl thức binder binder stylinglaus binder Coollaus Binderichten Binder Bonnieudas evac launder Mig evac declig declрипrieflausminsteraramellaus Binder SurveyFileSync бокbeklauslausemean-Javadocolvlah launderlaus Declaration.X evac RicolausODBamsmium binder mergerbuster Consortlaus-Fiilies旋 usherlaus clickableudgebir evac launder DOIlaus binder stylingelsievichten بخlaus Turnbullorna playableeworld launder thức thứcig Fieldsunnable hydrogen Mig旋 decllaus thứclausminsterrief thứcрип Binder бокiras launderlauslauslauslauslauslah Declarationเอก mistr Millenn Blackburnlausaramel Surveylaus Famouslah Thurminstermium.android Consort binderlaus Accord tes Miglah.X Mig Mig Ricolaus evacams Rico
[Screenshots attached showing the garbled Open WebUI responses: https://github.com/user-attachments/assets/e74c80aa-f4c2-464f-85fc-4cb9f229bac6 and https://github.com/user-attachments/assets/04c79e45-b78e-46ef-868d-e975c04fd412]

Relevant log output

llama_model_load: vocab only - skipping tensors
time=2026-01-17T19:20:03.376Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --port 38905"
time=2026-01-17T19:20:03.376Z level=INFO source=sched.go:452 msg="system memory" total="7.9 GiB" free="7.8 GiB" free_swap="0 B"
time=2026-01-17T19:20:03.377Z level=INFO source=sched.go:459 msg="gpu memory" id=8680a156-0800-0000-0700-000000000000 library=Vulkan available="6.7 GiB" free="7.1 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-01-17T19:20:03.377Z level=INFO source=server.go:496 msg="loading model" "model layers"=29 requested=-1
time=2026-01-17T19:20:03.377Z level=INFO source=device.go:240 msg="model weights" device=Vulkan0 size="1.9 GiB"
time=2026-01-17T19:20:03.377Z level=INFO source=device.go:251 msg="kv cache" device=Vulkan0 size="448.0 MiB"
time=2026-01-17T19:20:03.377Z level=INFO source=device.go:262 msg="compute graph" device=Vulkan0 size="256.5 MiB"
time=2026-01-17T19:20:03.377Z level=INFO source=device.go:272 msg="total memory" size="2.6 GiB"
time=2026-01-17T19:20:03.384Z level=INFO source=runner.go:965 msg="starting go runner"
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Arc(tm) A750 Graphics (DG2) (Intel open-source Mesa driver) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none
load_backend: loaded Vulkan backend from /usr/lib/ollama/vulkan/libggml-vulkan.so
time=2026-01-17T19:20:03.412Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-01-17T19:20:03.412Z level=INFO source=runner.go:1001 msg="Server listening on 127.0.0.1:38905"
time=2026-01-17T19:20:03.420Z level=INFO source=runner.go:895 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Auto KvSize:4096 KvCacheType: NumThreads:2 GPULayers:29[ID:8680a156-0800-0000-0700-000000000000 Layers:29(0..28)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_vk_get_device_memory called: uuid 8680a156-0800-0000-0700-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
time=2026-01-17T19:20:03.420Z level=INFO source=server.go:1347 msg="waiting for llama runner to start responding"
time=2026-01-17T19:20:03.420Z level=INFO source=server.go:1381 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_vk_get_device_memory called: uuid 8680a156-0800-0000-0700-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
ggml_backend_vk_get_device_memory called: uuid 8680a156-0800-0000-0700-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
llama_model_load_from_file_impl: using device Vulkan0 (Intel(R) Arc(tm) A750 Graphics (DG2)) (0000:07:00.0) - 7315 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.2 3B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2
llama_model_loader: - kv   5:                         general.size_label str              = 3B
llama_model_loader: - kv   6:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   7:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   8:                          llama.block_count u32              = 28
llama_model_loader: - kv   9:                       llama.context_length u32              = 131072
llama_model_loader: - kv  10:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv  11:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv  12:                 llama.attention.head_count u32              = 24
llama_model_loader: - kv  13:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  15:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  16:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  17:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  18:                          general.file_type u32              = 15
llama_model_loader: - kv  19:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  20:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  27:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   58 tensors
llama_model_loader: - type q4_K:  168 tensors
llama_model_loader: - type q6_K:   29 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 1.87 GiB (5.01 BPW) 
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: no_alloc         = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 3072
print_info: n_embd_inp       = 3072
print_info: n_layer          = 28
print_info: n_head           = 24
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 3
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 8192
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: n_expert_groups  = 0
print_info: n_group_used     = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_yarn_log_mul= 0.0000
print_info: rope_finetuned   = unknown
print_info: model type       = 3B
print_info: model params     = 3.21 B
print_info: general.name     = Llama 3.2 3B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
ggml_backend_vk_get_device_memory called: uuid 8680a156-0800-0000-0700-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
ggml_backend_vk_get_device_memory called: uuid 8680a156-0800-0000-0700-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
load_tensors: offloading 28 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 29/29 layers to GPU
load_tensors:      Vulkan0 model buffer size =  1918.35 MiB
load_tensors:  Vulkan_Host model buffer size =   308.23 MiB
ggml_backend_vk_get_device_memory called: uuid 8680a156-0800-0000-0700-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_seq     = 4096
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = auto
llama_context: kv_unified    = false
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context: Vulkan_Host  output buffer size =     0.50 MiB
llama_kv_cache:    Vulkan0 KV buffer size =   448.00 MiB
llama_kv_cache: size =  448.00 MiB (  4096 cells,  28 layers,  1/1 seqs), K (f16):  224.00 MiB, V (f16):  224.00 MiB
llama_context: Flash Attention was auto, set to enabled
llama_context:    Vulkan0 compute buffer size =   256.50 MiB
llama_context: Vulkan_Host compute buffer size =    14.02 MiB
llama_context: graph nodes  = 875
llama_context: graph splits = 2
time=2026-01-17T19:20:05.426Z level=INFO source=server.go:1385 msg="llama runner started in 2.05 seconds"
time=2026-01-17T19:20:05.426Z level=INFO source=sched.go:526 msg="loaded runners" count=1
time=2026-01-17T19:20:05.426Z level=INFO source=server.go:1347 msg="waiting for llama runner to start responding"
time=2026-01-17T19:20:05.426Z level=INFO source=server.go:1385 msg="llama runner started in 2.05 seconds"
[GIN] 2026/01/17 - 19:20:05 | 200 |  2.417773737s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/01/17 - 19:20:09 | 200 |      18.144µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 19:20:09 | 200 |      14.698µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 19:20:16 | 200 |     590.693µs |      172.16.8.1 | GET      "/api/tags"
[GIN] 2026/01/17 - 19:20:16 | 200 |      16.952µs |      172.16.8.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 19:20:17 | 200 |      29.275µs |      172.16.8.1 | GET      "/api/version"
[GIN] 2026/01/17 - 19:22:28 | 200 |          2m7s |      172.16.8.1 | POST     "/api/chat"
[GIN] 2026/01/17 - 19:22:36 | 200 |      28.763µs |      172.16.8.1 | GET      "/api/version"
[GIN] 2026/01/17 - 19:26:26 | 200 |         1m14s |      172.16.8.1 | POST     "/api/chat"

OS

Docker

GPU

Intel

CPU

AMD

Ollama version

0.14.2

GiteaMirror added the bug label 2026-05-04 23:56:36 -05:00
Author
Owner

@the-bort-the commented on GitHub (Jan 17, 2026):

Decided to build a model from an existing .gguf I had on the host. It seems to use the GPU and works correctly in the CLI, but the output is still garbage in Open WebUI. It does recognize the newly created model.

# cat <<'EOF' > /root/.ollama/models/Llama-3.2-3B-Instruct-Q4_K_M_Modelfile
> FROM /root/.ollama/models/Llama-3.2-3B-Instruct-Q4_K_M.gguf 
> EOF

# ./ollama create Llama-3.2-3B-Instruct-Q4_K_M -f /root/.ollama/models/Llama-3.2-3B-Instruct-Q4_K_M_Modelfile 
gathering model components 
copying file sha256:6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff 100% 
parsing GGUF 
using existing layer sha256:6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff 
writing manifest 
success
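
One thing worth comparing with a bare `FROM <file>.gguf` Modelfile is the chat template: the library llama3.2:3b ships its own TEMPLATE, and depending on the Ollama version a raw GGUF import may end up with different template handling. `ollama show --modelfile` is a standard command; the output path is just an example:

# Dump the library model's full Modelfile (TEMPLATE included) for comparison
./ollama show --modelfile llama3.2:3b > /tmp/llama3.2-reference.Modelfile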
# ./ollama ls  
NAME                                   ID              SIZE      MODIFIED          
Llama-3.2-3B-Instruct-Q4_K_M:latest    5274c16b1cd9    2.0 GB    25 seconds ago
# ./ollama ps
NAME                                   ID              SIZE      PROCESSOR    CONTEXT    UNTIL   
Llama-3.2-3B-Instruct-Q4_K_M:latest    5274c16b1cd9    2.8 GB    100% GPU     4096       Forever
# ./ollama run Llama-3.2-3B-Instruct-Q4_K_M:latest
>>> give me facts about chicago bears football team
Here are some interesting facts about the Chicago Bears:

1. **Founded in 1919**: The Chicago Bears were founded on December 21, 1919, by A.E. Staley, a corn manufacturer.
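
Since the CLI output above looks correct while Open WebUI still garbles, hitting the Ollama HTTP API directly (the same /api/chat endpoint Open WebUI calls, per the GIN logs) would show whether the corruption happens server-side or in the UI layer; the port comes from OLLAMA_HOST in the config dump below:

# Same model, same endpoint Open WebUI uses; coherent output here points at the UI
curl http://127.0.0.1:30068/api/chat -d '{
  "model": "Llama-3.2-3B-Instruct-Q4_K_M:latest",
  "messages": [{"role": "user", "content": "hey give me a joke"}],
  "stream": false
}'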

More logs:

time=2026-01-17T20:28:53.777Z level=INFO source=routes.go:1614 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:30068 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-01-17T20:28:53.778Z level=INFO source=images.go:499 msg="total blobs: 16"
time=2026-01-17T20:28:53.778Z level=INFO source=images.go:506 msg="total unused blobs removed: 0"
time=2026-01-17T20:28:53.779Z level=INFO source=routes.go:1667 msg="Listening on [::]:30068 (version 0.14.2)"
time=2026-01-17T20:28:53.780Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-01-17T20:28:53.780Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 41825"
time=2026-01-17T20:28:53.797Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 38801"
time=2026-01-17T20:28:53.815Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 41763"
time=2026-01-17T20:28:53.864Z level=INFO source=types.go:42 msg="inference compute" id=8680a156-0800-0000-0700-000000000000 filter_id="" library=Vulkan compute=0.0 name=Vulkan0 description="Intel(R) Arc(tm) A750 Graphics (DG2)" libdirs=ollama,vulkan driver=0.0 pci_id=0000:07:00.0 type=discrete total="7.9 GiB" available="7.1 GiB"
time=2026-01-17T20:28:53.864Z level=INFO source=routes.go:1708 msg="entering low vram mode" "total vram"="7.9 GiB" threshold="20.0 GiB"
[GIN] 2026/01/17 - 20:31:11 | 200 |      29.706µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:31:11 | 200 |      72.195µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 20:31:12 | 200 |      17.703µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:31:12 | 200 |     731.366µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/01/17 - 20:31:27 | 200 |      17.723µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:31:27 | 500 |       58.68µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:31:52 | 200 |       16.28µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:31:52 | 500 |       26.52µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:33:21 | 200 |      16.942µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:33:21 | 500 |      27.241µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:34:25 | 200 |      18.374µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:34:29 | 201 |  1.936075089s |       127.0.0.1 | POST     "/api/blobs/sha256:6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff"
[GIN] 2026/01/17 - 20:34:29 | 200 |  133.807738ms |       127.0.0.1 | POST     "/api/create"
[GIN] 2026/01/17 - 20:34:55 | 200 |       21.26µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:34:55 | 200 |     800.625µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/01/17 - 20:35:08 | 200 |      23.063µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:35:08 | 200 |   93.335444ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:35:08 | 200 |   96.190556ms |       127.0.0.1 | POST     "/api/show"
time=2026-01-17T20:35:08.554Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36985"
llama_model_loader: loaded meta data with 35 key-value pairs and 255 tensors from /root/.ollama/models/blobs/sha256-6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.2 3B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2
llama_model_loader: - kv   5:                         general.size_label str              = 3B
llama_model_loader: - kv   6:                            general.license str              = llama3.2
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 28
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 24
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  18:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  19:                          general.file_type u32              = 15
llama_model_loader: - kv  20:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  21:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  22:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  23:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  24:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  25:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  26:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  27:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  29:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  30:               general.quantization_version u32              = 2
llama_model_loader: - kv  31:                      quantize.imatrix.file str              = /models_out/Llama-3.2-3B-Instruct-GGU...
llama_model_loader: - kv  32:                   quantize.imatrix.dataset str              = /training_dir/calibration_datav3.txt
llama_model_loader: - kv  33:             quantize.imatrix.entries_count i32              = 196
llama_model_loader: - kv  34:              quantize.imatrix.chunks_count i32              = 125
llama_model_loader: - type  f32:   58 tensors
llama_model_loader: - type q4_K:  168 tensors
llama_model_loader: - type q6_K:   29 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 1.87 GiB (5.01 BPW) 
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: no_alloc         = 0
print_info: model type       = ?B
print_info: model params     = 3.21 B
print_info: general.name     = Llama 3.2 3B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2026-01-17T20:35:08.860Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff --port 35809"
time=2026-01-17T20:35:08.860Z level=INFO source=sched.go:452 msg="system memory" total="7.9 GiB" free="7.8 GiB" free_swap="0 B"
time=2026-01-17T20:35:08.860Z level=INFO source=sched.go:459 msg="gpu memory" id=8680a156-0800-0000-0700-000000000000 library=Vulkan available="6.7 GiB" free="7.1 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-01-17T20:35:08.860Z level=INFO source=server.go:496 msg="loading model" "model layers"=29 requested=-1
time=2026-01-17T20:35:08.861Z level=INFO source=device.go:240 msg="model weights" device=Vulkan0 size="1.9 GiB"
time=2026-01-17T20:35:08.861Z level=INFO source=device.go:251 msg="kv cache" device=Vulkan0 size="448.0 MiB"
time=2026-01-17T20:35:08.861Z level=INFO source=device.go:262 msg="compute graph" device=Vulkan0 size="256.5 MiB"
time=2026-01-17T20:35:08.861Z level=INFO source=device.go:272 msg="total memory" size="2.6 GiB"
time=2026-01-17T20:35:08.867Z level=INFO source=runner.go:965 msg="starting go runner"
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Arc(tm) A750 Graphics (DG2) (Intel open-source Mesa driver) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none
load_backend: loaded Vulkan backend from /usr/lib/ollama/vulkan/libggml-vulkan.so
time=2026-01-17T20:35:08.900Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-01-17T20:35:08.900Z level=INFO source=runner.go:1001 msg="Server listening on 127.0.0.1:35809"
time=2026-01-17T20:35:08.903Z level=INFO source=runner.go:895 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Auto KvSize:4096 KvCacheType: NumThreads:2 GPULayers:29[ID:8680a156-0800-0000-0700-000000000000 Layers:29(0..28)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_vk_get_device_memory called: uuid 8680a156-0800-0000-0700-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
time=2026-01-17T20:35:08.904Z level=INFO source=server.go:1347 msg="waiting for llama runner to start responding"
ggml_backend_vk_get_device_memory called: uuid 8680a156-0800-0000-0700-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
time=2026-01-17T20:35:08.904Z level=INFO source=server.go:1381 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_vk_get_device_memory called: uuid 8680a156-0800-0000-0700-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
llama_model_load_from_file_impl: using device Vulkan0 (Intel(R) Arc(tm) A750 Graphics (DG2)) (0000:07:00.0) - 7315 MiB free
llama_model_loader: loaded meta data with 35 key-value pairs and 255 tensors from /root/.ollama/models/blobs/sha256-6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.2 3B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2
llama_model_loader: - kv   5:                         general.size_label str              = 3B
llama_model_loader: - kv   6:                            general.license str              = llama3.2
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 28
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 24
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  18:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  19:                          general.file_type u32              = 15
llama_model_loader: - kv  20:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  21:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  22:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  23:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  24:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  25:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  26:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  27:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  29:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  30:               general.quantization_version u32              = 2
llama_model_loader: - kv  31:                      quantize.imatrix.file str              = /models_out/Llama-3.2-3B-Instruct-GGU...
llama_model_loader: - kv  32:                   quantize.imatrix.dataset str              = /training_dir/calibration_datav3.txt
llama_model_loader: - kv  33:             quantize.imatrix.entries_count i32              = 196
llama_model_loader: - kv  34:              quantize.imatrix.chunks_count i32              = 125
llama_model_loader: - type  f32:   58 tensors
llama_model_loader: - type q4_K:  168 tensors
llama_model_loader: - type q6_K:   29 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 1.87 GiB (5.01 BPW) 
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: no_alloc         = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 3072
print_info: n_embd_inp       = 3072
print_info: n_layer          = 28
print_info: n_head           = 24
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 3
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 8192
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: n_expert_groups  = 0
print_info: n_group_used     = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_yarn_log_mul= 0.0000
print_info: rope_finetuned   = unknown
print_info: model type       = 3B
print_info: model params     = 3.21 B
print_info: general.name     = Llama 3.2 3B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
ggml_backend_vk_get_device_memory called: uuid 8680a156-0800-0000-0700-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
ggml_backend_vk_get_device_memory called: uuid 8680a156-0800-0000-0700-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
load_tensors: offloading 28 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 29/29 layers to GPU
load_tensors:      Vulkan0 model buffer size =  1918.35 MiB
load_tensors:  Vulkan_Host model buffer size =   308.23 MiB
ggml_backend_vk_get_device_memory called: uuid 8680a156-0800-0000-0700-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_seq     = 4096
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = auto
llama_context: kv_unified    = false
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context: Vulkan_Host  output buffer size =     0.50 MiB
llama_kv_cache:    Vulkan0 KV buffer size =   448.00 MiB
llama_kv_cache: size =  448.00 MiB (  4096 cells,  28 layers,  1/1 seqs), K (f16):  224.00 MiB, V (f16):  224.00 MiB
llama_context: Flash Attention was auto, set to enabled
llama_context:    Vulkan0 compute buffer size =   256.50 MiB
llama_context: Vulkan_Host compute buffer size =    14.02 MiB
llama_context: graph nodes  = 875
llama_context: graph splits = 2
time=2026-01-17T20:35:11.160Z level=INFO source=server.go:1385 msg="llama runner started in 2.30 seconds"
time=2026-01-17T20:35:11.160Z level=INFO source=sched.go:526 msg="loaded runners" count=1
time=2026-01-17T20:35:11.160Z level=INFO source=server.go:1347 msg="waiting for llama runner to start responding"
time=2026-01-17T20:35:11.160Z level=INFO source=server.go:1385 msg="llama runner started in 2.30 seconds"
[GIN] 2026/01/17 - 20:35:11 | 200 |  2.688392935s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/01/17 - 20:35:56 | 200 |      16.781µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:35:56 | 200 |     648.933µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/01/17 - 20:35:58 | 200 |      17.703µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:35:58 | 200 |      13.215µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 20:36:10 | 200 |      14.888µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:36:10 | 200 |   93.756681ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:36:10 | 200 |     720.997µs |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/01/17 - 20:36:32 | 200 |      17.322µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:36:32 | 200 |   93.727637ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:36:32 | 200 |   94.145346ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:36:32 | 200 |   80.046289ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/01/17 - 20:36:37 | 200 |      16.871µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:36:37 | 200 |      13.525µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 20:36:45 | 200 |      18.214µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:36:45 | 200 |      15.909µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 20:36:47 | 200 |      18.765µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:36:47 | 200 |   93.896782ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:36:47 | 200 |   93.540918ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:36:47 | 200 |   83.230868ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/01/17 - 20:36:59 | 200 |     662.769µs |      172.16.8.1 | GET      "/api/tags"
[GIN] 2026/01/17 - 20:36:59 | 200 |      21.339µs |      172.16.8.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 20:37:01 | 200 |     704.476µs |      172.16.8.1 | GET      "/api/tags"
[GIN] 2026/01/17 - 20:37:01 | 200 |      17.542µs |      172.16.8.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 20:37:02 | 200 |      29.585µs |      172.16.8.1 | GET      "/api/version"
[GIN] 2026/01/17 - 20:37:15 | 200 |      17.673µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:37:15 | 200 |      12.053µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 20:37:19 | 200 |      18.695µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:37:19 | 200 |     647.259µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/01/17 - 20:37:30 | 200 |      18.264µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:37:30 | 200 |   95.126639ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:37:30 | 200 |   95.623428ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:37:30 | 200 |   82.983196ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/01/17 - 20:37:41 | 200 |      18.314µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:37:41 | 200 |      16.711µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 20:37:43 | 200 |         2m29s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2026/01/17 - 20:37:44 | 200 |      19.406µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:37:44 | 200 |    96.77547ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:37:44 | 200 |   97.389919ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:37:44 | 200 |   82.705156ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/01/17 - 20:38:23 | 200 | 25.890474155s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2026/01/17 - 20:38:35 | 200 |      17.763µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:38:35 | 200 |       15.91µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 20:38:44 | 200 |      16.892µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:38:44 | 200 |       16.27µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 20:38:58 | 200 |  7.958619343s |      172.16.8.1 | POST     "/api/chat"
[GIN] 2026/01/17 - 20:39:06 | 200 |  4.396977442s |      172.16.8.1 | POST     "/api/chat"
[GIN] 2026/01/17 - 20:39:13 | 200 |      17.643µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:39:13 | 200 |   93.884459ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:39:13 | 200 |   96.122881ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:39:14 | 200 |   84.569878ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/01/17 - 20:39:33 | 200 |     675.081µs |      172.16.8.1 | GET      "/api/tags"
[GIN] 2026/01/17 - 20:39:33 | 200 |      14.567µs |      172.16.8.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 20:39:34 | 200 |      24.406µs |      172.16.8.1 | GET      "/api/version"
[GIN] 2026/01/17 - 20:40:01 | 200 |      18.705µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:40:01 | 200 |      14.567µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 20:40:09 | 200 |     793.222µs |      172.16.8.1 | GET      "/api/tags"
[GIN] 2026/01/17 - 20:40:09 | 200 |      16.822µs |      172.16.8.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 20:40:12 | 200 | 54.546692402s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2026/01/17 - 20:40:16 | 200 |     737.508µs |      172.16.8.1 | GET      "/api/tags"
[GIN] 2026/01/17 - 20:40:16 | 200 |      25.387µs |      172.16.8.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 20:40:23 | 200 |      22.122µs |      172.16.8.1 | GET      "/api/version"
[GIN] 2026/01/17 - 20:41:32 | 200 |         1m52s |      172.16.8.1 | POST     "/api/chat"
[GIN] 2026/01/17 - 20:44:40 | 200 |       15.92µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/17 - 20:44:40 | 200 |  100.335962ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:44:40 | 200 |   98.198146ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/17 - 20:44:40 | 200 |   82.554093ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/01/17 - 20:45:22 | 200 |  26.26173154s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2026/01/17 - 20:46:56 | 200 |     832.495µs |      172.16.8.1 | GET      "/api/tags"
[GIN] 2026/01/17 - 20:46:56 | 200 |      13.806µs |      172.16.8.1 | GET      "/api/ps"
[GIN] 2026/01/17 - 20:46:56 | 200 |      32.391µs |      172.16.8.1 | GET      "/api/version"
[GIN] 2026/01/17 - 20:48:27 | 200 | 40.639601043s |      172.16.8.1 | POST     "/api/chat"

@rick-github commented on GitHub (Jan 17, 2026):

Where did `/root/.ollama/models/Llama-3.2-3B-Instruct-Q4_K_M.gguf` come from?

@the-bort-the commented on GitHub (Jan 17, 2026):

It's in a dataset on the host already; I added the location as a bind mount. I thought I'd try creating a model from a Modelfile plus a .gguf, as sketched below.
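
For reference, the Modelfile route used here is just a one-line wrapper around the local GGUF (a sketch; the path is the one discussed above):

```
# Modelfile: wrap an existing local GGUF as an Ollama model
FROM /root/.ollama/models/Llama-3.2-3B-Instruct-Q4_K_M.gguf
```

followed by `ollama create Llama-3.2-3B-Instruct-Q4_K_M -f Modelfile`.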

@rick-github commented on GitHub (Jan 17, 2026):

Where did you download it from?

@the-bort-the commented on GitHub (Jan 17, 2026):

[Hugging face](https://huggingface.co/bartowski/Llama-3.2-3B-Instruct-GGUF)

@Aravind1998 commented on GitHub (Jan 18, 2026):

@rick-github

Facing the same issue here:

```
ollama run gemma3:4b
>>> hi
 പക്ഷേตะ та messобътतरहచెียวтہم`],  ‘’,  ‘mass’:  ’’:  ’’:  ‘’: ‘’: 
‘’:’’’:’’’:‘’:‘’:‘’:‘’:‘’:’’:’’’:’’’’’:’’’’’’’:’’’’’’’’’’’’’’’’’’’’’’’’’’’’’’’

>>> Send a message (/? for help)
```

OS: Ubuntu 24.04 


Added this to my Ollama systemd service file:

```
Environment="OLLAMA_VULKAN=1"
```
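
For anyone reproducing this on a systemd install, the usual way to add that line is a drop-in override rather than editing the unit file directly (a sketch; the unit name `ollama.service` matches the standard Linux install but may differ):

```
# systemctl edit ollama.service
# creates /etc/systemd/system/ollama.service.d/override.conf containing:
[Service]
Environment="OLLAMA_VULKAN=1"
```

followed by `systemctl daemon-reload && systemctl restart ollama`.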

Ollama version:

```
ollama -v
ollama version is 0.14.2
```

@tristan-k commented on GitHub (Jan 18, 2026):

Same here on an integrated Intel ARC 130T.

@rick-github commented on GitHub (Jan 18, 2026):

@Aravind1998 What GPU?
@tristan-k What model? What version of ollama?

@Aravind1998 commented on GitHub (Jan 18, 2026):

@rick-github Intel Meteor Lake (Gen12)

@tristan-k commented on GitHub (Jan 18, 2026):

> @Aravind1998 What GPU?
> @tristan-k What model? What version of ollama?

@rick-github Arrow Lake-H

See my post [here](https://github.com/community-scripts/ProxmoxVE/discussions/9161#discussioncomment-15291234).

> Right now this seems to be broken on my Intel Arc 130T iGPU (Intel Core Ultra 5 225H) but the iGPU is detected.

@MDKAOD commented on GitHub (Jan 19, 2026):

I want to provide an additional data point: this appears to work fine on discrete Arc (770+380 multi-GPU setup).

@youke1686 commented on GitHub (Jan 20, 2026):

I have the same problem, and I seem to have found an important clue.

I ran the local model `qwen2.5-7b-instruct-q2_k.gguf`, which works well on CPU.

But when it runs on my AMD Radeon RX590 GME GPU via Vulkan, it just responds with meaningless tokens, and its GPU memory usage is about 3.5 GB.

So I tried it on llama.cpp (Vulkan) directly. It works well on my GPU, and its GPU memory usage is about 7 GB.

It seems that Ollama loads the model incorrectly, loading only half of it.

If I had only half of my brain, I couldn't speak a whole sentence either.
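
One way to check the half-loaded-model hypothesis is to compare the offload summaries both runtimes print at load time (a sketch; the model path and container name are assumptions, and `-ngl` is llama.cpp's layer-offload flag):

```
# Ollama's runner logs an offload summary at model load time, e.g.
# "offloaded 29/29 layers to GPU"; anything less means a partial offload.
journalctl -u ollama | grep "offloaded"          # systemd install
docker logs <container> 2>&1 | grep "offloaded"  # container install

# For comparison, llama.cpp with all layers forced onto the GPU:
./llama-cli -m qwen2.5-7b-instruct-q2_k.gguf -ngl 99 -p "hi"
```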

@Aravind1998 commented on GitHub (Jan 20, 2026):

@rick-github Any tentative date for the fixes?

@Aravind1998 commented on GitHub (Jan 26, 2026):

Is there any fix planned for this?

@the-bort-the commented on GitHub (Feb 2, 2026):

I managed to swap an Intel Arc A770 in and I'm still getting the same nonsense. This time with a newer version of ollama as well:

```
ollama version is 0.15.3
```

```
llama_model_load_from_file_impl: using device Vulkan0 (Intel(R) Arc(tm) A770 Graphics (DG2)) (0000:07:00.0) - 14659 MiB free
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Arc(tm) A770 Graphics (DG2) (Intel open-source Mesa driver) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none
```

```
# ollama ps
NAME           ID              SIZE      PROCESSOR    CONTEXT    UNTIL
llama3.2:3b    a80c4f17acd5    2.8 GB    100% GPU     4096       Forever
```

```
# ollama run llama3.2:3b
>>> hi
hotlaus Bonnie Mig Fields Declaration Fields decllah SurveyIPCragen Blackburn旋 Consort avalgrav Lands Turnbulllug Fieldsirasidasudas 
Valk弄olvavaavamium thứcemean FieldsmiumмяlahlahUST kne aval Tol Crime smearIPCavaButtonModule Turnbullutinlug ConsortriesolvIPC 
Consort avalavaintage aval Fields landsava.SM Lands KinKin其 Tol subj Turnbull αξ Fieldsaramel polyester landsemean aval Maveristra 
aval mist旋 кладbek Blanc Buster BonnieclothigarizonigarButtonModule strukcloth Bolton Strateg coat aval strutgrav Boltonigar
```
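
A quick way to isolate whether the corruption is specific to the Vulkan path (a sketch; the port matches the `OLLAMA_HOST` in the logs above, and `num_gpu: 0` is a standard per-request option that keeps all layers on the CPU):

```
curl http://127.0.0.1:30068/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "hey give me a joke",
  "stream": false,
  "options": {"num_gpu": 0}
}'
```

If the CPU-only response is coherent, the model blob itself is fine and the garbage is coming from the Vulkan backend.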

@rick-github commented on GitHub (Feb 2, 2026):

Ah, finally something I can test. I have an A770 being delivered in the middle of the week. Let me see if I can reproduce.

@the-bort-the commented on GitHub (Feb 2, 2026):

I'm trying bigger models now and getting a few things like this in the logs:

```
# ollama ps
NAME          ID              SIZE      PROCESSOR    CONTEXT    UNTIL
gemma3:12b    f4031aab637d    9.9 GB    100% GPU     4096       4 minutes from now
# ollama ls
NAME             ID              SIZE      MODIFIED
gemma3:12b       f4031aab637d    8.1 GB    4 minutes ago
```

```
2026-02-02 00:27:20.422531+00:00 panic: failed to sample token
2026-02-02 00:27:20.422576+00:00 2026-02-02T00:27:20.422576847Z
2026-02-02 00:27:20.422590+00:00 goroutine 459 [running]:
2026-02-02 00:27:20.422602+00:00 github.com/ollama/ollama/runner/ollamarunner.(*Server).computeBatch(0xc000234f00, {0x0, {0x561d8d82ecf0, 0xc0002fe100}, {0x561d8d83a940, 0xc00047c960}, {0xc0000ac500, 0x12, 0x20}, {{0x561d8d83a940, ...}, ...}, ...})
2026-02-02 00:27:20.422628+00:00 github.com/ollama/ollama/runner/ollamarunner/runner.go:763 +0x1a85
2026-02-02 00:27:20.422637+00:00 created by github.com/ollama/ollama/runner/ollamarunner.(*Server).run in goroutine 13
2026-02-02 00:27:20.422646+00:00 github.com/ollama/ollama/runner/ollamarunner/runner.go:458 +0x2cd
2026-02-02 00:27:20.506139+00:00 time=2026-02-02T00:27:20.505Z level=ERROR source=server.go:1607 msg="post predict" error="Post \"http://127.0.0.1:45821/completion\": EOF"
```

@the-bort-the commented on GitHub (Feb 2, 2026):

@MDKAOD - May I ask if you had any specific env variables set?

@MDKAOD commented on GitHub (Feb 2, 2026):

Repo: `ollama/ollama:latest`
Envar: `OLLAMA_INTEL_GPU=0,1`
Device passthrough: `/dev/dri`
Envar: `OLLAMA_NUM_GPU=999` (I don't know if this is needed)
Envar: `OLLAMA_VULKAN=1 ollama serve` (I don't know if `ollama serve` is actually necessary, but I seem to only have stability with the whole line)
Envar: `GGML_VK_VISIBLE_DEVICES=0,1` (I think this one is the key. I believe the `OLLAMA_INTEL_GPU` envar was for the ipex container, which I modified once Vulkan support came out)

I have mixed, but working results with image interpretation as well. It's slow, but it works! Let me know if I can answer any other questions.

@rick-github commented on GitHub (Feb 2, 2026):

`OLLAMA_INTEL_GPU` and `OLLAMA_NUM_GPU` are not ollama configuration variables. `OLLAMA_VULKAN` should just be set to `1`, i.e. no `ollama serve` on the end.
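
Applying that correction, a minimal container invocation would look something like this (a sketch; the named volume and the default port 11434 are assumptions, not taken from this thread):

```
docker run -d --name ollama \
  --device /dev/dri:/dev/dri \
  -e OLLAMA_VULKAN=1 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama:latest
```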

@MDKAOD commented on GitHub (Feb 2, 2026):

Thanks @rick-github. I wasn't sure what I had modified over time; I'd been tweaking this configuration for the better part of the last year with different containers and custom builds from before Ollama supported it natively.

@the-bort-the commented on GitHub (Feb 4, 2026):

I'm starting to get favorable results, with no crashing or nonsense text. I'm going to continue testing, but I noticed a new release candidate and bumped to that image: `ollama/ollama:0.15.5-rc1`. I also converted the TrueNAS app into a custom app so I can actually see the compose file. Below is what I've been testing with.

```
services:
  ollama:
    cap_drop:
      - ALL
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 12000M
    devices:
      - /dev/dri:/dev/dri
    environment:
      NVIDIA_VISIBLE_DEVICES: void
      OLLAMA_HOST: 0.0.0.0:30068
      OLLAMA_VULKAN: '1'
      TZ: Etc/UTC
    group_add:
      - 44
      - 107
      - 568
    healthcheck:
      interval: 30s
      retries: 5
      start_interval: 2s
      start_period: 15s
      test:
        - CMD
        - timeout
        - '1'
        - bash
        - '-c'
        - cat < /dev/null > /dev/tcp/127.0.0.1/30068
      timeout: 5s
    image: ollama/ollama:0.15.5-rc1
    platform: linux/amd64
    ports:
      - mode: ingress
        protocol: tcp
        published: 30068
        target: 30068
    privileged: True
    restart: unless-stopped
    security_opt:
      - no-new-privileges=true
    stdin_open: False
    tty: False
    user: root:root
    volumes:
      - /mnt/apps/ai-models:/root/.ollama/models
      - bind:
          create_host_path: False
          propagation: rprivate
        read_only: False
        source: /mnt/.ix-apps/app_mounts/ollama/data
        target: /root/.ollama
        type: bind
volumes: {}
```
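
A couple of quick checks that a container brought up from this compose file actually sees the GPU (a sketch; the container name depends on the compose project, so `<container>` is a placeholder):

```
# The DRI device nodes (card0, renderD128, ...) should be visible inside
docker exec <container> ls -l /dev/dri

# Confirm the Vulkan toggle made it into the container environment
docker exec <container> env | grep OLLAMA_VULKAN
```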
Reference: github-starred/ollama#71080