[PR #15205] Draft: ml: support openvino #61770

Open
opened 2026-04-29 16:47:23 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/15205
Author: @jclab-joseph
Created: 4/2/2026
Status: 🔄 Open

Base: main ← Head: feat/openvino


📝 Commits (6)

  • d2f7851 copy ggml-openvino from llama.cpp/ggml/src/ggml-openvino
  • 7e8ed03 add openvino backend
  • 05aef36 feat: build script
  • 9723800 ggml-openvino: detect cpu/npu(as igpu)/gpu
  • 8f00440 fix build
  • 33a8ca8 refactor: use ggml_openvino_is_npu instead of ggml_openvino_is_integrated_device

📊 Changes

57 files changed (+7894 additions, -2 deletions)

View changed files

📝 .github/workflows/release.yaml (+46 -1)
📝 .github/workflows/test.yaml (+48 -0)
📝 CMakeLists.txt (+9 -0)
📝 CMakePresets.json (+13 -0)
📝 Dockerfile (+20 -0)
📝 docs/development.md (+20 -0)
➕ ml/backend/ggml/ggml/include/ggml-openvino.h (+37 -0)
📝 ml/backend/ggml/ggml/src/CMakeLists.txt (+1 -0)
📝 ml/backend/ggml/ggml/src/ggml-backend-reg.cpp (+8 -0)
➕ ml/backend/ggml/ggml/src/ggml-openvino/.clang-format (+154 -0)
➕ ml/backend/ggml/ggml/src/ggml-openvino/CMakeLists.txt (+20 -0)
➕ ml/backend/ggml/ggml/src/ggml-openvino/ggml-decoder.cpp (+975 -0)
➕ ml/backend/ggml/ggml/src/ggml-openvino/ggml-decoder.h (+294 -0)
➕ ml/backend/ggml/ggml/src/ggml-openvino/ggml-openvino-extra.cpp (+373 -0)
➕ ml/backend/ggml/ggml/src/ggml-openvino/ggml-openvino-extra.h (+182 -0)
➕ ml/backend/ggml/ggml/src/ggml-openvino/ggml-openvino.cpp (+1127 -0)
➕ ml/backend/ggml/ggml/src/ggml-openvino/ggml-quants.cpp (+884 -0) (see the q4_0 dequant sketch after this list)
➕ ml/backend/ggml/ggml/src/ggml-openvino/ggml-quants.h (+153 -0)
➕ ml/backend/ggml/ggml/src/ggml-openvino/openvino/decoder.h (+74 -0)
➕ ml/backend/ggml/ggml/src/ggml-openvino/openvino/frontend.cpp (+27 -0)

...and 37 more files
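
Among the new files above, ggml-quants.cpp presumably converts ggml's quantized block formats into floats the OpenVINO graphs can consume. For orientation: ggml's q4_0 format (the quantization of the test model used below) packs 32 weights per block as one fp16 scale d plus 16 bytes of 4-bit quants, with each weight recovered as d * (q - 8). A minimal dequantization sketch, with the scale already widened to fp32 (illustrative, not the PR's code):

```
// Minimal sketch of ggml q4_0 dequantization (illustrative; not the PR's
// ggml-quants.cpp). A q4_0 block stores an fp16 scale plus 16 bytes that
// pack 32 four-bit quants; each weight is d * (q - 8). The scale is taken
// here already widened to fp32 to keep the sketch self-contained.
#include <cstdint>
#include <cstdio>

static void dequant_q4_0_block(float d, const uint8_t qs[16], float out[32]) {
    for (int i = 0; i < 16; ++i) {
        out[i]      = d * (int(qs[i] & 0x0F) - 8); // low nibbles: first half
        out[i + 16] = d * (int(qs[i] >> 4)   - 8); // high nibbles: second half
    }
}

int main() {
    uint8_t qs[16] = {0};
    qs[0] = 0xF0;                    // quant 0 in the low nibble, 15 in the high
    float w[32];
    dequant_q4_0_block(0.5f, qs, w);
    printf("%g %g\n", w[0], w[16]);  // -4 (= 0.5*(0-8)) and 3.5 (= 0.5*(15-8))
    return 0;
}
```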

📄 Description

I brought over https://github.com/ggml-org/llama.cpp/tree/master/ggml/src/ggml-openvino as is.

NOT WORKING YET

Usage:

export GGML_OPENVINO_DEVICE=NPU
ollama serve
ollama run hf.co/unsloth/Llama-3.2-1B-Instruct-GGUF:Q4_0 "hello"
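
Judging by commit 9723800 ("detect cpu/npu(as igpu)/gpu"), GGML_OPENVINO_DEVICE presumably also accepts CPU and GPU; NPU is the value exercised here, and the name has to match a device the OpenVINO runtime reports. A standalone probe using OpenVINO's public C++ API (my sketch, not this PR's code) to list them:

```
// Standalone probe: list the devices the OpenVINO runtime can see on this
// machine (e.g. CPU, GPU, NPU). Illustrative; GGML_OPENVINO_DEVICE has to
// name one of these. Build by linking the OpenVINO runtime (-lopenvino).
#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    for (const auto & dev : core.get_available_devices()) {
        std::cout << dev << ": "
                  << core.get_property(dev, ov::device::full_name) << "\n";
    }
    return 0;
}
```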

Fixes:

  • https://github.com/ollama/ollama/issues/2169
  • https://github.com/ollama/ollama/issues/5747

Stuck Log:

time=2026-04-02T15:13:49.266+09:00 level=INFO source=sched.go:484 msg="system memory" total="31.4 GiB" free="21.6 GiB" free_swap="19.6 GiB"
time=2026-04-02T15:13:49.266+09:00 level=INFO source=sched.go:491 msg="gpu memory" id=OPENVINO0 library=OPENVINO available="21.2 GiB" free="21.6 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-04-02T15:13:49.266+09:00 level=INFO source=server.go:499 msg="loading model" "model layers"=17 requested=-1
time=2026-04-02T15:13:49.266+09:00 level=DEBUG source=ggml.go:659 msg="default cache size estimate" "attention MiB"=1024 "attention bytes"=1073741824 "recurrent MiB"=0 "recurrent bytes"=0
time=2026-04-02T15:13:49.267+09:00 level=DEBUG source=server.go:978 msg="available gpu" id=OPENVINO0 library=OPENVINO "available layer vram"="21.1 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="0 B"
time=2026-04-02T15:13:49.267+09:00 level=DEBUG source=server.go:978 msg="available gpu" id=OPENVINO0 library=OPENVINO "available layer vram"="19.0 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="2.1 GiB"
time=2026-04-02T15:13:49.267+09:00 level=DEBUG source=server.go:670 msg=memory estimate.OPENVINO0.ID=OPENVINO0 estimate.OPENVINO0.Weights="[35274752 35274752 34226176 34226176 34226176 34226176 34226176 34226176 34226176 34226176 34226176 34226176 34226176 34226176 34226176 34226176 215478272]" estimate.OPENVINO0.Cache="[67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 0]" estimate.OPENVINO0.Graph=2231371776
time=2026-04-02T15:13:49.267+09:00 level=INFO source=device.go:240 msg="model weights" device=OPENVINO0 size="729.7 MiB"
time=2026-04-02T15:13:49.267+09:00 level=INFO source=device.go:251 msg="kv cache" device=OPENVINO0 size="1.0 GiB"
time=2026-04-02T15:13:49.267+09:00 level=INFO source=device.go:262 msg="compute graph" device=OPENVINO0 size="2.1 GiB"
time=2026-04-02T15:13:49.267+09:00 level=INFO source=device.go:272 msg="total memory" size="3.8 GiB"
time=2026-04-02T15:13:49.316+09:00 level=INFO source=runner.go:965 msg="starting go runner"
time=2026-04-02T15:13:49.329+09:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\temp\ollama-out\lib\ollama
load_backend: loaded CPU backend from C:\temp\ollama-out\lib\ollama\ggml-cpu-alderlake.dll
time=2026-04-02T15:13:49.369+09:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\temp\ollama-out\lib\ollama\openvino
[WARNING] 15:13:49.649 [NPUZeroInitStructsHolder] Some features might not be available! Plugin L0 API minor version = 15, Driver L0 API minor version = 14
OpenVINO: using device NPU
load_backend: loaded OPENVINO backend from C:\temp\ollama-out\lib\ollama\openvino\ggml-openvino.dll
time=2026-04-02T15:13:49.656+09:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-04-02T15:13:49.657+09:00 level=INFO source=runner.go:1001 msg="Server listening on 127.0.0.1:57676"
time=2026-04-02T15:13:49.661+09:00 level=INFO source=runner.go:895 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Auto KvSize:32768 KvCacheType: NumThreads:6 GPULayers:17[ID:OPENVINO0 Layers:17(0..16)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
llama_model_load_from_file_impl: using device OPENVINO0 (OpenVINO Runtime) (unknown id) - 22093 MiB free
time=2026-04-02T15:13:49.662+09:00 level=INFO source=server.go:1352 msg="waiting for llama runner to start responding"
time=2026-04-02T15:13:49.663+09:00 level=INFO source=server.go:1386 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 36 key-value pairs and 147 tensors from C:\Users\USER\.ollama\models\blobs\sha256-66bfbb2d48bdb77cd56bd03ef820deff3c4a74b1a09de3b917ae13e72c1a70c2 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama-3.2-1B-Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2-1B-Instruct
llama_model_loader: - kv   5:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv   6:                         general.size_label str              = 1B
llama_model_loader: - kv   7:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv   8:                          llama.block_count u32              = 16
llama_model_loader: - kv   9:                       llama.context_length u32              = 131072
llama_model_loader: - kv  10:                     llama.embedding_length u32              = 2048
llama_model_loader: - kv  11:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv  12:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  13:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  15:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  16:                 llama.attention.key_length u32              = 64
llama_model_loader: - kv  17:               llama.attention.value_length u32              = 64
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 64
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 128004
llama_model_loader: - kv  28:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  29:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  30:               general.quantization_version u32              = 2
llama_model_loader: - kv  31:                          general.file_type u32              = 2
llama_model_loader: - kv  32:                      quantize.imatrix.file str              = Llama-3.2-1B-Instruct-GGUF/imatrix_un...
llama_model_loader: - kv  33:                   quantize.imatrix.dataset str              = unsloth_calibration_Llama-3.2-1B-Inst...
llama_model_loader: - kv  34:             quantize.imatrix.entries_count i32              = 112
llama_model_loader: - kv  35:              quantize.imatrix.chunks_count i32              = 689
llama_model_loader: - type  f32:   34 tensors
llama_model_loader: - type q4_0:  110 tensors
llama_model_loader: - type q4_1:    2 tensors
llama_model_loader: - type q6_K:    1 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_0
print_info: file size   = 729.75 MiB (4.95 BPW)
init_tokenizer: initializing tokenizer for type 2
load: control token: 128254 '<|reserved_special_token_246|>' is not marked as EOG
load: control token: 128249 '<|reserved_special_token_241|>' is not marked as EOG
load: control token: 128246 '<|reserved_special_token_238|>' is not marked as EOG
load: control token: 128243 '<|reserved_special_token_235|>' is not marked as EOG
load: control token: 128242 '<|reserved_special_token_234|>' is not marked as EOG
load: control token: 128241 '<|reserved_special_token_233|>' is not marked as EOG
load: control token: 128240 '<|reserved_special_token_232|>' is not marked as EOG
load: control token: 128235 '<|reserved_special_token_227|>' is not marked as EOG
load: control token: 128231 '<|reserved_special_token_223|>' is not marked as EOG
load: control token: 128230 '<|reserved_special_token_222|>' is not marked as EOG
load: control token: 128228 '<|reserved_special_token_220|>' is not marked as EOG
load: control token: 128225 '<|reserved_special_token_217|>' is not marked as EOG
load: control token: 128218 '<|reserved_special_token_210|>' is not marked as EOG
load: control token: 128214 '<|reserved_special_token_206|>' is not marked as EOG
load: control token: 128213 '<|reserved_special_token_205|>' is not marked as EOG
load: control token: 128207 '<|reserved_special_token_199|>' is not marked as EOG
load: control token: 128206 '<|reserved_special_token_198|>' is not marked as EOG
load: control token: 128204 '<|reserved_special_token_196|>' is not marked as EOG
load: control token: 128200 '<|reserved_special_token_192|>' is not marked as EOG
load: control token: 128199 '<|reserved_special_token_191|>' is not marked as EOG
load: control token: 128198 '<|reserved_special_token_190|>' is not marked as EOG
load: control token: 128196 '<|reserved_special_token_188|>' is not marked as EOG
load: control token: 128194 '<|reserved_special_token_186|>' is not marked as EOG
load: control token: 128193 '<|reserved_special_token_185|>' is not marked as EOG
load: control token: 128188 '<|reserved_special_token_180|>' is not marked as EOG
load: control token: 128187 '<|reserved_special_token_179|>' is not marked as EOG
load: control token: 128185 '<|reserved_special_token_177|>' is not marked as EOG
load: control token: 128184 '<|reserved_special_token_176|>' is not marked as EOG
load: control token: 128180 '<|reserved_special_token_172|>' is not marked as EOG
load: control token: 128179 '<|reserved_special_token_171|>' is not marked as EOG
load: control token: 128178 '<|reserved_special_token_170|>' is not marked as EOG
load: control token: 128177 '<|reserved_special_token_169|>' is not marked as EOG
load: control token: 128176 '<|reserved_special_token_168|>' is not marked as EOG
load: control token: 128175 '<|reserved_special_token_167|>' is not marked as EOG
load: control token: 128171 '<|reserved_special_token_163|>' is not marked as EOG
load: control token: 128170 '<|reserved_special_token_162|>' is not marked as EOG
load: control token: 128169 '<|reserved_special_token_161|>' is not marked as EOG
load: control token: 128168 '<|reserved_special_token_160|>' is not marked as EOG
load: control token: 128165 '<|reserved_special_token_157|>' is not marked as EOG
load: control token: 128162 '<|reserved_special_token_154|>' is not marked as EOG
load: control token: 128158 '<|reserved_special_token_150|>' is not marked as EOG
load: control token: 128156 '<|reserved_special_token_148|>' is not marked as EOG
load: control token: 128155 '<|reserved_special_token_147|>' is not marked as EOG
load: control token: 128154 '<|reserved_special_token_146|>' is not marked as EOG
load: control token: 128151 '<|reserved_special_token_143|>' is not marked as EOG
load: control token: 128149 '<|reserved_special_token_141|>' is not marked as EOG
load: control token: 128147 '<|reserved_special_token_139|>' is not marked as EOG
load: control token: 128146 '<|reserved_special_token_138|>' is not marked as EOG
load: control token: 128144 '<|reserved_special_token_136|>' is not marked as EOG
load: control token: 128142 '<|reserved_special_token_134|>' is not marked as EOG
load: control token: 128141 '<|reserved_special_token_133|>' is not marked as EOG
load: control token: 128138 '<|reserved_special_token_130|>' is not marked as EOG
load: control token: 128136 '<|reserved_special_token_128|>' is not marked as EOG
load: control token: 128135 '<|reserved_special_token_127|>' is not marked as EOG
load: control token: 128134 '<|reserved_special_token_126|>' is not marked as EOG
load: control token: 128133 '<|reserved_special_token_125|>' is not marked as EOG
load: control token: 128131 '<|reserved_special_token_123|>' is not marked as EOG
load: control token: 128128 '<|reserved_special_token_120|>' is not marked as EOG
load: control token: 128124 '<|reserved_special_token_116|>' is not marked as EOG
load: control token: 128123 '<|reserved_special_token_115|>' is not marked as EOG
load: control token: 128122 '<|reserved_special_token_114|>' is not marked as EOG
load: control token: 128119 '<|reserved_special_token_111|>' is not marked as EOG
load: control token: 128115 '<|reserved_special_token_107|>' is not marked as EOG
load: control token: 128112 '<|reserved_special_token_104|>' is not marked as EOG
load: control token: 128110 '<|reserved_special_token_102|>' is not marked as EOG
load: control token: 128109 '<|reserved_special_token_101|>' is not marked as EOG
load: control token: 128108 '<|reserved_special_token_100|>' is not marked as EOG
load: control token: 128106 '<|reserved_special_token_98|>' is not marked as EOG
load: control token: 128103 '<|reserved_special_token_95|>' is not marked as EOG
load: control token: 128102 '<|reserved_special_token_94|>' is not marked as EOG
load: control token: 128101 '<|reserved_special_token_93|>' is not marked as EOG
load: control token: 128097 '<|reserved_special_token_89|>' is not marked as EOG
load: control token: 128091 '<|reserved_special_token_83|>' is not marked as EOG
load: control token: 128090 '<|reserved_special_token_82|>' is not marked as EOG
load: control token: 128089 '<|reserved_special_token_81|>' is not marked as EOG
load: control token: 128087 '<|reserved_special_token_79|>' is not marked as EOG
load: control token: 128085 '<|reserved_special_token_77|>' is not marked as EOG
load: control token: 128081 '<|reserved_special_token_73|>' is not marked as EOG
load: control token: 128078 '<|reserved_special_token_70|>' is not marked as EOG
load: control token: 128076 '<|reserved_special_token_68|>' is not marked as EOG
load: control token: 128075 '<|reserved_special_token_67|>' is not marked as EOG
load: control token: 128073 '<|reserved_special_token_65|>' is not marked as EOG
load: control token: 128068 '<|reserved_special_token_60|>' is not marked as EOG
load: control token: 128067 '<|reserved_special_token_59|>' is not marked as EOG
load: control token: 128065 '<|reserved_special_token_57|>' is not marked as EOG
load: control token: 128063 '<|reserved_special_token_55|>' is not marked as EOG
load: control token: 128062 '<|reserved_special_token_54|>' is not marked as EOG
load: control token: 128060 '<|reserved_special_token_52|>' is not marked as EOG
load: control token: 128059 '<|reserved_special_token_51|>' is not marked as EOG
load: control token: 128057 '<|reserved_special_token_49|>' is not marked as EOG
load: control token: 128054 '<|reserved_special_token_46|>' is not marked as EOG
load: control token: 128046 '<|reserved_special_token_38|>' is not marked as EOG
load: control token: 128045 '<|reserved_special_token_37|>' is not marked as EOG
load: control token: 128044 '<|reserved_special_token_36|>' is not marked as EOG
load: control token: 128043 '<|reserved_special_token_35|>' is not marked as EOG
load: control token: 128038 '<|reserved_special_token_30|>' is not marked as EOG
load: control token: 128036 '<|reserved_special_token_28|>' is not marked as EOG
load: control token: 128035 '<|reserved_special_token_27|>' is not marked as EOG
load: control token: 128032 '<|reserved_special_token_24|>' is not marked as EOG
load: control token: 128028 '<|reserved_special_token_20|>' is not marked as EOG
load: control token: 128027 '<|reserved_special_token_19|>' is not marked as EOG
load: control token: 128024 '<|reserved_special_token_16|>' is not marked as EOG
load: control token: 128023 '<|reserved_special_token_15|>' is not marked as EOG
load: control token: 128022 '<|reserved_special_token_14|>' is not marked as EOG
load: control token: 128021 '<|reserved_special_token_13|>' is not marked as EOG
load: control token: 128018 '<|reserved_special_token_10|>' is not marked as EOG
load: control token: 128016 '<|reserved_special_token_8|>' is not marked as EOG
load: control token: 128015 '<|reserved_special_token_7|>' is not marked as EOG
load: control token: 128013 '<|reserved_special_token_5|>' is not marked as EOG
load: control token: 128011 '<|reserved_special_token_3|>' is not marked as EOG
load: control token: 128005 '<|reserved_special_token_2|>' is not marked as EOG
load: control token: 128004 '<|finetune_right_pad_id|>' is not marked as EOG
load: control token: 128002 '<|reserved_special_token_0|>' is not marked as EOG
load: control token: 128252 '<|reserved_special_token_244|>' is not marked as EOG
load: control token: 128190 '<|reserved_special_token_182|>' is not marked as EOG
load: control token: 128183 '<|reserved_special_token_175|>' is not marked as EOG
load: control token: 128137 '<|reserved_special_token_129|>' is not marked as EOG
load: control token: 128182 '<|reserved_special_token_174|>' is not marked as EOG
load: control token: 128040 '<|reserved_special_token_32|>' is not marked as EOG
load: control token: 128048 '<|reserved_special_token_40|>' is not marked as EOG
load: control token: 128092 '<|reserved_special_token_84|>' is not marked as EOG
load: control token: 128215 '<|reserved_special_token_207|>' is not marked as EOG
load: control token: 128107 '<|reserved_special_token_99|>' is not marked as EOG
load: control token: 128208 '<|reserved_special_token_200|>' is not marked as EOG
load: control token: 128145 '<|reserved_special_token_137|>' is not marked as EOG
load: control token: 128031 '<|reserved_special_token_23|>' is not marked as EOG
load: control token: 128129 '<|reserved_special_token_121|>' is not marked as EOG
load: control token: 128201 '<|reserved_special_token_193|>' is not marked as EOG
load: control token: 128074 '<|reserved_special_token_66|>' is not marked as EOG
load: control token: 128095 '<|reserved_special_token_87|>' is not marked as EOG
load: control token: 128186 '<|reserved_special_token_178|>' is not marked as EOG
load: control token: 128143 '<|reserved_special_token_135|>' is not marked as EOG
load: control token: 128229 '<|reserved_special_token_221|>' is not marked as EOG
load: control token: 128007 '<|end_header_id|>' is not marked as EOG
load: control token: 128055 '<|reserved_special_token_47|>' is not marked as EOG
load: control token: 128056 '<|reserved_special_token_48|>' is not marked as EOG
load: control token: 128061 '<|reserved_special_token_53|>' is not marked as EOG
load: control token: 128153 '<|reserved_special_token_145|>' is not marked as EOG
load: control token: 128152 '<|reserved_special_token_144|>' is not marked as EOG
load: control token: 128212 '<|reserved_special_token_204|>' is not marked as EOG
load: control token: 128172 '<|reserved_special_token_164|>' is not marked as EOG
load: control token: 128160 '<|reserved_special_token_152|>' is not marked as EOG
load: control token: 128041 '<|reserved_special_token_33|>' is not marked as EOG
load: control token: 128181 '<|reserved_special_token_173|>' is not marked as EOG
load: control token: 128094 '<|reserved_special_token_86|>' is not marked as EOG
load: control token: 128118 '<|reserved_special_token_110|>' is not marked as EOG
load: control token: 128236 '<|reserved_special_token_228|>' is not marked as EOG
load: control token: 128148 '<|reserved_special_token_140|>' is not marked as EOG
load: control token: 128042 '<|reserved_special_token_34|>' is not marked as EOG
load: control token: 128139 '<|reserved_special_token_131|>' is not marked as EOG
load: control token: 128173 '<|reserved_special_token_165|>' is not marked as EOG
load: control token: 128239 '<|reserved_special_token_231|>' is not marked as EOG
load: control token: 128157 '<|reserved_special_token_149|>' is not marked as EOG
load: control token: 128052 '<|reserved_special_token_44|>' is not marked as EOG
load: control token: 128026 '<|reserved_special_token_18|>' is not marked as EOG
load: control token: 128003 '<|reserved_special_token_1|>' is not marked as EOG
load: control token: 128019 '<|reserved_special_token_11|>' is not marked as EOG
load: control token: 128116 '<|reserved_special_token_108|>' is not marked as EOG
load: control token: 128161 '<|reserved_special_token_153|>' is not marked as EOG
load: control token: 128226 '<|reserved_special_token_218|>' is not marked as EOG
load: control token: 128159 '<|reserved_special_token_151|>' is not marked as EOG
load: control token: 128012 '<|reserved_special_token_4|>' is not marked as EOG
load: control token: 128088 '<|reserved_special_token_80|>' is not marked as EOG
load: control token: 128163 '<|reserved_special_token_155|>' is not marked as EOG
load: control token: 128113 '<|reserved_special_token_105|>' is not marked as EOG
load: control token: 128250 '<|reserved_special_token_242|>' is not marked as EOG
load: control token: 128125 '<|reserved_special_token_117|>' is not marked as EOG
load: control token: 128053 '<|reserved_special_token_45|>' is not marked as EOG
load: control token: 128224 '<|reserved_special_token_216|>' is not marked as EOG
load: control token: 128247 '<|reserved_special_token_239|>' is not marked as EOG
load: control token: 128251 '<|reserved_special_token_243|>' is not marked as EOG
load: control token: 128216 '<|reserved_special_token_208|>' is not marked as EOG
load: control token: 128006 '<|start_header_id|>' is not marked as EOG
load: control token: 128211 '<|reserved_special_token_203|>' is not marked as EOG
load: control token: 128077 '<|reserved_special_token_69|>' is not marked as EOG
load: control token: 128237 '<|reserved_special_token_229|>' is not marked as EOG
load: control token: 128086 '<|reserved_special_token_78|>' is not marked as EOG
load: control token: 128227 '<|reserved_special_token_219|>' is not marked as EOG
load: control token: 128058 '<|reserved_special_token_50|>' is not marked as EOG
load: control token: 128100 '<|reserved_special_token_92|>' is not marked as EOG
load: control token: 128209 '<|reserved_special_token_201|>' is not marked as EOG
load: control token: 128084 '<|reserved_special_token_76|>' is not marked as EOG
load: control token: 128071 '<|reserved_special_token_63|>' is not marked as EOG
load: control token: 128070 '<|reserved_special_token_62|>' is not marked as EOG
load: control token: 128049 '<|reserved_special_token_41|>' is not marked as EOG
load: control token: 128197 '<|reserved_special_token_189|>' is not marked as EOG
load: control token: 128072 '<|reserved_special_token_64|>' is not marked as EOG
load: control token: 128000 '<|begin_of_text|>' is not marked as EOG
load: control token: 128223 '<|reserved_special_token_215|>' is not marked as EOG
load: control token: 128217 '<|reserved_special_token_209|>' is not marked as EOG
load: control token: 128111 '<|reserved_special_token_103|>' is not marked as EOG
load: control token: 128203 '<|reserved_special_token_195|>' is not marked as EOG
load: control token: 128051 '<|reserved_special_token_43|>' is not marked as EOG
load: control token: 128030 '<|reserved_special_token_22|>' is not marked as EOG
load: control token: 128117 '<|reserved_special_token_109|>' is not marked as EOG
load: control token: 128010 '<|python_tag|>' is not marked as EOG
load: control token: 128238 '<|reserved_special_token_230|>' is not marked as EOG
load: control token: 128255 '<|reserved_special_token_247|>' is not marked as EOG
load: control token: 128202 '<|reserved_special_token_194|>' is not marked as EOG
load: control token: 128132 '<|reserved_special_token_124|>' is not marked as EOG
load: control token: 128248 '<|reserved_special_token_240|>' is not marked as EOG
load: control token: 128167 '<|reserved_special_token_159|>' is not marked as EOG
load: control token: 128127 '<|reserved_special_token_119|>' is not marked as EOG
load: control token: 128105 '<|reserved_special_token_97|>' is not marked as EOG
load: control token: 128039 '<|reserved_special_token_31|>' is not marked as EOG
load: control token: 128232 '<|reserved_special_token_224|>' is not marked as EOG
load: control token: 128166 '<|reserved_special_token_158|>' is not marked as EOG
load: control token: 128130 '<|reserved_special_token_122|>' is not marked as EOG
load: control token: 128114 '<|reserved_special_token_106|>' is not marked as EOG
load: control token: 128234 '<|reserved_special_token_226|>' is not marked as EOG
load: control token: 128191 '<|reserved_special_token_183|>' is not marked as EOG
load: control token: 128064 '<|reserved_special_token_56|>' is not marked as EOG
load: control token: 128140 '<|reserved_special_token_132|>' is not marked as EOG
load: control token: 128096 '<|reserved_special_token_88|>' is not marked as EOG
load: control token: 128098 '<|reserved_special_token_90|>' is not marked as EOG
load: control token: 128192 '<|reserved_special_token_184|>' is not marked as EOG
load: control token: 128093 '<|reserved_special_token_85|>' is not marked as EOG
load: control token: 128150 '<|reserved_special_token_142|>' is not marked as EOG
load: control token: 128222 '<|reserved_special_token_214|>' is not marked as EOG
load: control token: 128233 '<|reserved_special_token_225|>' is not marked as EOG
load: control token: 128220 '<|reserved_special_token_212|>' is not marked as EOG
load: control token: 128034 '<|reserved_special_token_26|>' is not marked as EOG
load: control token: 128033 '<|reserved_special_token_25|>' is not marked as EOG
load: control token: 128253 '<|reserved_special_token_245|>' is not marked as EOG
load: control token: 128195 '<|reserved_special_token_187|>' is not marked as EOG
load: control token: 128099 '<|reserved_special_token_91|>' is not marked as EOG
load: control token: 128189 '<|reserved_special_token_181|>' is not marked as EOG
load: control token: 128210 '<|reserved_special_token_202|>' is not marked as EOG
load: control token: 128174 '<|reserved_special_token_166|>' is not marked as EOG
load: control token: 128083 '<|reserved_special_token_75|>' is not marked as EOG
load: control token: 128080 '<|reserved_special_token_72|>' is not marked as EOG
load: control token: 128104 '<|reserved_special_token_96|>' is not marked as EOG
load: control token: 128082 '<|reserved_special_token_74|>' is not marked as EOG
load: control token: 128219 '<|reserved_special_token_211|>' is not marked as EOG
load: control token: 128017 '<|reserved_special_token_9|>' is not marked as EOG
load: control token: 128050 '<|reserved_special_token_42|>' is not marked as EOG
load: control token: 128205 '<|reserved_special_token_197|>' is not marked as EOG
load: control token: 128047 '<|reserved_special_token_39|>' is not marked as EOG
load: control token: 128164 '<|reserved_special_token_156|>' is not marked as EOG
load: control token: 128020 '<|reserved_special_token_12|>' is not marked as EOG
load: control token: 128069 '<|reserved_special_token_61|>' is not marked as EOG
load: control token: 128245 '<|reserved_special_token_237|>' is not marked as EOG
load: control token: 128121 '<|reserved_special_token_113|>' is not marked as EOG
load: control token: 128079 '<|reserved_special_token_71|>' is not marked as EOG
load: control token: 128037 '<|reserved_special_token_29|>' is not marked as EOG
load: control token: 128244 '<|reserved_special_token_236|>' is not marked as EOG
load: control token: 128029 '<|reserved_special_token_21|>' is not marked as EOG
load: control token: 128221 '<|reserved_special_token_213|>' is not marked as EOG
load: control token: 128066 '<|reserved_special_token_58|>' is not marked as EOG
load: control token: 128120 '<|reserved_special_token_112|>' is not marked as EOG
load: control token: 128014 '<|reserved_special_token_6|>' is not marked as EOG
load: control token: 128025 '<|reserved_special_token_17|>' is not marked as EOG
load: control token: 128126 '<|reserved_special_token_118|>' is not marked as EOG
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: no_alloc         = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 2048
print_info: n_embd_inp       = 2048
print_info: n_layer          = 16
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 64
print_info: n_embd_head_v    = 64
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 512
print_info: n_embd_v_gqa     = 512
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 8192
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: n_expert_groups  = 0
print_info: n_group_used     = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_yarn_log_mul= 0.0000
print_info: rope_finetuned   = unknown
print_info: model type       = 1B
print_info: model params     = 1.24 B
print_info: general.name     = Llama-3.2-1B-Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128004 '<|finetune_right_pad_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: layer   0 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer   1 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer   2 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer   3 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer   4 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer   5 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer   6 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer   7 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer   8 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer   9 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer  10 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer  11 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer  12 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer  13 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer  14 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer  15 assigned to device OPENVINO0, is_swa = 0
load_tensors: layer  16 assigned to device OPENVINO0, is_swa = 0
create_tensor: loading tensor token_embd.weight
create_tensor: loading tensor output_norm.weight
create_tensor: loading tensor token_embd.weight
create_tensor: loading tensor blk.0.attn_norm.weight
create_tensor: loading tensor blk.0.attn_q.weight
create_tensor: loading tensor blk.0.attn_k.weight
create_tensor: loading tensor blk.0.attn_v.weight
create_tensor: loading tensor blk.0.attn_output.weight
create_tensor: loading tensor blk.0.ffn_norm.weight
create_tensor: loading tensor rope_freqs.weight
create_tensor: loading tensor blk.0.ffn_gate.weight
create_tensor: loading tensor blk.0.ffn_down.weight
create_tensor: loading tensor blk.0.ffn_up.weight
create_tensor: loading tensor blk.1.attn_norm.weight
create_tensor: loading tensor blk.1.attn_q.weight
create_tensor: loading tensor blk.1.attn_k.weight
create_tensor: loading tensor blk.1.attn_v.weight
create_tensor: loading tensor blk.1.attn_output.weight
create_tensor: loading tensor blk.1.ffn_norm.weight
create_tensor: loading tensor blk.1.ffn_gate.weight
create_tensor: loading tensor blk.1.ffn_down.weight
create_tensor: loading tensor blk.1.ffn_up.weight
create_tensor: loading tensor blk.2.attn_norm.weight
create_tensor: loading tensor blk.2.attn_q.weight
create_tensor: loading tensor blk.2.attn_k.weight
create_tensor: loading tensor blk.2.attn_v.weight
create_tensor: loading tensor blk.2.attn_output.weight
create_tensor: loading tensor blk.2.ffn_norm.weight
create_tensor: loading tensor blk.2.ffn_gate.weight
create_tensor: loading tensor blk.2.ffn_down.weight
create_tensor: loading tensor blk.2.ffn_up.weight
create_tensor: loading tensor blk.3.attn_norm.weight
create_tensor: loading tensor blk.3.attn_q.weight
create_tensor: loading tensor blk.3.attn_k.weight
create_tensor: loading tensor blk.3.attn_v.weight
create_tensor: loading tensor blk.3.attn_output.weight
create_tensor: loading tensor blk.3.ffn_norm.weight
create_tensor: loading tensor blk.3.ffn_gate.weight
create_tensor: loading tensor blk.3.ffn_down.weight
create_tensor: loading tensor blk.3.ffn_up.weight
create_tensor: loading tensor blk.4.attn_norm.weight
create_tensor: loading tensor blk.4.attn_q.weight
create_tensor: loading tensor blk.4.attn_k.weight
create_tensor: loading tensor blk.4.attn_v.weight
create_tensor: loading tensor blk.4.attn_output.weight
create_tensor: loading tensor blk.4.ffn_norm.weight
create_tensor: loading tensor blk.4.ffn_gate.weight
create_tensor: loading tensor blk.4.ffn_down.weight
create_tensor: loading tensor blk.4.ffn_up.weight
create_tensor: loading tensor blk.5.attn_norm.weight
create_tensor: loading tensor blk.5.attn_q.weight
create_tensor: loading tensor blk.5.attn_k.weight
create_tensor: loading tensor blk.5.attn_v.weight
create_tensor: loading tensor blk.5.attn_output.weight
create_tensor: loading tensor blk.5.ffn_norm.weight
create_tensor: loading tensor blk.5.ffn_gate.weight
create_tensor: loading tensor blk.5.ffn_down.weight
create_tensor: loading tensor blk.5.ffn_up.weight
create_tensor: loading tensor blk.6.attn_norm.weight
create_tensor: loading tensor blk.6.attn_q.weight
create_tensor: loading tensor blk.6.attn_k.weight
create_tensor: loading tensor blk.6.attn_v.weight
create_tensor: loading tensor blk.6.attn_output.weight
create_tensor: loading tensor blk.6.ffn_norm.weight
create_tensor: loading tensor blk.6.ffn_gate.weight
create_tensor: loading tensor blk.6.ffn_down.weight
create_tensor: loading tensor blk.6.ffn_up.weight
create_tensor: loading tensor blk.7.attn_norm.weight
create_tensor: loading tensor blk.7.attn_q.weight
create_tensor: loading tensor blk.7.attn_k.weight
create_tensor: loading tensor blk.7.attn_v.weight
create_tensor: loading tensor blk.7.attn_output.weight
create_tensor: loading tensor blk.7.ffn_norm.weight
create_tensor: loading tensor blk.7.ffn_gate.weight
create_tensor: loading tensor blk.7.ffn_down.weight
create_tensor: loading tensor blk.7.ffn_up.weight
create_tensor: loading tensor blk.8.attn_norm.weight
create_tensor: loading tensor blk.8.attn_q.weight
create_tensor: loading tensor blk.8.attn_k.weight
create_tensor: loading tensor blk.8.attn_v.weight
create_tensor: loading tensor blk.8.attn_output.weight
create_tensor: loading tensor blk.8.ffn_norm.weight
create_tensor: loading tensor blk.8.ffn_gate.weight
create_tensor: loading tensor blk.8.ffn_down.weight
create_tensor: loading tensor blk.8.ffn_up.weight
create_tensor: loading tensor blk.9.attn_norm.weight
create_tensor: loading tensor blk.9.attn_q.weight
create_tensor: loading tensor blk.9.attn_k.weight
create_tensor: loading tensor blk.9.attn_v.weight
create_tensor: loading tensor blk.9.attn_output.weight
create_tensor: loading tensor blk.9.ffn_norm.weight
create_tensor: loading tensor blk.9.ffn_gate.weight
create_tensor: loading tensor blk.9.ffn_down.weight
create_tensor: loading tensor blk.9.ffn_up.weight
create_tensor: loading tensor blk.10.attn_norm.weight
create_tensor: loading tensor blk.10.attn_q.weight
create_tensor: loading tensor blk.10.attn_k.weight
create_tensor: loading tensor blk.10.attn_v.weight
create_tensor: loading tensor blk.10.attn_output.weight
create_tensor: loading tensor blk.10.ffn_norm.weight
create_tensor: loading tensor blk.10.ffn_gate.weight
create_tensor: loading tensor blk.10.ffn_down.weight
create_tensor: loading tensor blk.10.ffn_up.weight
create_tensor: loading tensor blk.11.attn_norm.weight
create_tensor: loading tensor blk.11.attn_q.weight
create_tensor: loading tensor blk.11.attn_k.weight
create_tensor: loading tensor blk.11.attn_v.weight
create_tensor: loading tensor blk.11.attn_output.weight
create_tensor: loading tensor blk.11.ffn_norm.weight
create_tensor: loading tensor blk.11.ffn_gate.weight
create_tensor: loading tensor blk.11.ffn_down.weight
create_tensor: loading tensor blk.11.ffn_up.weight
create_tensor: loading tensor blk.12.attn_norm.weight
create_tensor: loading tensor blk.12.attn_q.weight
create_tensor: loading tensor blk.12.attn_k.weight
create_tensor: loading tensor blk.12.attn_v.weight
create_tensor: loading tensor blk.12.attn_output.weight
create_tensor: loading tensor blk.12.ffn_norm.weight
create_tensor: loading tensor blk.12.ffn_gate.weight
create_tensor: loading tensor blk.12.ffn_down.weight
create_tensor: loading tensor blk.12.ffn_up.weight
create_tensor: loading tensor blk.13.attn_norm.weight
create_tensor: loading tensor blk.13.attn_q.weight
create_tensor: loading tensor blk.13.attn_k.weight
create_tensor: loading tensor blk.13.attn_v.weight
create_tensor: loading tensor blk.13.attn_output.weight
create_tensor: loading tensor blk.13.ffn_norm.weight
create_tensor: loading tensor blk.13.ffn_gate.weight
create_tensor: loading tensor blk.13.ffn_down.weight
create_tensor: loading tensor blk.13.ffn_up.weight
create_tensor: loading tensor blk.14.attn_norm.weight
create_tensor: loading tensor blk.14.attn_q.weight
create_tensor: loading tensor blk.14.attn_k.weight
create_tensor: loading tensor blk.14.attn_v.weight
create_tensor: loading tensor blk.14.attn_output.weight
create_tensor: loading tensor blk.14.ffn_norm.weight
create_tensor: loading tensor blk.14.ffn_gate.weight
create_tensor: loading tensor blk.14.ffn_down.weight
create_tensor: loading tensor blk.14.ffn_up.weight
create_tensor: loading tensor blk.15.attn_norm.weight
create_tensor: loading tensor blk.15.attn_q.weight
create_tensor: loading tensor blk.15.attn_k.weight
create_tensor: loading tensor blk.15.attn_v.weight
create_tensor: loading tensor blk.15.attn_output.weight
create_tensor: loading tensor blk.15.ffn_norm.weight
create_tensor: loading tensor blk.15.ffn_gate.weight
create_tensor: loading tensor blk.15.ffn_down.weight
create_tensor: loading tensor blk.15.ffn_up.weight
load_tensors: tensor 'token_embd.weight' (q6_K) (and 0 others) cannot be used with preferred buffer type OPENVINO0_HOST, using CPU instead
load_tensors: offloading 16 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 17/17 layers to GPU
load_tensors:   CPU_Mapped model buffer size =   205.49 MiB
load_tensors:    OPENVINO0 model buffer size =  1025.26 MiB
time=2026-04-02T15:13:50.918+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.22"
time=2026-04-02T15:14:07.716+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.45"
time=2026-04-02T15:14:07.967+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.46"
time=2026-04-02T15:14:08.218+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.47"
time=2026-04-02T15:14:08.469+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.48"
time=2026-04-02T15:14:08.720+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.49"
time=2026-04-02T15:14:09.222+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.50"
time=2026-04-02T15:14:09.472+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.52"
time=2026-04-02T15:14:09.723+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.53"
time=2026-04-02T15:14:09.974+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.54"
time=2026-04-02T15:14:10.224+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.55"
time=2026-04-02T15:14:10.476+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.55"
time=2026-04-02T15:14:10.726+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.56"
time=2026-04-02T15:14:10.978+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.57"
time=2026-04-02T15:14:11.228+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.58"
time=2026-04-02T15:14:11.480+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.59"
time=2026-04-02T15:14:11.730+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.60"
time=2026-04-02T15:14:11.981+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.61"
time=2026-04-02T15:14:12.233+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.62"
time=2026-04-02T15:14:12.484+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.63"
time=2026-04-02T15:14:12.735+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.64"
time=2026-04-02T15:14:12.986+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.66"
time=2026-04-02T15:14:13.236+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.67"
time=2026-04-02T15:14:13.487+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.68"
time=2026-04-02T15:14:13.738+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.69"
time=2026-04-02T15:14:13.990+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.70"
time=2026-04-02T15:14:14.240+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.71"
time=2026-04-02T15:14:14.491+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.72"
time=2026-04-02T15:14:14.742+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.73"
time=2026-04-02T15:14:14.993+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.74"
time=2026-04-02T15:14:15.244+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.75"
time=2026-04-02T15:14:15.494+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.76"
time=2026-04-02T15:14:15.745+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.76"
time=2026-04-02T15:14:15.996+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.77"
time=2026-04-02T15:14:16.247+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.78"
time=2026-04-02T15:14:16.498+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.80"
time=2026-04-02T15:14:16.749+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.81"
time=2026-04-02T15:14:17.000+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.82"
time=2026-04-02T15:14:17.251+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.83"
time=2026-04-02T15:14:17.501+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.83"
time=2026-04-02T15:14:17.753+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.84"
time=2026-04-02T15:14:18.004+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.85"
time=2026-04-02T15:14:18.255+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.87"
time=2026-04-02T15:14:18.507+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.88"
time=2026-04-02T15:14:18.757+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.89"
time=2026-04-02T15:14:19.008+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.90"
time=2026-04-02T15:14:19.259+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.90"
time=2026-04-02T15:14:19.511+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.91"
time=2026-04-02T15:14:19.761+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.92"
time=2026-04-02T15:14:20.012+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.94"
time=2026-04-02T15:14:20.263+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.95"
time=2026-04-02T15:14:20.514+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.96"
time=2026-04-02T15:14:20.765+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.97"
time=2026-04-02T15:14:21.015+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.97"
time=2026-04-02T15:14:21.266+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.98"
time=2026-04-02T15:14:21.518+09:00 level=DEBUG source=server.go:1396 msg="model load progress 0.99"
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 32768
llama_context: n_ctx_seq     = 32768
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = auto
llama_context: kv_unified    = false
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_seq (32768) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
set_abort_callback: call
llama_context: OPENVINO0_HOST  output buffer size =     0.50 MiB
llama_kv_cache: layer   0: dev = OPENVINO0
llama_kv_cache: layer   1: dev = OPENVINO0
llama_kv_cache: layer   2: dev = OPENVINO0
llama_kv_cache: layer   3: dev = OPENVINO0
llama_kv_cache: layer   4: dev = OPENVINO0
llama_kv_cache: layer   5: dev = OPENVINO0
llama_kv_cache: layer   6: dev = OPENVINO0
llama_kv_cache: layer   7: dev = OPENVINO0
llama_kv_cache: layer   8: dev = OPENVINO0
llama_kv_cache: layer   9: dev = OPENVINO0
llama_kv_cache: layer  10: dev = OPENVINO0
llama_kv_cache: layer  11: dev = OPENVINO0
llama_kv_cache: layer  12: dev = OPENVINO0
llama_kv_cache: layer  13: dev = OPENVINO0
llama_kv_cache: layer  14: dev = OPENVINO0
llama_kv_cache: layer  15: dev = OPENVINO0
time=2026-04-02T15:14:21.769+09:00 level=DEBUG source=server.go:1396 msg="model load progress 1.00"
llama_kv_cache:  OPENVINO0 KV buffer size =  1024.00 MiB
time=2026-04-02T15:14:22.021+09:00 level=DEBUG source=server.go:1399 msg="model load completed, waiting for server to become available" status="llm server loading model"
llama_kv_cache: size = 1024.00 MiB ( 32768 cells,  16 layers,  1/1 seqs), K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_context: enumerating backends
llama_context: backend_ptrs.size() = 2
llama_context: max_nodes = 1184
llama_context: reserving full memory module
llama_context: worst-case: n_tokens = 512, n_seqs = 1, n_outputs = 1
graph_reserve: reserving a graph for ubatch with n_tokens =    1, n_seqs =  1, n_outputs =    1
llama_context: Flash Attention was auto, set to enabled
graph_reserve: reserving a graph for ubatch with n_tokens =  512, n_seqs =  1, n_outputs =  512
graph_reserve: reserving a graph for ubatch with n_tokens =    1, n_seqs =  1, n_outputs =    1
graph_reserve: reserving a graph for ubatch with n_tokens =  512, n_seqs =  1, n_outputs =  512
llama_context:  OPENVINO0 compute buffer size =   254.50 MiB
llama_context: OPENVINO0_HOST compute buffer size =    64.01 MiB
llama_context: graph nodes  = 503
llama_context: graph splits = 1
time=2026-04-02T15:14:22.271+09:00 level=INFO source=server.go:1390 msg="llama runner started in 33.01 seconds"
time=2026-04-02T15:14:22.271+09:00 level=INFO source=sched.go:561 msg="loaded runners" count=1
time=2026-04-02T15:14:22.271+09:00 level=INFO source=server.go:1352 msg="waiting for llama runner to start responding"
time=2026-04-02T15:14:22.272+09:00 level=INFO source=server.go:1390 msg="llama runner started in 33.01 seconds"
time=2026-04-02T15:14:22.272+09:00 level=DEBUG source=sched.go:573 msg="finished setting up" runner.name=hf.co/unsloth/Llama-3.2-1B-Instruct-GGUF:Q4_0 runner.inference="[{ID:OPENVINO0 Library:OPENVINO}]" runner.size="3.8 GiB" runner.vram="3.8 GiB" runner.parallel=1 runner.pid=42036 runner.model=C:\Users\USER\.ollama\models\blobs\sha256-66bfbb2d48bdb77cd56bd03ef820deff3c4a74b1a09de3b917ae13e72c1a70c2 runner.num_ctx=32768
time=2026-04-02T15:14:22.273+09:00 level=DEBUG source=server.go:1538 msg="completion request" images=0 prompt=104 format=""
time=2026-04-02T15:14:22.274+09:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=11 used=0 remaining=11
[WARNING] 15:15:54.205 [NPUPlugin] The IR version was not found within the runtime information attributes. The NPU plugin will continue execution assuming the version is 11. If wrong, compilation issues may occur.
[WARNING] 15:16:10.22 [NPUPlugin] The IR version was not found within the runtime information attributes. The NPU plugin will continue execution assuming the version is 11. If wrong, compilation issues may occur.
[WARNING] 15:16:27.786 [NPUPlugin] The IR version was not found within the runtime information attributes. The NPU plugin will continue execution assuming the version is 11. If wrong, compilation issues may occur.
[WARNING] 15:17:06.73 [NPUPlugin] The IR version was not found within the runtime information attributes. The NPU plugin will continue execution assuming the version is 11. If wrong, compilation issues may occur.
[WARNING] 15:17:08.909 [NPUPlugin] The IR version was not found within the runtime information attributes. The NPU plugin will continue execution assuming the version is 11. If wrong, compilation issues may occur.
[WARNING] 15:17:12.355 [NPUPlugin] The IR version was not found within the runtime information attributes. The NPU plugin will continue execution assuming the version is 11. If wrong, compilation issues may occur.
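
One reading of the log above: the runner reports loaded and accepts the completion request at 15:14:22, after which the NPUPlugin IR-version warnings arrive minutes apart (15:15:54, 15:16:10, ..., 15:17:12). That pattern suggests the process is not hung so much as compiling each translated graph for the NPU, where ov::Core::compile_model is typically the expensive call. A trivial standalone illustration of that call path (assumptions, not this PR's code):

```
// Build a tiny model (y = relu(x)) and compile it for the NPU, the call
// that is typically the slow step and the one the NPUPlugin warnings come
// from. Illustrative only.
#include <iostream>
#include <openvino/openvino.hpp>
#include <openvino/op/parameter.hpp>
#include <openvino/op/relu.hpp>

int main() {
    auto x = std::make_shared<ov::op::v0::Parameter>(ov::element::f32, ov::Shape{1, 4});
    auto y = std::make_shared<ov::op::v0::Relu>(x);
    auto model = std::make_shared<ov::Model>(ov::OutputVector{y}, ov::ParameterVector{x});

    ov::Core core;
    // Per-graph compile; can take minutes on some driver/plugin combinations.
    auto compiled = core.compile_model(model, "NPU");
    auto req = compiled.create_infer_request();

    float in[4] = {-1.f, 0.f, 1.f, 2.f};
    req.set_input_tensor(ov::Tensor(ov::element::f32, ov::Shape{1, 4}, in));
    req.infer();
    std::cout << req.get_output_tensor().data<float>()[3] << "\n"; // prints 2
    return 0;
}
```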

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

## 📋 Pull Request Information **Original PR:** https://github.com/ollama/ollama/pull/15205 **Author:** [@jclab-joseph](https://github.com/jclab-joseph) **Created:** 4/2/2026 **Status:** 🔄 Open **Base:** `main` ← **Head:** `feat/openvino` --- ### 📝 Commits (6) - [`d2f7851`](https://github.com/ollama/ollama/commit/d2f7851680560299d409aae209895508bce2576a) copy ggml-openvino from llama.cpp/ggml/src/ggml-openvino - [`7e8ed03`](https://github.com/ollama/ollama/commit/7e8ed03818217016ea495517e776c4d9ac8d84bc) add openvino backend - [`05aef36`](https://github.com/ollama/ollama/commit/05aef36115b3d57241cc31173499d9cd898c11b5) feat: build script - [`9723800`](https://github.com/ollama/ollama/commit/9723800eaa2901ca6ced44186abb8b2e7b957107) ggml-openvino: detect cpu/npu(as igpu)/gpu - [`8f00440`](https://github.com/ollama/ollama/commit/8f00440e2adc802c21425427db58795351b01112) fix build - [`33a8ca8`](https://github.com/ollama/ollama/commit/33a8ca892a3a5e24e444c3b9fa1ade86c6ee6346) refactor: use ggml_openvino_is_npu instead of ggml_openvino_is_integrated_device ### 📊 Changes **57 files changed** (+7894 additions, -2 deletions) <details> <summary>View changed files</summary> 📝 `.github/workflows/release.yaml` (+46 -1) 📝 `.github/workflows/test.yaml` (+48 -0) 📝 `CMakeLists.txt` (+9 -0) 📝 `CMakePresets.json` (+13 -0) 📝 `Dockerfile` (+20 -0) 📝 `docs/development.md` (+20 -0) ➕ `ml/backend/ggml/ggml/include/ggml-openvino.h` (+37 -0) 📝 `ml/backend/ggml/ggml/src/CMakeLists.txt` (+1 -0) 📝 `ml/backend/ggml/ggml/src/ggml-backend-reg.cpp` (+8 -0) ➕ `ml/backend/ggml/ggml/src/ggml-openvino/.clang-format` (+154 -0) ➕ `ml/backend/ggml/ggml/src/ggml-openvino/CMakeLists.txt` (+20 -0) ➕ `ml/backend/ggml/ggml/src/ggml-openvino/ggml-decoder.cpp` (+975 -0) ➕ `ml/backend/ggml/ggml/src/ggml-openvino/ggml-decoder.h` (+294 -0) ➕ `ml/backend/ggml/ggml/src/ggml-openvino/ggml-openvino-extra.cpp` (+373 -0) ➕ `ml/backend/ggml/ggml/src/ggml-openvino/ggml-openvino-extra.h` (+182 -0) ➕ `ml/backend/ggml/ggml/src/ggml-openvino/ggml-openvino.cpp` (+1127 -0) ➕ `ml/backend/ggml/ggml/src/ggml-openvino/ggml-quants.cpp` (+884 -0) ➕ `ml/backend/ggml/ggml/src/ggml-openvino/ggml-quants.h` (+153 -0) ➕ `ml/backend/ggml/ggml/src/ggml-openvino/openvino/decoder.h` (+74 -0) ➕ `ml/backend/ggml/ggml/src/ggml-openvino/openvino/frontend.cpp` (+27 -0) _...and 37 more files_ </details> ### 📄 Description I brought over https://github.com/ggml-org/llama.cpp/tree/master/ggml/src/ggml-openvino as is. 
**NOT WORKING YET** Usage: ``` export GGML_OPENVINO_DEVICE=NPU ollama serve ollama run hf.co/unsloth/Llama-3.2-1B-Instruct-GGUF:Q4_0 "hello" ``` Fixes: - https://github.com/ollama/ollama/issues/2169 - https://github.com/ollama/ollama/issues/5747 Stuck Log: <details> <summary>Stuck Log</summary> ``` time=2026-04-02T15:13:49.266+09:00 level=INFO source=sched.go:484 msg="system memory" total="31.4 GiB" free="21.6 GiB" free_swap="19.6 GiB" time=2026-04-02T15:13:49.266+09:00 level=INFO source=sched.go:491 msg="gpu memory" id=OPENVINO0 library=OPENVINO available="21.2 GiB" free="21.6 GiB" minimum="457.0 MiB" overhead="0 B" time=2026-04-02T15:13:49.266+09:00 level=INFO source=server.go:499 msg="loading model" "model layers"=17 requested=-1 time=2026-04-02T15:13:49.266+09:00 level=DEBUG source=ggml.go:659 msg="default cache size estimate" "attention MiB"=1024 "attention bytes"=1073741824 "recurrent MiB"=0 "recurrent bytes"=0 time=2026-04-02T15:13:49.267+09:00 level=DEBUG source=server.go:978 msg="available gpu" id=OPENVINO0 library=OPENVINO "available layer vram"="21.1 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="0 B" time=2026-04-02T15:13:49.267+09:00 level=DEBUG source=server.go:978 msg="available gpu" id=OPENVINO0 library=OPENVINO "available layer vram"="19.0 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="2.1 GiB" time=2026-04-02T15:13:49.267+09:00 level=DEBUG source=server.go:670 msg=memory estimate.OPENVINO0.ID=OPENVINO0 estimate.OPENVINO0.Weights="[35274752 35274752 34226176 34226176 34226176 34226176 34226176 34226176 34226176 34226176 34226176 34226176 34226176 34226176 34226176 34226176 215478272]" estimate.OPENVINO0.Cache="[67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 67108864 0]" estimate.OPENVINO0.Graph=2231371776 time=2026-04-02T15:13:49.267+09:00 level=INFO source=device.go:240 msg="model weights" device=OPENVINO0 size="729.7 MiB" time=2026-04-02T15:13:49.267+09:00 level=INFO source=device.go:251 msg="kv cache" device=OPENVINO0 size="1.0 GiB" time=2026-04-02T15:13:49.267+09:00 level=INFO source=device.go:262 msg="compute graph" device=OPENVINO0 size="2.1 GiB" time=2026-04-02T15:13:49.267+09:00 level=INFO source=device.go:272 msg="total memory" size="3.8 GiB" time=2026-04-02T15:13:49.316+09:00 level=INFO source=runner.go:965 msg="starting go runner" time=2026-04-02T15:13:49.329+09:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\temp\ollama-out\lib\ollama load_backend: loaded CPU backend from C:\temp\ollama-out\lib\ollama\ggml-cpu-alderlake.dll time=2026-04-02T15:13:49.369+09:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=C:\temp\ollama-out\lib\ollama\openvino [WARNING] 15:13:49.649 [NPUZeroInitStructsHolder] Some features might not be available! 
GiteaMirror added the pull-request label 2026-04-29 16:47:23 -05:00