[GH-ISSUE #11254] Multimodal model imported from GGUF appears to lose its multimodal capability #33175

Closed
opened 2026-04-22 15:36:27 -05:00 by GiteaMirror · 4 comments

Originally created by @ygxiuming on GitHub (Jul 1, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11254

What is the issue?

Background

I downloaded the Qwen2.5-VL-7B-Instruct-Q8_0.gguf model from https://modelscope.cn/models/unsloth/Qwen2.5-VL-7B-Instruct-GGUF/files. Using the Ollama command `ollama show --modelfile qwen2.5vl:latest`, I displayed the Modelfile of the officially pulled model; below is my modified version, in which I changed only the FROM path. However, while running it in Ollama, it was unable to…

![Image](https://github.com/user-attachments/assets/ef75626e-de5d-4bc3-b3d0-222cdb3f58ea)
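
For reference, a minimal sketch of the import flow described above; the model name `qwen2.5vl-gguf` and the local path are illustrative, not taken from the screenshot:

```shell
# Dump the Modelfile of the officially pulled model
ollama show --modelfile qwen2.5vl:latest > Modelfile

# Edit the Modelfile so FROM points at the downloaded GGUF, e.g.:
#   FROM E:\models\Qwen2.5-VL-7B-Instruct-Q8_0.gguf   (illustrative path)

# Create a new model from the edited Modelfile
ollama create qwen2.5vl-gguf -f Modelfile
```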

Differences

Result when the model is imported from the GGUF file

![Image](https://github.com/user-attachments/assets/970a500f-a321-47bc-98b0-755520530967)

Result when the model is pulled with the official ollama command

![Image](https://github.com/user-attachments/assets/3448b9de-3e25-4656-b159-2d4231b0136e)

Question

It seems that the vision capability of the model imported from the GGUF file no longer works. What is causing this?
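
One way to compare the two models is to check what capabilities the server reports for each; recent Ollama builds list them in the `ollama show` output (a sketch, using the hypothetical `qwen2.5vl-gguf` name from above):

```shell
# The officially pulled model should list "vision" under Capabilities;
# a bare GGUF import that the engine cannot use for vision may not.
ollama show qwen2.5vl:latest
ollama show qwen2.5vl-gguf
```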

Relevant log output

```shell
time=2025-07-01T20:07:32.088+08:00 level=INFO source=sched.go:548 msg="updated VRAM based on existing loaded models" gpu=GPU-03eff3ba-e11f-c67e-28b2-3220d83c70bb library=cuda total="12.0 GiB" available="270.8 MiB"
time=2025-07-01T20:07:32.730+08:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=E:\Docker_stronge\ollama\models\blobs\sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 gpu=GPU-03eff3ba-e11f-c67e-28b2-3220d83c70bb parallel=2 available=11237634048 required="8.4 GiB"
time=2025-07-01T20:07:32.746+08:00 level=INFO source=server.go:135 msg="system memory" total="31.8 GiB" free="17.1 GiB" free_swap="15.7 GiB"
time=2025-07-01T20:07:32.749+08:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[10.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.4 GiB" memory.required.partial="8.4 GiB" memory.required.kv="448.0 MiB" memory.required.allocations="[8.4 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="522.7 MiB" memory.graph.partial="522.7 MiB" projector.weights="1.2 GiB" projector.graph="1.6 GiB"
time=2025-07-01T20:07:32.795+08:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\xiuming\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model E:\\Docker_stronge\\ollama\\models\\blobs\\sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 6 --no-mmap --parallel 2 --port 47307"
time=2025-07-01T20:07:32.851+08:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-07-01T20:07:32.851+08:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-07-01T20:07:32.852+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-07-01T20:07:33.028+08:00 level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-07-01T20:07:33.052+08:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:47307"
time=2025-07-01T20:07:33.092+08:00 level=INFO source=ggml.go:92 msg="" architecture=qwen25vl file_type=Q4_K_M name="" description="" num_tensors=858 num_key_values=36
time=2025-07-01T20:07:33.103+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
load_backend: loaded CPU backend from C:\Users\xiuming\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4070 Ti, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\xiuming\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-07-01T20:07:33.278+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-07-01T20:07:33.448+08:00 level=INFO source=ggml.go:357 msg="model weights" buffer=CPU size="292.4 MiB"
time=2025-07-01T20:07:33.448+08:00 level=INFO source=ggml.go:357 msg="model weights" buffer=CUDA0 size="5.3 GiB"
time=2025-07-01T20:07:33.748+08:00 level=INFO source=ggml.go:644 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.7 GiB"
time=2025-07-01T20:07:33.748+08:00 level=INFO source=ggml.go:644 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
time=2025-07-01T20:07:33.806+08:00 level=INFO source=ggml.go:644 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.7 GiB"
time=2025-07-01T20:07:33.806+08:00 level=INFO source=ggml.go:644 msg="compute graph" backend=CPU buffer_type=CPU size="16.8 MiB"
time=2025-07-01T20:07:37.114+08:00 level=INFO source=server.go:630 msg="llama runner started in 4.26 seconds"
[GIN] 2025/07/01 - 20:07:37 | 200 |    5.5464398s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/07/01 - 20:08:23 | 200 |    4.1031933s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/07/01 - 20:08:45 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/07/01 - 20:08:45 | 200 |     27.7773ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/07/01 - 20:08:56 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/07/01 - 20:08:56 | 200 |     32.4373ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/07/01 - 20:08:57 | 200 |    1.1171858s |       127.0.0.1 | DELETE   "/api/delete"
[GIN] 2025/07/01 - 20:09:02 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/07/01 - 20:09:02 | 200 |     15.7937ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/07/01 - 20:09:17 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/07/01 - 20:09:22 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/07/01 - 20:10:06 | 201 |   30.4141734s |       127.0.0.1 | POST     "/api/blobs/sha256:ee770c700d7429cc6f0c74d6c7ab3c063bf521312fc36e80776d1d79bc9fa4ad"
[GIN] 2025/07/01 - 20:10:06 | 200 |     194.224ms |       127.0.0.1 | POST     "/api/create"
[GIN] 2025/07/01 - 20:10:55 | 200 |       526.9µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/07/01 - 20:10:55 | 200 |    116.5988ms |       127.0.0.1 | POST     "/api/show"
time=2025-07-01T20:10:56.399+08:00 level=INFO source=sched.go:548 msg="updated VRAM based on existing loaded models" gpu=GPU-03eff3ba-e11f-c67e-28b2-3220d83c70bb library=cuda total="12.0 GiB" available="2.5 GiB"
time=2025-07-01T20:10:56.759+08:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=E:\Docker_stronge\ollama\models\blobs\sha256-ee770c700d7429cc6f0c74d6c7ab3c063bf521312fc36e80776d1d79bc9fa4ad gpu=GPU-03eff3ba-e11f-c67e-28b2-3220d83c70bb parallel=2 available=11055329280 required="8.6 GiB"
time=2025-07-01T20:10:56.773+08:00 level=INFO source=server.go:135 msg="system memory" total="31.8 GiB" free="17.1 GiB" free_swap="15.5 GiB"
time=2025-07-01T20:10:56.774+08:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[10.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.6 GiB" memory.required.partial="8.6 GiB" memory.required.kv="448.0 MiB" memory.required.allocations="[8.6 GiB]" memory.weights.total="7.0 GiB" memory.weights.repeating="6.5 GiB" memory.weights.nonrepeating="552.2 MiB" memory.graph.full="522.7 MiB" memory.graph.partial="522.7 MiB"
llama_model_loader: loaded meta data with 32 key-value pairs and 339 tensors from E:\Docker_stronge\ollama\models\blobs\sha256-ee770c700d7429cc6f0c74d6c7ab3c063bf521312fc36e80776d1d79bc9fa4ad (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2vl
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen2.5-Vl-7B-Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5-Vl-7B-Instruct
llama_model_loader: - kv   5:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv   6:                         general.size_label str              = 7B
llama_model_loader: - kv   7:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv   8:                        qwen2vl.block_count u32              = 28
llama_model_loader: - kv   9:                     qwen2vl.context_length u32              = 128000
llama_model_loader: - kv  10:                   qwen2vl.embedding_length u32              = 3584
llama_model_loader: - kv  11:                qwen2vl.feed_forward_length u32              = 18944
llama_model_loader: - kv  12:               qwen2vl.attention.head_count u32              = 28
llama_model_loader: - kv  13:            qwen2vl.attention.head_count_kv u32              = 4
llama_model_loader: - kv  14:                     qwen2vl.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  15:   qwen2vl.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  16:            qwen2vl.rope.dimension_sections arr[i32,4]       = [16, 24, 24, 0]
llama_model_loader: - kv  17:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  18:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  19:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  20:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  21:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  22:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 151654
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {% set image_count = namespace(value=...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - kv  27:                          general.file_type u32              = 7
llama_model_loader: - kv  28:                      quantize.imatrix.file str              = Qwen2.5-VL-7B-Instruct-GGUF/imatrix_u...
llama_model_loader: - kv  29:                   quantize.imatrix.dataset str              = unsloth_calibration_Qwen2.5-VL-7B-Ins...
llama_model_loader: - kv  30:             quantize.imatrix.entries_count i32              = 196
llama_model_loader: - kv  31:              quantize.imatrix.chunks_count i32              = 691
llama_model_loader: - type  f32:  141 tensors
llama_model_loader: - type q8_0:  198 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q8_0
print_info: file size   = 7.54 GiB (8.50 BPW) 
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2vl
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 7.62 B
print_info: general.name     = Qwen2.5-Vl-7B-Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 11 ','
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151654 '<|vision_pad|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-07-01T20:10:57.057+08:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\\Users\\xiuming\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model E:\\Docker_stronge\\ollama\\models\\blobs\\sha256-ee770c700d7429cc6f0c74d6c7ab3c063bf521312fc36e80776d1d79bc9fa4ad --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 6 --no-mmap --parallel 2 --port 47463"
time=2025-07-01T20:10:57.116+08:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-07-01T20:10:57.116+08:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-07-01T20:10:57.117+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-07-01T20:10:57.254+08:00 level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\xiuming\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4070 Ti, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\xiuming\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-07-01T20:10:57.471+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-07-01T20:10:57.473+08:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:47463"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4070 Ti) - 11038 MiB free
time=2025-07-01T20:10:57.620+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 32 key-value pairs and 339 tensors from E:\Docker_stronge\ollama\models\blobs\sha256-ee770c700d7429cc6f0c74d6c7ab3c063bf521312fc36e80776d1d79bc9fa4ad (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2vl
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen2.5-Vl-7B-Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5-Vl-7B-Instruct
llama_model_loader: - kv   5:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv   6:                         general.size_label str              = 7B
llama_model_loader: - kv   7:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv   8:                        qwen2vl.block_count u32              = 28
llama_model_loader: - kv   9:                     qwen2vl.context_length u32              = 128000
llama_model_loader: - kv  10:                   qwen2vl.embedding_length u32              = 3584
llama_model_loader: - kv  11:                qwen2vl.feed_forward_length u32              = 18944
llama_model_loader: - kv  12:               qwen2vl.attention.head_count u32              = 28
llama_model_loader: - kv  13:            qwen2vl.attention.head_count_kv u32              = 4
llama_model_loader: - kv  14:                     qwen2vl.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  15:   qwen2vl.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  16:            qwen2vl.rope.dimension_sections arr[i32,4]       = [16, 24, 24, 0]
llama_model_loader: - kv  17:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  18:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  19:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  20:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  21:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  22:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 151654
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {% set image_count = namespace(value=...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - kv  27:                          general.file_type u32              = 7
llama_model_loader: - kv  28:                      quantize.imatrix.file str              = Qwen2.5-VL-7B-Instruct-GGUF/imatrix_u...
llama_model_loader: - kv  29:                   quantize.imatrix.dataset str              = unsloth_calibration_Qwen2.5-VL-7B-Ins...
llama_model_loader: - kv  30:             quantize.imatrix.entries_count i32              = 196
llama_model_loader: - kv  31:              quantize.imatrix.chunks_count i32              = 691
llama_model_loader: - type  f32:  141 tensors
llama_model_loader: - type q8_0:  198 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q8_0
print_info: file size   = 7.54 GiB (8.50 BPW) 
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2vl
print_info: vocab_only       = 0
print_info: n_ctx_train      = 128000
print_info: n_embd           = 3584
print_info: n_layer          = 28
print_info: n_head           = 28
print_info: n_head_kv        = 4
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 7
print_info: n_embd_k_gqa     = 512
print_info: n_embd_v_gqa     = 512
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 18944
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = -1
print_info: rope type        = 8
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 128000
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 7B
print_info: model params     = 7.62 B
print_info: general.name     = Qwen2.5-Vl-7B-Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 11 ','
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151654 '<|vision_pad|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 28 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 29/29 layers to GPU
load_tensors:    CUDA_Host model buffer size =   552.23 MiB
load_tensors:        CUDA0 model buffer size =  7165.44 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 2
llama_context: n_ctx         = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 1024
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (128000) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     1.19 MiB
llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1, padding = 32
llama_kv_cache_unified:      CUDA0 KV buffer size =   448.00 MiB
llama_kv_cache_unified: KV self size  =  448.00 MiB, K (f16):  224.00 MiB, V (f16):  224.00 MiB
llama_context:      CUDA0 compute buffer size =   492.01 MiB
llama_context:  CUDA_Host compute buffer size =    23.01 MiB
llama_context: graph nodes  = 1042
llama_context: graph splits = 2
time=2025-07-01T20:11:00.379+08:00 level=INFO source=server.go:630 msg="llama runner started in 3.26 seconds"
[GIN] 2025/07/01 - 20:11:00 | 200 |    4.4009419s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/07/01 - 20:11:09 | 200 |     1.649869s |       127.0.0.1 | POST     "/api/chat"
```

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.9.1

GiteaMirror added the bug label 2026-04-22 15:36:27 -05:00

@ygxiuming commented on GitHub (Jul 1, 2025):

![Image](https://github.com/user-attachments/assets/76e6e685-50a1-40a6-a903-467541f88c7e)


@rick-github commented on GitHub (Jul 1, 2025):

GGUF files for vision models imported from ModelScope or HuggingFace do not have the format that the new ollama engine expects. If available, import the safetensors format and quantize.
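
A minimal sketch of that suggestion, assuming the safetensors repository has been downloaded to a local directory (the names and quantization type are illustrative):

```shell
# Modelfile: point FROM at the directory containing the safetensors weights
#   FROM ./Qwen2.5-VL-7B-Instruct

# Import and quantize in one step with ollama's --quantize flag
ollama create qwen2.5vl-q8 -f Modelfile --quantize q8_0
```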


@ygxiuming commented on GitHub (Jul 1, 2025):

> GGUF files for vision models imported from ModelScope or HuggingFace do not have the format that the new ollama engine expects. If available, import the safetensors format and quantize.

Thank you very much for your reply. I see that Ollama's qwen2.5vl:7b is a quantized version, and it works very well. So do I need to download the safetensors model, quantize it, and then import it into Ollama? I understand how to import a safetensors model into Ollama, but could you advise me on how to quantize it afterwards? Because of my circumstances I have to use multimodal models on an internal network with no internet access, so I can only import models that I download elsewhere. Is the only remaining option to copy the sha256 blob files of an imported model and reproduce the effect of an official pull?


@rick-github commented on GitHub (Jul 1, 2025):

If you don't have a specific need for a model from ModelScope, I suggest just pulling `qwen2.5vl:latest` on a machine connected to the internet, zipping (or tar or cpio) up the .ollama directory, transferring the archive file to the unconnected computer, and extracting it there. It should work as if the model had been pulled directly on the unconnected machine.
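
A sketch of that transfer, assuming the default model location; the log above shows models under E:\Docker_stronge\ollama\models, which suggests OLLAMA_MODELS is set, in which case archive that directory instead:

```shell
# On the internet-connected machine:
ollama pull qwen2.5vl:latest
tar czf ollama-models.tar.gz -C "$HOME" .ollama   # %USERPROFILE%\.ollama on Windows

# Move ollama-models.tar.gz to the offline machine, then:
tar xzf ollama-models.tar.gz -C "$HOME"
# Restart ollama; the model should behave as if it had been pulled locally.
```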
