[GH-ISSUE #4925] clip_model_load: don't support projector with: currently #49623

Closed
opened 2026-04-28 12:26:00 -05:00 by GiteaMirror · 4 comments

Originally created by @Greatz08 on GitHub (Jun 8, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4925

```
Jun 08 08:45:56 AIbo ollama[27990]: [GIN] 2024/06/08 - 08:45:56 | 200 |      16.211µs |       127.0.0.1 | HEAD     "/"
Jun 08 08:45:56 AIbo ollama[27990]: [GIN] 2024/06/08 - 08:45:56 | 200 |     511.825µs |       127.0.0.1 | POST     "/api/show"
Jun 08 08:45:56 AIbo ollama[27990]: [GIN] 2024/06/08 - 08:45:56 | 200 |     295.877µs |       127.0.0.1 | POST     "/api/show"
Jun 08 08:45:56 AIbo ollama[27990]: time=2024-06-08T08:45:56.993+05:30 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=41 memory.available="6.8 GiB" memory.required.full="4.0 GiB" memory.required.partial="4.0 GiB" memory.required.kv="720.0 MiB" memory.weights.total="1.9 GiB" memory.weights.repeating="1.7 GiB" memory.weights.nonrepeating="221.3 MiB" memory.graph.full="120.0 MiB" memory.graph.partial="120.0 MiB"
Jun 08 08:45:56 AIbo ollama[27990]: time=2024-06-08T08:45:56.995+05:30 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=41 memory.available="6.8 GiB" memory.required.full="4.0 GiB" memory.required.partial="4.0 GiB" memory.required.kv="720.0 MiB" memory.weights.total="1.9 GiB" memory.weights.repeating="1.7 GiB" memory.weights.nonrepeating="221.3 MiB" memory.graph.full="120.0 MiB" memory.graph.partial="120.0 MiB"
Jun 08 08:45:56 AIbo ollama[27990]: time=2024-06-08T08:45:56.995+05:30 level=WARN source=server.go:227 msg="multimodal models don't support parallel requests yet"
Jun 08 08:45:56 AIbo ollama[27990]: time=2024-06-08T08:45:56.995+05:30 level=INFO source=server.go:338 msg="starting llama server" cmd="/tmp/ollama4263366198/runners/cuda_v12/ollama_llama_server --model /var/lib/ollama/.ollama/models/blobs/sha256-cb45bc10adac7eb4500af3e8d578871d47c6d8a13c8cc4de5325ee79164e3650 --ctx-size 1024 --batch-size 512 --embedding --log-disable --n-gpu-layers 41 --mmproj /var/lib/ollama/.ollama/models/blobs/sha256-989f882d9ccf90570d278edd9c1a4f9fbf5b3d980243a20b9d41ca2e60537d10 --parallel 1 --port 32279"
Jun 08 08:45:56 AIbo ollama[27990]: time=2024-06-08T08:45:56.996+05:30 level=INFO source=sched.go:338 msg="loaded runners" count=1
Jun 08 08:45:56 AIbo ollama[27990]: time=2024-06-08T08:45:56.996+05:30 level=INFO source=server.go:526 msg="waiting for llama runner to start responding"
Jun 08 08:45:56 AIbo ollama[27990]: time=2024-06-08T08:45:56.996+05:30 level=INFO source=server.go:564 msg="waiting for server to become available" status="llm server error"
Jun 08 08:45:57 AIbo ollama[31028]: INFO [main] build info | build=2986 commit="74f33adf5" tid="137553045270528" timestamp=1717816557
Jun 08 08:45:57 AIbo ollama[31028]: INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="137553045270528" timestamp=1717816557 total_threads=16
Jun 08 08:45:57 AIbo ollama[31028]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="32279" tid="137553045270528" timestamp=1717816557
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: description:  image encoder for LLaVA
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: GGUF version: 3
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: alignment:    32
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: n_tensors:    440
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: n_kv:         18
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: ftype:        f16
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557]
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: loaded meta data with 18 key-value pairs and 440 tensors from /var/lib/ollama/.ollama/models/blobs/sha256-989f882d9ccf90570d278edd9c1a4f9fbf5b3d980243a20b9d41ca2e60537d10
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv   0:                       general.architecture str              = clip
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv   1:                      clip.has_text_encoder bool             = false
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv   2:                    clip.has_vision_encoder bool             = true
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv   3:                   clip.has_llava_projector bool             = true
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv   4:                          general.file_type u32              = 1
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv   5:                        general.description str              = image encoder for LLaVA
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv   6:                        clip.projector_type str              = resampler
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv   7:                     clip.vision.image_size u32              = 448
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv   8:                     clip.vision.patch_size u32              = 14
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv   9:               clip.vision.embedding_length u32              = 1152
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv  10:            clip.vision.feed_forward_length u32              = 4304
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv  11:                 clip.vision.projection_dim u32              = 0
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv  12:           clip.vision.attention.head_count u32              = 16
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv  13:   clip.vision.attention.layer_norm_epsilon f32              = 0.000001
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv  14:                    clip.vision.block_count u32              = 26
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv  15:                     clip.vision.image_mean arr[f32,3]       = [0.500000, 0.500000, 0.500000]
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv  16:                      clip.vision.image_std arr[f32,3]       = [0.500000, 0.500000, 0.500000]
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - kv  17:                              clip.use_gelu bool             = true
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - type  f32:  277 tensors
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: - type  f16:  163 tensors
Jun 08 08:45:57 AIbo ollama[27990]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
Jun 08 08:45:57 AIbo ollama[27990]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
Jun 08 08:45:57 AIbo ollama[27990]: ggml_cuda_init: found 1 CUDA devices:
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: CLIP using CUDA backend
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: text_encoder:   0
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: vision_encoder: 1
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: llava_projector:  1
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: model size:     828.18 MB
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: metadata size:  0.17 MB
Jun 08 08:45:57 AIbo ollama[27990]:   Device 0: NVIDIA GeForce RTX 4060 Laptop GPU, compute capability 8.9, VMM: yes
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: params backend buffer size =  828.18 MB (440 tensors)
Jun 08 08:45:57 AIbo ollama[27990]: time=2024-06-08T08:45:57.248+05:30 level=INFO source=server.go:564 msg="waiting for server to become available" status="llm server loading model"
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] key clip.vision.image_grid_pinpoints not found in file
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] key clip.vision.mm_patch_merge_type not found in file
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] key clip.vision.image_crop_resolution not found in file
Jun 08 08:45:57 AIbo ollama[31028]: [1717816557] clip_model_load: failed to load vision model tensors
Jun 08 08:45:57 AIbo ollama[27990]: terminate called after throwing an instance of 'std::runtime_error'
Jun 08 08:45:57 AIbo ollama[27990]:   what():  clip_model_load: don't support projector with:  currently
Jun 08 08:45:57 AIbo ollama[27990]: time=2024-06-08T08:45:57.750+05:30 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) "
Jun 08 08:45:57 AIbo ollama[27990]: [GIN] 2024/06/08 - 08:45:57 | 500 |  1.183139919s |       127.0.0.1 | POST     "/api/chat"
Jun 08 08:46:02 AIbo ollama[27990]: time=2024-06-08T08:46:02.833+05:30 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=5.083124146
Jun 08 08:46:03 AIbo ollama[27990]: time=2024-06-08T08:46:03.083+05:30 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=5.333482832
Jun 08 08:46:03 AIbo ollama[27990]: time=2024-06-08T08:46:03.333+05:30 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=5.582785593
```
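The 500 response on `POST "/api/chat"` at the end of the log is the request that forced the model load. A minimal sketch of such a request against Ollama's chat API (the model name, port, and image payload here are illustrative assumptions, not taken from the logs):

```sh
# Minimal sketch of a multimodal chat request; 11434 is Ollama's
# default port, and the model name and image payload are placeholders.
curl http://127.0.0.1:11434/api/chat -d '{
  "model": "minicpm-v2",
  "messages": [
    { "role": "user",
      "content": "Describe this image.",
      "images": ["<base64-encoded image data>"] }
  ]
}'
```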

Ollama does support multimodal models now, so it should support the MiniCPM-V2 multimodal model too, right? The following is what got my attention in the logs:
**clip_model_load: don't support projector with: currently** --- what does this mean? I thought a core dump happens in Ollama when there isn't enough available VRAM, but I don't think what I'm running needs that much VRAM, because what I'm running is:

MiniCPM-V-2.Q5_K_M.gguf - 2.24 GB
MiniCPM-V-2-mmproj.F16.gguf - 868 MB

```
FROM ./MiniCPM-V-2.Q5_K_M.gguf
FROM ./MiniCPM-V-2-mmproj.F16.gguf

TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""

PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
```

[MiniCPM-V-2.Q5_K_M.gguf](https://huggingface.co/mzwing/MiniCPM-V-2-GGUF/blob/main/MiniCPM-V-2.Q5_K_M.gguf) is only 2.24 GB, and [MiniCPM-V-2-mmproj.F16.gguf](https://huggingface.co/mzwing/MiniCPM-V-2-GGUF/blob/main/MiniCPM-V-2-mmproj.F16.gguf) is 868 MB.
Even combined they are no more than 3.5 GB, I have the **RTX 4060 8 GB variant**, and 7 GB+ of VRAM was free according to nvtop before running this, yet I am still facing this issue?

I tried running the **LLaVA 1.6** multimodal model and could run it easily without issues. The Modelfile for that LLaVA model was:

```
FROM /var/lib/ollama/.ollama/models/blobs/sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868
FROM /var/lib/ollama/.ollama/models/blobs/sha256-72d6f08a42f656d36b356dbe0920675899a99ce21192fd66266fb7d82ed07539
TEMPLATE [INST] {{ if .System }}{{ .System }} {{ end }}{{ .Prompt }} [/INST]
PARAMETER stop [INST]
PARAMETER stop [/INST]
```

sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868 - was 4.1GB
sha256-72d6f08a42f656d36b356dbe0920675899a99ce21192fd66266fb7d82ed07539 - was 624 MB

These files are much heavier than the MiniCPM ones mentioned above, and I can run them easily without any **core dump** or **clip_model_load: don't support projector with: currently** error, so why am I not able to run my MiniCPM model with Ollama comfortably? Is it a bug, very limited vision-model support, a parameter issue, or something else?

I am not an expert in AI or related topics (just another guy gaining knowledge every day), so if possible, please explain this issue in detail and in the simplest way possible, so that I can understand what could be causing it and whether there are any possible solutions.
Thank you


@HowardZorn commented on GitHub (Jun 8, 2024):

The [authors of MiniCPM](https://github.com/OpenBMB) have customized `ollama` and `llama.cpp` for their project, so their model cannot be imported directly into `ollama`. Actually, [their pull request for `llama.cpp`](https://github.com/ggerganov/llama.cpp/pull/7599) is more urgent than the one for `ollama`: until `llama.cpp` accepts that pull request, `ollama` cannot make progress.
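The incompatibility shows up in the mmproj metadata dumped in the log above: `clip.projector_type` is `resampler`, which the `llama.cpp` build bundled with Ollama did not handle at the time. As a quick check, you can dump that key directly from the mmproj file (a sketch assuming the `gguf` Python package; flag names and output format may vary by version):

```sh
# Dump the GGUF metadata of the projector file and look at the key
# the CLIP loader dispatches on. "resampler" was not yet supported
# by the llama.cpp vendored into Ollama at the time.
pip install gguf
gguf-dump --no-tensors MiniCPM-V-2-mmproj.F16.gguf | grep clip.projector_type
# -> clip.projector_type = 'resampler'   (matches kv 6 in the log above)
```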


@Greatz08 commented on GitHub (Jun 8, 2024):

Dayum, I didn't know about this; that's why I kept wondering why the hell it wasn't running. Thank you buddy @HowardZorn for this info.


@jmorganca commented on GitHub (Jun 9, 2024):

Thanks for the issue. Will merge this with https://github.com/ollama/ollama/issues/4900


@nischalj10 commented on GitHub (Jun 12, 2024):

I am getting the same error while trying to load quantized GGUF files of Phi-3 Vision based on https://github.com/ggerganov/llama.cpp/pull/7705:

```
Error: llama runner process has terminated: signal: abort trap error:clip_model_load: don't support projector with:  currently
```
