[GH-ISSUE #7477] Submit 4 images to Ollama visual model, generate a large amount of log without any return #4753

Closed
opened 2026-04-12 15:41:48 -05:00 by GiteaMirror · 6 comments

Originally created by @delubee on GitHub (Nov 3, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7477

What is the issue?

Code (Python):

# -*- coding: utf-8 -*-
from ollama import Client
import pymupdf as fitz
import os
import base64

client = Client(host='http://127.0.0.1:11434')
pdf_path = './books/book2.pdf'
doc = fitz.open(pdf_path)
image_base64_list = []

for i in range(min(4, doc.page_count)):
    page = doc.load_page(i)
    pix = page.get_pixmap()
    image_path = f'./images/page_{i + 1}.jpg'
    pix.save(image_path)   
    with open(image_path, 'rb') as image_file:
        encoded_string = base64.b64encode(image_file.read()).decode('utf-8')
        image_base64_list.append(encoded_string)

doc.close()

response = client.chat(model='llava:7b', messages=[
    {
        'role': 'user',
        'content': 'Extract information from the image including book title, author, publisher, publication date, ISBN, main content, etc. if available.',
        'images': image_base64_list  
    },
])
print(response['message']['content'])

The call never returns: the server generates a large amount of log output but no response is produced.
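For reference, a streaming variant of the same request would show whether any tokens are produced before the request stalls. This is a minimal diagnostic sketch, not part of the original report; it assumes the `stream=True` option of the ollama Python client and that the page images were already rendered to `./images/` by the script above.

```python
# Minimal diagnostic sketch: stream the same request so any partial tokens
# are printed as they arrive. Assumes ./images/page_*.jpg already exist.
import base64
import glob
from pathlib import Path
from ollama import Client

client = Client(host='http://127.0.0.1:11434')

# Re-use the page images rendered by the script above.
image_base64_list = [
    base64.b64encode(Path(p).read_bytes()).decode('utf-8')
    for p in sorted(glob.glob('./images/page_*.jpg'))[:4]
]

stream = client.chat(
    model='llava:7b',
    messages=[{
        'role': 'user',
        'content': 'Extract information from the images including book title, '
                   'author, publisher, publication date, ISBN, main content, etc.',
        'images': image_base64_list,
    }],
    stream=True,  # yield partial chunks instead of waiting for the full reply
)

for chunk in stream:
    # Nothing is printed if the server never produces a token.
    print(chunk['message']['content'], end='', flush=True)
```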

logs:

2024/11/03 12:11:34 routes.go:1158: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:d:\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-11-03T12:11:34.700+08:00 level=INFO source=images.go:754 msg="total blobs: 101"
time=2024-11-03T12:11:34.714+08:00 level=INFO source=images.go:761 msg="total unused blobs removed: 0"
time=2024-11-03T12:11:34.720+08:00 level=INFO source=routes.go:1205 msg="Listening on [::]:11434 (version 0.3.14)"
time=2024-11-03T12:11:34.722+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v11 cuda_v12 rocm_v6.1 cpu cpu_avx cpu_avx2]"
time=2024-11-03T12:11:34.722+08:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-03T12:11:34.722+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=2
time=2024-11-03T12:11:34.722+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=14 efficiency=0 threads=28
time=2024-11-03T12:11:34.722+08:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=14 efficiency=0 threads=28
time=2024-11-03T12:11:35.014+08:00 level=INFO source=gpu.go:326 msg="detected OS VRAM overhead" id=GPU-789bc630-2559-016c-8a5f-b30f23ffd42e library=cuda compute=6.1 driver=12.6 name="Tesla P40" overhead="146.4 MiB"
time=2024-11-03T12:11:35.317+08:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-789bc630-2559-016c-8a5f-b30f23ffd42e library=cuda variant=v12 compute=6.1 driver=12.6 name="Tesla P40" total="22.4 GiB" available="22.2 GiB"
time=2024-11-03T12:11:35.317+08:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-b478a8b5-91bf-57f6-450d-608b615acd97 library=cuda variant=v12 compute=6.1 driver=12.6 name="NVIDIA GeForce GTX 1080 Ti" total="11.0 GiB" available="10.0 GiB"
[GIN] 2024/11/03 - 12:11:35 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/11/03 - 12:11:35 | 200 |     26.2634ms |       127.0.0.1 | GET      "/api/tags"
time=2024-11-03T12:11:40.590+08:00 level=WARN source=sched.go:137 msg="multimodal models don't support parallel requests yet"
time=2024-11-03T12:11:40.651+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=d:\models\blobs\sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868 gpu=GPU-789bc630-2559-016c-8a5f-b30f23ffd42e parallel=1 available=23889838080 required="5.3 GiB"
time=2024-11-03T12:11:40.682+08:00 level=INFO source=server.go:105 msg="system memory" total="63.9 GiB" free="45.5 GiB" free_swap="46.0 GiB"
time=2024-11-03T12:11:40.684+08:00 level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[22.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="5.3 GiB" memory.required.kv="256.0 MiB" memory.required.allocations="[5.3 GiB]" memory.weights.total="3.9 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="181.0 MiB"
time=2024-11-03T12:11:40.706+08:00 level=INFO source=server.go:388 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\runners\\cuda_v12\\ollama_llama_server.exe --model d:\\models\\blobs\\sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868 --ctx-size 2048 --batch-size 512 --embedding --n-gpu-layers 33 --mmproj d:\\models\\blobs\\sha256-72d6f08a42f656d36b356dbe0920675899a99ce21192fd66266fb7d82ed07539 --threads 14 --no-mmap --parallel 1 --port 8495"
time=2024-11-03T12:11:40.726+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-03T12:11:40.726+08:00 level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
time=2024-11-03T12:11:40.727+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
INFO [wmain] starting c++ runner | tid="27324" timestamp=1730607100
INFO [wmain] build info | build=3871 commit="63424972" tid="27324" timestamp=1730607100
INFO [wmain] system info | n_threads=14 n_threads_batch=14 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="27324" timestamp=1730607100 total_threads=56
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="55" port="8495" tid="27324" timestamp=1730607101
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: Tesla P40, compute capability 6.1, VMM: no
time=2024-11-03T12:11:41.233+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
key clip.vision.image_grid_pinpoints not found in file
key clip.vision.mm_patch_merge_type not found in file
key clip.vision.image_crop_resolution not found in file
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from d:\models\blobs\sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = liuhaotian
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  19:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 3
llm_load_vocab: token to piece cache size = 0.1637 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 7.24 B
llm_load_print_meta: model size       = 3.83 GiB (4.54 BPW) 
llm_load_print_meta: general.name     = liuhaotian
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_print_meta: EOG token        = 2 '</s>'
llm_load_print_meta: max token length = 48
llm_load_tensors: ggml ctx size =    0.27 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:  CUDA_Host buffer size =    70.31 MiB
llm_load_tensors:      CUDA0 buffer size =  3847.55 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.14 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   164.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    12.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 2
clip_model_load: model name:   openai/clip-vit-large-patch14-336
clip_model_load: description:  image encoder for LLaVA
clip_model_load: GGUF version: 3
clip_model_load: alignment:    32
clip_model_load: n_tensors:    377
clip_model_load: n_kv:         19
clip_model_load: ftype:        f16

clip_model_load: loaded meta data with 19 key-value pairs and 377 tensors from d:\models\blobs\sha256-72d6f08a42f656d36b356dbe0920675899a99ce21192fd66266fb7d82ed07539
clip_model_load: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
clip_model_load: - kv   0:                       general.architecture str              = clip
clip_model_load: - kv   1:                      clip.has_text_encoder bool             = false
clip_model_load: - kv   2:                    clip.has_vision_encoder bool             = true
clip_model_load: - kv   3:                   clip.has_llava_projector bool             = true
clip_model_load: - kv   4:                          general.file_type u32              = 1
clip_model_load: - kv   5:                               general.name str              = openai/clip-vit-large-patch14-336
clip_model_load: - kv   6:                        general.description str              = image encoder for LLaVA
clip_model_load: - kv   7:                        clip.projector_type str              = mlp
clip_model_load: - kv   8:                     clip.vision.image_size u32              = 336
clip_model_load: - kv   9:                     clip.vision.patch_size u32              = 14
clip_model_load: - kv  10:               clip.vision.embedding_length u32              = 1024
clip_model_load: - kv  11:            clip.vision.feed_forward_length u32              = 4096
clip_model_load: - kv  12:                 clip.vision.projection_dim u32              = 768
clip_model_load: - kv  13:           clip.vision.attention.head_count u32              = 16
clip_model_load: - kv  14:   clip.vision.attention.layer_norm_epsilon f32              = 0.000010
clip_model_load: - kv  15:                    clip.vision.block_count u32              = 23
clip_model_load: - kv  16:                     clip.vision.image_mean arr[f32,3]       = [0.481455, 0.457828, 0.408211]
clip_model_load: - kv  17:                      clip.vision.image_std arr[f32,3]       = [0.268630, 0.261303, 0.275777]
clip_model_load: - kv  18:                              clip.use_gelu bool             = false
clip_model_load: - type  f32:  235 tensors
clip_model_load: - type  f16:  142 tensors
clip_model_load: CLIP using CUDA backend
clip_model_load: text_encoder:   0
clip_model_load: vision_encoder: 1
clip_model_load: llava_projector:  1
clip_model_load: minicpmv_projector:  0
clip_model_load: model size:     595.49 MB
clip_model_load: metadata size:  0.13 MB
clip_model_load: params backend buffer size =  595.49 MB (377 tensors)
clip_model_load: compute allocated memory: 32.89 MB
INFO [wmain] model loaded | tid="27324" timestamp=1730607103
time=2024-11-03T12:11:43.537+08:00 level=INFO source=server.go:626 msg="llama runner started in 2.81 seconds"
encode_image_with_clip: image embedding created: 576 tokens

encode_image_with_clip: image encoded in   125.75 ms by CLIP (    0.22 ms per image patch)
encode_image_with_clip: image embedding created: 576 tokens

encode_image_with_clip: image encoded in   116.25 ms by CLIP (    0.20 ms per image patch)
encode_image_with_clip: image embedding created: 576 tokens

encode_image_with_clip: image encoded in   112.17 ms by CLIP (    0.19 ms per image patch)
encode_image_with_clip: image embedding created: 576 tokens

encode_image_with_clip: image encoded in   109.71 ms by CLIP (    0.19 ms per image patch)
ERROR [update_slots] failed processing images | slot_id=0 task_id=2 tid="27324" timestamp=1730607106
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1

The following two lines then repeat indefinitely:

llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.3.14

GiteaMirror added the "needs more info" and "bug" labels 2026-04-12 15:41:48 -05:00

@rick-github commented on GitHub (Nov 3, 2024):

Some models do not deal well with multiple images, and this model is one of them. Rather than including all the images in a single call, make separate calls with one image each.
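A rough sketch of that suggestion follows (assuming the page images were already saved to `./images/` by the script in the issue); a complete script along the same lines is posted in a later comment.

```python
# Sketch only: one chat request per image instead of a single request
# carrying all four images. Assumes ./images/page_1.jpg .. page_4.jpg exist.
import base64
from ollama import Client

client = Client(host='http://127.0.0.1:11434')

for i in range(1, 5):
    with open(f'./images/page_{i}.jpg', 'rb') as f:
        img_b64 = base64.b64encode(f.read()).decode('utf-8')

    response = client.chat(model='llava:7b', messages=[{
        'role': 'user',
        'content': 'Extract book title, author, publisher, publication date, '
                   'ISBN, and main content if available.',
        'images': [img_b64],  # a single image per request
    }])
    print(f'--- page {i} ---')
    print(response['message']['content'])
```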


@dhiltgen commented on GitHub (Nov 5, 2024):

I believe Ollama is working correctly here. For models that don't support multiple images (e.g. llama 3.2 vision) we return an error when multiple images are supplied, but llava technically supports multiple images. It looks like the 34b model does a bit better, but sticking to fewer images will likely yield better results.

Please give 0.4.0 a try; that should clear up `llama_decode: failed to decode`.
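Before retrying, a quick check of which server version is actually answering can help. This is a small sketch using Ollama's `/api/version` endpoint, with the host and port taken from the issue:

```python
# Check the running Ollama server's version before re-running the
# multi-image request on 0.4.0 or later.
import json
import urllib.request

with urllib.request.urlopen('http://127.0.0.1:11434/api/version') as resp:
    version = json.load(resp)['version']

print(f'ollama server version: {version}')
```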


@nikhil-swamix commented on GitHub (Nov 6, 2024):

@delubee here you go,

# -*- coding: utf-8 -*-
from ollama import Client
import pymupdf as fitz
import os
import base64

client = Client(host='http://127.0.0.1:11434')
pdf_path = './books/book2.pdf'
doc = fitz.open(pdf_path)
responses = []  # List to store all responses

for i in range(min(4, doc.page_count)):
    # Process one page at a time
    page = doc.load_page(i)
    pix = page.get_pixmap()
    image_path = f'./images/page_{i + 1}.jpg'
    pix.save(image_path)   
    
    # Encode single image
    with open(image_path, 'rb') as image_file:
        encoded_string = base64.b64encode(image_file.read()).decode('utf-8')
    
    # Make request for single image
    response = client.chat(model='llava:7b', messages=[
        {
            'role': 'user',
            'content': f'Extract information from page {i+1} of the book including book title, author, publisher, publication date, ISBN, main content, etc. if available.',
            'images': [encoded_string]  # Send single image
        },
    ])
    
    # Store response
    responses.append({
        'page': i + 1,
        'content': response['message']['content']
    })

doc.close()

# Print all responses
for response in responses:
    print(f"\nPage {response['page']}:")
    print(response['content'])
    print("-" * 50)  # Separator between pages

Best regards.
Swamix Global AI Solutions


@delubee commented on GitHub (Nov 6, 2024):

> I believe Ollama is working correctly here. For models that don't support multiple images (e.g. llama 3.2 vision) we return an error when multiple images are supplied, but llava technically supports multiple images. It looks like the 34b model does a bit better, but sticking to fewer images will likely yield better results.
>
> Please give 0.4.0 a try; that should clear up `llama_decode: failed to decode`.

The same code works perfectly with version 0.4.0-rc8! Additionally, the same issue occurs when using minicpm-v:latest with version v0.3.14.


@delubee commented on GitHub (Nov 6, 2024):

> @delubee here you go, …
>
> Best regards. Swamix Global AI Solutions

thanks


@dhiltgen commented on GitHub (Nov 6, 2024):

Glad to hear 0.4.0 cleared it up. It sounds like we can close this now.

Reference: github-starred/ollama#4753