[GH-ISSUE #12918] Cannot communicate with qwen3-vl but qwen2.5-vl works well, i use the sdk which is "/api/chat" #70624

Open
opened 2026-05-04 22:18:24 -05:00 by GiteaMirror · 19 comments
Owner

Originally created by @rod-fu on GitHub (Nov 3, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12918

What is the issue?

I never changed the logic of my code between trying qwen2.5-vl and qwen3-vl, yet only qwen2.5-vl works well.

```
11月 03 15:38:50 ubuntu22 ollama[556325]: Device 0: NVIDIA RTX A5000, compute capability 8.6, VMM: yes, ID: GPU-edf11493-b7fd-5627-3d56-ce237517ebf2
11月 03 15:38:50 ubuntu22 ollama[556325]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
11月 03 15:38:50 ubuntu22 ollama[556325]: time=2025-11-03T15:38:50.527+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16>
11月 03 15:38:51 ubuntu22 ollama[556325]: time=2025-11-03T15:38:51.059+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:5>
11月 03 15:38:51 ubuntu22 ollama[556325]: time=2025-11-03T15:38:51.141+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:>
11月 03 15:38:51 ubuntu22 ollama[556325]: time=2025-11-03T15:38:51.141+08:00 level=INFO source=ggml.go:482 msg="offloading 36 repeating layers to GPU"
11月 03 15:38:51 ubuntu22 ollama[556325]: time=2025-11-03T15:38:51.141+08:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
11月 03 15:38:51 ubuntu22 ollama[556325]: time=2025-11-03T15:38:51.141+08:00 level=INFO source=ggml.go:494 msg="offloaded 37/37 layers to GPU"
11月 03 15:38:51 ubuntu22 ollama[556325]: time=2025-11-03T15:38:51.141+08:00 level=INFO source=device.go:212 msg="model weights" device=CUDA0 size="2.3 GiB"
11月 03 15:38:51 ubuntu22 ollama[556325]: time=2025-11-03T15:38:51.141+08:00 level=INFO source=device.go:217 msg="model weights" device=CPU size="303.7 MiB"
11月 03 15:38:51 ubuntu22 ollama[556325]: time=2025-11-03T15:38:51.141+08:00 level=INFO source=device.go:223 msg="kv cache" device=CUDA0 size="576.0 MiB"
11月 03 15:38:51 ubuntu22 ollama[556325]: time=2025-11-03T15:38:51.141+08:00 level=INFO source=device.go:234 msg="compute graph" device=CUDA0 size="424.0 MiB"
11月 03 15:38:51 ubuntu22 ollama[556325]: time=2025-11-03T15:38:51.141+08:00 level=INFO source=device.go:239 msg="compute graph" device=CPU size="5.0 MiB"
11月 03 15:38:51 ubuntu22 ollama[556325]: time=2025-11-03T15:38:51.141+08:00 level=INFO source=device.go:244 msg="total memory" size="3.6 GiB"
11月 03 15:38:51 ubuntu22 ollama[556325]: time=2025-11-03T15:38:51.141+08:00 level=INFO source=sched.go:493 msg="loaded runners" count=2
11月 03 15:38:51 ubuntu22 ollama[556325]: time=2025-11-03T15:38:51.141+08:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
11月 03 15:38:51 ubuntu22 ollama[556325]: time=2025-11-03T15:38:51.142+08:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loadi>
11月 03 15:38:52 ubuntu22 ollama[556325]: time=2025-11-03T15:38:52.399+08:00 level=INFO source=server.go:1289 msg="llama runner started in 2.11 seconds"
11月 03 15:38:52 ubuntu22 ollama[556325]: [GIN] 2025/11/03 - 15:38:52 | 200 | 2.426396851s | xx.xx.x.xx | POST "/api/embed"
11月 03 15:39:48 ubuntu22 ollama[556325]: [GIN] 2025/11/03 - 15:39:48 | 400 | 55.707313686s | xx.xx.xx.xx| POST "/api/chat"
```

Exception: `{"error":"model is required"}`
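As far as I know, Ollama returns the 400 body `{"error":"model is required"}` when the `model` field in the request is missing or empty. A minimal client-side guard (a hypothetical helper, not part of the reporter's code) can rule that case out before the request is sent:

```python
def validate_chat_request(req: dict) -> dict:
    # Ollama's /api/chat answers 400 {"error":"model is required"} when the
    # "model" field is absent or empty, so fail fast on the client instead.
    if not req.get("model"):
        raise ValueError("model is required")
    return req

# A request that names a model passes through unchanged:
ok = validate_chat_request({"model": "qwen3-vl:8b", "messages": []})
```

If the guard never fires but the server still answers 400, the field is being lost somewhere between the client and the server (e.g. a proxy mangling the body), which would be worth checking next.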

Relevant log output


OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.12.9

GiteaMirror added the bug and needs more info labels 2026-05-04 22:18:25 -05:00
Author
Owner

@rod-fu commented on GitHub (Nov 3, 2025):

Here are my parameters:

```python
messages = [{"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt, 'images': image_base64}]

request_data = {'model': self.model_name, 'messages': messages,
                'max_token': 8192, 'stream': False,
                'options': {'temperature': 0.6, 'top_p': 0.95, 'top_k': 20, 'min_p': 0, "frequency_penalty": 1.0},
                }
```
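One thing worth checking in these parameters: to the best of my knowledge, Ollama's `/api/chat` has no top-level `max_token` field (unrecognized fields are silently ignored), and output length is capped via `options.num_predict` instead. A hedged sketch of the same payload with that swap (`num_predict` is from the Ollama API docs; everything else mirrors the parameters above):

```python
def build_chat_request(model: str, system_prompt: str, prompt: str, images: list) -> dict:
    # Same payload as above, but the length cap moved into options.num_predict;
    # "max_token" is not a field Ollama recognizes and would be dropped.
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt, "images": images},
        ],
        "stream": False,
        "options": {
            "temperature": 0.6,
            "top_p": 0.95,
            "top_k": 20,
            "min_p": 0,
            "frequency_penalty": 1.0,
            "num_predict": 8192,
        },
    }
```

This would not by itself explain a `model is required` error, but it removes one source of surprise when comparing model behavior.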
Author
Owner

@rick-github commented on GitHub (Nov 3, 2025):

What's the output of `ollama list`?

Author
Owner

@rod-fu commented on GitHub (Nov 3, 2025):

![Image](https://github.com/user-attachments/assets/467894ff-4890-40b4-a50f-c278e450c97d)

Actually, I just copied the model's name from this place, so I can make sure the model's name is correct.

![Image](https://github.com/user-attachments/assets/aeccb759-1b6b-4abe-ac8b-3eda46ad2015)
Author
Owner

@rick-github commented on GitHub (Nov 3, 2025):

A fragment of code showing your parameters doesn't help in diagnosing the issue. A self-contained script that demonstrates the problem will help in debugging.

Author
Owner

@rod-fu commented on GitHub (Nov 3, 2025):

The code below can help you repro the issue. BTW, there are five pictures in my directory, and I send them all together.

```python
import os
import base64
import requests

def batch_analyze_all_images(image_dir: str) -> str:
    # Configuration (replace with your actual info)
    url = "http://xx.xxx.xxx.xx:11434/api/chat"
    headers = {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer sk-1234'  # your token
    }
    system_prompt = "你是图片分析大师,请同时分析所有提供的图片,对比差异并总结共同特征"
    user_prompt = "请分析以下所有图片,分别描述每张内容,并总结它们的关联或区别"

    # Supported image formats
    image_extensions = ('.jpg', '.jpeg', '.png', '.gif', '.bmp', '.webp')
    all_images = []

    # Read every image in the directory and convert it to Base64
    for filename in os.listdir(image_dir):
        file_path = os.path.join(image_dir, filename)
        if os.path.isfile(file_path) and filename.lower().endswith(image_extensions):
            with open(file_path, "rb") as f:
                img_base64 = base64.b64encode(f.read()).decode("utf-8")
                all_images.append(img_base64)

    # Build the request (send all images at once)
    request_data = {
        'model': "qwen3-vl:8b",
        'messages': [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt, "images": all_images}  # all images in one array
        ],
        'max_token': 8192,
        'stream': False
    }

    # Send the request
    response = requests.post(url, headers=headers, json=request_data)

    if response.status_code == 200:
        return response.json()['message']['content']
    else:
        raise Exception(f"请求失败:{response.text}")

if __name__ == "__main__":
    # Replace with your image directory
    image_dir = r"D:\xxx\Agno-Agent\app\utils\pic\空气质量总览AQI计算"
    result = batch_analyze_all_images(image_dir)
    print("批量分析结果:\n", result)
```

Author
Owner

@rick-github commented on GitHub (Nov 3, 2025):

Please put this in a code markdown block (``` before and after).

Author
Owner

@rod-fu commented on GitHub (Nov 3, 2025):

```python
import os
import base64
import requests


def batch_analyze_all_images(image_dir: str) -> str:
    # Configuration (replace with your actual info)
    url = "http://xx.xxx.xxx.xx:20658/api/chat"
    headers = {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer sk-1234'  # your token
    }
    system_prompt = "你是图片分析大师,请同时分析所有提供的图片,对比差异并总结共同特征"
    user_prompt = "请分析以下所有图片,分别描述每张内容,并总结它们的关联或区别"

    # Supported image formats
    image_extensions = ('.jpg', '.jpeg', '.png', '.gif', '.bmp', '.webp')
    all_images = []

    # Read every image in the directory and convert it to Base64
    for filename in os.listdir(image_dir):
        file_path = os.path.join(image_dir, filename)
        if os.path.isfile(file_path) and filename.lower().endswith(image_extensions):
            with open(file_path, "rb") as f:
                img_base64 = base64.b64encode(f.read()).decode("utf-8")
                all_images.append(img_base64)

    # Build the request (send all images at once)
    request_data = {
        'model': "qwen3-vl:8b",
        'messages': [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt, "images": all_images}  # all images in one array
        ],
        'max_token': 8192,
        'stream': False
    }

    # Send the request
    response = requests.post(url, headers=headers, json=request_data)

    if response.status_code == 200:
        return response.json()['message']['content']
    else:
        raise Exception(f"请求失败:{response.text}")

if __name__ == "__main__":
    # Replace with your image directory
    image_dir = r"D:\xxx\Agno-Agent\app\utils\pic\空气质量总览AQI计算"
    result = batch_analyze_all_images(image_dir)
    print("批量分析结果:\n", result)
```
Author
Owner

@rod-fu commented on GitHub (Nov 3, 2025):

Also, one other thing: before I ran qwen3-vl:8b, I had called qwen3-embedding. I'm not sure whether that can influence qwen3-vl.

Author
Owner

@rick-github commented on GitHub (Nov 3, 2025):

I made the following change:

```diff
--- 12918.py.orig	2025-11-03 16:49:55.185846807 +0100
+++ 12918.py	2025-11-03 16:51:15.950436376 +0100
@@ -8,7 +8,7 @@
 
 def batch_analyze_all_images(image_dir: str) -> str:
     # 配置参数(替换为你的实际信息)
-    url = "http://xx.xxx.xxx.xx:20658/api/chat"
+    url = "http://127.0.0.1:11434/api/chat"
     headers = {
         'Content-Type': 'application/json',
         'Authorization': 'Bearer sk-1234'  # 你的token
@@ -49,6 +49,6 @@
 
 if __name__ == "__main__":
     # 替换为你的图片目录
-    image_dir = r"D:\xxx\Agno-Agent\app\utils\pic\空气质量总览AQI计算"
+    image_dir = r"."
     result = batch_analyze_all_images(image_dir)
     print("批量分析结果:\n", result)
```

And ran it in a directory with a picture of a puppy and a picture of a kitten:

```console
$ ./12918.py
批量分析结果:
 ### 图片分析报告  

---

#### **第一张图片(白色小狗)**  
- **主体内容**:  
  一只**白色幼犬**,毛发蓬松柔软,正端坐于灰色石质台阶上。它佩戴着**红色项圈**,项圈上挂着一枚**金色铃铛**,增添可爱与灵动感。  
- **环境细节**:  
...
> 总结:两张图片以**宠物幼崽为纽带**,通过构图、光线、细节的差异化设计,展现“可爱”这一核心主题的多元表达——小狗偏重静谧感,小猫强调活力感,但共同传递出“温暖陪伴”的情感内核。
```

So this seems to be working as expected.

What happens if you run the following:

```console
$ OLLAMA_HOST=http://xx.xxx.xxx.xx:20658/ ollama run qwen3-vl:8b
>>> hello
```
Author
Owner

@rod-fu commented on GitHub (Nov 3, 2025):

Yeah, about the message I just sent: I had tested with four pictures and it worked well, but with five pictures it wasn't so good.

Author
Owner

@rod-fu commented on GitHub (Nov 3, 2025):

`$ OLLAMA_HOST=http://xx.xxx.xxx.xx:20658/ ollama run qwen3-vl:8b`

That one was okay; I could communicate with the model.

Author
Owner

@rod-fu commented on GitHub (Nov 3, 2025):

BTW, I sent the same pictures (five pictures) to qwen2.5:7b, and it worked well.

Author
Owner

@rick-github commented on GitHub (Nov 3, 2025):

I copied the files to create 5 images (3 puppy, 2 kitten) and the script returned:

```
批量分析结果:
 ### 图片分析报告(基于提供的内容)  

#### **1. 各图片内容描述**  
由于您提供的图片信息存在**重复描述混乱**(例如连续出现多个“白色小狗”“橘猫”描述,但实际图片数量可能不足5张),以下分析基于**明确描述的部分**进行梳理。假设您实际提供了**2张有效图片**(其余可能为重复或误传):  

...

> **注**:若实际图片数量为5张,且包含更多小狗/小猫(如不同颜色、背景),分析可扩展至“背景环境的多样化”(如室内/室外、阳光/阴雨)及“动物表情的细微差异”(如好奇/安静)。当前分析基于您提供的有效内容,如需调整请补充说明。
```

which seems to be working as expected.

Author
Owner

@rick-github commented on GitHub (Nov 3, 2025):

Your original error report was that communication with qwen3-vl failed (`Exception: {"error":"model is required"}`). It seems that's no longer the issue; rather, the response is not what you were expecting. What response are you expecting?

Author
Owner

@rod-fu commented on GitHub (Nov 4, 2025):

I just want to get content related to these pics. BTW, could you call the qwen3-embedding model first and then run the code above again? In my workflow I need to call the embedding model first, retrieve some related content, and send it all together. If yours works well, I'll reinstall my Ollama and use the latest build; the current one is 0.12.9.

Author
Owner

@rod-fu commented on GitHub (Nov 4, 2025):

I have tried calling the model with just the pics, and it was okay. I'm not sure whether the embedding model has some influence; please try it and leave me a message, thanks. Actually it's not stable: I tried five times, and only three times did I get response content; the others failed.
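Since four images work but five are flaky, it may help to rule out sheer request size: base64 inflates each file by roughly 4/3, so five multi-megabyte screenshots make a large JSON body (and many image tokens for the vision model). A small sketch (hypothetical helpers, not from the reporter's code) for estimating the encoded size:

```python
def b64_len(raw_bytes: int) -> int:
    # Standard base64 emits 4 output characters for every 3 input bytes, padded.
    return 4 * ((raw_bytes + 2) // 3)

def payload_estimate(image_sizes_bytes: list) -> int:
    # Rough size of the "images" array once each file is base64-encoded.
    return sum(b64_len(n) for n in image_sizes_bytes)

# e.g. five 2 MiB screenshots come to roughly 13 MiB of base64 in the body.
```

Comparing the estimate for the four-image case against the five-image case would show whether the failures correlate with payload size.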

Author
Owner

@rod-fu commented on GitHub (Nov 4, 2025):

![Image](https://github.com/user-attachments/assets/87411647-05ef-40ac-adb8-84f064437206)

This time I just used the pics alone; please check my debugging in the screenshot above.
Author
Owner

@rod-fu commented on GitHub (Nov 4, 2025):

@pdevine here is the Ollama log from when "/api/chat" returned 500. I guess maybe the memory is not enough for inference?

```
11月 04 12:57:10 ubuntu22 ollama[562788]: ggml_backend_cuda_device_get_memory device GPU-edf11493-b7fd-5627-3d56-ce237517ebf2 utilizing NVML memory reporting free: 9560457216 total: 25757220864
11月 04 12:57:10 ubuntu22 ollama[562788]: ggml_backend_cuda_device_get_memory device GPU-edf11493-b7fd-5627-3d56-ce237517ebf2 utilizing NVML memory reporting free: 13363183616 tota: 25757220864
```
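For reference, converting the NVML byte counts in that log to GiB (plain arithmetic, no Ollama internals assumed):

```python
def gib(n_bytes: int) -> float:
    # bytes -> GiB
    return n_bytes / 2**30

# Values from the NVML lines above:
free_a, free_b, total = 9560457216, 13363183616, 25757220864
# free is about 8.9 GiB and 12.4 GiB out of about 24.0 GiB total, so the
# A5000 still reports headroom at those moments; a hard OOM would more
# likely leave a CUDA allocation error in the log.
```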

Author
Owner

@rod-fu commented on GitHub (Nov 4, 2025):

![Image](https://github.com/user-attachments/assets/1f051a68-112a-42c6-aee5-a1210dd38300)

Here are the parameters of my pics.

Reference: github-starred/ollama#70624