[GH-ISSUE #2213] Interleaving text and images (for few-shot learning) #47779

Closed
opened 2026-04-28 05:18:36 -05:00 by GiteaMirror · 3 comments

Originally created by @delenius on GitHub (Jan 26, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2213

It does not appear to be possible (e.g. with llava) to interleave images and text (or is it?).

This would be necessary in order to give some few-shot examples of image-text pairs, and then a final image that we want to generate text for. For example, the [OpenAI API](https://platform.openai.com/docs/guides/vision) allows for this by having the `content` field be a list, where each entry can be either text, or a base64-encoded image. (The examples in their docs do not show it, but it is indeed possible to interleave images and text arbitrarily using that API.)

I am not sure this is possible with the underlying llava model (or others), but if it is, it would be a great feature to have.
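For reference, the OpenAI-style payload described above might look like the following sketch in Python. This is an assumption-laden illustration, not an endorsement of a specific model: the model name is an example, and the image bytes are placeholders standing in for real encoded images.

```python
import base64
import json

# Placeholder image bytes; a real request would base64-encode an actual image file.
fake_png = base64.b64encode(b"placeholder-image-bytes").decode("ascii")
data_url = f"data:image/png;base64,{fake_png}"

# OpenAI-style chat payload where `content` is a list that freely interleaves
# text parts and base64-encoded image parts: two few-shot image-text pairs,
# followed by the final query image.
payload = {
    "model": "gpt-4-vision-preview",  # example model name
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Example 1:"},
                {"type": "image_url", "image_url": {"url": data_url}},
                {"type": "text", "text": "Caption: a red square."},
                {"type": "text", "text": "Example 2:"},
                {"type": "image_url", "image_url": {"url": data_url}},
                {"type": "text", "text": "Caption: a blue circle."},
                {"type": "text", "text": "Now caption this image:"},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }
    ],
}

print(json.dumps(payload)[:60])
```

The point is simply that `content` is an ordered list, so text and image parts can alternate in any order the few-shot prompt requires.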

GiteaMirror added the feature request label 2026-04-28 05:18:36 -05:00

@pdevine commented on GitHub (Jan 26, 2024):

Thank you for the feature request!

Unfortunately it's not currently possible to do this. The new `MESSAGE` command in the Modelfile doesn't _yet_ support adding images, and I think there's probably some work that would need to be done to properly interpret older images.


@mountaineerbr commented on GitHub (Feb 5, 2024):

The outcome will be random. You may try it yourself to see what differences arise.

What we do in my OpenAI (and Ollama) API wrapper is to leave images as the last items in the list of input prompts.

I do see that interleaving images and text may seem like a way of organising things. But on USENET (and older systems), for example, the convention is to use links such as "text[*]", which mark a reference to some sort of predefined list below.

Humans do _not read_ the references before the text that cites them. Of course, you may want to "trick the user" into seeing an image before a crucial bit of text, but that is advertising; why should the user see the image before the context, anyway? The user should strive to only check the relevant references.

Scientific papers leave references at the end, and the reader may check the figure list, tables, and any other appendix if relevant to their interest.

As a rule of thumb, the AI can only understand your world. So if you see images before text... great, I guess. But I read text before evaluating images, and if the image strikes me first, then that is a bias, you know.

The only universal is text, anyway. Forget images; they get represented as base64, or 0s and 1s.

PS: I mean that I prefer Karl Marx's rhetoric over that of his opposite, who states the conclusion as the starting point of the entire rationale.


@jmorganca commented on GitHub (Sep 4, 2024):

Hi there, this is now possible using the `/api/chat` or the `/v1/chat/completions` API (which supports base64-encoded images).

For the `/api/chat` endpoint, consecutive messages interleaving text and images will now automatically be combined into one message.

Let me know if you have any issues with this!
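A minimal sketch of what such an `/api/chat` request body might look like, assuming Ollama's message format where each message carries `content` text and an optional `images` list of base64 strings (the image bytes below are placeholders, and the model name is just an example):

```python
import base64
import json

# Placeholder base64 image; a real call would encode an actual image file.
img_b64 = base64.b64encode(b"placeholder-image-bytes").decode("ascii")

# Ollama /api/chat payload: a few-shot image-caption pair followed by the
# query image. Per the comment above, consecutive interleaved text/image
# messages are combined into one message by the server.
payload = {
    "model": "llava",  # example model name
    "messages": [
        {"role": "user", "content": "Example 1:", "images": [img_b64]},
        {"role": "user", "content": "Caption: a red square."},
        {"role": "user", "content": "Now caption this image:", "images": [img_b64]},
    ],
    "stream": False,
}

# A real request would POST this JSON to the local Ollama server, e.g.:
#   requests.post("http://localhost:11434/api/chat", json=payload)
print(json.dumps(payload)[:40])
```

This is a sketch under the stated assumptions, not a verified transcript of a working request; consult the Ollama API documentation for the authoritative schema.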
