[GH-ISSUE #7446] MiniCPM-V 2.6 model crash with error code 500 when using ollama API in golang #51245

Closed
opened 2026-04-28 19:02:00 -05:00 by GiteaMirror · 3 comments

Originally created by @FreemanFeng on GitHub (Oct 31, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7446

What is the issue?

![1730369280 debug](https://github.com/user-attachments/assets/24e7e4b7-cd2e-4453-885d-4e9e934fa8f3)

Below is the Go code. The prompt is set to "请识别图片" ("recognize the image") and the attached image is loaded into a []byte.

```go
package main

import (
	"context"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"log"

	"github.com/ollama/ollama/api"
)

func RunVLM(prompt string, images ...[]byte) (bool, any) {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}
	model := "minicpm-v"

	req := &api.GenerateRequest{
		Model:     model,
		Prompt:    prompt,
		KeepAlive: new(api.Duration),
		// Set streaming to false.
		Stream: new(bool),
	}
	for _, k := range images {
		s := base64.StdEncoding.EncodeToString(k)
		req.Images = append(req.Images, api.ImageData(s))
	}
	// req.KeepAlive.Duration = 24 * 60 * time.Minute

	var v any
	ctx := context.Background()
	respFunc := func(resp api.GenerateResponse) error {
		// Only print the response here; GenerateResponse has a number of
		// other interesting fields you may want to examine.
		fmt.Println(resp.Response)
		// fetchJSON is a helper defined elsewhere that extracts the JSON
		// payload from the raw model output.
		e := json.Unmarshal([]byte(resp.Response), &v)
		if e != nil {
			e = json.Unmarshal([]byte(fetchJSON(resp.Response)), &v)
			if e != nil {
				log.Println(e.Error())
				v = resp.Response
				return nil
			}
		}
		return nil
	}

	err = client.Generate(ctx, req, respFunc)
	if err != nil {
		log.Println(err.Error())
		return false, nil
	}
	return true, v
}
```
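
For completeness, here is a minimal caller for the function above (a sketch: the image path is hypothetical, and "os" must be added to the import list):

```go
func main() {
	// Hypothetical path to the attached image; any JPEG/PNG will do.
	img, err := os.ReadFile("test.jpg")
	if err != nil {
		log.Fatal(err)
	}
	ok, v := RunVLM("请识别图片", img)
	fmt.Println(ok, v)
}
```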

When I run the code, Ollama returns the error "unmarshalling llm prediction response: invalid character 'e' looking for beginning of value".

So I debugged the code and found that the Ollama API service returns a 500 error.
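
For triage, here is a sketch (assuming the github.com/ollama/ollama/api client plus the standard errors package) of how the HTTP status can be pulled out of the error returned by client.Generate, so the 500 is visible directly rather than only as a wrapped message:

```go
err = client.Generate(ctx, req, respFunc)
if err != nil {
	// api.StatusError carries the HTTP status code and the server's
	// error message when the API answers with a non-2xx status.
	var statusErr api.StatusError
	if errors.As(err, &statusErr) {
		log.Printf("server error %d: %s", statusErr.StatusCode, statusErr.ErrorMessage)
	} else {
		log.Println(err)
	}
}
```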

OS

Windows

GPU

No response

CPU

Intel

Ollama version

0.3.14

GiteaMirror added the bug label 2026-04-28 19:02:00 -05:00

@rick-github commented on GitHub (Oct 31, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will help in debugging.


@jessegross commented on GitHub (Nov 5, 2024):

If you can, try the 0.4.0 RC: we are now stricter about the Unicode characters that are emitted, which will probably address this problem. LLMs can sometimes emit partial multi-byte Unicode characters.
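
To illustrate the failure mode (a self-contained sketch, not Ollama code): a multi-byte UTF-8 character split at a stream-chunk boundary is invalid on its own, which is exactly the kind of output json.Unmarshal rejects:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

func main() {
	full := []byte("图") // U+56FE encodes to three bytes: e5 9b be
	partial := full[:2] // a stream chunk can end mid-character

	fmt.Println(utf8.Valid(partial)) // false: incomplete multi-byte rune
	fmt.Println(utf8.Valid(full))    // true once the final byte arrives
}
```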


@jessegross commented on GitHub (Nov 14, 2024):

I'm going to assume that this has been fixed by 0.4.0; please reopen if you still see it occurring on the current version.
