[GH-ISSUE #15475] Help about 500 Internal Server Error #56405

Open
opened 2026-04-29 10:46:39 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @Fuhua-code on GitHub (Apr 10, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15475

server.log (https://github.com/user-attachments/files/26631077/server.log)

What is the issue?

I need to convert the GGUF model I downloaded from ModelScope into a form that works with Ollama. To do this, I wrote a Modelfile:
FROM "D:\LLM\llama_cpp\qwen-vl-4b\Qwen3VL-4B-Instruct-Q4_K_M.gguf"
ADAPTER "D:\LLM\llama_cpp\qwen-vl-4b\mmproj-Qwen3VL-4B-Instruct-F16.gguf"
PARAMETER num_ctx 8192
After running the command and waiting for Ollama to complete the model conversion successfully, I ran the model but encountered the following error:
Error: 500 Internal Server Error: model runner has unexpectedly stopped, this may be due to resource limitations or an internal error, check ollama server logs for details
This report is machine translated.

Relevant log output


OS

windows 11 24H2

GPU

4060 laptop

CPU

i9-13900H

Ollama version

0.20.5

GiteaMirror added the bug label 2026-04-29 10:46:39 -05:00

@rick-github commented on GitHub (Apr 10, 2026):

Server logs (https://docs.ollama.com/troubleshooting) will aid in debugging.

But it looks like you are trying to import a standard q4_K_M quant of Qwen3VL-4B-Instruct; why not just use the one from the ollama library?

ollama pull qwen3-vl:4b-instruct-q4_K_M
echo FROM qwen3-vl:4b-instruct-q4_K_M > Modelfile
echo PARAMETER num_ctx 8192 >> Modelfile
ollama create qwen3-vl:4b-instruct-8k-q4_K_M
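One caveat with the `echo`/`>` approach (an editor's note, not part of the original comment): in Windows PowerShell 5.1, `>` redirection writes UTF-16LE by default, which Ollama may fail to parse as a Modelfile. Writing the file in one step from bash or Git Bash sidesteps that:

```shell
# Write the Modelfile in one step (bash / Git Bash); a quoted heredoc
# avoids shell-specific redirection encoding surprises.
cat > Modelfile <<'EOF'
FROM qwen3-vl:4b-instruct-q4_K_M
PARAMETER num_ctx 8192
EOF
```

Then run `ollama create qwen3-vl:4b-instruct-8k-q4_K_M -f Modelfile` as in the commands above.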

@Fuhua-code commented on GitHub (Apr 10, 2026):

> Server logs will aid in debugging.
>
> But it looks like you are trying to import a standard q4_K_M quant of Qwen3VL-4B-Instruct; why not just use the one from the ollama library?
>
> ollama pull qwen3-vl:4b-instruct-q4_K_M
> echo FROM qwen3-vl:4b-instruct-q4_K_M > Modelfile
> echo PARAMETER num_ctx 8192 >> Modelfile
> ollama create qwen3-vl:4b-instruct-8k-q4_K_M

I have uploaded the server.log, thank you for the reminder.

Yes, that is the one I am downloading, but in my network environment the download speed through Ollama is very slow.

Or is there another way to download Ollama's models?
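Since `ollama pull` resumes interrupted downloads from layers that were already fetched, one workaround for a slow or flaky connection (an editor's sketch, not an official Ollama feature) is to retry the pull in a loop:

```shell
# Retry a command until it succeeds. `ollama pull` resumes from layers
# that are already downloaded, so each retry makes forward progress.
# RETRY_DELAY (seconds between attempts) is this sketch's own knob.
retry() {
  until "$@"; do
    echo "command failed, retrying in ${RETRY_DELAY:-5}s..." >&2
    sleep "${RETRY_DELAY:-5}"
  done
}

# Example: retry ollama pull qwen3-vl:4b-instruct-q4_K_M
```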


@PureBlissAK commented on GitHub (Apr 18, 2026):

🤖 Automated Triage & Analysis Report

Issue: #15475
Analyzed: 2026-04-18T18:20:53.997640

Analysis

  • Type: unknown
  • Severity: medium
  • Components: unknown

Implementation Plan

  • Effort: medium
  • Steps:

This issue has been triaged and marked for implementation.


@yangqiheng2019 commented on GitHub (Apr 24, 2026):

Hello, did you resolve this issue? I'm encountering the same one.


@Fuhua-code commented on GitHub (Apr 24, 2026):

> Hello, did you resolve this issue? I'm encountering the same one.

I couldn't solve it; in the end I downloaded the model from the official Ollama library.


@yangqiheng2019 commented on GitHub (Apr 27, 2026):

> I couldn't solve it; in the end I downloaded the model from the official Ollama library.

Thanks. I looked into it, and it may be missing a vision encoder, but I don't know how to fix that.


@Fuhua-code commented on GitHub (Apr 27, 2026):

> Thanks. I looked into it, and it may be missing a vision encoder, but I don't know how to fix that.

The command Ollama provides can convert that vision module into a hashed blob file, but I still get the same error when I actually use it 😇


@yangqiheng2019 commented on GitHub (Apr 27, 2026):

> The command Ollama provides can convert that vision module into a hashed blob file, but I still get the same error when I actually use it 😇

Same here. The catch is that mine is a model I trained myself, so I can't use anything from the official library.


@Fuhua-code commented on GitHub (Apr 27, 2026):

> Same here. The catch is that mine is a model I trained myself, so I can't use anything from the official library.

Then it may have to be deployed with llama.cpp or LM Studio.

Reference: github-starred/ollama#56405