[GH-ISSUE #14474] GLM-OCR return empty markdown #71446

Closed
opened 2026-05-05 01:45:24 -05:00 by GiteaMirror · 22 comments
Owner

Originally created by @resc863 on GitHub (Feb 26, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14474

What is the issue?

After v0.17.1, GLM-OCR only returns blank markdown for every image input.

![Image](https://github.com/user-attachments/assets/ecdfac53-9fd9-482a-9582-ce120b7281b1)

Relevant log output


OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.17.2

GiteaMirror added the bug label 2026-05-05 01:45:24 -05:00

@ZYWWYZ123 commented on GitHub (Feb 27, 2026):

Having the same problem on Ollama version 0.17.4; the image size doesn't exceed the limit.
OS: Windows 11
GPU: Nvidia GeForce RTX 3060 Laptop
CPU: Intel i7-12700H
Ollama version: 0.17.4


@andibo73 commented on GitHub (Feb 27, 2026):

Same here since Ollama version 0.17.1-rc1.
OS: Windows 11
GPU: NVIDIA GeForce RTX 5090
CPU: Intel(R) Core(TM) Ultra 9 285K
Ollama version: 0.17.4


@illusdolphin commented on GitHub (Feb 27, 2026):

Via API:

```json
{
  "model": "glm-ocr:latest",
  "stream": false,
  "options": { "num_ctx": 40196 },
  "messages": [
    { "role": "user", "content": "OCR <image>", "images": ["iVBO...CC"] }
  ]
}
```

(A valid image was sent instead of "iVBO...CC"; it worked in previous versions and is omitted here for brevity. The image is not big.)

```json
{
  "model": "glm-ocr:latest",
  "created_at": "2026-02-27T14:19:54.2522082Z",
  "message": { "role": "assistant", "content": "```markdown\n\n```" },
  "done": true,
  "done_reason": "stop",
  "total_duration": 4233457300,
  "load_duration": 4169850600,
  "prompt_eval_count": 11,
  "prompt_eval_duration": 36302000,
  "eval_count": 6,
  "eval_duration": 10474400
}
```

Logs are attached

[server.log](https://github.com/user-attachments/files/25606394/server.log)
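For anyone wanting to script the same check, here is a minimal stdlib-only Python sketch of the request above. `OLLAMA_URL` assumes the default local endpoint, and `build_chat_payload`/`ocr_image` are hypothetical helper names for this sketch, not part of any Ollama client library:

```python
import base64
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint


def build_chat_payload(image_b64: str, num_ctx: int = 40196) -> dict:
    """Build the same /api/chat payload shown in the comment above."""
    return {
        "model": "glm-ocr:latest",
        "stream": False,
        "options": {"num_ctx": num_ctx},
        "messages": [
            {"role": "user", "content": "OCR <image>", "images": [image_b64]},
        ],
    }


def ocr_image(path: str) -> str:
    """Send one image file to glm-ocr and return the raw markdown content."""
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_chat_payload(image_b64)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["message"]["content"]
```

Calling `ocr_image("test.png")` (a placeholder path for your own image) against an affected version returns just an empty markdown fence, matching the response JSON above.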


@Fsky666 commented on GitHub (Feb 27, 2026):

I've updated to version 0.17.4, but I'm still encountering issues with glm-ocr. When using images with dimensions within the 2048x2048 range, the output is empty, and there are no corresponding error logs. I reverted to version 0.16.3, and it's working again.

Ollama 0.17.4, RTX 2080
The model loaded successfully, but the output content is empty.


@Miroshnichenko-Dmitry commented on GitHub (Feb 27, 2026):

I have the same problem.
Ollama 0.17.4
GPU 3070
CPU 12400F

on 16.3 everything is fine


@Supersnu commented on GitHub (Feb 28, 2026):

I downgraded Ollama to 0.17.0. That version works too.


@kumanoko24 commented on GitHub (Mar 1, 2026):

I downgraded Ollama to 0.17.0 too. Please help, community.


@rdaim commented on GitHub (Mar 2, 2026):

I have the same problem.
Ollama 0.17.4
apple m1


@jbcallaghan commented on GitHub (Mar 2, 2026):

The same issue running 0.17.4: it either returns empty markdown or echoes the user prompt as the response.


@flaming999 commented on GitHub (Mar 2, 2026):

ollama run glm-ocr:q8_0 Text Recognition: 20250610-103837.jpg
returned only an empty ```markdown``` fence.

ollama run glm-ocr:q8_0 "Table Recognition:" 20250610-103837.jpg
returned only `<table><tr><td></td></tr></table>`.

The same issue with an empty response.


@rdaim commented on GitHub (Mar 2, 2026):

Use a 16.x version; it works.


@ganlvtech commented on GitHub (Mar 2, 2026):

https://github.com/ollama/ollama/releases/tag/v0.16.3


@rdaim commented on GitHub (Mar 2, 2026):

How can I use glm-ocr with 0.17+? Help me, please.


@kalaomer commented on GitHub (Mar 3, 2026):

v0.17.5 on a Mac M1 Pro is not working :/


@lhf2003 commented on GitHub (Mar 3, 2026):

0.17.5
same bug


@kalaomer commented on GitHub (Mar 4, 2026):

Same issue on 0.17.6 :/


@andibo73 commented on GitHub (Mar 4, 2026):

> Same issue on 0.17.6 :/

Nope. Works like 0.17.0.
Thanks!


@kalaomer commented on GitHub (Mar 4, 2026):

> > Same issue on 0.17.6 :/
>
> Nope. Works like 0.17.0. Thanks!

Maybe this is an OS-related bug now? I'm using macOS.


@lhf2003 commented on GitHub (Mar 5, 2026):

> > > Same issue on 0.17.6 :/
> >
> > Nope. Works like 0.17.0. Thanks!
>
> Maybe this is an OS-related bug now? I'm using macOS.

Ollama version: 0.17.6
The same problem exists on Windows 11.
Images can be recognized normally, but PDFs cannot be OCR-recognized.

![Image](https://github.com/user-attachments/assets/e71869c0-f193-4b02-83f2-ebb7bc12b9a1)

@andibo73 commented on GitHub (Mar 5, 2026):

> > > > Same issue on 0.17.6 :/
> > >
> > > Nope. Works like 0.17.0. Thanks!
> >
> > Maybe this is an OS-related bug now? I'm using macOS.
>
> Ollama version: 0.17.6. The same problem exists on Windows 11. Images can be recognized normally, but PDFs cannot be OCR-recognized.
> (image)

Sorry, I never used PDF as input. I just use images in my Python workflow.


@adarsh9780 commented on GitHub (Apr 2, 2026):

Is this issue resolved? I am on the latest version of macOS:
ProductName: macOS
ProductVersion: 26.3.1
ProductVersionExtra: (a)
BuildVersion: 25D771280a
Ollama version is 0.19.0.

I tried both the terminal command and the Ollama UI; I got empty results in both.

![Image](https://github.com/user-attachments/assets/5ac67764-ec9f-4589-b4d1-74ee0c2d6f8f)

@andersfylling commented on GitHub (Apr 11, 2026):

I have exactly the same issue and output as @adarsh9780, and I'm using a completely different picture. Running the latest Ollama version on Arch Linux (kernel 6.19.10-arch1-1).


Reference: github-starred/ollama#71446