[PR #13082] [MERGED] feat(model): deepseekocr #14068

Closed
opened 2026-04-13 00:43:52 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/13082
Author: @mxyng
Created: 11/13/2025
Status: Merged
Merged: 11/19/2025
Merged by: @mxyng

Base: main ← Head: mxyng/deepseek-ocr


📝 Commits (2)

  • 8612ca1 deepseekocr
  • 2588ae4 cuda: skip large batches

📊 Changes

23 files changed (+1010 additions, -14 deletions)

📝 convert/convert.go (+2 -0)
➕ convert/convert_deepseekocr.go (+136 -0)
📝 convert/reader.go (+4 -1)
📝 convert/reader_safetensors.go (+4 -1)
📝 fs/ggml/ggml.go (+1 -0)
📝 llama/patches/0028-Add-memory-detection-using-DXGI-PDH.patch (+1 -1)
📝 llama/patches/0029-vulkan-Call-ggml_vk_buffer_write_2d-from-ggml_vk_buf.patch (+1 -1)
📝 llama/patches/0030-Vulkan-MMQ-Integer-Dot-Refactor-and-K-Quant-support-.patch (+1 -1)
📝 llama/patches/0031-vulkan-Update-topk_moe-fusion-to-handle-gpt-s-late-s.patch (+1 -1)
📝 llama/patches/0032-vulkan-Fuse-rope-set_rows-16769.patch (+1 -1)
📝 llama/patches/0033-vulkan-Handle-argsort-with-a-large-number-of-rows-16.patch (+1 -1)
📝 llama/patches/0035-vulkan-Fix-crash-when-FP16-mul_mat-accumulation-is-n.patch (+1 -1)
➕ llama/patches/0036-ggml-cuda-skip-large-batches.patch (+25 -0)
📝 ml/backend.go (+10 -0)
📝 ml/backend/ggml/ggml.go (+28 -1)
📝 ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu (+3 -0)
📝 model/imageproc/images.go (+32 -4)
➕ model/models/deepseekocr/imageprocessor.go (+83 -0)
➕ model/models/deepseekocr/model.go (+192 -0)
➕ model/models/deepseekocr/model_sam.go (+225 -0)

...and 3 more files

📄 Description

deepseek-ai/DeepSeek-OCR is very finicky and requires very specific inputs. Here are some examples of how to use this model effectively.

Modelfile

FROM deepseek-ai/DeepSeek-OCR
PARAMETER temperature 0
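
One way to apply these settings is to save the snippet above as a local Modelfile (the filename and model tag below are only examples) and build a model from it:

$ ollama create deepseek-ocr -f Modelfile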

Inputs

$ ollama run deepseek-ocr "/path/to/image\n<|grounding|>Given the layout of the image."
$ ollama run deepseek-ocr "/path/to/image\nFree OCR."
$ ollama run deepseek-ocr "/path/to/image\nParse the figure."
$ ollama run deepseek-ocr "/path/to/image\nExtract the text in the image."
$ ollama run deepseek-ocr "/path/to/image\n<|grounding|>Convert the document to markdown."

Known issues:

  • CUDA currently panics due to a kernel issue
    • there is currently a workaround that offloads the problematic operation to the CPU

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-13 00:43:52 -05:00

Reference: github-starred/ollama#14068