[PR #7588] [CLOSED] Enable JSON Schema support #43711

Closed
opened 2026-04-24 23:18:21 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/7588
Author: @hieunguyen1053
Created: 11/9/2024
Status: Closed

Base: main ← Head: dev


📝 Commits (2)

  • dd25e5f Enable JSON Schema support
  • c5f8130 Fix: Preserve ordered JSON in sampling.cpp

📊 Changes

10 files changed (+73 additions, -22 deletions)

View changed files

📝 api/types.go (+6 -0)
📝 llama/common.h (+1 -0)
📝 llama/llama.go (+6 -1)
📝 llama/runner/runner.go (+2 -0)
📝 llama/sampling.cpp (+11 -2)
📝 llama/sampling_ext.cpp (+1 -0)
📝 llama/sampling_ext.h (+1 -0)
📝 llm/server.go (+2 -3)
📝 openai/openai.go (+33 -8)
📝 server/routes.go (+10 -8)

📄 Description

This pull request adds support for response_format, based on llama.cpp's grammar guide. The feature improves the flexibility of response formats and has been tested to work with both the openai library and LangChain.

Please review the code changes and test the feature to confirm compatibility with any additional components or configurations specific to your setup.

Let me know if you need further adjustments!
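As a minimal sketch of how the feature might be exercised through Ollama's OpenAI-compatible chat endpoint: the snippet below only builds the request payload (it sends nothing), and the model name, schema, and exact `response_format` shape accepted by this PR are illustrative assumptions — the changed files (`openai/openai.go`, `api/types.go`) are the authority on what is actually supported.

```python
import json

# Hypothetical JSON Schema that the PR would translate into a llama.cpp grammar.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

# OpenAI-style chat payload carrying a response_format field, as it might be
# POSTed to /v1/chat/completions. Field shapes here are assumptions.
payload = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Describe a person as JSON."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "person", "schema": schema},
    },
}

# Serialize the request body and round-trip it to confirm the structure.
body = json.dumps(payload)
print(json.loads(body)["response_format"]["type"])
```

With a real client, the same payload could be passed via the openai library's `chat.completions.create`, which is how the author reports testing the feature.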

[Screenshots: 2024-11-09 at 5:32:06 PM and 5:32:48 PM]

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-24 23:18:21 -05:00

Reference: github-starred/ollama#43711