[PR #1334] [MERGED] load projectors #21093

Closed
opened 2026-04-19 15:26:19 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/1334
Author: @mxyng
Created: 11/30/2023
Status: Merged
Merged: 12/5/2023
Merged by: @mxyng

Base: main ← Head: mxyng/load-projectors


📝 Commits (2)

  • b9495ea load projectors
  • 5d75505 return model configuration in generate

📊 Changes

5 files changed (+62 additions, -27 deletions)

View changed files

📝 api/types.go (+10 -0)
📝 llm/llama.go (+6 -1)
📝 llm/llm.go (+3 -3)
📝 server/images.go (+32 -17)
📝 server/routes.go (+11 -6)

📄 Description

Continuation of #1250 and #1308 to load additional models.

This also adds the model configuration to the generate response:

$ curl -s localhost:11434/api/generate -d '{"model":"llava:7b-v1.5-q4_0"}' | jq .
{
  "model": "llava:7b-v1.5-q4_0",
  "created_at": "2023-12-01T19:41:43.684471Z",
  "response": "",
  "model_configuration": {
    "model_format": "gguf",
    "model_family": "llama",
    "model_families": [
      "llama",
      "clip"
    ],
    "model_type": "7B",
    "file_type": "Q4_0"
  },
  "done": true
}

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-19 15:26:19 -05:00

Reference: github-starred/ollama#21093