[PR #3702] [CLOSED] Streamlined server startup and fixed model loading status #42502

Closed
opened 2026-04-24 22:15:26 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/3702
Author: @mann1x
Created: 4/17/2024
Status: Closed

Base: main ← Head: mannix-server


📝 Commits (3)

  • bd54b08 Streamlined WaitUntilRunning
  • c942e4a Fixed startup sequence to report model loading
  • c496967 Merge branch 'ollama:main' into mannix-server

📊 Changes

2 files changed (+48 additions, -62 deletions)

View changed files

📝 llm/ext_server/server.cpp (+21 -21)
📝 llm/server.go (+27 -41)

📄 Description

This patch provides:

  • Streamlined WaitUntilRunning method without ticker and nested loop
  • Updated server.cpp with the correct startup sequence: the HTTP server starts listening before the model is loaded, so the model loading status is reported correctly; the /health API endpoint is also excluded from server logging

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-24 22:15:26 -05:00

Reference: github-starred/ollama#42502