[PR #7594] [CLOSED] Invalid OLLAMA_LLM_LIBRARY Error #23004

Closed
opened 2026-04-19 16:42:53 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/7594
Author: @jk-vtp-one
Created: 11/10/2024
Status: Closed

Base: main ← Head: dev


📝 Commits (1)

  • d3f003f Invalid OLLAMA_LLM_LIBRARY error

📊 Changes

1 file changed (+1 addition, -0 deletions)


📝 llm/server.go (+1 -0)

📄 Description

If the OLLAMA_LLM_LIBRARY environment variable names an invalid target, the server currently logs the problem and continues running with another available runner. This change makes it raise an error instead. If you set OLLAMA_LLM_LIBRARY, you intend for that runner to be used; if that runner isn't available, the server should error, not fall back to something else.

The specific use case comes from testing the ollama builder with jetson-containers. Previous versions of the ollama build process included the OLLAMA_SKIP_CPU_GENERATE environment variable, which allowed us to generate only the CUDA runner. If the binary build completed but the CUDA runner failed to build, our test would fail because no runners were available. Now, when OLLAMA_LLM_LIBRARY targets the desired but failed-to-build cuda_vXX runner, the server falls back to the CPU runner and the test does not fail.

Re-implementing the build variables OLLAMA_SKIP_CPU_GENERATE and OLLAMA_SKIP_CUDA_GENERATE would also work (https://github.com/ollama/ollama/pull/7499), but that doesn't change the fact that targeting an invalid OLLAMA_LLM_LIBRARY should be an error.
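To illustrate the intended behavior, here is a minimal Go sketch. The `selectRunner` helper is hypothetical (the actual change is a one-line edit in llm/server.go, not shown in this mirror); it only demonstrates the proposed semantics: an explicitly requested library must match an available runner, or selection fails instead of falling back.

```go
package main

import (
	"fmt"
	"strings"
)

// selectRunner sketches the proposed behavior: when a library is explicitly
// requested (as via OLLAMA_LLM_LIBRARY), only a matching runner may be used.
// If none matches, return an error rather than silently falling back to
// another runner such as the CPU one.
func selectRunner(requested string, available []string) (string, error) {
	if requested == "" {
		// No preference set: keep the existing fallback behavior and
		// use the first available runner.
		if len(available) == 0 {
			return "", fmt.Errorf("no runners available")
		}
		return available[0], nil
	}
	for _, r := range available {
		// Prefix match so e.g. "cuda" can select "cuda_v12".
		if strings.HasPrefix(r, requested) {
			return r, nil
		}
	}
	return "", fmt.Errorf("requested runner %q not available (have %v)", requested, available)
}

func main() {
	// The jetson-containers case: the CUDA runner failed to build and
	// only the CPU runner exists. The request should error, not fall back.
	_, err := selectRunner("cuda_v12", []string{"cpu"})
	fmt.Println(err != nil) // true
}
```

With this shape, a test that sets OLLAMA_LLM_LIBRARY to a runner that failed to build sees a hard error, which is what the jetson-containers test relies on.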


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-19 16:42:53 -05:00

Reference: github-starred/ollama#23004