[GH-ISSUE #14370] Training Model in Offline Mode #55851

Closed
opened 2026-04-29 09:48:28 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @archer12082002-cpu on GitHub (Feb 23, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14370

The local model (Llama & Mistral) in offline mode is not able to remember the chat history once the system is closed. Is there any option to overcome this? Please let me know.

GiteaMirror added the feature request label 2026-04-29 09:48:28 -05:00
Author
Owner

@rick-github commented on GitHub (Feb 23, 2026):

Install a [client](https://github.com/ollama/ollama#chat-interfaces) that manages chat history.

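For context on why a client is the answer: Ollama's chat endpoint is stateless, so "memory" comes from the client storing the conversation and resending it with every request. A minimal sketch of that pattern, assuming a local Ollama server on its default port (`11434`) and its standard `/api/chat` endpoint; the history file name and model name are placeholders:

```python
import json
import urllib.request
from pathlib import Path

HISTORY_FILE = Path("chat_history.json")        # placeholder path
OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama endpoint

def load_history():
    """Return the saved message list, or an empty one on first run."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return []

def save_history(messages):
    HISTORY_FILE.write_text(json.dumps(messages, indent=2))

def build_request(messages, user_input, model="llama3"):
    """Append the new user turn and build the /api/chat payload.

    The full history is resent on every call -- that is how the model
    "remembers" earlier turns, even across a reboot.
    """
    messages.append({"role": "user", "content": user_input})
    return {"model": model, "messages": messages, "stream": False}

def chat(user_input):
    messages = load_history()
    payload = build_request(messages, user_input)
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["message"]
    messages.append(reply)
    save_history(messages)  # persisted to disk, survives shutdown
    return reply["content"]

if __name__ == "__main__":
    print(chat("Hello, do you remember me?"))
```

The ready-made clients linked above do essentially this, just with a UI on top; all of it works fully offline since only `localhost` is contacted.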
Author
Owner

@archer12082002-cpu commented on GitHub (Feb 23, 2026):

I downloaded ollama from "https://ollama.com/download", but for the client can you provide the exact path? And does it work in offline mode as well?

Author
Owner

@rick-github commented on GitHub (Feb 23, 2026):

There are multiple clients shown in the link; read through the descriptions and download the one that suits your needs. If you are looking for a text-based client, see [here](https://github.com/ollama/ollama?tab=readme-ov-file#terminal--cli). The clients work offline, although the web-based clients generally require a browser to connect to localhost.

Author
Owner

@xXMrNidaXx commented on GitHub (Feb 23, 2026):

Offline training/fine-tuning is a common request. At RevolutionAI (https://revolutionai.io), we run Ollama in air-gapped environments for clients with strict security requirements.

Current approach for offline model management:

1. Pre-download models on a connected machine:

   ```bash
   ollama pull llama3:70b
   ```

2. Copy the model files from `~/.ollama/models/` to the offline machine

3. For fine-tuning, use tools like:
   - Unsloth (works offline after initial setup)
   - axolotl (local fine-tuning)
   - LLaMA-Factory (full offline training)

4. Create a custom Modelfile for your fine-tuned weights:

   ```
   FROM ./my-finetuned-model.gguf
   PARAMETER temperature 0.7
   SYSTEM "Your custom system prompt"
   ```

5. Import into Ollama:

   ```bash
   ollama create mymodel -f Modelfile
   ```

What specific training workflow are you trying to achieve offline? Happy to share more detailed steps for your use case.

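The Modelfile and import steps above can also be scripted. A minimal sketch: the GGUF filename, output path, and model name are placeholders, and the `ollama create` call is kept behind a comment because it requires the Ollama CLI to be installed on the offline machine:

```python
import subprocess
from pathlib import Path

def write_modelfile(gguf_path: str, system_prompt: str,
                    temperature: float = 0.7,
                    out: str = "Modelfile") -> str:
    """Write a Modelfile like the one in step 4 above."""
    text = (
        f"FROM {gguf_path}\n"
        f"PARAMETER temperature {temperature}\n"
        f'SYSTEM "{system_prompt}"\n'
    )
    Path(out).write_text(text)
    return text

def import_model(name: str, modelfile: str = "Modelfile") -> None:
    """Register the weights with the local Ollama instance (step 5).

    Shells out to the ollama CLI, so this only works where it is installed.
    """
    subprocess.run(["ollama", "create", name, "-f", modelfile], check=True)

if __name__ == "__main__":
    write_modelfile("./my-finetuned-model.gguf", "Your custom system prompt")
    # import_model("mymodel")  # uncomment on a machine with ollama installed
```

Note that this only covers packaging and importing weights; the fine-tuning itself still happens in one of the external tools listed above (Unsloth, axolotl, LLaMA-Factory).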
Author
Owner

@lingfan36 commented on GitHub (Feb 25, 2026):

👋 Hello!

Thanks for your question. We have compiled solutions to common Ollama problems:

🔗 Solutions index: https://ollamahub.space/pages/solutions/

If your problem isn't covered there, feel free to add details and we will update promptly.


Automatically generated by OllamaHub

Reference: github-starred/ollama#55851