[GH-ISSUE #4394] Modelfile containing "home" in its name breaks model execution #49258

Closed
opened 2026-04-28 11:03:13 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @leon-rgb on GitHub (May 13, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4394

What is the issue?

What have I done?

Created a model with `ollama create sh-llama -f ./home_modelfile`.
Got the usual output.
Running the model with `ollama run sh-llama` also works.
But when given an input, no output is generated.

Solution

Renaming `home_modelfile` to any name that doesn't contain 'home' fixes the problem.
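As a sketch of the workaround described above (the filenames and model name come from the report; the `printf` stub stands in for the real Modelfile contents, and the `ollama` commands are left commented since they need a running Ollama install):

```shell
# Sketch of the reported workaround: the Modelfile contents stay the same,
# only the filename changes; "sh_modelfile" is any name without "home" in it.
printf 'FROM llama3\n' > home_modelfile   # stand-in for the real Modelfile
mv home_modelfile sh_modelfile

# Then re-create and run the model from the renamed file:
# ollama create sh-llama -f ./sh_modelfile
# ollama run sh-llama
```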

Note

This problem only occurs on my Ubuntu 20.04 system; there are no problems on my Windows system.

OS

Linux

GPU

Other

CPU

Intel

Ollama version

0.1.33

GiteaMirror added the bug label 2026-04-28 11:03:13 -05:00
Author
Owner

@pdevine commented on GitHub (May 14, 2024):

@leon-rgb I wasn't able to reproduce this on macos or ubuntu 22.04.

Here's my output:

```
$ ollama create sh-llama -f ./home_modelfile
transferring model data
pulling model
pulling manifest
pulling 00e1317cbf74... 100% ▕███████████████████████████████████████████▏ 4.7 GB
pulling 4fa551d4f938... 100% ▕███████████████████████████████████████████▏  12 KB
pulling 8ab4849b038c... 100% ▕███████████████████████████████████████████▏  254 B
pulling 577073ffcc6c... 100% ▕███████████████████████████████████████████▏  110 B
pulling ad1518640c43... 100% ▕███████████████████████████████████████████▏  483 B
verifying sha256 digest
writing manifest
removing any unused layers
success
reading model metadata
creating system layer
using already created layer sha256:00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29
using already created layer sha256:4fa551d4f938f68b8c1e6afa9d28befb70e3f33f75d0753248d530364aeea40f
using already created layer sha256:8ab4849b038cf0abc5b1c9b8ee1443dca6b93a045c2272180d985126eb40bf6f
using already created layer sha256:577073ffcc6ce95b9981eacc77d1039568639e5638e83044994560d9ef82ce1b
writing layer sha256:7846b69a51f391a127cf62b94b9cae2c1cb97c451762d9396a523e5382a41c89
writing layer sha256:9c82941d5a97366547ba9311fcfd8a2961c7388e2f826886d50605291c2ec5bb
writing manifest
success
$ ollama run sh-llama
>>> hi there
WOOHOO! Hi there friend! * virtual high-five* It's so awesome to meet you! I'm here to help with any
questions or topics you'd like to chat about. What's on your mind today?
```

My `home_modelfile`:

```
FROM llama3

SYSTEM """You are a friendly AI assistant. Always be super, super friendly."""
```

I think maybe there is a problem in your modelfile? I'll go ahead and close the issue, but feel free to keep commenting.


Reference: github-starred/ollama#49258