[GH-ISSUE #7207] Can't get anything to work #30336

Closed
opened 2026-04-22 09:54:40 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @softwaretrouble on GitHub (Oct 15, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7207

What is the issue?

While I still had the chance, I wanted to do a complete Linux reinstall tonight to update to 22.0. Everything was going well until I came to Ollama, and then everything stopped. I used to be able to use Ollama on this desktop with the 3-7 GB models. It was slow, thanks to being an older desktop, but with the smaller models I could still use them.

I have models downloaded that I have used before, and I want to set them up on the new install so they are ready when I want to use them.

```console
ollama run Neo-Phi-2.gguf
file does not exist

ollama create neo2 -f Neo-Phi-2.gguf
command must be one of from license template.........
```

I have the `FROM ./Neo-Phi-2.gguf` line saved in a txt file; even when it is saved as neo2 I still get the from, license, template error.

What do I have to do to get this to allow me to use it?

OS

Linux

GPU

Intel

CPU

Intel

Ollama version

0.3.13

GiteaMirror added the bug label 2026-04-22 09:54:40 -05:00
Author
Owner

@rick-github commented on GitHub (Oct 15, 2024):

You need a properly formatted Modelfile to use it. HuggingFace doesn't mention Neo-Phi-2, so I'm using the GGUF from [phi:2.7b-chat-v2-q4_0](https://ollama.com/library/phi) as the example. phi needs a [template](https://ollama.com/library/phi/blobs/774a15e6f1e5).

```sh
# GGUF file in local directory
$ ls -l Neo-Phi-2.gguf
-rw-r--r-- 1 rick rick 1602461536 Oct 15 15:10 Neo-Phi-2.gguf
# Modelfile contains pointer to GGUF file and a template that is correct for the model
$ cat Modelfile
FROM Neo-Phi-2.gguf
TEMPLATE "{{ if .System }}System: {{ .System }}{{ end }}
User: {{ .Prompt }}
Assistant:"
# create model
$ ollama create neo2 -f Modelfile
transferring model data 100%
using existing layer sha256:04778965089b91318ad61d0995b7e44fad4b9a9f4e049d7be90932bf8812e828
using existing layer sha256:774a15e6f1e5a0ccd2a2df78c20139ab688472bd8ed5f1ed3ef6abf505e02d02
creating new layer sha256:7e17ab5ad512798ee323f2445a9e812b464c028d8ad59fa3ba493a77384e62b1
writing manifest
success
# test model
$ ollama run neo2 hello
 Hi there, how can I assist you today?
```
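If you need to set up several models on a fresh install, the Modelfile above can also be generated with a short script. A minimal Python sketch; the file name `Neo-Phi-2.gguf` and the template come from the example above, and `write_modelfile` is a hypothetical helper, not part of any Ollama API:

```python
from pathlib import Path

# Chat template matching the phi example above; adjust per model.
TEMPLATE = """{{ if .System }}System: {{ .System }}{{ end }}
User: {{ .Prompt }}
Assistant:"""

def write_modelfile(gguf_path: str, out_path: str = "Modelfile") -> str:
    """Write a minimal Modelfile that points at a local GGUF file."""
    content = f'FROM {gguf_path}\nTEMPLATE "{TEMPLATE}"\n'
    Path(out_path).write_text(content)
    return content

# Usage: write_modelfile("Neo-Phi-2.gguf"), then `ollama create neo2 -f Modelfile`.
```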

Author
Owner

@pdevine commented on GitHub (Oct 16, 2024):

I'm going to go ahead and close out the issue since @rick-github answered it. @softwaretrouble if you have any problems I can reopen.

Author
Owner

@softwaretrouble commented on GitHub (Oct 16, 2024):

Sorry, I have not been around all day. The problem is not solved. I'm trying to use the brighteon.ai models, downloaded from their website. They are working models; I have used them in the past with no difficulties. I just shortened the file name to try out the different possible solutions I had seen on the net. I wasn't having any luck with the full-length names, and I was hearing talk about trouble with spaces, capital letters, etc., so I decided to shorten the names and see if that would solve the problem, but it didn't.

Author
Owner

@rick-github commented on GitHub (Oct 16, 2024):

```console
$ cat Modelfile
FROM Neo/Neo-Phi-2-E3-1-7B-V0-1-1/Neo-Phi-2-E3-1-7B-V0-1-1-Q5-Apr-24.gguf
TEMPLATE "{{ if .System }}System: {{ .System }}{{ end }}
User: {{ .Prompt }}
Assistant:"
$ ollama create neo2 -f Modelfile
transferring model data 100%
using existing layer sha256:52facaa21a0c5bf367ab84cf3577a8752ebb3de0c6d9c09f746da720822998d9
using existing layer sha256:774a15e6f1e5a0ccd2a2df78c20139ab688472bd8ed5f1ed3ef6abf505e02d02
creating new layer sha256:6c49880974a1e322dce86a6fce628a032ef58f5ac08f423a4e3ca0688270861d
writing manifest
success
$ ollama run neo2 hello
 Yes, I'm a language model AI designed to generate human-like responses. I am programmed to understand context and respond in a conversational manner.
Is that clear?
```

Make sure that when you run the `create` command, you are specifying the Modelfile and not the model. If you still can't get it to work, paste the actual commands and errors into this issue.
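To restate the distinction: `-f` takes the Modelfile (the text file containing the `FROM`/`TEMPLATE` lines), not the GGUF weights themselves. An illustration of the two invocations, using the file names from this thread (not commands run against a live install):

```sh
# Fails: the GGUF binary is not a Modelfile, so the parser reports
# "command must be one of from, license, template, ..."
ollama create neo2 -f Neo-Phi-2.gguf

# Works: the Modelfile references the GGUF via its FROM line
ollama create neo2 -f Modelfile
```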

Author
Owner

@softwaretrouble commented on GitHub (Oct 16, 2024):

Okay, I'm not fully sure what I was doing differently between now and around a year ago. My old Modelfiles only ever had FROM and never included TEMPLATE or anything else; they don't work with only FROM. They are working now. Thanks, this can be closed.

Reference: github-starred/ollama#30336