[GH-ISSUE #2515] How to run a Pytorch model with ollama? #27231

Closed
opened 2026-04-22 04:22:48 -05:00 by GiteaMirror · 7 comments

Originally created by @PriyaranjanMarathe on GitHub (Feb 15, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2515

Does ollama support loading a PyTorch model? I have trained a model and its output is a .pt file. How do I use it with ollama? I tried the following and it doesn't seem to work.

[root@ trained_models]# ollama run model.pt
pulling manifest

Error: pull model manifest: file does not exist


@PriyaranjanMarathe commented on GitHub (Feb 16, 2024):

I tried the following:

ollama create custom_gpt_name -f /model_path/model.pt

and I'm getting the following error. How do I solve it?

Error: no FROM line for the model was specified
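
For reference, the -f flag expects a Modelfile (a small text file containing at least a FROM line), not the weights file itself. A minimal sketch, with the path as a placeholder:

# Modelfile
FROM /model_path/model.pt

ollama create custom_gpt_name -f ./Modelfile

As the rest of the thread shows, pointing FROM at a raw .pt still fails at a later step, because ollama ultimately expects GGUF weights.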


@easp commented on GitHub (Feb 16, 2024):

Read the docs. https://github.com/jmorganca/ollama/blob/main/docs/import.md


@PriyaranjanMarathe commented on GitHub (Feb 16, 2024):

Thanks.

My Modelfile looks like the following:

FROM /user_directory/model.pt

I used the following command to create the model:

ollama create example -f Modelfile

and I now get this error:

transferring model data
creating model layer
Error: invalid file magic

ollama version is 0.1.24
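
For context, the "invalid file magic" error indicates that ollama looked at the first bytes of the file and did not find a GGUF signature; a .pt file is a PyTorch pickle/zip archive, not a GGUF file. A quick way to see this (a sketch; the path is a placeholder):

with open("/user_directory/model.pt", "rb") as f:
    print(f.read(4))  # a GGUF file starts with b"GGUF"; a torch.save() .pt usually starts with b"PK" (zip)

So the checkpoint has to be converted to GGUF before the FROM line can point at it, as described further down the thread.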


@PriyaranjanMarathe commented on GitHub (Feb 16, 2024):

Still seems to be an issue.

I'm getting the following error:

KeyError: ('torch', 'DoubleStorage')

Loading model file model
Traceback (most recent call last):
  File "/ollama/ollama/llm/llama.cpp/convert.py", line 1478, in <module>
    main()
  File "/ollama/ollama/llm/llama.cpp/convert.py", line 1414, in main
    model_plus = load_some_model(args.model)
  File "/ollama/ollama/llm/llama.cpp/convert.py", line 1274, in load_some_model
    models_plus.append(lazy_load_file(path))
  File "/ollama/ollama/llm/llama.cpp/convert.py", line 887, in lazy_load_file
    return lazy_load_torch_file(fp, path)
  File "/ollama/ollama/llm/llama.cpp/convert.py", line 843, in lazy_load_torch_file
    model = unpickler.load()
  File "/ollama/ollama/llm/llama.cpp/convert.py", line 832, in find_class
    return self.CLASSES[(module, name)]
KeyError: ('torch', 'DoubleStorage')
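
The KeyError comes from convert.py's restricted unpickler: its CLASSES table only registers the torch storage types it knows how to map to GGUF tensors, and DoubleStorage (float64 weights) is not among them. One possible workaround, assuming the .pt holds a plain state_dict (file names here are placeholders), is to re-save the checkpoint in float32 and convert that instead:

import torch

# load the float64 checkpoint and downcast floating-point tensors to float32
state = torch.load("model.pt", map_location="cpu")
state = {k: (v.float() if torch.is_tensor(v) and v.is_floating_point() else v)
         for k, v in state.items()}
torch.save(state, "model_fp32.pt")  # point convert.py at this file instead of model.pt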


@PriyaranjanMarathe commented on GitHub (Feb 16, 2024):

I found a library that converts Torch models to GGML:

https://github.com/Leikoe/torch_to_ggml

I was able to load the resulting GGML into ollama.

It didn't give any answers though. How can I test whether the conversion worked?
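
One way to sanity-check a converted file independently of ollama is to load it with llama.cpp directly (a sketch, assuming a local llama.cpp build; the file name is a placeholder):

./main -m converted-model.gguf -p "Hello" -n 32

Note, though, that recent llama.cpp and ollama expect the newer GGUF format rather than GGML, so a file produced by a torch-to-GGML converter may not load correctly in the first place (see the next comment).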


@pdevine commented on GitHub (Feb 18, 2024):

Yeah, you'll need to convert the model first to GGUF (not GGML, as that's no longer supported). You can then create a Modelfile using FROM /path/to/gguf/model, which will pull in the weights and create an ollama model. You can usually use one of the convert scripts sitting in ollama/llm/llama.cpp/convert*.py. This is mostly covered in the docs that @easp mentioned.

That said, this is an area I've been looking at; however, it's still a ways off. There are just too many steps right now.

I'm going to go ahead and close the issue, but feel free to keep commenting or even reopen it.
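
Putting those steps together, a minimal sketch of the workflow (paths and names are placeholders, and it assumes the checkpoint is an architecture the convert script supports, e.g. a llama-style model):

# 1. convert the PyTorch/Hugging Face weights to GGUF with one of the llama.cpp convert scripts
python llm/llama.cpp/convert.py /path/to/model_dir --outfile model.gguf --outtype f16

# 2. Modelfile pointing at the converted weights
FROM /path/to/model_dir/model.gguf

# 3. create and run the ollama model
ollama create example -f Modelfile
ollama run example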


@scefali commented on GitHub (Apr 9, 2024):

@pdevine I am struggling with this. I'm not getting any errors, but my model just responds with nothing.

Reference: github-starred/ollama#27231