[GH-ISSUE #3543] Conversion Script #2185

Closed
opened 2026-04-12 12:26:07 -05:00 by GiteaMirror · 1 comment

Originally created by @scefali on GitHub (Apr 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3543

What is the issue?

I am trying to run the conversion script as shown in the example for converting a model to GGUF, but it fails on the safetensors shards.

What did you expect to see?

```
python llm/llama.cpp/convert.py ./model --outtype f16 --outfile converted.bin

Loading model file model/model-00001-of-00002.safetensors
Traceback (most recent call last):
  File "/Users/stevecefali/omoide/ollama/llm/llama.cpp/convert.py", line 1523, in <module>
    main()
  File "/Users/stevecefali/omoide/ollama/llm/llama.cpp/convert.py", line 1455, in main
    model_plus = load_some_model(args.model)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/stevecefali/omoide/ollama/llm/llama.cpp/convert.py", line 1344, in load_some_model
    models_plus.append(lazy_load_file(path))
                       ^^^^^^^^^^^^^^^^^^^^
  File "/Users/stevecefali/omoide/ollama/llm/llama.cpp/convert.py", line 966, in lazy_load_file
    raise ValueError(f"unknown format: {path}")
ValueError: unknown format: model/model-00001-of-00002.safetensors
```
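
For context, a hedged reading (my inference, not stated in the thread): `convert.py` decides the format by inspecting the leading bytes of each weights file, so "unknown format" means the bytes on disk are not something the loader recognizes. A quick way to see what is actually in the failing shard:

```
# Hypothetical check (not from the thread): dump the first bytes of the shard.
# A real .safetensors file begins with an 8-byte binary header length followed
# by a JSON header; a git-lfs pointer stub begins with readable ASCII text
# ("version https://git-lfs.github.com/spec/v1").
head -c 64 model/model-00001-of-00002.safetensors | xxd
```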

Steps to reproduce

```
git clone git@github.com:ollama/ollama.git ollama
cd ollama
git submodule init
git submodule update llm/llama.cpp
python3 -m venv llm/llama.cpp/.venv
source llm/llama.cpp/.venv/bin/activate
pip install -r llm/llama.cpp/requirements.txt
make -C llm/llama.cpp quantize
git lfs install
git clone https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1 model
python llm/llama.cpp/convert.py ./model --outtype f16 --outfile converted.bin
```
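
One thing worth ruling out (an assumption on my part, not confirmed in the thread): if git-lfs never actually downloaded the weights, the clone leaves small text pointer stubs in place of the multi-gigabyte `.safetensors` shards, which the converter would not recognize. A sketch of the check:

```
# Hypothetical sanity check: real shards should be several GB each
ls -lh model/*.safetensors
# Show which files git-lfs tracks, and force the download if any are stubs
git -C model lfs ls-files
git -C model lfs pull
```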

Are there any recent changes that introduced the issue?

No response

OS

No response

Architecture

No response

Platform

No response

Ollama version

No response

GPU

No response

GPU info

No response

CPU

No response

Other software

No response

GiteaMirror added the bug label 2026-04-12 12:26:07 -05:00

@pdevine commented on GitHub (Apr 10, 2024):

@scefali what model are you trying to convert? You probably want to use `convert-hf-to-gguf.py` if you're converting from safetensors.
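
For reference, the suggested invocation would look roughly like this (a sketch, assuming `convert-hf-to-gguf.py` accepts the same `--outtype`/`--outfile` flags used above):

```
python llm/llama.cpp/convert-hf-to-gguf.py ./model --outtype f16 --outfile converted.bin
```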
