[GH-ISSUE #3791] Rename files with prefix "sha256:" to "sha256_" #64377

Closed
opened 2026-05-03 17:25:24 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @ker2xu on GitHub (Apr 21, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3791

What is the issue?

Some network file systems do not handle ":" well: in NFS path syntax, the text before a ":" is interpreted as a remote host name, so a filename like "sha256:..." leads to a Permission error (the path resolves to a non-existent host/location).
It would also be better to avoid a special symbol like ":" in favor of "_", which is accepted by all operating systems and file systems.

FYI.
https://www.ibm.com/docs/en/zvm/7.2?topic=occ-understanding-network-file-system-nfs-path-name-syntax
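The fix requested here amounts to replacing the ":" separator in blob filenames with a filesystem-safe character. A minimal sketch in Go of that sanitization (the function name and digest value are illustrative, not Ollama's actual API; current releases use "-" as the separator):

```go
package main

import (
	"fmt"
	"strings"
)

// digestToFilename converts a content digest such as "sha256:<hex>" into a
// name that is safe on filesystems (including NFS mounts) that treat ":"
// specially. Illustrative only; not Ollama's actual implementation.
func digestToFilename(digest string) string {
	return strings.ReplaceAll(digest, ":", "-")
}

func main() {
	// Hypothetical digest value, for demonstration.
	fmt.Println(digestToFilename("sha256:29f1f6ae2f2b"))
}
```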

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.17

GiteaMirror added the bug label 2026-05-03 17:25:24 -05:00
Author
Owner

@jmorganca commented on GitHub (Apr 21, 2024):

Hi @ker2xu, you'll be happy to learn that the latest version of Ollama uses `sha256-`, so this shouldn't be a problem anymore. Please let me know if you still see issues after upgrading 😊

Author
Owner

@ker2xu commented on GitHub (Apr 21, 2024):

Thanks for your prompt reply! I realized my Ollama version was old when I reported the bug, and I am installing the latest version now. It would also be good if you could update the package on the conda-forge channel, which is still at 0.1.17 and very outdated.

Author
Owner

@ker2xu commented on GitHub (Apr 21, 2024):

> Hi @ker2xu you'll be happy to learn the latest version of Ollama uses `sha256-` and so this shouldn't be a problem anymore. Please let me know if you still see issues after upgrading 😊

I downloaded the latest release ollama-linux-amd64 (I am a non-root user), but running it with `./ollama serve` gave me a "Segmentation fault" error.
I also tried to compile it myself, but that ended with an error related to llama.cpp, as described in https://github.com/ggerganov/llama.cpp/issues/1090
The solution there did not state which file to edit, and there are many Makefiles, all of which start with the line "# CMAKE generated file: DO NOT EDIT!".
Do you have any advice? @jmorganca
cmake version 3.26.5
go version go1.22.2 linux/amd64
gcc (conda-forge gcc 11.4.0-5) 11.4.0

```
-- Build files have been written to: /home/comp/20481195/ollama/llm/build/linux/x86_64/cpu
+ cmake --build ../build/linux/x86_64/cpu --target ollama_llama_server -j8
[  7%] Generating build details from Git
-- Found Git: /usr/bin/git (found version "2.39.3")
[ 21%] Built target ggml
[ 21%] Generating build details from Git
-- Found Git: /usr/bin/git (found version "2.39.3")
[ 28%] Building CXX object CMakeFiles/llama.dir/llama.cpp.o
[ 28%] Built target build_info
[ 35%] Linking CXX static library libllama.a
[ 42%] Built target llama
[ 50%] Built target llava
[ 85%] Built target common
[ 92%] Linking CXX executable ../bin/ollama_llama_server
/home/abc/miniconda3/bin/../lib/gcc/x86_64-conda-linux-gnu/11.4.0/../../../../x86_64-conda-linux-gnu/bin/ld: ../libllama.a(ggml.c.o): in function `ggml_time_us':
ggml.c:(.text.ggml_time_us+0x1e): undefined reference to `clock_gettime'
collect2: error: ld returned 1 exit status
gmake[3]: *** [ext_server/CMakeFiles/ollama_llama_server.dir/build.make:103: bin/ollama_llama_server] Error 1
gmake[2]: *** [CMakeFiles/Makefile2:3593: ext_server/CMakeFiles/ollama_llama_server.dir/all] Error 2
gmake[1]: *** [CMakeFiles/Makefile2:3600: ext_server/CMakeFiles/ollama_llama_server.dir/rule] Error 2
gmake: *** [Makefile:1453: ollama_llama_server] Error 2
llm/generate/generate_linux.go:3: running "bash": exit status 2
```
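For context on the linker failure above: on systems with glibc older than 2.17, `clock_gettime` lives in librt rather than libc, so a link step that omits `-lrt` fails with exactly this undefined reference. A possible workaround, sketched under that assumption (the exact variable to set depends on the build scripts; this is illustrative, not a confirmed fix for Ollama's build):

```shell
# Pass -lrt through to the final link of ollama_llama_server.
# CMAKE_EXE_LINKER_FLAGS is a standard CMake variable; the cmake invocation
# shown here is illustrative, since the real one is driven by go generate.
cmake -DCMAKE_EXE_LINKER_FLAGS="-lrt" ../build/linux/x86_64/cpu
```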


Reference: github-starred/ollama#64377