[GH-ISSUE #4655] invalid conversion from ‘void*’ to ‘unsigned int’ [-fpermissive] #64962

Closed
opened 2026-05-03 19:25:22 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @Zhou-CyberSecurity-AI on GitHub (May 27, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4655

What is the issue?

I want to convert a local PyTorch model, but `make -C llm/llama.cpp quantize` always fails with this error. My gcc version is 9.5 on CentOS.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

No response

GiteaMirror added the question label 2026-05-03 19:25:22 -05:00

@dhiltgen commented on GitHub (Oct 23, 2024):

I believe we require gcc v10 or newer to build. You can check out https://github.com/ollama/ollama/blob/main/scripts/rh_linux_deps.sh to see how we add this to older CentOS distros.
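A quick way to verify the prerequisite before building is to check the major version reported by `gcc -dumpversion`. This is a sketch only: the v10 minimum comes from the comment above, while the `gcc-toolset-10` package name is an assumption and varies by CentOS/RHEL release (CentOS 7 uses `devtoolset-*` instead); see the `rh_linux_deps.sh` script linked above for the authoritative setup.

```shell
#!/bin/sh
# Sketch: check whether the gcc on PATH satisfies the v10 minimum.

version_ok() {
    # Accepts a version string like "9.5.0" or "12"; succeeds if major >= 10.
    major=$(printf '%s' "$1" | cut -d. -f1)
    [ "$major" -ge 10 ] 2>/dev/null
}

if version_ok "$(gcc -dumpversion 2>/dev/null)"; then
    echo "gcc is new enough to build the quantize target"
else
    echo "gcc >= 10 required; on CentOS 8 try:"
    echo "  sudo yum install gcc-toolset-10"
    echo "  scl enable gcc-toolset-10 -- make -C llm/llama.cpp quantize"
fi
```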

You should also be aware that we're moving to a new build model for the native code - see https://github.com/ollama/ollama/blob/main/docs/development.md#transition-to-go-runner


Reference: github-starred/ollama#64962