[GH-ISSUE #8589] Compile ollama failed in debian #67608

Closed
opened 2026-05-04 10:59:37 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @johndoanee on GitHub (Jan 26, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8589

What is the issue?

I tried to compile ollama on Debian, but the build printed the following messages and aborted. Please help me. Thank you!

root@debian:~/ollama# make -j12

GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.7-5-g453e4d0\"  " -trimpath -tags "avx" -o llama/build/linux-amd64/runners/cpu_avx/ollama_llama_server ./cmd/runner
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.7-5-g453e4d0\"  " -trimpath -tags "avx,avx2" -o llama/build/linux-amd64/runners/cpu_avx2/ollama_llama_server ./cmd/runner
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.7-5-g453e4d0\"  " -trimpath  -o ollama .
# github.com/ollama/ollama/llama
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
   inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at ggml-cpu-aarch64.cpp:3711:39:
ggml-cpu-aarch64.cpp:3640:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
3640 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
     |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml-cpu-aarch64.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
ggml-cpu-aarch64.cpp:3711:20: note: at offset 72 into destination object ‘<anonymous>’ of size 72
3711 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
     |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
   inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at ggml-cpu-aarch64.cpp:3711:39:
ggml-cpu-aarch64.cpp:3640:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
3640 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
     |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml-cpu-aarch64.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
ggml-cpu-aarch64.cpp:3711:20: note: at offset 88 into destination object ‘<anonymous>’ of size 72
3711 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
     |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
   inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at ggml-cpu-aarch64.cpp:3711:39:
ggml-cpu-aarch64.cpp:3640:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
3640 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
     |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml-cpu-aarch64.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
ggml-cpu-aarch64.cpp:3711:20: note: at offset 104 into destination object ‘<anonymous>’ of size 72
3711 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
     |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
   inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at ggml-cpu-aarch64.cpp:3711:39:
ggml-cpu-aarch64.cpp:3640:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
3640 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
     |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml-cpu-aarch64.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
ggml-cpu-aarch64.cpp:3711:20: note: at offset 120 into destination object ‘<anonymous>’ of size 72
3711 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
     |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# github.com/ollama/ollama/llama
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
   inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at ggml-cpu-aarch64.cpp:3711:39:
ggml-cpu-aarch64.cpp:3640:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
3640 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
     |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml-cpu-aarch64.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
ggml-cpu-aarch64.cpp:3711:20: note: at offset 72 into destination object ‘<anonymous>’ of size 72
3711 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
     |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
   inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at ggml-cpu-aarch64.cpp:3711:39:
ggml-cpu-aarch64.cpp:3640:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
3640 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
     |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml-cpu-aarch64.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
ggml-cpu-aarch64.cpp:3711:20: note: at offset 88 into destination object ‘<anonymous>’ of size 72
3711 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
     |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
   inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at ggml-cpu-aarch64.cpp:3711:39:
ggml-cpu-aarch64.cpp:3640:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
3640 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
     |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml-cpu-aarch64.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
ggml-cpu-aarch64.cpp:3711:20: note: at offset 104 into destination object ‘<anonymous>’ of size 72
3711 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
     |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
   inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at ggml-cpu-aarch64.cpp:3711:39:
ggml-cpu-aarch64.cpp:3640:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
3640 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
     |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml-cpu-aarch64.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
ggml-cpu-aarch64.cpp:3711:20: note: at offset 120 into destination object ‘<anonymous>’ of size 72
3711 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
     |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# github.com/ollama/ollama/llama
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
   inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at ggml-cpu-aarch64.cpp:3711:39:
ggml-cpu-aarch64.cpp:3640:19: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=]
3640 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
     |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml-cpu-aarch64.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
ggml-cpu-aarch64.cpp:3711:20: note: at offset 72 into destination object ‘<anonymous>’ of size 72
3711 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
     |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
   inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at ggml-cpu-aarch64.cpp:3711:39:
ggml-cpu-aarch64.cpp:3640:19: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=]
3640 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
     |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ggml-cpu-aarch64.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
ggml-cpu-aarch64.cpp:3711:20: note: at offset 104 into destination object ‘<anonymous>’ of size 72
3711 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
     |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

=============================
gcc version 12.2.0 (Debian 12.2.0-14)
go version go1.23.5 linux/amd64

OS

Linux

GPU

No response

CPU

AMD

Ollama version

0.5.7

GiteaMirror added the build, bug, linux labels 2026-05-04 10:59:38 -05:00
Author
Owner

@LeisureLinux commented on GitHub (Feb 6, 2025):

It looks like some code is being compiled for aarch64 rather than amd64:
ggml-cpu-aarch64.cpp

If you are not cross-compiling, why do you specify GOARCH=amd64?

<!-- gh-comment-id:2639739187 -->
Author
Owner

@dhiltgen commented on GitHub (Jul 4, 2025):

We've moved to a cmake based build since this was filed, so I'm assuming it is no longer a problem. If you still have trouble building, please share an updated description and I'll reopen.

https://github.com/ollama/ollama/blob/main/docs/development.md
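For reference, the CMake-based flow that docs/development.md describes can be sketched roughly as follows (a sketch only; targets and flags may differ between releases, so consult the linked page for the current steps):

```shell
# Newer CMake-based build, per docs/development.md
cmake -B build          # configure the native C/C++ backends
cmake --build build     # compile them
go run . serve          # build and start the Go server
```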

<!-- gh-comment-id:3037256009 -->

Reference: github-starred/ollama#67608