[GH-ISSUE #2189] Error: Post "http://127.0.0.1:11434/api/generate": EOF #47763

Closed
opened 2026-04-28 05:15:29 -05:00 by GiteaMirror · 3 comments

Originally created by @blackandcold on GitHub (Jan 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2189

Installed by the script (not from the AUR); it was previously running fine, but for the last two weeks I haven't been able to run it anymore. macOS 0.1.20 works fine.

ollama run llama2:latest
Error: Post "http://127.0.0.1:11434/api/generate": EOF
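
The CLI simply POSTs the prompt to the local server's /api/generate endpoint, so an EOF here usually means the server process died mid-request rather than that the request was malformed. Assuming the documented API, the same request can be sent directly with curl to separate client problems from server problems:

curl http://127.0.0.1:11434/api/generate -d '{"model": "llama2", "prompt": "hello"}'

If the server crashes while handling it, curl typically reports an empty reply or a dropped connection instead of a JSON stream.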

System:
OS: EndeavourOS Linux x86_64
Kernel: 6.7.0-arch3-1
Shell: zsh 5.9
CPU: AMD Ryzen 9 5900X (24) @ 3.700GHz
GPU: AMD ATI Radeon RX 6800 16GB
Memory: 13639MiB / 128714MiB

So free() is being called on an invalid pointer somewhere? :)

Jän 25 15:58:43 OS ollama[192151]: 2024/01/25 15:58:43 gpu.go:104: Radeon GPU detected
Jän 25 15:59:26 OS ollama[192151]: [GIN] 2024/01/25 - 15:59:26 | 200 | 33.771µs | 127.0.0.1 | HEAD ">
Jän 25 15:59:26 OS ollama[192151]: [GIN] 2024/01/25 - 15:59:26 | 200 | 2.403459ms | 127.0.0.1 | POST ">
Jän 25 15:59:26 OS ollama[192151]: [GIN] 2024/01/25 - 15:59:26 | 200 | 771.286µs | 127.0.0.1 | POST ">
Jän 25 15:59:27 OS ollama[192151]: 2024/01/25 15:59:27 shim_ext_server_linux.go:24: Updating PATH to /usr/local/sbi>
Jän 25 15:59:27 OS ollama[192151]: 2024/01/25 15:59:27 shim_ext_server.go:92: Loading Dynamic Shim llm server: /tmp>
Jän 25 15:59:27 OS ollama[192151]: 2024/01/25 15:59:27 ext_server_common.go:136: Initializing internal llama server
Jän 25 15:59:27 OS ollama[192151]: free(): invalid pointer
Jän 25 15:59:27 OS systemd[1]: ollama.service: Main process exited, code=dumped, status=6/ABRT
Jän 25 15:59:27 OS systemd[1]: ollama.service: Failed with result 'core-dump'.
Jän 25 15:59:27 OS systemd[1]: ollama.service: Consumed 1.181s CPU time, 406.9M memory peak, 0B memory swap peak.
Jän 25 15:59:31 OS systemd[1]: ollama.service: Scheduled restart job, restart counter is at 2.
Jän 25 15:59:31 OS systemd[1]: Started Ollama Service.
Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 images.go:808: total blobs: 24
Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 images.go:815: total unused blobs removed: 0
Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 routes.go:930: Listening on 127.0.0.1:11434 (version 0.1.20)
Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 shim_ext_server.go:142: Dynamic LLM variants [cuda rocm]
Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:88: Detecting GPU type
Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:203: Searching for GPU management library libnvidia-m>
Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:248: Discovered GPU libraries: [/usr/lib/libnvidia-ml>
Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:259: Unable to load CUDA management library /usr/lib/>
Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:259: Unable to load CUDA management library /usr/lib6>
Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:203: Searching for GPU management library librocm_smi>
Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:248: Discovered GPU libraries: [/opt/rocm/lib/librocm>
Jän 25 15:59:31 OS ollama[192251]: 2024/01/25 15:59:31 gpu.go:104: Radeon GPU detected
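
Side note: the journal lines above are cut off at the screen edge (the trailing ">"). Assuming the service name ollama.service used by the install script, the full untruncated lines can be captured with:

journalctl -u ollama.service --no-pager --since "1 hour ago"
# or pipe the output, which also avoids the pager's line chopping:
journalctl -u ollama.service | cat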


@blackandcold commented on GitHub (Jan 25, 2024):

For whatever reason, starting ollama manually works, just not with systemd.
I'll have to investigate what happened there, but it seems it is not an ollama problem, so I'm closing this.
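
A quick way to compare the two cases, assuming the stock ollama.service unit created by the install script, is to check the user and environment the unit runs with against the interactive shell:

systemctl show ollama.service -p User -p Environment
env | sort

The unit typically runs as a dedicated user with a PATH fixed at install time, so ROCm libraries or paths visible in an interactive shell are not necessarily visible to the service.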


@blackandcold commented on GitHub (Jan 25, 2024):

OK, downloading a new model works, but running it doesn't!

2024/01/25 16:13:17 shim_ext_server_linux.go:24: Updating PATH to /usr/local/sbin:/usr/local/bin:/usr/bin:/opt/cuda/bin:/opt/cuda/nsight_compute:/opt/cuda/nsight_systems/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/tmp/ollama268014040/rocm
Lazy loading /tmp/ollama268014040/rocm/libext_server.so library
2024/01/25 16:13:17 shim_ext_server.go:92: Loading Dynamic Shim llm server: /tmp/ollama268014040/rocm/libext_server.so
2024/01/25 16:13:17 ext_server_common.go:136: Initializing internal llama server
free(): invalid pointer
[1] 193968 IOT instruction (core dumped) ollama serve
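
Since the crash produces a core dump (systemd also logged code=dumped, status=6/ABRT above), a backtrace of the free(): invalid pointer abort can usually be pulled with coredumpctl, assuming systemd-coredump is active (the Arch default):

coredumpctl list ollama
coredumpctl info ollama   # metadata plus the captured stack trace, if any
coredumpctl gdb ollama    # open the most recent dump in gdb

The backtrace would show whether the abort comes from the rocm libext_server.so shim being loaded or from somewhere else.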


@blackandcold commented on GitHub (Jan 25, 2024):

And here we have the exact same error already open:
https://github.com/ollama/ollama/issues/2165

Reference: github-starred/ollama#47763