[GH-ISSUE #9510] Add command to show which llama.cpp version is being used #6198

Closed
opened 2026-04-12 17:35:04 -05:00 by GiteaMirror · 1 comment

Originally created by @santo998 on GitHub (Mar 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9510

GiteaMirror added the feature request label 2026-04-12 17:35:04 -05:00

@mchiang0610 commented on GitHub (Mar 5, 2025):

Hi, thank you for this feature request. We really appreciate the work of llama.cpp. We currently show the version here for the ggml code that we vendor: https://github.com/ollama/ollama/blob/main/Makefile.sync#L3

As we build out and release Ollama's new engine, we will begin transitioning to other backends. GGML will continue to be used for compatibility.
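In the meantime, the pinned ggml commit can be read directly out of the vendored Makefile.sync. The sketch below is a minimal, hypothetical example: the variable name `FETCH_HEAD`, the upstream URL, and the commit hash are illustrative stand-ins, not the actual contents of Ollama's Makefile.sync.

```shell
# Illustrative Makefile.sync fragment -- variable names and values are
# assumptions, not the real file's contents.
cat > /tmp/Makefile.sync <<'EOF'
UPSTREAM=https://github.com/ggml-org/ggml.git
WORKDIR=llama/vendor
FETCH_HEAD=d7cfe1ffe0f435d0048a6058d529daf76e072d9c
EOF

# Print the vendored ggml commit pinned by the FETCH_HEAD line.
ggml_commit=$(sed -n 's/^FETCH_HEAD=//p' /tmp/Makefile.sync)
echo "$ggml_commit"
```

A built-in `ollama` subcommand could surface the same string at runtime, which is what this issue requests.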


Reference: github-starred/ollama#6198