[GH-ISSUE #11415] ideas #54047

Closed
opened 2026-04-29 05:08:58 -05:00 by GiteaMirror · 1 comment

Originally created by @techindev on GitHub (Jul 14, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11415

1. Implement a command to update the llama.cpp libraries to the latest stable version.

2. Also implement a command to control the number of CPU threads.

GiteaMirror added the feature request label 2026-04-29 05:08:58 -05:00

@rick-github commented on GitHub (Jul 14, 2025):

> 1. Implement a command to update the llama.cpp libraries to the latest stable version.

Updating llama.cpp depends on a sync of the vendored code, which requires integration work. To build with the latest stable version, clone the repository at HEAD and build from source.

> 2. Also implement a command to control the number of CPU threads.

```
echo FROM model > Modelfile
echo PARAMETER num_thread 4 >> Modelfile
ollama create model:4t
```

or

```console
$ ollama run model
>>> /set parameter num_thread 4
```
<!-- gh-comment-id:3069801121 -->

Reference: github-starred/ollama#54047