[GH-ISSUE #6200] Support --mlock on the command line. Also there are undocumented model file parameters #65910

Open
opened 2026-05-03 23:08:05 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @sdmorrey on GitHub (Aug 6, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6200

What is the issue?

The original issue was that my Mac was experiencing an exponential slowdown. Upon investigating, I noticed my RAM usage was very low, but my swap was climbing at a rate of about 1 GB per 1K tokens of filled context.

In the end I discovered the --mlock flag in llama.cpp.
I asked for help on Discord with setting it, since the Ollama command line rejects the flag outright.
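For context on what the flag does: llama.cpp's --mlock pins the model's memory in physical RAM via the POSIX mlock(2) call, so the OS cannot swap those pages out, which is exactly the swap growth I was seeing. A minimal sketch of the underlying mechanism (just an illustration of the syscall, not Ollama or llama.cpp code):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 64 * 1024 * 1024;   /* stand-in for model weights */
    void *buf = malloc(len);
    if (buf == NULL) return 1;
    memset(buf, 0, len);             /* touch the pages so they are resident */

    /* Pin the pages in RAM; the kernel will no longer swap them out.
     * This can fail with ENOMEM/EPERM if RLIMIT_MEMLOCK is too low. */
    if (mlock(buf, len) != 0) {
        perror("mlock");
        free(buf);
        return 1;
    }

    /* ... use the buffer; it stays resident until munlock or exit ... */

    munlock(buf, len);
    free(buf);
    return 0;
}
```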

Eventually we discovered that this is controlled by the Modelfile parameter use_mlock.

Except that parameter appears to be undocumented, and in the same place I found a few other undocumented parameters.
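For anyone else who runs into this, the Modelfile workaround looks like the following (the parameter name is what we found above; the base model is just a placeholder):

```
FROM llama3
PARAMETER use_mlock true
```

Then rebuild the model with ollama create mymodel -f Modelfile.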

So in a nutshell, it would be lovely if you could add --mlock to the command line, since I don't want to have to edit every Modelfile. It would also be awesome if you could update the docs so that use_mlock and the other undocumented parameters are covered.
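In the meantime, passing the option per-request through the REST API's options field may also work as a stopgap, assuming the API accepts the same undocumented parameters (model name is a placeholder):

```sh
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Hello",
  "options": { "use_mlock": true }
}'
```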

Thanks for your hard work and such a great product!

OS

macOS

GPU

Intel

CPU

Intel

Ollama version

0.3.3

GiteaMirror added the bug label 2026-05-03 23:08:05 -05:00
Author
Owner

@sdmorrey commented on GitHub (Aug 6, 2024):

Forgot to mention, I'll be glad to submit a PR if that would be welcome. I just want to make sure it's an oversight and not a design decision for reasons I'm just not aware of.

Thanks!
