[GH-ISSUE #7942] model requires more system memory than is available when useMmap #51594

Open
opened 2026-04-28 20:37:01 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @xgdgsc on GitHub (Dec 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7942

What is the issue?

When I use continue vscode extension to call ollama config like

    {
      "model": "qwen2.5-coder:14b",
      "title": "qwen2.5-coder:14b",
      "provider": "ollama",
      "completionOptions": {
        "keepAlive": 9999999,
        "useMmap": true
      }
    },

Ollama still checks system memory, disregarding the `"useMmap": true` option, and returns a 500 internal error like:

{"error":"model requires more system memory (17.7 GiB) than is available (13.6 GiB)"}
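For comparison, the same option can be passed directly to Ollama's `/api/generate` endpoint as `options.use_mmap` (snake_case on the Ollama side; the Continue extension's camelCase `useMmap` maps to it). A minimal sketch of the request payload, with the prompt value being an arbitrary illustration:

```python
import json

# Request body for Ollama's /api/generate endpoint. The "use_mmap"
# runtime option corresponds to the extension's "useMmap" setting.
payload = {
    "model": "qwen2.5-coder:14b",
    "prompt": "def fib(n):",       # placeholder prompt for illustration
    "keep_alive": 9999999,
    "options": {
        "use_mmap": True,
    },
}

body = json.dumps(payload)
print(body)
```

Posting this body to `http://localhost:11434/api/generate` reproduces the same 500 response on a memory-constrained machine, since, as reported above, the server's memory check runs regardless of the flag.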

OS

Windows

GPU

No response

CPU

Other

Ollama version

0.4.7

GiteaMirror added the bug label 2026-04-28 20:37:02 -05:00
Author
Owner

@rick-github commented on GitHub (Dec 5, 2024):

mmap doesn't affect the check for memory. If your system doesn't have enough system memory to load the model, you need to increase it by [adding swap](https://github.com/ollama/ollama/issues/6918#issuecomment-2488651203).
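The check being discussed can be sketched as follows. This is a simplified illustration of the behavior described in this thread, not Ollama's actual code; the figures are taken from the error message above:

```python
GIB = 1024 ** 3  # one GiB in bytes

def preflight_check(required_bytes: int, available_bytes: int) -> None:
    # The rejection happens before mmap is ever consulted, which is
    # why the use_mmap option cannot bypass it.
    if required_bytes > available_bytes:
        raise MemoryError(
            f"model requires more system memory "
            f"({required_bytes / GIB:.1f} GiB) than is available "
            f"({available_bytes / GIB:.1f} GiB)"
        )

# Figures from the error message in this issue.
try:
    preflight_check(int(17.7 * GIB), int(13.6 * GIB))
except MemoryError as e:
    print(e)
```

Adding swap raises the "available" side of the comparison, which is why it resolves the error even though the model would have been mapped rather than fully loaded.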

Author
Owner

@xgdgsc commented on GitHub (Dec 6, 2024):

No, the logic is wrong. Both Windows and macOS grow swap dynamically and automatically, as I mentioned at https://github.com/ollama/ollama/issues/6918#issuecomment-2488221380 . So when a user knows what mmap is and asks for it in the config, you should skip the check.

Author
Owner

@xgdgsc commented on GitHub (Jan 14, 2025):

Has the latest Windows version update changed the useMmap behavior? It doesn't seem to mmap; the model appears to be copied into memory instead.

![image](https://github.com/user-attachments/assets/2ecdd379-057b-4c49-ab62-90335d9d26f7)

EDIT: Only the first run has this issue. Not happening now.

Author
Owner

@xgdgsc commented on GitHub (Aug 25, 2025):

And the Ollama app doesn't support useMmap.

Reference: github-starred/ollama#51594