[GH-ISSUE #2894] How to get Ollama to use my RTX 4090 on windows 11 #1769

Closed
opened 2026-04-12 11:47:13 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @TimmekHW on GitHub (Mar 3, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2894

I have a 12600K + 64 GB RAM + RTX 4090. I use Ollama + OpenChat.

For some reason Ollama won't use my RTX 4090. How can I show the program my graphics card?

![image](https://github.com/ollama/ollama/assets/94626112/7fe5afe3-1fbb-46f1-a9e4-a1f8a58a6d05)

```
messages = chat_histories[chat_id]
options = {
    "num_ctx": 12768,
    "num_thread": 10,
    "num_predict": 300,
    "repeat_last_n": 100,
    "temperature": 0.75
}
response = ""
```

[config.json](https://github.com/ollama/ollama/files/14473300/config.json)
[server.log](https://github.com/ollama/ollama/files/14473301/server.log)
[app.log](https://github.com/ollama/ollama/files/14473302/app.log)

\*\* The small bursts of GPU work in the screenshot are from Stable Diffusion.


@TimmekHW commented on GitHub (Mar 3, 2024):

I just removed `"num_thread": 10,` and everything worked.
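In other words, omitting `num_thread` lets Ollama pick a thread count itself, which restored GPU offloading in this case. A sketch of the corrected options (values taken from the original snippet):

```python
# Corrected options: "num_thread" is left out so Ollama chooses
# the CPU thread count on its own instead of being forced to 10.
options = {
    "num_ctx": 12768,
    "num_predict": 300,
    "repeat_last_n": 100,
    "temperature": 0.75,
}
```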


Reference: github-starred/ollama#1769