[GH-ISSUE #1465] CUDA error 2: out of memory (for a 33 billion param model, but I have 39GB of VRAM available across 4 GPUs) #62825

Closed
opened 2026-05-03 10:25:59 -05:00 by GiteaMirror · 20 comments

Originally created by @peteygao on GitHub (Dec 11, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1465

Originally assigned to: @mxyng on GitHub.

The model I'm trying to run is `deepseek-coder:33b` and `journalctl -u ollama` outputs:

```
Dec 11 18:31:37 x99 ollama[25964]: 2023/12/11 18:31:37 llama.go:292: 39320 MB VRAM available, loading up to 101 GPU layers
Dec 11 18:31:37 x99 ollama[25964]: 2023/12/11 18:31:37 llama.go:421: starting llama runner
Dec 11 18:31:37 x99 ollama[25964]: 2023/12/11 18:31:37 llama.go:479: waiting for llama runner to start responding
Dec 11 18:31:37 x99 ollama[25964]: ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
Dec 11 18:31:37 x99 ollama[25964]: ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
Dec 11 18:31:37 x99 ollama[25964]: ggml_init_cublas: found 4 CUDA devices:
Dec 11 18:31:37 x99 ollama[25964]:   Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1
Dec 11 18:31:37 x99 ollama[25964]:   Device 1: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1
Dec 11 18:31:37 x99 ollama[25964]:   Device 2: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1
Dec 11 18:31:37 x99 ollama[25964]:   Device 3: NVIDIA GeForce GTX 1060 6GB, compute capability 6.1
Dec 11 18:31:39 x99 ollama[26042]: {"timestamp":1702290699,"level":"INFO","function":"main","line":2534,"message":"build info","build":375,"commit":"9656026"}
Dec 11 18:31:39 x99 ollama[26042]: {"timestamp":1702290699,"level":"INFO","function":"main","line":2537,"message":"system info","n_threads":18,"n_threads_batch":-1,"total_threads":36,"system_info":"AVX = 1 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | "}
Dec 11 18:31:39 x99 ollama[25964]: llama_model_loader: loaded meta data with 22 key-value pairs and 561 tensors from /usr/share/ollama/.ollama/models/blobs/sha256:137fe898f00f9b709b8ca96c549f64ad6a36ab85720cf10d3c24ac07389ab8fb (version GGUF V2)
---[snip]---
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: ggml ctx size =    0.21 MiB
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: using CUDA for GPU acceleration
Dec 11 18:31:39 x99 ollama[25964]: ggml_cuda_set_main_device: using device 0 (NVIDIA GeForce GTX 1080 Ti) as main device
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: mem required  =  124.24 MiB
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: offloading 62 repeating layers to GPU
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: offloading non-repeating layers to GPU
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: offloaded 65/65 layers to GPU
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: VRAM used: 17822.33 MiB
Dec 11 18:31:43 x99 ollama[25964]: ...................................................................................................
Dec 11 18:31:43 x99 ollama[25964]: llama_new_context_with_model: n_ctx      = 16384
Dec 11 18:31:43 x99 ollama[25964]: llama_new_context_with_model: freq_base  = 100000.0
Dec 11 18:31:43 x99 ollama[25964]: llama_new_context_with_model: freq_scale = 0.25
Dec 11 18:31:45 x99 ollama[25964]: llama_kv_cache_init: offloading v cache to GPU
Dec 11 18:31:45 x99 ollama[25964]: llama_kv_cache_init: offloading k cache to GPU
Dec 11 18:31:45 x99 ollama[25964]: llama_kv_cache_init: VRAM kv self = 3968.00 MiB
Dec 11 18:31:45 x99 ollama[25964]: llama_new_context_with_model: kv self size  = 3968.00 MiB
Dec 11 18:31:45 x99 ollama[25964]: llama_build_graph: non-view tensors processed: 1430/1430
Dec 11 18:31:45 x99 ollama[25964]: llama_new_context_with_model: compute buffer total size = 1869.07 MiB
Dec 11 18:31:46 x99 ollama[25964]: llama_new_context_with_model: VRAM scratch buffer: 1866.00 MiB
Dec 11 18:31:46 x99 ollama[25964]: llama_new_context_with_model: total VRAM used: 23656.33 MiB (model: 17822.33 MiB, context: 5834.00 MiB)
Dec 11 18:31:46 x99 ollama[25964]: CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:7973: out of memory
Dec 11 18:31:46 x99 ollama[25964]: current device: 0
Dec 11 18:31:47 x99 ollama[25964]: 2023/12/11 18:31:47 llama.go:436: 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:7973: out of memory
Dec 11 18:31:47 x99 ollama[25964]: current device: 0
Dec 11 18:31:47 x99 ollama[25964]: 2023/12/11 18:31:47 llama.go:444: error starting llama runner: llama runner process has terminated
```

Ollama correctly identifies all 4 GPUs with a collective 39,320 MB of VRAM: `39320 MB VRAM available, loading up to 101 GPU layers` (first line of the logs).

It then proceeds to load the layers seemingly successfully, but an OOM error is triggered anyway.

How can I manually change the number of layers loaded to the GPU to debug this issue?

GiteaMirror added the bug and nvidia labels 2026-05-03 10:26:02 -05:00

@nibra commented on GitHub (Dec 11, 2023):

See https://github.com/jmorganca/ollama/issues/618#issuecomment-1737547046

The `num_gpu` parameter solved the problem for me. On my machine (only 12 GB), ollama loaded 43 layers and failed with the same error as above, but runs smoothly with 40 layers (didn't try with 41 and 42, though).
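
In case it's useful to others, `num_gpu` can be set without rebuilding anything, either baked into a derived model via a Modelfile or passed per request through the REST API. A minimal sketch (the 40-layer value and the `deepseek-coder-40l` name are just illustrative; tune the count to your hardware):

```bash
# Option 1: bake a lower GPU layer count into a derived model
cat > Modelfile <<'EOF'
FROM deepseek-coder:33b
PARAMETER num_gpu 40
EOF
ollama create deepseek-coder-40l -f Modelfile
ollama run deepseek-coder-40l

# Option 2: set it per request via the REST API
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-coder:33b",
  "prompt": "Hello",
  "options": {"num_gpu": 40}
}'
```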


@phalexo commented on GitHub (Dec 11, 2023):

Likely a bug that was introduced in later versions. Try version 0.1.11.


@easp commented on GitHub (Dec 11, 2023):

IIRC llama.cpp only allocates the context on a single GPU. With large contexts this throws off the layer-split calculation. Not sure what a workaround would be.
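
If that's right, a rough back-of-the-envelope with the numbers from the log above (assuming weights split roughly in proportion to per-card VRAM) would explain the failure: of the 17,822 MiB of weights, device 0 (11 GiB of the ~39 GiB total) holds about 11/39 × 17,822 ≈ 5,000 MiB, and then the 5,834 MiB of context (3,968 MiB KV cache + 1,866 MiB scratch) lands on device 0 alone, for roughly 10.8 GiB on an 11 GiB card. That's over the limit once driver overhead is counted, which would explain the OOM on device 0 despite 39 GB total.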


@peteygao commented on GitHub (Dec 13, 2023):

@easp For llama.cpp, there's the `--tensor-split` flag to work around this issue by allocating fewer layers to the "main" GPU so that more VRAM is left for the context. Either allow that flag to be passed into ollama (currently **not** supported), or be smart about estimating context + layer size (since there's already a heuristic for estimating how many layers will fit) and perform the split accordingly.
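
For reference, the llama.cpp invocation being described looks roughly like this (a sketch; the GGUF filename and the split ratios are illustrative, flag names as of late-2023 llama.cpp):

```bash
# Give device 0 (the main GPU, which also holds the KV cache and
# scratch buffers) a smaller share of the weights than the other cards.
./main -m deepseek-coder-33b.Q4_0.gguf \
  --n-gpu-layers 65 \
  --main-gpu 0 \
  --tensor-split 2,3,3,1
```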


@Davery92 commented on GitHub (Dec 13, 2023):

> Likely a bug that was introduced in later versions. Try version 0.1.11.

How would I revert to this version? I installed Ollama over a month ago and it was running perfectly. I upgraded today and keep getting OOM errors.


@phalexo commented on GitHub (Dec 13, 2023):

> > Likely a bug that was introduced in later versions. Try version 0.1.11.
>
> How would I revert to this version? I installed Ollama over a month ago and it was running perfectly. I upgraded today and keep getting OOM errors.

https://github.com/jmorganca/ollama/releases/tag/v0.1.11

Leave a reply afterwards if it works.
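
For a manual downgrade on Linux, something like the following should work, assuming the standalone `ollama-linux-amd64` asset those releases shipped and that your install put the binary at `/usr/local/bin/ollama` (adjust both to match your setup):

```bash
# Stop the service, swap in the pinned release binary, restart.
sudo systemctl stop ollama
sudo curl -L \
  https://github.com/jmorganca/ollama/releases/download/v0.1.11/ollama-linux-amd64 \
  -o /usr/local/bin/ollama
sudo chmod +x /usr/local/bin/ollama
sudo systemctl start ollama
```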


@Davery92 commented on GitHub (Dec 13, 2023):

> > > Likely a bug that was introduced in later versions. Try version 0.1.11.
> >
> > How would I revert to this version? I installed Ollama over a month ago and it was running perfectly. I upgraded today and keep getting OOM errors.
>
> https://github.com/jmorganca/ollama/releases/tag/v0.1.11
>
> Leave a reply afterwards if it works.

I got that version installed and it's officially working again. Tested with multiple models. Thank you!


@phalexo commented on GitHub (Dec 14, 2023):

@BruceMacD Looks like at least 3 people have been able to get rid of their OOM problems by reverting to version 0.1.11. Clearly it's a bug when a model much smaller than the available VRAM still fails to load, and only on versions 0.1.12+. Lots of people would love to try out Mixtral but can't because of this issue.


@phalexo commented on GitHub (Dec 16, 2023):

```bash
git clone --recursive https://github.com/jmorganca/ollama.git
cd ollama/llm/llama.cpp
vi generate_linux.go
```

```go
//go:generate cmake -S ggml -B ggml/build/cuda -DLLAMA_CUBLAS=on -DLLAMA_ACCELERATE=on -DLLAMA_K_QUANTS=on -DLLAMA_CUDA_FORCE_MMQ=on
//go:generate cmake --build ggml/build/cuda --target server --config Release
//go:generate mv ggml/build/cuda/bin/server ggml/build/cuda/bin/ollama-runner
//go:generate cmake -S gguf -B gguf/build/cuda -DLLAMA_CUBLAS=on -DLLAMA_ACCELERATE=on -DLLAMA_K_QUANTS=on -DLLAMA_NATIVE=off -DLLAMA_AVX=on -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DLLAMA_CUDA_PEER_MAX_BATCH_SIZE=0 -DLLAMA_CUDA_FORCE_MMQ=on
//go:generate cmake --build gguf/build/cuda --target server --config Release
//go:generate mv gguf/build/cuda/bin/server gguf/build/cuda/bin/ollama-runner
```

```bash
cd ../..
go generate ./...
go build .
```

@peteygao commented on GitHub (Dec 18, 2023):

> ```bash
> git clone --recursive https://github.com/jmorganca/ollama.git
> cd ollama/llm/llama.cpp
> vi generate_linux.go
> ```
>
> ```go
> //go:generate cmake -S ggml -B ggml/build/cuda -DLLAMA_CUBLAS=on -DLLAMA_ACCELERATE=on -DLLAMA_K_QUANTS=on -DLLAMA_CUDA_FORCE_MMQ=on
> //go:generate cmake --build ggml/build/cuda --target server --config Release
> //go:generate mv ggml/build/cuda/bin/server ggml/build/cuda/bin/ollama-runner
> //go:generate cmake -S gguf -B gguf/build/cuda -DLLAMA_CUBLAS=on -DLLAMA_ACCELERATE=on -DLLAMA_K_QUANTS=on -DLLAMA_NATIVE=off -DLLAMA_AVX=on -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DLLAMA_CUDA_PEER_MAX_BATCH_SIZE=0 -DLLAMA_CUDA_FORCE_MMQ=on
> //go:generate cmake --build gguf/build/cuda --target server --config Release
> //go:generate mv gguf/build/cuda/bin/server gguf/build/cuda/bin/ollama-runner
> ```
>
> ```bash
> cd ../..
> go generate ./...
> go build .
> ```

@phalexo Sorry, I'm not sure what this is stating. What's the relevance? Are you trying to imply these are the lines causing the OOM bug? Or something else...?


@dhiltgen commented on GitHub (Jan 27, 2024):

@peteygao we've made a bunch of improvements in how we do memory prediction calculations. Can you give the latest release a try (0.1.22) and see if it works properly on your setup?


@Davery92 commented on GitHub (Jan 27, 2024):

I won't be able to; I got an error on the latest version claiming my GPUs are too old, so I may be stuck at this version.


@dhiltgen commented on GitHub (Jan 27, 2024):

> I won't be able to; I got an error on the latest version claiming my GPUs are too old, so I may be stuck at this version.

Related issues: #1865 and #1756


@phalexo commented on GitHub (Jan 31, 2024):

Look at the Dockerfiles to see which version of Go to use. It may be your problem.

On Tue, Jan 30, 2024 at 7:05 PM Davery92 wrote:

> > I won't be able to; I got an error on the latest version claiming my GPUs are too old, so I may be stuck at this version.
> >
> > Related issues: #1865 and #1756
>
> So I pulled the newest release and it still runs only on CPU. So I pulled the repo to build from source, and every time I run `go build` I get these errors:
>
> ```
> parser/parser.go:9:2: package log/slog is not in GOROOT (/usr/lib/go-1.18/src/log/slog)
> parser/parser.go:10:2: package slices is not in GOROOT (/usr/lib/go-1.18/src/slices)
> ```

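(Side note on those errors: `log/slog` and `slices` entered the Go standard library in 1.21, so the Go 1.18 toolchain visible in those paths cannot build recent Ollama. A sketch of upgrading on Linux; the exact 1.21.x point release is illustrative:)

```bash
# Replace the distro Go 1.18 with an upstream toolchain >= 1.21
curl -LO https://go.dev/dl/go1.21.6.linux-amd64.tar.gz
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.21.6.linux-amd64.tar.gz
export PATH=/usr/local/go/bin:$PATH
go version  # should now report go1.21.6
```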

@Davery92 commented on GitHub (Jan 31, 2024):

I deleted my comment because I'm stupid and had an old version of Go, but I fixed it and Mixtral works!! Across both my GPUs!! Except `ollama serve` locks up after roughly 8 messages. The API stops accepting requests and I can't even execute `ollama run {model}`.


@dhiltgen commented on GitHub (Jan 31, 2024):

Happy to hear you got it working @Davery92 but sad you hit a hang/crash. Can you share the server logs? If there's not much in them, setting `OLLAMA_DEBUG=1` might yield more insight into the nature of the hang.
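
(Concretely, something like the following, run in a foreground shell so the log is easy to capture:)

```bash
# Stop the service so the foreground instance can bind the port,
# then run the server with debug logging and save the output.
sudo systemctl stop ollama
OLLAMA_DEBUG=1 ollama serve 2>&1 | tee ollama-debug.log
```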


@Davery92 commented on GitHub (Jan 31, 2024):

> Happy to hear you got it working @Davery92 but sad you hit a hang/crash. Can you share the server logs? If there's not much in them, setting `OLLAMA_DEBUG=1` might yield more insight into the nature of the hang.

Sure, I can try that when I get home. However, I had `ollama serve` open this morning while I was chatting, and it was just showing the API POST after each successful generation; then, nothing. My API calls would go nowhere, `ollama run` would just sit and spin, and when I tried to close the Ollama server it would hang until I killed the PID. There's no error or anything; it just freezes.


@dhiltgen commented on GitHub (May 2, 2024):

If you're still seeing OOMs or hangs, please give the latest release a try and let us know and we'll re-open the issue.

https://github.com/ollama/ollama/releases


@phalexo commented on GitHub (May 2, 2024):

I have 4 GPUs with 12.2 GiB of VRAM and 1 GPU with 4 GiB; all five are compute capability 5.2. I used to be able to use all five, and ollama was smart enough not to put more than 4 GiB on the last GPU.

Now it causes an error, so I can't use the 5th GPU anymore.

On Thu, May 2, 2024 at 5:25 PM Daniel Hiltgen wrote:

> If you're still seeing OOMs or hangs, please give the latest release a try and let us know and we'll re-open the issue.
>
> https://github.com/ollama/ollama/releases


@dhiltgen commented on GitHub (May 4, 2024):

@phalexo sorry to hear that. It seems like this is a new issue, not related to the original problem in this issue. Can you file a new issue and include the server log with `OLLAMA_DEBUG=1` set, so we can see exactly what the scheduler and memory prediction algorithms are doing and why it's exceeding the VRAM on your smaller GPU?
