[GH-ISSUE #259] Enable GPU support on Linux #46620

Closed
opened 2026-04-27 23:13:56 -05:00 by GiteaMirror · 18 comments
Owner

Originally created by @S1LV3RJ1NX on GitHub (Aug 2, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/259

Originally assigned to: @BruceMacD on GitHub.

I have built Ollama from source, but when I pass a sentence to the model, it does not use the GPU. The machine has 64 GB of RAM and a Tesla T4 GPU.

GiteaMirror added the linux and feature request labels 2026-04-27 23:13:57 -05:00

@mchiang0610 commented on GitHub (Aug 2, 2023):

@S1LV3RJ1NX Thanks for submitting this. We haven't built the features into Linux yet. This is definitely a feature we will target when releasing on Linux.


@S1LV3RJ1NX commented on GitHub (Aug 2, 2023):

So what should we do to make the GPU accessible? Or when can we expect this feature on Linux?


@sqs commented on GitHub (Aug 4, 2023):

If you are looking for testers for Linux GPU support in the future, I'm happy to help. I have an NVIDIA GeForce RTX 4090.


@adelamodwala commented on GitHub (Aug 6, 2023):

Happy to help test as well. I'm running an RTX 3050 mobile, and an older R9 390X on a desktop (curious how this will perform on AMD cards).


@mgsotelo commented on GitHub (Aug 9, 2023):

I guess this is related to the DefaultOptions provided by the Ollama llama API. They should set the MainGPU parameter to something (I don't know what, since I guess Ollama is running llama with the metal.h library by default; I haven't analyzed the code in depth).

File => ollama/api/types.go

func DefaultOptions() Options {
	return Options{
		Seed: -1,

		UseNUMA: false,

		NumCtx:             2048,
		NumKeep:            -1,
		NumBatch:           512,
		NumGPU:             1,
=====>          MainGPU:           SOMETHING
		NumGQA:             1,
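
For concreteness, here is a minimal sketch of what that change might look like once the blank is filled in, using the values suggested later in this thread (MainGPU: 0, NumGPU: 32); the field names follow the snippet above, and the exact values are illustrative rather than verified against the repository:

```go
// api/types.go (sketch) — same fields as the snippet above, with MainGPU filled in.
func DefaultOptions() Options {
	return Options{
		Seed: -1,

		UseNUMA: false,

		NumCtx:   2048,
		NumKeep:  -1,
		NumBatch: 512,
		NumGPU:   32, // number of layers offloaded to the GPU; lower it if allocation fails
		MainGPU:  0,  // index of the GPU to use (0 = first device reported by the runtime)
		NumGQA:   1,
		// ... remaining defaults unchanged ...
	}
}
```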

@voodooattack commented on GitHub (Aug 25, 2023):

All right. Here's what I did to get GPU acceleration working on my Linux machine:

  • In ollama/api/types.go, set these: MainGPU: 0 and NumGPU: 32 (or 16, depending on your target model and your GPU). The last parameter determines the number of layers offloaded to the GPU during processing. Setting it to something unreasonable for your system WILL cause the application to crash.
  • In ollama/llm/llama.go, make the following change:
#cgo CFLAGS: ...
#cgo CPPFLAGS: ... 
#cgo CXXFLAGS: ...
/// PASTE THESE IN:
#cgo opencl CFLAGS: -DGGML_USE_CLBLAST 
#cgo opencl CPPFLAGS: -DGGML_USE_CLBLAST 
#cgo opencl LDFLAGS: -lOpenCL -lclblast

Note: You need to install clblast on your system, the package was clblast-devel in my Fedora environment.

  • Now go to your source root and run: go build --tags opencl .

If everything works correctly, you should see something like this in your terminal when you run ./ollama serve:

ggml_opencl: selecting platform: 'NVIDIA CUDA'
ggml_opencl: selecting device: 'NVIDIA GeForce GTX 1060'
ggml_opencl: device FP16 support: false

Have fun!
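
For anyone unsure where those #cgo lines live: in Go they sit inside the C comment block that immediately precedes import "C". A rough sketch of the resulting preamble in llm/llama.go is shown below (the package name is assumed from the file path, and the existing flag lines are elided and should be kept exactly as they are):

```go
// llm/llama.go (sketch) — only the opencl-tagged directives are new; the rest of
// the real cgo preamble is left untouched.
package llm

/*
// ... existing #cgo CFLAGS / CPPFLAGS / CXXFLAGS lines stay as-is ...
#cgo opencl CFLAGS: -DGGML_USE_CLBLAST
#cgo opencl CPPFLAGS: -DGGML_USE_CLBLAST
#cgo opencl LDFLAGS: -lOpenCL -lclblast
*/
import "C"
```

Because the directives carry the opencl build constraint, they only take effect when the binary is rebuilt with go build --tags opencl . ; a plain go build silently falls back to the CPU-only path.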


@zopieux commented on GitHub (Aug 26, 2023):

@voodooattack wrote:

> All right. Here's what I did to get GPU acceleration working on my Linux machine:

Tried that, and while it printed the ggml logs with my GPU info, I did not see a single blip of increased GPU usage, and there was no performance improvement at all. Is that expected?


@voodooattack commented on GitHub (Aug 26, 2023):

@zopieux wrote:

> Tried that, and while it printed the ggml logs with my GPU info, I did not see a single blip of increased GPU usage, and there was no performance improvement at all. Is that expected?

No, GPU utilisation should go up and CPU utilisation should go down accordingly. Can you provide some info on your setup? What GPU(s) are you using and on which distro?

Edit: any info on what model you're trying to use and the parameters you set in types.go would also help.


@zopieux commented on GitHub (Aug 26, 2023):

Thanks for the quick reply @voodooattack.

types.go:

NumGPU: 8,  # per instructions, sadly 16 and 32 crash with GGML_ASSERT: ggml-alloc.c:242: alloc->n_free_blocks < MAX_FREE_BLOCKS && "out of free blocks"
MainGPU: 0,

Here are the logs I get:

ggml_opencl: selecting platform: 'NVIDIA CUDA'
ggml_opencl: selecting device: 'NVIDIA GeForce RTX 4090'
ggml_opencl: device FP16 support: false
llama.cpp: loading model from […]/.ollama/models/blobs/sha256:b5749cc827d33b7cb4c8869cede7b296a0a28d9e5d1982705c2ba4c603258159
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 2048
llama_model_load_internal: n_embd     = 4096
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_head_kv  = 32
llama_model_load_internal: n_layer    = 32
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: n_gqa      = 1
llama_model_load_internal: rnorm_eps  = 5.0e-06
llama_model_load_internal: n_ff       = 11008
llama_model_load_internal: freq_base  = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size =    0.08 MB
llama_model_load_internal: using OpenCL for GPU acceleration
llama_model_load_internal: mem required  = 2746.98 MB (+ 1024.00 MB per state)
llama_model_load_internal: offloading 8 repeating layers to GPU
llama_model_load_internal: offloaded 8/33 layers to GPU
llama_model_load_internal: total VRAM used: 869 MB
llama_new_context_with_model: kv self size  = 1024.00 MB
llama_new_context_with_model: compute buffer total size =  153.35 MB

During inference all 24 CPU cores are used, but 0% of GPU:

    PID DEV     TYPE  GPU        GPU MEM   CPU  HOST MEM COMMAND
 627223   0  Compute  0%   1502MiB   6%  3155%   4266MiB ollama serve                    

I've tried with both ollama run codellama and ollama run llama2-uncensored. I'm using NixOS, not that it should matter.


@voodooattack commented on GitHub (Aug 27, 2023):

@zopieux I'm grasping at straws since I can't reproduce the problem, but can you try adding #cgo opencl CFLAGS: -DGGML_USE_CLBLAST to llm/llama.go along with the other stuff? I think I forgot to include this.


@zopieux commented on GitHub (Aug 27, 2023):

Ah thanks, added it but no difference. In case it matters, I could make NumGPU go to 14 without crashing. Crashed for ≥16, which tells me it does have an effect.

Looking at it more closely, I should mention that I do see a single 0.1s blip of 100% GPU usage right before inferred tokens start appearing. But the CPU is still burning cycles and inference is still slow.


@jpalvadev commented on GitHub (Aug 27, 2023):

@voodooattack wrote:

> All right. Here's what I did to get GPU acceleration working on my Linux machine: [...]

When I try to ask something to the model, it crashes with the following error:

ggml_opencl: clGetPlatformIDs(NPLAT, platform_ids, &n_platforms) error -1001 at ggml-opencl.cpp:993

I don't know what I'm doing wrong. WSL2 with a 3090


@voodooattack commented on GitHub (Aug 27, 2023):

@zopieux wrote:

> Ah thanks, added it but no difference. In case it matters, I could make NumGPU go to 14 without crashing. Crashed for ≥16, which tells me it does have an effect.
>
> Looking at it more closely, I should mention that I do see a single 0.1s blip of 100% GPU usage right before inferred tokens start appearing. But the CPU is still burning cycles and inference is still slow.

Can you post the output of the clinfo utility?

@jpalvadev wrote:

> When I try to ask something to the model, it crashes with the following error:
>
> ggml_opencl: clGetPlatformIDs(NPLAT, platform_ids, &n_platforms) error -1001 at ggml-opencl.cpp:993
>
> I don't know what I'm doing wrong. WSL2 with a 3090.

I'm not very familiar with WSL, but I remember you needed to set up GPU passthrough for it to work.

Oh, found it: https://learn.microsoft.com/en-us/windows/ai/directml/gpu-accelerated-training


@boneitis commented on GitHub (Aug 28, 2023):

@voodooattack I can't thank you enough. I have gone from half-hour responses to sub-minute now.

Edit: Well, it was fantastic for the day I had it working. I started trying to play around with other models, had it crash, and now I can't get it to recognize my GPU anymore, no matter what I try, even after cold boots.

Edit 2: Never mind, I was forgetting to rebuild with --tags opencl.


@esiqveland commented on GitHub (Aug 30, 2023):

@zopieux wrote:

> Thanks for the quick reply @voodooattack. [...] During inference all 24 CPU cores are used, but 0% of GPU. [...] I've tried with both ollama run codellama and ollama run llama2-uncensored. I'm using NixOS, not that it should matter.

I see the same with an AMD GPU on Linux. All CPU cores run at full load, but memory is reserved on the GPU with 0% GPU usage. The tokens are produced at roughly the same rate as before. If I look very closely, it looks like the GPU spikes for less than a second just as a new inference starts from a generate request, then it hovers around 0-2% usage.

$ clinfo -l
Platform #0: AMD Accelerated Parallel Processing
 `-- Device #0: gfx1030
ggml_opencl: selecting platform: 'AMD Accelerated Parallel Processing'
ggml_opencl: selecting device: 'gfx1030'
ggml_opencl: device FP16 support: true
llama.cpp: loading model from ../.ollama/models/blobs/sha256:f79142715bc9539a2edbb4b253548db8b34fac22736593eeaa28555874476e30
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 2048
llama_model_load_internal: n_embd     = 5120
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 40
llama_model_load_internal: n_head_kv  = 40
llama_model_load_internal: n_layer    = 40
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: n_gqa      = 1
llama_model_load_internal: rnorm_eps  = 5.0e-06
llama_model_load_internal: n_ff       = 13824
llama_model_load_internal: freq_base  = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size =    0.11 MB
llama_model_load_internal: using OpenCL for GPU acceleration
llama_model_load_internal: mem required  = 5622.16 MB (+ 3200.00 MB per state)
llama_model_load_internal: offloading 8 repeating layers to GPU
llama_model_load_internal: offloaded 8/41 layers to GPU
llama_model_load_internal: total VRAM used: 1362 MB
llama_new_context_with_model: kv self size  = 3200.00 MB
llama_new_context_with_model: compute buffer total size =  191.35 MB

@esiqveland commented on GitHub (Aug 30, 2023):

Upon closer inspection I think it might be 10-12x faster on my GPU. Even though it is barely visible on the usage graphs, I see now that it is indeed faster than CPU only; it is just so slow on the CPU that I didn't really notice it the first time.


@voodooattack commented on GitHub (Aug 31, 2023):

@esiqveland Can you try with: LowVRAM: true? It fixed a lot of my issues locally. So I think there might be a problem with scratch buffer allocation in the included version of llama.cpp. Now I can set NumGPU: 64 and all the layers fit in my 6GB of VRAM (for the codellama model).

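
In types.go terms, the combination being suggested would look roughly like the fragment below (field names as used earlier in this thread; the values are the ones reported here and may not suit every setup, as the next comment shows):

```go
// api/types.go (sketch) — other DefaultOptions fields unchanged.
NumGPU:  64,   // reportedly enough to fit all codellama layers in 6 GB of VRAM with LowVRAM on
MainGPU: 0,
LowVRAM: true, // smaller scratch buffers; results appear to vary between machines
```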

@esiqveland commented on GitHub (Aug 31, 2023):

> @esiqveland Can you try with: LowVRAM: true? It fixed a lot of my issues locally. So I think there might be a problem with scratch buffer allocation in the included version of llama.cpp. Now I can set NumGPU: 64 and all the layers fit in my 6GB of VRAM (for the codellama model).

Weird, setting LowVRAM: true makes it much slower on my machine. I also cannot run with NumGPU: 32 anymore, even with a 16 GB GPU. So it looks like the exact opposite result.

Edit: I'm testing from git tag v0.0.17.


Reference: github-starred/ollama#46620