[GH-ISSUE #3723] Use NVIDIA + AMD GPUs simultaneously (CUDA OOM?) #48802

Closed
opened 2026-04-28 09:18:56 -05:00 by GiteaMirror · 8 comments

Originally created by @erasmus74 on GitHub (Apr 18, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3723

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I'm running the ollama:rocm Docker image (pulled 4/16/24), and it offloads across the NVIDIA M40 and the Ryzen 7900X CPU: the NVIDIA card's VRAM fills up and the remaining layers are offloaded to system RAM.

However, I also have my 7900 XTX AMD card in the same box. When I don't pass "--gpus all" on the docker run CLI, I can use the AMD GPU exclusively. When I add that flag, the NVIDIA GPU works, but the AMD GPU sits idle.

I'd like to use models that need up to 48 GB of VRAM by splitting them across the two cards, even though they're not from the same vendor (I know the challenges, trust me, it took forever to get this far).

So, my setup:

I've got the ROCm stuff working: tested with just ollama:rocm, passing --device /dev/dri/renderD129 (my 7900 XTX).
I've got the CUDA stuff working: tested with ollama:rocm, passing --device /dev/dri/renderD129 (my 7900 XTX) plus --gpus all.

In total I have 48 GB of VRAM and 64 GB of system RAM.

Steps to reproduce:

docker run -d --gpus all --device /dev/kfd --device /dev/dri/renderD129 --device /dev/dri/renderD128 -v ollama:/root/.ollama -e OLLAMA_ORIGINS='*.github.io' -p 11434:11434 --name ollama ollama/ollama:rocm

docker exec -it ollama /bin/bash

ollama run --verbose llama2-uncensored:70b (this model is 38 GB)

and the output I get is:

[root@d8466466df3c /]# ollama run --verbose llama2-uncensored:70b
Error: llama runner process no longer running: -1 CUDA error: out of memory
  current device: 0, in function alloc at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:302
  cuMemCreate(&handle, reserve_size, &prop, 0)
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !"CUDA error"
[root@d8466466df3c /]#

And a screenshot for reference (taken mid-run of the final command):

![Screenshot from 2024-04-18 00-20-01](https://github.com/ollama/ollama/assets/7828606/de4155fd-cbc9-4cde-b58f-8e7120d75105)

Happy to follow up and test.

OS

Linux, Docker

GPU

Nvidia, AMD

CPU

AMD

Ollama version

ollama version is 0.1.32:rocm

GiteaMirror added the bug label 2026-04-28 09:18:56 -05:00

@dhiltgen commented on GitHub (Apr 19, 2024):

With PR #3418 I'm setting us up to be able to support mixed GPU types. I'm not sure whether this will be possible in Docker or not, but at the host level it should be possible to load different models on the different GPUs. (I haven't set up a test rig to verify this actually works yet, but the code is intended to support that scenario.)
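One plausible way to approximate that split in Docker today is to run two containers, one per vendor, each serving its own model on its own port. This is only a sketch under assumptions not stated in the thread: the container names, volume names, ports, and model tags below are illustrative, and each container is given only its own vendor's devices.

```bash
# NVIDIA-only server: CUDA image, NVIDIA GPU only
docker run -d --gpus all \
  -v ollama-nv:/root/.ollama \
  -p 11434:11434 --name ollama-nv ollama/ollama

# AMD-only server: ROCm image, /dev/kfd plus the 7900 XTX render node only
docker run -d --device /dev/kfd --device /dev/dri/renderD129 \
  -v ollama-amd:/root/.ollama \
  -p 11435:11434 --name ollama-amd ollama/ollama:rocm

# Load a different model on each server
docker exec -it ollama-nv ollama run llama2:13b
docker exec -it ollama-amd ollama run mistral
```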


@dhiltgen commented on GitHub (Apr 19, 2024):

As to the out-of-memory error, can you share the server log? We report some memory details before loading, which may help us narrow down where the prediction logic is going wrong.


@erasmus74 commented on GitHub (Apr 19, 2024):

Docker server logs:

❯ docker logs -f ollama
time=2024-04-19T23:00:21.844Z level=INFO source=images.go:817 msg="total blobs: 162"
time=2024-04-19T23:00:21.845Z level=INFO source=images.go:824 msg="total unused blobs removed: 0"
time=2024-04-19T23:00:21.846Z level=INFO source=routes.go:1143 msg="Listening on [::]:11434 (version 0.1.32)"
time=2024-04-19T23:00:21.846Z level=INFO source=payload.go:28 msg="extracting embedded files" dir=/tmp/ollama3535518548/runners
time=2024-04-19T23:00:23.238Z level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
time=2024-04-19T23:00:23.238Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-19T23:00:23.238Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-19T23:00:23.238Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3535518548/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-19T23:00:23.245Z level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-04-19T23:00:23.245Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-19T23:00:23.272Z level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 5.2"
[GIN] 2024/04/19 - 23:00:29 | 200 |        17.8µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/04/19 - 23:00:29 | 200 |     2.33737ms |       ********* | GET      "/api/tags"
[GIN] 2024/04/19 - 23:01:40 | 200 |       12.23µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/04/19 - 23:01:40 | 200 |      186.76µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/04/19 - 23:01:40 | 200 |       83.99µs |       127.0.0.1 | POST     "/api/show"
time=2024-04-19T23:01:40.763Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-19T23:01:40.763Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-19T23:01:40.763Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3535518548/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-19T23:01:40.764Z level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-04-19T23:01:40.764Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-19T23:01:40.791Z level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 5.2"
time=2024-04-19T23:01:40.806Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-19T23:01:40.806Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-19T23:01:40.807Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3535518548/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-19T23:01:40.807Z level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-04-19T23:01:40.807Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-19T23:01:40.824Z level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 5.2"
time=2024-04-19T23:01:40.840Z level=INFO source=server.go:127 msg="offload to gpu" reallayers=45 layers=45 required="39251.1 MiB" used="22446.8 MiB" available="22823.6 MiB" kv="1280.0 MiB" fulloffload="584.0 MiB" partialoffload="612.0 MiB"
time=2024-04-19T23:01:40.840Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-19T23:01:40.841Z level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama3535518548/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-f28b58cd92766bdf5889dc7509786260a82cee8b36d61001310e3f08f742caf2 --ctx-size 4096 --batch-size 512 --embedding --log-disable --n-gpu-layers 45 --port 37283"
time=2024-04-19T23:01:40.841Z level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"139869800656896","timestamp":1713567700}
{"build":1,"commit":"7593639","function":"main","level":"INFO","line":2819,"msg":"build info","tid":"139869800656896","timestamp":1713567700}
{"function":"main","level":"INFO","line":2822,"msg":"system info","n_threads":12,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"139869800656896","timestamp":1713567700,"total_threads":24}
llama_model_loader: loaded meta data with 20 key-value pairs and 723 tensors from /root/.ollama/models/blobs/sha256-f28b58cd92766bdf5889dc7509786260a82cee8b36d61001310e3f08f742caf2 (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv   4:                          llama.block_count u32              = 80
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  19:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  161 tensors
llama_model_loader: - type q4_0:  561 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 8192
llm_load_print_meta: n_head           = 64
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 80
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 28672
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 70B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 68.98 B
llm_load_print_meta: model size       = 36.20 GiB (4.51 BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: Tesla M40 24GB, compute capability 5.2, VMM: yes
llm_load_tensors: ggml ctx size =    0.55 MiB
llm_load_tensors: offloading 45 repeating layers to GPU
llm_load_tensors: offloaded 45/81 layers to GPU
llm_load_tensors:        CPU buffer size = 37070.73 MiB
llm_load_tensors:      CUDA0 buffer size = 20657.81 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:  CUDA_Host KV buffer size =   560.00 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =   720.00 MiB
llama_new_context_with_model: KV self size  = 1280.00 MiB, K (f16):  640.00 MiB, V (f16):  640.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.15 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   596.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    24.01 MiB
llama_new_context_with_model: graph nodes  = 2566
llama_new_context_with_model: graph splits = 389
CUDA error: out of memory
  current device: 0, in function alloc at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:302
  cuMemCreate(&handle, reserve_size, &prop, 0)
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !"CUDA error"
No symbol table is loaded.  Use the "file" command.
ptrace: Operation not permitted.
No stack.
The program is not being run.
time=2024-04-19T23:02:12.792Z level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: -1 CUDA error: out of memory\n  current device: 0, in function alloc at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:302\n  cuMemCreate(&handle, reserve_size, &prop, 0)\nGGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !\"CUDA error\""
[GIN] 2024/04/19 - 23:02:12 | 500 | 32.067876617s |       ********* | POST     "/api/chat"

Ollama console logs

[root@708e3cc70d03 /]# ollama run meditron:70b
Error: llama runner process no longer running: -1 CUDA error: out of memory
  current device: 0, in function alloc at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:302
  cuMemCreate(&handle, reserve_size, &prop, 0)
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !"CUDA error"
[root@708e3cc70d03 /]#

I'm going to install locally and see if it pans out better. Any specific branch I should use? Otherwise I'll stick to the ROCm branch.
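For reference, a local install would normally be either the release script or a build of main (ROCm support lives on main rather than a separate branch); a sketch, assuming a Go toolchain and the source-build steps documented in the repo at the time:

```bash
# Option 1: released binary via the install script
curl -fsSL https://ollama.com/install.sh | sh

# Option 2: build main from source
git clone https://github.com/ollama/ollama.git
cd ollama
go generate ./...   # builds the bundled llama.cpp runners (needs cmake and a C/C++ compiler)
go build .
./ollama serve
```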


@erasmus74 commented on GitHub (Apr 19, 2024):

And to be clear, my goal is to load a 38 GB model split across the two GPUs. Essentially I want either:

Scenario A) most layers on GPU 1 and the rest on GPU 2 (you should be able to select GPU priority somehow, maybe a simple ordered array?)
Scenario B) model layers split evenly across all compatible GPUs (this assumes the GPUs perform equally; if they don't, it could spawn a hydra of problems)

Right now it just does AMD GPU + host RAM, or NVIDIA GPU + host RAM.

Ideal scenario? I can list my GPU priority order, and it packs the model's layers onto the GPUs in that order, with system RAM as the last fallback once the GPUs are exhausted.

The other scenarios are also good use cases and will help. Ultimately, scenario A is what I'm trying to achieve.
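There is no priority list in Ollama itself, but each runtime's standard visibility variables can at least pin which card a given server instance sees. A sketch only: CUDA_VISIBLE_DEVICES is the stock NVIDIA control, ROCR_VISIBLE_DEVICES / HIP_VISIBLE_DEVICES are the ROCm equivalents, and the index 0 here is illustrative.

```bash
# NVIDIA: restrict the CUDA runtime to one card (index as shown by nvidia-smi)
export CUDA_VISIBLE_DEVICES=0

# AMD: restrict the ROCm runtime to one card (index as shown by rocminfo)
export ROCR_VISIBLE_DEVICES=0   # HIP_VISIBLE_DEVICES works similarly

ollama serve
```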


@dhiltgen commented on GitHub (Apr 22, 2024):

Ah, I need to clarify something. Unfortunately, we won't be able to support splitting a single model across GPUs from different vendors with our current LLM runner architecture. You could, however, load two models, one on each GPU.


@erasmus74 commented on GitHub (Apr 23, 2024):

I mean, that would still be useful. Could I, in theory, run the new llama3:70b model in any of the following scenarios?

A)
24 GB worth of layers loaded into the 7900 XTX with ROCm
Rest into system RAM (64 GB)

B)
24 GB worth of layers loaded into the Tesla M40 with CUDA
Rest into system RAM (64 GB)

C)
24 GB worth of layers loaded into the 7900 XTX with ROCm for model alpha
24 GB worth of layers loaded into the Tesla M40 with CUDA for model beta
Rest of alpha and beta into system RAM (64 GB), simultaneously
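Scenarios A and B are ordinary partial offload, which the scheduler already attempts automatically (as the log above shows with 45/81 layers offloaded); if you'd rather force the split than trust its estimate, the per-request num_gpu option caps how many layers go to the visible GPU. A sketch against the local API; the model tag and the value 40 are illustrative:

```bash
# Cap GPU offload at 40 layers; the remaining layers stay in system RAM
curl http://localhost:11434/api/generate -d '{
  "model": "llama3:70b",
  "prompt": "Why is the sky blue?",
  "options": { "num_gpu": 40 }
}'
```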


@dhiltgen commented on GitHub (Apr 23, 2024):

Yes, those should all work on main now that #3418 has merged, as long as you "opt in" to the new concurrency support (see the PR for instructions).
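For the Docker setup in this issue, opting in amounts to setting the new scheduler environment variables on the container. A sketch, assuming the OLLAMA_MAX_LOADED_MODELS / OLLAMA_NUM_PARALLEL variables that the PR introduces; the values here are illustrative:

```bash
docker run -d --gpus all \
  --device /dev/kfd --device /dev/dri/renderD129 \
  -e OLLAMA_MAX_LOADED_MODELS=2 \
  -e OLLAMA_NUM_PARALLEL=1 \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm
```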


@erasmus74 commented on GitHub (Apr 23, 2024):

Amazing. I'll close this. Though I know I initially wanted CUDA + ROCm to play nicely together, we're probably quite far from that at the moment, and it's not an Ollama-specific implementation detail; it's probably a llama.cpp problem.

Thank you for your amazing work

Reference: github-starred/ollama#48802