[GH-ISSUE #5035] Ollama not use GPU #49694

Closed
opened 2026-04-28 12:44:06 -05:00 by GiteaMirror · 10 comments

Originally created by @Mina4ever on GitHub (Jun 13, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5035

What is the issue?

I am using Ollama, but it uses only the CPU and not the GPU, even though I installed CUDA v12.5 and cuDNN v9.2.0, and I can confirm that Python uses the GPU in libraries like PyTorch (`>>> print(torch.backends.cudnn.is_available())` returns `True`). I have an Nvidia 1050 Ti and I am trying to run the llama3 8B model. I found this warning in the Ollama server log: `level=WARN source=gpu.go:177 msg="CPU does not have AVX or AVX2, disabling GPU support."`
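
One way to confirm what Ollama's detection logic is seeing is to inspect the CPU feature flags directly. A minimal sketch, assuming the third-party `py-cpuinfo` package (`pip install py-cpuinfo`) is installed:

```python
# Minimal sketch: check whether the CPU advertises AVX/AVX2, which
# Ollama 0.1.x requires before it will enable GPU offload.
# Assumes the third-party py-cpuinfo package (pip install py-cpuinfo).
from cpuinfo import get_cpu_info

flags = set(get_cpu_info().get("flags", []))
for feature in ("avx", "avx2"):
    print(f"{feature.upper()}: {'present' if feature in flags else 'MISSING'}")
```

If AVX is reported missing on hardware that should have it, a hypervisor or VM layer masking CPU features is a common culprit.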

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.1.43

GiteaMirror added the bug label 2026-04-28 12:44:06 -05:00

@kangkang721 commented on GitHub (Jun 13, 2024):

Has this been solved yet? I'm running into the same problem.


@xinoer commented on GitHub (Jun 14, 2024):

I also encountered the same problem. My Ollama version is the Windows installer version, version number 0.1.43.


@melooy commented on GitHub (Jun 14, 2024):

I also encountered the same problem. My Ollama version is the Windows installer version, version number 0.1.44.


@Shbhom commented on GitHub (Jun 14, 2024):

Getting the same error. I'm running Ollama in Docker on Arch Linux. I've already installed nvidia-container-toolkit and I'm able to run the `nvidia-smi` command inside the container, but while I'm using Ollama to run llama3, `nvidia-smi` shows no process using the GPU inside the container.
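
One way to narrow this down is to check whether any process inside the container actually shows up as a GPU compute client. A minimal sketch, assuming the container is named `ollama` (a placeholder) and a standard nvidia-container-toolkit setup:

```python
# Minimal sketch: list GPU compute processes inside a running container.
# "ollama" is a placeholder container name; adjust to your setup.
import subprocess

out = subprocess.run(
    ["docker", "exec", "ollama", "nvidia-smi",
     "--query-compute-apps=pid,process_name,used_memory", "--format=csv"],
    capture_output=True, text=True, check=True,
).stdout
# An empty process list (header only) means the model is not on the GPU.
print(out)
```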


@pdevine commented on GitHub (Jun 14, 2024):

Unfortunately you're running this on some pretty old hardware. The GTX 1050 Ti only has 4 GB of VRAM, so llama3 8B would be tight even at 4-bit quantization, even if you had AVX on the CPU. That said, this is a dupe of #2187. In the future we'll make it easy for you to recompile Ollama on your own without AVX support but with CUDA support.
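
The sizing point can be sanity-checked with back-of-envelope arithmetic, using the figures that appear later in this thread's logs (8.03 B parameters at roughly 4.64 bits per weight for Q4_0, plus a 256 MiB KV cache at 2048 context):

```python
# Back-of-envelope VRAM estimate for llama3 8B at Q4_0.
params = 8.03e9        # model parameters (from llm_load_print_meta)
bpw = 4.64             # effective bits per weight for this Q4_0 file
kv_cache_mib = 256     # f16 KV cache at 2048 context

weights_gib = params * bpw / 8 / 2**30
total_gib = weights_gib + kv_cache_mib / 1024
print(f"weights ~{weights_gib:.2f} GiB, total ~{total_gib:.2f} GiB")
# ~4.33 GiB of weights alone already exceeds the 1050 Ti's 4 GB of VRAM.
```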


@Shbhom commented on GitHub (Jun 14, 2024):

@pdevine can you tell me why I'm not able to run it on my PC? I have an RTX 4080 Ti.


@dbl001 commented on GitHub (Jun 14, 2024):

@pdevine Can ollama run llama3 utilizing the GPU? Currently it's only using the CPU.

The Go generate script I am running is turning `-DLLAMA_METAL` off:

```
% OLLAMA_CUSTOM_CPU_DEFS="-DLLAMA_AVX=on -DLLAMA_AVX2=on -DLLAMA_F16C=on -DLLAMA_FMA=on -DLLAMA_METAL=on -DLLAMA_METAL_EMBED_LIBRARY=on -DGGML_USE_METAL=on -DLLAMA_METAL_COMPILE_SERIALIZED=1" go generate -v ./...
...
+ CMAKE_DEFS='-DCMAKE_OSX_DEPLOYMENT_TARGET=11.3 -DLLAMA_METAL_MACOSX_VERSION_MIN=11.3 -DCMAKE_SYSTEM_NAME=Darwin -DLLAMA_METAL_EMBED_LIBRARY=on -DCMAKE_SYSTEM_PROCESSOR=x86_64 -DCMAKE_OSX_ARCHITECTURES=x86_64 -DLLAMA_METAL=off -DLLAMA_NATIVE=off -DLLAMA_ACCELERATE=on -DLLAMA_AVX=on -DLLAMA_AVX2=on -DLLAMA_AVX512=off -DLLAMA_FMA=on -DLLAMA_F16C=on -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off '
```

I am running ollama on an iMac 27" with an AMD Radeon Pro 5700 XT. llama.cpp does know about the AMD GPU.
E.g.

```
./main -m /Users/davidlaxer/llama.cpp/models/7B/ggml-model-q4_0.gguf -n 128 -ngl 1
Log start
main: build = 3051 (5921b8f0)
main: built with Apple clang version 15.0.0 (clang-1500.3.9.4) for x86_64-apple-darwin23.5.0
main: seed  = 1718377585
llama_model_loader: loaded meta data with 16 key-value pairs and 291 tensors from /Users/davidlaxer/llama.cpp/models/7B/ggml-model-q4_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = models
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens cache size = 259
llm_load_vocab: token to piece cache size = 0.3368 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 2048
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 2048
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW) 
llm_load_print_meta: general.name     = models
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.30 MiB
ggml_backend_metal_log_allocated_size: allocated buffer, size =   108.60 MiB, (  108.60 / 16368.00)
llm_load_tensors: offloading 1 repeating layers to GPU
llm_load_tensors: offloaded 1/33 layers to GPU
llm_load_tensors:      Metal buffer size =   108.60 MiB
llm_load_tensors:        CPU buffer size =  3539.27 MiB
.................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: AMD Radeon Pro 5700 XT
ggml_metal_init: picking default device: AMD Radeon Pro 5700 XT
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/davidlaxer/ollama/llm/llama.cpp/ggml-metal.metal'
ggml_metal_init: GPU name:   AMD Radeon Pro 5700 XT
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = false
ggml_metal_init: hasUnifiedMemory              = false
ggml_metal_init: recommendedMaxWorkingSetSize  = 17163.09 MB
ggml_metal_init: skipping kernel_mul_mm_f32_f32                    (not supported)
ggml_metal_init: skipping kernel_mul_mm_f16_f32                    (not supported)
ggml_metal_init: skipping kernel_mul_mm_q4_0
...
llama_new_context_with_model: graph splits = 3

system_info: n_threads = 8 / 16 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | 
sampling: 
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order: 
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature 
generate: n_ctx = 512, n_batch = 2048, n_predict = 128, n_keep = 1


Ignoring the Dangers of Antidepressant Medication: A Study of 366 Cases
A. E. Brady, M.D., J. L. Sharpe, Ph.D., J. G. D. Ruston, Ph.D., and J. M. Torgersen, M.D.
The authors interviewed 366 cases of individuals who had taken antidepressant medication but who had not been diagnosed as having a major depressive disorder. Most of the cases had been prescribed tricyclic antidepressants.
llama_print_timings:        load time =    9177.97 ms
llama_print_timings:      sample time =       4.22 ms /   128 runs   (    0.03 ms per token, 30346.14 tokens per second)
llama_print_timings: prompt eval time =       0.00 ms /     0 tokens (     nan ms per token,      nan tokens per second)
llama_print_timings:        eval time =   29291.42 ms /   128 runs   (  228.84 ms per token,     4.37 tokens per second)
llama_print_timings:       total time =   29311.07 ms /   128 tokens
ggml_metal_free: deallocating
Log end
```
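
For reference, the eval throughput in those timings follows directly from the printed numbers; a small sketch of the arithmetic:

```python
# Derive tokens/sec from the llama_print_timings eval line above.
eval_ms, runs = 29291.42, 128
ms_per_token = eval_ms / runs
print(f"{ms_per_token:.2f} ms/token, {1000 / ms_per_token:.2f} tokens/sec")
# -> 228.84 ms/token and 4.37 tokens/sec: with only 1 of 33 layers
#    offloaded (-ngl 1), throughput is still dominated by the CPU layers.
```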

Building ollama amd server output:

```
% OLLAMA_CUSTOM_CPU_DEFS="-DLLAMA_AVX=on -DLLAMA_AVX2=on -DLLAMA_F16C=on -DLLAMA_FMA=on -DLLAMA_METAL=1 -DLLAMA_METAL_EMBED_LIBRARY=on -DLLAMA_METAL_COMPILE_SERIALIZED=1" go generate -v ./...

... 

% ollama serve
Error: listen tcp 127.0.0.1:11434: bind: address already in use
(AI-Feynman) davidlaxer@bluediamond pytorch % ollama serve
2024/06/15 10:36:43 routes.go:1011: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/Users/davidlaxer/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
time=2024-06-15T10:36:43.742-07:00 level=INFO source=images.go:725 msg="total blobs: 28"
time=2024-06-15T10:36:43.743-07:00 level=INFO source=images.go:732 msg="total unused blobs removed: 0"
time=2024-06-15T10:36:43.744-07:00 level=INFO source=routes.go:1057 msg="Listening on 127.0.0.1:11434 (version 0.1.44)"
time=2024-06-15T10:36:43.744-07:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/var/folders/3n/56fpv14n4wj0c1l1sb106pzw0000gn/T/ollama2746628305/runners
time=2024-06-15T10:36:43.770-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cpu cpu_avx]"
time=2024-06-15T10:36:43.770-07:00 level=INFO source=types.go:71 msg="inference compute" id="" library=cpu compute="" driver=0.0 name="" total="128.0 GiB" available="0 B"
time=2024-06-15T10:41:36.771-07:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=0 memory.available="0 B" memory.required.full="4.6 GiB" memory.required.partial="794.5 MiB" memory.required.kv="256.0 MiB" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-06-15T10:41:36.772-07:00 level=INFO source=server.go:341 msg="starting llama server" cmd="/var/folders/3n/56fpv14n4wj0c1l1sb106pzw0000gn/T/ollama2746628305/runners/cpu_avx2/ollama_llama_server --model /Users/davidlaxer/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 2048 --batch-size 512 --embedding --log-disable --parallel 1 --port 63042"
time=2024-06-15T10:41:36.780-07:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-06-15T10:41:36.780-07:00 level=INFO source=server.go:529 msg="waiting for llama runner to start responding"
time=2024-06-15T10:41:36.780-07:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=3051 commit="5921b8f0" tid="0x7ff85e144fc0" timestamp=1718473296
INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="0x7ff85e144fc0" timestamp=1718473296 total_threads=16
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="63042" tid="0x7ff85e144fc0" timestamp=1718473296
time=2024-06-15T10:41:37.032-07:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /Users/davidlaxer/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv   2:                          llama.block_count u32              = 32
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  21:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 1.5928 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW) 
llm_load_print_meta: general.name     = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_tensors: ggml ctx size =    0.15 MiB
llm_load_tensors:        CPU buffer size =  4437.80 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.50 MiB
llama_new_context_with_model:        CPU compute buffer size =   258.50 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
INFO [main] model loaded | tid="0x7ff85e144fc0" timestamp=1718473304
time=2024-06-15T10:41:44.296-07:00 level=INFO source=server.go:572 msg="llama runner started in 7.52 seconds"
[GIN] 2024/06/15 - 10:41:44 | 200 |  9.093560734s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:41:54 | 200 |  1.154057317s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:42:35 | 200 | 40.688860055s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:42:41 | 200 |  6.229453908s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:42:43 | 200 |  1.270069572s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:43:23 | 200 | 40.445274886s |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/06/15 - 10:43:29 | 200 |   5.92720864s |       127.0.0.1 | POST     "/api/embeddings"

...
```
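
The telling lines in that output are `Dynamic LLM libraries [cpu_avx2 cpu cpu_avx]` (only CPU runners were built), `inference compute ... library=cpu ... available="0 B"`, and `offload to gpu ... layers.real=0`: the server never found a usable GPU backend, so everything stays on the CPU. A minimal sketch for pulling those markers out of a saved server log (`ollama-serve.log` is a placeholder path):

```python
# Minimal sketch: scan a saved Ollama server log for GPU-detection clues.
# "ollama-serve.log" is a placeholder for captured `ollama serve` output.
markers = ("Dynamic LLM libraries", "inference compute", "offload to gpu")

with open("ollama-serve.log", encoding="utf-8") as f:
    for line in f:
        if any(m in line for m in markers):
            print(line.rstrip())
# library=cpu with available="0 B" means no GPU backend was detected,
# so layers.real=0 and every layer runs on the CPU.
```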


@dvdblk commented on GitHub (Jun 20, 2024):

I'm having a similar issue. Two days ago I started Ollama (0.1.44) with Docker and used it for some text generation with `llama3:8b-instruct-q8_0`; everything went fine and generation ran on two GPUs.

Today I wanted to use it again, but it did the generation on the CPU instead of the GPU. The same thing happened when I tried to use an embedding model. The solution was to **restart** the Docker container...
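
When generation silently falls back to the CPU like this, it can help to ask the running server where the loaded model actually lives. A minimal sketch, assuming a build recent enough to expose `GET /api/ps` (the endpoint behind `ollama ps`) on the default port:

```python
# Minimal sketch: check whether loaded models report any VRAM usage.
# Assumes an Ollama recent enough to expose GET /api/ps.
import json
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:11434/api/ps") as resp:
    data = json.load(resp)

for model in data.get("models", []):
    size, vram = model.get("size", 0), model.get("size_vram", 0)
    where = "fully in VRAM" if vram >= size else "CPU or partial offload"
    print(f"{model.get('name')}: size_vram={vram} -> {where}")
```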


@dhiltgen commented on GitHub (Jun 20, 2024):

@dvdblk if that happens again, you may want to check the host and see if the nvidia_uvm driver unloaded. We recently had to add [this new logic](https://github.com/ollama/ollama/blob/main/scripts/install.sh#L314-L323) to our install script to make sure things stay loaded on driver 555 or newer.
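
A quick host-side check for this failure mode: on Linux, the loaded kernel modules are listed in `/proc/modules`. A minimal sketch:

```python
# Minimal sketch: verify the NVIDIA kernel modules are loaded (Linux host).
# nvidia-smi can keep working while nvidia_uvm is unloaded, which is
# enough to break CUDA initialization for new processes.
with open("/proc/modules", encoding="utf-8") as f:
    loaded = {line.split()[0] for line in f}

for mod in ("nvidia", "nvidia_uvm"):
    print(f"{mod}: {'loaded' if mod in loaded else 'NOT loaded'}")
```

If `nvidia_uvm` is missing, reloading it (e.g. `sudo modprobe nvidia_uvm`) is the usual fix; the install-script change linked above automates that guard.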


@dvdblk commented on GitHub (Jun 21, 2024):

> @dvdblk if that happens again, you may want to check the host and see if the nvidia_uvm driver unloaded. We recently had to add [this new logic](https://github.com/ollama/ollama/blob/main/scripts/install.sh#L314-L323) to our install script to make sure things stay loaded on driver 555 or newer.

Thanks for the input. I'm on `535.171.04` right now. I vaguely remember an nvidia-driver issue from last year where the driver would stop working (unload?) after some time and required a system reboot, but I fixed that a long time ago.

I'm not sure how to reproduce Ollama losing access to the GPU, but the container was up for 40+ hours, and until it got restarted the generation was done on the CPU. I also have a few other containers running that use a very small amount of VRAM on the GPUs, and they kept working while Ollama suddenly started using the CPU. I will try to get some logs the next time.

EDIT: I think my problem was related to [this issue](https://stackoverflow.com/questions/72932940/failed-to-initialize-nvml-unknown-error-in-docker-after-few-hours).
