[GH-ISSUE #9031] H200s GPU support for Ollama #67931

Closed
opened 2026-05-04 12:04:59 -05:00 by GiteaMirror · 15 comments

Originally created by @rajeshkumar-n on GitHub (Feb 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9031

As per https://github.com/ollama/ollama/blob/main/docs/gpu.md, GPUs with Compute Capability <= 9.0 are listed, and as of 12 Feb 2025 the H100 is the highest listed. I was trying to host on-premise LLMs on H200s. When checked, the H200s report 9.0 as their compute capability:

```
nvidia-smi -i 0,1 --query-gpu=compute_cap --format=csv
compute_cap
9.0
9.0
```

However, Ollama failed to recognize the GPUs.

```
time=2025-02-11T03:16:37.327Z level=DEBUG source=gpu.go:592 msg="Unable to load cudart library /usr/lib/ollama/libcudart.so.11.3.109: cudart init failure: 802" cudaSetDevice err: 802
time=2025-02-11T03:17:07.333Z level=DEBUG source=gpu.go:592 msg="Unable to load cudart library /usr/lib/ollama/libcudart.so.12.4.127: cudart init failure: 802"
time=2025-02-11T03:17:07.333Z level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
time=2025-02-11T03:17:07.333Z level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
time=2025-02-11T03:17:07.333Z level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="2015.5 GiB" available="1999.5 GiB"
```

```
nvidia-smi
Tue Feb 11 20:54:41 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.86.15              Driver Version: 570.86.15      CUDA Version: 12.8     |
```

May I request Ollama to validate and support H200s?
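
For reference, the debug log above was gathered with debug logging enabled; reproducing that kind of capture looks roughly like this (OLLAMA_DEBUG is Ollama's documented debug switch; the rest is standard NVIDIA tooling):

```
OLLAMA_DEBUG=1 ollama serve 2> ollama-debug.log   # log every GPU discovery attempt
nvidia-smi                                        # driver version and GPU visibility
lsmod | grep nvidia                               # confirm nvidia and nvidia_uvm are loaded
```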

GiteaMirror added the feature request label 2026-05-04 12:04:59 -05:00

@rick-github commented on GitHub (Feb 12, 2025):

The error message seems to indicate a driver problem.

cudaErrorSystemNotReady = 802

  • This error indicates that the system is not yet ready to start any CUDA work. To continue using CUDA, verify the system configuration is in a valid state and all required driver daemons are actively running. More information about this error can be found in the system specific user guide.

Have you checked the output of dmesg? Have you tried unloading and reloading nvidia_uvm?
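
Concretely, those checks might look like this (a minimal sketch; the fabric manager check is included because the 802 description above mentions required driver daemons, and on NVSwitch systems that daemon is nvidia-fabricmanager):

```
dmesg | grep -iE 'nvrm|nvidia'                      # kernel-side driver errors
sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm   # reload the UVM module, then restart ollama
systemctl status nvidia-fabricmanager               # 802 often means a driver daemon like this isn't running
```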


@jmorzeck commented on GitHub (Feb 12, 2025):

I am facing the same issue with my H200 server.

OS is Ubuntu 24.04.1 LTS
ollama version: latest docker image (0.5.7)
NVIDIA-SMI 560.35.03
Driver Version: 560.35.03
CUDA Version: 12.6

I did try unloading and reloading nvidia_uvm.

Would be great to have support for H200!


@rick-github commented on GitHub (Feb 12, 2025):

H200 works fine.

2025/02/12 21:08:37 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-12T21:08:37.113Z level=INFO source=images.go:432 msg="total blobs: 5"
time=2025-02-12T21:08:37.113Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-02-12T21:08:37.114Z level=INFO source=routes.go:1238 msg="Listening on [::]:11435 (version 0.5.7-0-ga420a45-dirty)"
time=2025-02-12T21:08:37.114Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx]"
time=2025-02-12T21:08:37.114Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-12T21:08:37.423Z level=INFO source=types.go:131 msg="inference compute" id=GPU-0c7ff836-694b-ed5e-91e0-a025d4450e39 library=cuda variant=v12 compute=9.0 driver=12.4 name="NVIDIA H200" total="139.7 GiB" available="139.2 GiB"
[GIN] 2025/02/12 - 21:08:41 | 200 |      39.326µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/12 - 21:08:41 | 200 |     349.607µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/02/12 - 21:08:53 | 200 |      25.193µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/12 - 21:08:54 | 200 |   14.973435ms |       127.0.0.1 | POST     "/api/show"
time=2025-02-12T21:08:54.314Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515 gpu=GPU-0c7ff836-694b-ed5e-91e0-a025d4450e39 parallel=4 available=149471166464 required="1.2 GiB"
time=2025-02-12T21:08:54.582Z level=INFO source=server.go:104 msg="system memory" total="2015.6 GiB" free="1912.8 GiB" free_swap="0 B"
time=2025-02-12T21:08:54.583Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=25 layers.offload=25 layers.split="" memory.available="[139.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.2 GiB" memory.required.partial="1.2 GiB" memory.required.kv="96.0 MiB" memory.required.allocations="[1.2 GiB]" memory.weights.total="331.8 MiB" memory.weights.repeating="193.8 MiB" memory.weights.nonrepeating="137.9 MiB" memory.graph.full="298.5 MiB" memory.graph.partial="405.0 MiB"
time=2025-02-12T21:08:54.583Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515 --ctx-size 8192 --batch-size 512 --n-gpu-layers 25 --threads 96 --parallel 4 --port 41579"
time=2025-02-12T21:08:54.583Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-12T21:08:54.584Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-12T21:08:54.584Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-12T21:08:54.638Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA H200, compute capability 9.0, VMM: yes
time=2025-02-12T21:08:54.663Z level=INFO source=runner.go:937 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=96
time=2025-02-12T21:08:54.663Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:41579"
llama_load_model_from_file: using device CUDA0 (NVIDIA H200) - 142546 MiB free
time=2025-02-12T21:08:54.835Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 34 key-value pairs and 290 tensors from /root/.ollama/models/blobs/sha256-c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 0.5B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5
llama_model_loader: - kv   5:                         general.size_label str              = 0.5B
llama_model_loader: - kv   6:                            general.license str              = apache-2.0
llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-0...
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 0.5B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-0.5B
llama_model_loader: - kv  12:                               general.tags arr[str,2]       = ["chat", "text-generation"]
llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  14:                          qwen2.block_count u32              = 24
llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 896
llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 4864
llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 14
llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 2
llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  22:                          general.file_type u32              = 15
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q5_0:  132 tensors
llama_model_loader: - type q8_0:   13 tensors
llama_model_loader: - type q4_K:   12 tensors
llama_model_loader: - type q6_K:   12 tensors
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 151936
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 896
llm_load_print_meta: n_layer          = 24
llm_load_print_meta: n_head           = 14
llm_load_print_meta: n_head_kv        = 2
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 64
llm_load_print_meta: n_embd_head_v    = 64
llm_load_print_meta: n_gqa            = 7
llm_load_print_meta: n_embd_k_gqa     = 128
llm_load_print_meta: n_embd_v_gqa     = 128
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 4864
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 1B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 494.03 M
llm_load_print_meta: model size       = 373.71 MiB (6.35 BPW)
llm_load_print_meta: general.name     = Qwen2.5 0.5B Instruct
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token        = 151645 '<|im_end|>'
llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 24 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 25/25 layers to GPU
llm_load_tensors:   CPU_Mapped model buffer size =   137.94 MiB
llm_load_tensors:        CUDA0 model buffer size =   373.73 MiB
llama_new_context_with_model: n_seq_max     = 4
llama_new_context_with_model: n_ctx         = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 2048
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 1000000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 24, can_shift = 1
llama_kv_cache_init:      CUDA0 KV buffer size =    96.00 MiB
llama_new_context_with_model: KV self size  =   96.00 MiB, K (f16):   48.00 MiB, V (f16):   48.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     2.33 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   298.50 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    17.76 MiB
llama_new_context_with_model: graph nodes  = 846
llama_new_context_with_model: graph splits = 2
time=2025-02-12T21:08:55.587Z level=INFO source=server.go:594 msg="llama runner started in 1.00 seconds"
llama_model_loader: loaded meta data with 34 key-value pairs and 290 tensors from /root/.ollama/models/blobs/sha256-c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 0.5B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5
llama_model_loader: - kv   5:                         general.size_label str              = 0.5B
llama_model_loader: - kv   6:                            general.license str              = apache-2.0
llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-0...
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 0.5B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-0.5B
llama_model_loader: - kv  12:                               general.tags arr[str,2]       = ["chat", "text-generation"]
llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  14:                          qwen2.block_count u32              = 24
llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 896
llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 4864
llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 14
llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 2
llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  22:                          general.file_type u32              = 15
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q5_0:  132 tensors
llama_model_loader: - type q8_0:   13 tensors
llama_model_loader: - type q4_K:   12 tensors
llama_model_loader: - type q6_K:   12 tensors
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 151936
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 1
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = all F32
llm_load_print_meta: model params     = 494.03 M
llm_load_print_meta: model size       = 373.71 MiB (6.35 BPW)
llm_load_print_meta: general.name     = Qwen2.5 0.5B Instruct
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token        = 151645 '<|im_end|>'
llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/12 - 21:08:56 | 200 |   2.18302405s |       127.0.0.1 | POST     "/api/generate"
# ollama list
NAME            ID              SIZE      MODIFIED
qwen2.5:0.5b    a8b0c5157701    397 MB    About a minute ago
# ollama run qwen2.5:0.5b hello
Hello! How can I assist you today?

@rajeshkumar-n commented on GitHub (Feb 13, 2025):

> I am facing the same issue with my H200 server.
>
> OS is Ubuntu 24.04.1 LTS
> ollama version: latest docker image (0.5.7)
> NVIDIA-SMI 560.35.03
> Driver Version: 560.35.03
> CUDA Version: 12.6
>
> I did try unloading and reloading nvidia_uvm.
>
> Would be great to have support for H200!

As per [this](https://www.nvidia.com/en-us/drivers/results/), the recommended CUDA driver for the H200 is 12.7 and above.
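
To check where a given box stands against that recommendation, the driver's maximum supported CUDA version is shown in the nvidia-smi header (driver_version is a standard nvidia-smi query field):

```
nvidia-smi --query-gpu=driver_version --format=csv,noheader   # installed driver, e.g. 560.35.03
nvidia-smi | head -n 4                                        # header shows "CUDA Version: <max supported by driver>"
```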


@rajeshkumar-n commented on GitHub (Feb 13, 2025):

> time=2025-02-12T21:08:37.423Z level=INFO source=types.go:131 msg="inference compute" id=GPU-0c7ff836-694b-ed5e-91e0-a025d4450e39 library=cuda variant=v12 compute=9.0 driver=12.4 name="NVIDIA H200" total="139.7 GiB" available="139.2 GiB"

Happy to see this working for you, @rick-github. I could get the CUDA driver details from the logs. Do you mind sharing your OS, NVCC, and Docker version details?

I am on Ubuntu 22.04, CUDA 12.8.

```
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Jan_15_19:20:09_PST_2025
Cuda compilation tools, release 12.8, V12.8.61
Build cuda_12.8.r12.8/compiler.35404655_0
```

```
docker version
Client: Docker Engine - Community
 Version:           27.5.1
 API version:       1.47
 Go version:        go1.22.11
 Git commit:        9f9e405
 Built:             Wed Jan 22 13:41:31 2025
 OS/Arch:           linux/amd64
 Context:           default
```

```
cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.5 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.5 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
```
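
Since the failing setups here run Ollama from the Docker image, a standard NVIDIA Container Toolkit smoke test is worth running before blaming Ollama (this is the usual GPU passthrough check):

```
docker run --rm --gpus all ubuntu nvidia-smi   # should print the same table as on the host
```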


@jmorzeck commented on GitHub (Feb 13, 2025):

> > time=2025-02-12T21:08:37.423Z level=INFO source=types.go:131 msg="inference compute" id=GPU-0c7ff836-694b-ed5e-91e0-a025d4450e39 library=cuda variant=v12 compute=9.0 driver=12.4 name="NVIDIA H200" total="139.7 GiB" available="139.2 GiB"
>
> Happy to see this working for you, @rick-github. I could get the CUDA driver details from the logs. Do you mind sharing your OS, NVCC, and Docker version details?
>
> I am on Ubuntu 22.04, CUDA 12.8.

So that means @rick-github is actually on CUDA driver version 12.4 instead of something like 12.7 or higher?
Would be nice to see your configuration here!


@unicorn667 commented on GitHub (Feb 13, 2025):

My H200 system works with Ollama 0.5.7.

AlmaLinux 9.4

![Image](https://github.com/user-attachments/assets/0ce2c022-fc82-4e43-9b4f-d31445eb96ed)


@jmorzeck commented on GitHub (Feb 13, 2025):

It works on my system now, too.

I removed all NVIDIA- and CUDA-related packages and installed cuda-toolkit-12.8.
This got me to driver version 570.86.10, which I then updated to 570.86.15.
Next, I updated nvidia-fabricmanager to nvidia-fabricmanager-570 (check that it is running; otherwise run systemctl start nvidia-fabricmanager).
Finally, I rebooted the whole system, and it works as expected.
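
On Ubuntu, a sketch of those steps might look like this (package names are assumptions based on NVIDIA's CUDA network repository; adjust for your distro and repo setup):

```
sudo apt-get purge '^nvidia-.*' '^cuda-.*'         # remove existing NVIDIA/CUDA packages
sudo apt-get install cuda-toolkit-12-8             # CUDA 12.8 toolkit (a 570-series driver came with it here)
sudo apt-get install nvidia-fabricmanager-570      # required on HGX/NVSwitch systems
sudo systemctl enable --now nvidia-fabricmanager   # make sure the daemon is running
sudo reboot
```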

nvidia-smi:

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.86.15              Driver Version: 570.86.15      CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA H200                    Off |   00000000:19:00.0 Off |                    0 |
| N/A   31C    P0            120W /  700W |     632MiB / 143771MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA H200                    Off |   00000000:3B:00.0 Off |                    0 |
| N/A   29C    P0            116W /  700W |    6676MiB / 143771MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   2  NVIDIA H200                    Off |   00000000:4C:00.0 Off |                    0 |
| N/A   25C    P0             79W /  700W |       4MiB / 143771MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   3  NVIDIA H200                    Off |   00000000:5D:00.0 Off |                    0 |
| N/A   27C    P0             75W /  700W |       4MiB / 143771MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   4  NVIDIA H200                    Off |   00000000:9B:00.0 Off |                    0 |
| N/A   29C    P0             77W /  700W |       1MiB / 143771MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   5  NVIDIA H200                    Off |   00000000:BB:00.0 Off |                    0 |
| N/A   27C    P0             76W /  700W |       1MiB / 143771MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   6  NVIDIA H200                    Off |   00000000:CB:00.0 Off |                    0 |
| N/A   28C    P0             78W /  700W |       1MiB / 143771MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   7  NVIDIA H200                    Off |   00000000:DB:00.0 Off |                    0 |
| N/A   27C    P0             78W /  700W |       1MiB / 143771MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A           13245      C   /usr/local/bin/python                   622MiB |
|    1   N/A  N/A           14137      C   ...a_v12_avx/ollama_llama_server       6666MiB |
+-----------------------------------------------------------------------------------------+

nvcc --version:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Jan_15_19:20:09_PST_2025
Cuda compilation tools, release 12.8, V12.8.61
Build cuda_12.8.r12.8/compiler.35404655_0

I hope that helps anyone having the same issues!
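For anyone who wants to replay those steps, here is a minimal sketch of the sequence, assuming Ubuntu with NVIDIA's CUDA apt repository already configured (note the apt package is spelled `cuda-toolkit-12-8`; the driver and fabric manager package names are taken from this thread and may differ on your distribution):

```
# WARNING: destructive -- removes all existing NVIDIA/CUDA packages
sudo apt-get remove --purge -y 'nvidia-*' 'cuda-*' 'libnvidia-*'
sudo apt-get autoremove -y

# Install the CUDA 12.8 toolkit (in this thread, this step ended up
# installing a 570-series driver)
sudo apt-get update
sudo apt-get install -y cuda-toolkit-12-8

# NVSwitch-based systems (HGX H100/H200) also need the matching fabric manager
sudo apt-get install -y nvidia-fabricmanager-570
sudo systemctl enable --now nvidia-fabricmanager

# Reboot so the new kernel module is loaded
sudo reboot
```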

Author
Owner

@goarem commented on GitHub (Feb 13, 2025):

Does anyone know if B200s are supported?

https://www.nvidia.com/en-us/data-center/dgx-b200/

Author
Owner

@rajeshkumar-n commented on GitHub (Feb 19, 2025):

It works on my system now, too.

I removed all nvidia and cuda related packages, and installed cuda-toolkit-12.8. This led me to driver version 570.86.10, which I then updated to 570.86.15. I also updated the nvidia-fabricmanager to nvidia-fabricmanager-570 (check that it runs; otherwise do systemctl start nvidia-fabricmanager). Finally, I rebooted my whole system and it works as expected.

I hope that helps anyone having the same issues!

Hi @jmorzeck - I have the same NVIDIA tool versions as yours (except that I am on Ubuntu 22.04; the Ollama Docker image is based on Ubuntu 22.04 in any case). My Ollama container still fails to use the GPUs.

time=2025-02-19T04:43:53.194Z level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[/usr/lib/x86_64-linux-gnu/libcuda.so.570.86.15 /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16]"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.15
dlsym: cuInit - 0x7f928be76e00
dlsym: cuDriverGetVersion - 0x7f928be76e20
dlsym: cuDeviceGetCount - 0x7f928be76e60
dlsym: cuDeviceGet - 0x7f928be76e40
dlsym: cuDeviceGetAttribute - 0x7f928be76f40
dlsym: cuDeviceGetUuid - 0x7f928be76ea0
dlsym: cuDeviceGetName - 0x7f928be76e80
dlsym: cuCtxCreate_v3 - 0x7f928be77120
dlsym: cuMemGetInfo_v2 - 0x7f928be778a0
dlsym: cuCtxDestroy - 0x7f928bed59f0
calling cuInit
cuInit err: 802
time=2025-02-19T04:44:23.228Z level=INFO source=gpu.go:612 msg="Unable to load cudart library /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.15: cuda driver library init failure: 802"

cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module 570.86.15 Thu Jan 23 23:23:10 UTC 2025
GCC version: gcc version 11.4.0 (Ubuntu 11.4.0-1ubuntu1~22.04)

dpkg -l | grep nvidia-container-toolkit
ii nvidia-container-toolkit 1.17.4-1 amd64 NVIDIA Container toolkit
ii nvidia-container-toolkit-base 1.17.4-1 amd64 NVIDIA Container Toolkit Base

nvidia-smi
Tue Feb 18 20:28:24 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.86.15 Driver Version: 570.86.15 CUDA Version: 12.8 |
|-----------------------------------------+------------------------+----------------------+

Can you please share the output of `cat /proc/driver/nvidia/version` and `dpkg -l | grep nvidia-container-toolkit`?
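For reference, a typical way to start the container with GPU access and to check driver visibility from inside it (the `docker run` line is the standard invocation from Ollama's Docker instructions; the container name `ollama` is just the usual default):

```
# Start Ollama with access to all GPUs
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
    --name ollama ollama/ollama

# Check that the driver and GPUs are visible inside the container
docker exec ollama nvidia-smi
docker exec ollama cat /proc/driver/nvidia/version
```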

Author
Owner

@rick-github commented on GitHub (Feb 19, 2025):

cudaErrorSystemNotReady = 802

  • This error indicates that the system is not yet ready to start any CUDA work. To continue using CUDA, verify the system configuration is in a valid state and all required driver daemons are actively running. More information about this error can be found in the system specific user guide.
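On NVSwitch-based machines such as an HGX H200, the daemon in question is usually the fabric manager. A quick sanity check (assuming a recent driver; the `Fabric` section only appears on fabric-attached GPUs):

```
# State should read "Completed" and Status "Success" once the fabric is up
nvidia-smi -q | grep -A 3 -i fabric
```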
Author
Owner

@jmorzeck commented on GitHub (Feb 19, 2025):

@rajeshkumar-n here you go:

cat /proc/driver/nvidia/version

NVRM version: NVIDIA UNIX x86_64 Kernel Module  570.86.15  Thu Jan 23 23:23:10 UTC 2025
GCC version:  gcc version 13.3.0 (Ubuntu 13.3.0-6ubuntu2~24.04)

and

dpkg -l | grep nvidia-container-toolkit

ii  nvidia-container-toolkit         1.17.4-1    amd64    NVIDIA Container toolkit
ii  nvidia-container-toolkit-base    1.17.4-1    amd64    NVIDIA Container Toolkit Base
Author
Owner

@rajeshkumar-n commented on GitHub (Feb 19, 2025):

@rajeshkumar-n here you go:

cat /proc/driver/nvidia/version

NVRM version: NVIDIA UNIX x86_64 Kernel Module  570.86.15  Thu Jan 23 23:23:10 UTC 2025
GCC version:  gcc version 13.3.0 (Ubuntu 13.3.0-6ubuntu2~24.04)

Thanks for your reply, @jmorzeck. Do you get the same result when you run this command inside the Ollama docker container as well?

Author
Owner

@rajeshkumar-n commented on GitHub (Feb 19, 2025):

@rajeshkumar-n here you go:

cat /proc/driver/nvidia/version

NVRM version: NVIDIA UNIX x86_64 Kernel Module  570.86.15  Thu Jan 23 23:23:10 UTC 2025
GCC version:  gcc version 13.3.0 (Ubuntu 13.3.0-6ubuntu2~24.04)

Thanks for your reply, @jmorzeck. Do you get the same result when you run this command inside the Ollama docker container as well?

I also updated the nvidia-fabricmanager to nvidia-fabricmanager-570 (check that it runs; otherwise do systemctl start nvidia-fabricmanager).

It's working finally after installing and starting nvidia-fabricmanager-570. Many thanks @jmorzeck
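For completeness, the two commands that resolved it here (package name as used earlier in this thread; the systemd unit is `nvidia-fabricmanager`):

```
sudo apt-get install -y nvidia-fabricmanager-570
sudo systemctl enable --now nvidia-fabricmanager
```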

Author
Owner

@rajeshkumar-n commented on GitHub (Feb 19, 2025):

cudaErrorSystemNotReady = 802

  • This error indicates that the system is not yet ready to start any CUDA work. To continue using CUDA, verify the system configuration is in a valid state and all required driver daemons are actively running. More information about this error can be found in the system specific user guide.

It's working finally after installing and starting nvidia-fabricmanager-570. Is there any reference guide to check whether the system is ready and all required driver daemons are running?
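There doesn't seem to be a single official checklist, but a rough readiness check covering the daemons mentioned in this thread might look like the sketch below (a hypothetical helper, not an NVIDIA tool; the service names are the common ones on Ubuntu, and not every system runs all of them):

```
#!/usr/bin/env bash
# Rough CUDA-readiness checklist for NVSwitch systems.
for svc in nvidia-persistenced nvidia-fabricmanager; do
    systemctl is-active --quiet "$svc" \
        && echo "$svc: running" \
        || echo "$svc: NOT running"
done

# Kernel module loaded and driver version
cat /proc/driver/nvidia/version

# GPUs enumerate and report their compute capability
nvidia-smi --query-gpu=name,compute_cap --format=csv
```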
