[GH-ISSUE #5571] CUDA error: unspecified launch failure on inference on Nvidia V100 GPUs #3483

Closed
opened 2026-04-12 14:10:23 -05:00 by GiteaMirror · 7 comments

Originally created by @louisbrulenaudet on GitHub (Jul 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5571

Originally assigned to: @jmorganca on GitHub.

What is the issue?

Hi everyone,

Users of older versions of Ollama have no problems, but with the new version an error appears during inference. It seems to be linked to a failure while copying data between host and device (cudaMemcpyAsync).

I don't know whether the fix lies on Ollama's side or whether the problem comes directly from llama.cpp, but here's the error message:

2024-07-09 14:50:08,792 - logger - INFO - {'command': 'serve'}
2024/07/09 14:50:08 routes.go:1033: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/app/cfvr/lbrulenaudet/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-09T14:50:08.836+02:00 level=INFO source=images.go:751 msg="total blobs: 4"
time=2024-07-09T14:50:08.838+02:00 level=INFO source=images.go:758 msg="total unused blobs removed: 0"
time=2024-07-09T14:50:08.839+02:00 level=INFO source=routes.go:1080 msg="Listening on 127.0.0.1:11434 (version 0.2.1)"
time=2024-07-09T14:50:08.841+02:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3554105619/runners
time=2024-07-09T14:50:12.694+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60101]"
time=2024-07-09T14:50:12.694+02:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-07-09T14:50:12.704+02:00 level=INFO source=gpu.go:534 msg="unable to load cuda driver library" library=/usr/lib/x86_64-linux-gnu/libcuda.so.460.32.03 error="symbol lookup for cuCtxCreate_v3 failed: /usr/lib/x86_64-linux-gnu/libcuda.so.460.32.03: undefined symbol: cuCtxCreate_v3"
time=2024-07-09T14:50:12.706+02:00 level=INFO source=gpu.go:534 msg="unable to load cuda driver library" library=/usr/lib/x86_64-linux-gnu/libcuda.so.340.108 error="symbol lookup for cuDeviceGetUuid failed: /usr/lib/x86_64-linux-gnu/libcuda.so.340.108: undefined symbol: cuDeviceGetUuid"
time=2024-07-09T14:50:13.021+02:00 level=INFO source=types.go:103 msg="inference compute" id=GPU-600ee5b9-f172-c5e8-0e92-334d49fd4276 library=cuda compute=7.0 driver=0.0 name="" total="31.7 GiB" available="31.4 GiB"
time=2024-07-09T14:52:24.970+02:00 level=INFO source=sched.go:738 msg="new model will fit in available VRAM in single GPU, loading" model=/app/cfvr/lbrulenaudet/.ollama/models/blobs/sha256-3de21719a8ffb4f6acc4b636d4ca38d882e0d0aa9a5d417106f985e0e0a4a735 gpu=GPU-600ee5b9-f172-c5e8-0e92-334d49fd4276 parallel=4 available=33765720064 required="13.9 GiB"
time=2024-07-09T14:52:24.971+02:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=28 layers.offload=28 layers.split="" memory.available="[31.4 GiB]" memory.required.full="13.9 GiB" memory.required.partial="13.9 GiB" memory.required.kv="2.1 GiB" memory.required.allocations="[13.9 GiB]" memory.weights.total="12.8 GiB" memory.weights.repeating="12.7 GiB" memory.weights.nonrepeating="164.1 MiB" memory.graph.full="296.0 MiB" memory.graph.partial="391.4 MiB"
time=2024-07-09T14:52:24.972+02:00 level=INFO source=server.go:375 msg="starting llama server" cmd="/tmp/ollama3554105619/runners/cuda_v11/ollama_llama_server --model /app/cfvr/lbrulenaudet/.ollama/models/blobs/sha256-3de21719a8ffb4f6acc4b636d4ca38d882e0d0aa9a5d417106f985e0e0a4a735 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 28 --parallel 4 --port 37081"
time=2024-07-09T14:52:24.973+02:00 level=INFO source=sched.go:474 msg="loaded runners" count=1
time=2024-07-09T14:52:24.973+02:00 level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-07-09T14:52:24.974+02:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
llama_model_loader: loaded meta data with 42 key-value pairs and 377 tensors from /app/cfvr/lbrulenaudet/.ollama/models/blobs/sha256-3de21719a8ffb4f6acc4b636d4ca38d882e0d0aa9a5d417106f985e0e0a4a735 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.name str              = DeepSeek-Coder-V2-Lite-Instruct
llama_model_loader: - kv   2:                      deepseek2.block_count u32              = 27
llama_model_loader: - kv   3:                   deepseek2.context_length u32              = 163840
llama_model_loader: - kv   4:                 deepseek2.embedding_length u32              = 2048
llama_model_loader: - kv   5:              deepseek2.feed_forward_length u32              = 10944
llama_model_loader: - kv   6:             deepseek2.attention.head_count u32              = 16
llama_model_loader: - kv   7:          deepseek2.attention.head_count_kv u32              = 16
llama_model_loader: - kv   8:                   deepseek2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv   9: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                deepseek2.expert_used_count u32              = 6
llama_model_loader: - kv  11:                          general.file_type u32              = 17
llama_model_loader: - kv  12:        deepseek2.leading_dense_block_count u32              = 1
llama_model_loader: - kv  13:                       deepseek2.vocab_size u32              = 102400
llama_model_loader: - kv  14:           deepseek2.attention.kv_lora_rank u32              = 512
llama_model_loader: - kv  15:             deepseek2.attention.key_length u32              = 192
llama_model_loader: - kv  16:           deepseek2.attention.value_length u32              = 128
llama_model_loader: - kv  17:       deepseek2.expert_feed_forward_length u32              = 1408
llama_model_loader: - kv  18:                     deepseek2.expert_count u32              = 64
llama_model_loader: - kv  19:              deepseek2.expert_shared_count u32              = 2
llama_model_loader: - kv  20:             deepseek2.expert_weights_scale f32              = 1.000000
llama_model_loader: - kv  21:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  22:                deepseek2.rope.scaling.type str              = yarn
llama_model_loader: - kv  23:              deepseek2.rope.scaling.factor f32              = 40.000000
llama_model_loader: - kv  24: deepseek2.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  25: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.070700
llama_model_loader: - kv  26:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  27:                         tokenizer.ggml.pre str              = deepseek-llm
llama_model_loader: - kv  28:                      tokenizer.ggml.tokens arr[str,102400]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  29:                  tokenizer.ggml.token_type arr[i32,102400]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  30:                      tokenizer.ggml.merges arr[str,99757]   = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e...
llama_model_loader: - kv  31:                tokenizer.ggml.bos_token_id u32              = 100000
llama_model_loader: - kv  32:                tokenizer.ggml.eos_token_id u32              = 100001
llama_model_loader: - kv  33:            tokenizer.ggml.padding_token_id u32              = 100001
llama_model_loader: - kv  34:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  35:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  36:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  37:               general.quantization_version u32              = 2
llama_model_loader: - kv  38:                      quantize.imatrix.file str              = /models/DeepSeek-Coder-V2-Lite-Instru...
llama_model_loader: - kv  39:                   quantize.imatrix.dataset str              = /training_data/calibration_datav3.txt
llama_model_loader: - kv  40:             quantize.imatrix.entries_count i32              = 293
llama_model_loader: - kv  41:              quantize.imatrix.chunks_count i32              = 139
llama_model_loader: - type  f32:  108 tensors
llama_model_loader: - type q5_1:   14 tensors
llama_model_loader: - type q8_0:   13 tensors
llama_model_loader: - type q5_K:  229 tensors
llama_model_loader: - type q6_K:   13 tensors
time=2024-07-09T14:52:25.227+02:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 2400
llm_load_vocab: token to piece cache size = 0.6661 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = deepseek2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 102400
llm_load_print_meta: n_merges         = 99757
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 163840
llm_load_print_meta: n_embd           = 2048
llm_load_print_meta: n_layer          = 27
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 16
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 192
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 3072
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 10944
llm_load_print_meta: n_expert         = 64
llm_load_print_meta: n_expert_used    = 6
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = yarn
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 0.025
llm_load_print_meta: n_ctx_orig_yarn  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 16B
llm_load_print_meta: model ftype      = Q5_K - Medium
llm_load_print_meta: model params     = 15.71 B
llm_load_print_meta: model size       = 11.03 GiB (6.03 BPW) 
llm_load_print_meta: general.name     = DeepSeek-Coder-V2-Lite-Instruct
llm_load_print_meta: BOS token        = 100000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 100001 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token        = 100001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token         = 126 'Ä'
llm_load_print_meta: max token length = 256
llm_load_print_meta: n_layer_dense_lead   = 1
llm_load_print_meta: n_lora_q             = 0
llm_load_print_meta: n_lora_kv            = 512
llm_load_print_meta: n_ff_exp             = 1408
llm_load_print_meta: n_expert_shared      = 2
llm_load_print_meta: expert_weights_scale = 1.0
llm_load_print_meta: rope_yarn_log_mul    = 0.0707
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    yes
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: Tesla V100-PCIE-32GB, compute capability 7.0, VMM: yes
llm_load_tensors: ggml ctx size =    0.32 MiB
time=2024-07-09T14:52:26.684+02:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server not responding"
llm_load_tensors: offloading 27 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 28/28 layers to GPU
llm_load_tensors:        CPU buffer size =   137.50 MiB
llm_load_tensors:      CUDA0 buffer size = 11160.99 MiB
time=2024-07-09T14:52:28.291+02:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
time=2024-07-09T14:52:32.266+02:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server not responding"
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 0.025
llama_kv_cache_init:      CUDA0 KV buffer size =  2160.00 MiB
llama_new_context_with_model: KV self size  = 2160.00 MiB, K (f16): 1296.00 MiB, V (f16):  864.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     1.59 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   296.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    20.01 MiB
llama_new_context_with_model: graph nodes  = 1924
llama_new_context_with_model: graph splits = 2
time=2024-07-09T14:52:32.970+02:00 level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
time=2024-07-09T14:52:34.481+02:00 level=INFO source=server.go:609 msg="llama runner started in 9.51 seconds"
CUDA error: unspecified launch failure
  current device: 0, in function ggml_cuda_mul_mat_id at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:2010
  cudaMemcpyAsync(ids_host.data(), ids_dev, ggml_nbytes(ids), cudaMemcpyDeviceToHost, stream)
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:100: !"CUDA error"
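
For context, an "unspecified launch failure" reported at a cudaMemcpyAsync is usually a deferred error: a kernel launched earlier on the same device faulted, and the sticky error only surfaces at the next CUDA call that touches the device. The sketch below is purely illustrative (it is not Ollama's or llama.cpp's actual code; the kernel and names are made up) and shows the same pattern as ggml_cuda_mul_mat_id, where an async device-to-host copy of the expert ids is the first call to observe the failure:

```
// Illustrative only: a kernel launch followed by an async device-to-host copy,
// mirroring where ggml_cuda_mul_mat_id reports the error. If the kernel faults,
// the sticky "unspecified launch failure" is returned by a later call such as
// the cudaMemcpyAsync / stream sync below, not by the launch itself.
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cuda_runtime.h>

#define CUDA_CHECK(call)                                                      \
    do {                                                                      \
        cudaError_t err_ = (call);                                            \
        if (err_ != cudaSuccess) {                                            \
            fprintf(stderr, "CUDA error: %s at %s:%d\n",                      \
                    cudaGetErrorString(err_), __FILE__, __LINE__);            \
            exit(1);                                                          \
        }                                                                     \
    } while (0)

// Hypothetical stand-in for the expert-id computation done before mul_mat_id.
__global__ void compute_ids(int *ids, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) ids[i] = i % 6;   // e.g. route each token to one of 6 experts
}

int main() {
    const int n = 64;
    int *ids_dev = nullptr;
    std::vector<int> ids_host(n);
    cudaStream_t stream;

    CUDA_CHECK(cudaStreamCreate(&stream));
    CUDA_CHECK(cudaMalloc(&ids_dev, n * sizeof(int)));

    compute_ids<<<1, n, 0, stream>>>(ids_dev, n);
    CUDA_CHECK(cudaGetLastError());  // catches a bad launch configuration immediately

    // A fault during kernel execution (e.g. an illegal memory access) is instead
    // reported here, which matches the failing call shown in the log above.
    CUDA_CHECK(cudaMemcpyAsync(ids_host.data(), ids_dev, n * sizeof(int),
                               cudaMemcpyDeviceToHost, stream));
    CUDA_CHECK(cudaStreamSynchronize(stream));

    CUDA_CHECK(cudaFree(ids_dev));
    CUDA_CHECK(cudaStreamDestroy(stream));
    printf("ids[0..3] = %d %d %d %d\n", ids_host[0], ids_host[1], ids_host[2], ids_host[3]);
    return 0;
}
```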

This is the output of nvidia-smi:

NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2
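
The server log above also shows Ollama failing to resolve cuCtxCreate_v3 from libcuda.so.460.32.03, so this 460-series driver (CUDA 11.2) predates some of the driver-API entry points the new runner probes for. A small diagnostic sketch, not part of Ollama, can confirm which CUDA version the installed driver actually supports versus the runtime a binary was built against:

```
// Hypothetical diagnostic: print the CUDA version supported by the installed
// driver and the CUDA runtime version this binary links against. With driver
// 460.32.03 the first value should come back as 11.2, matching nvidia-smi.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driver_ver = 0, runtime_ver = 0;
    cudaDriverGetVersion(&driver_ver);    // CUDA version the installed driver supports
    cudaRuntimeGetVersion(&runtime_ver);  // CUDA runtime version this binary was built with
    printf("driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
           driver_ver / 1000, (driver_ver % 1000) / 10,
           runtime_ver / 1000, (runtime_ver % 1000) / 10);
    return 0;
}
```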

Thank you in advance for your reply, and I look forward to hearing from you.

Yours sincerely
Louis Brulé Naudet

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.2.1

GiteaMirror added the gpu, nvidia, bug labels 2026-04-12 14:10:23 -05:00

@jmorganca commented on GitHub (Jul 10, 2024):

Hi there. Sorry you hit this. Will work on hunting this down. In the meantime, do you know if it would be possible to try upgrading your nvidia drivers?


@louisbrulenaudet commented on GitHub (Jul 10, 2024):

Hi @jmorganca ,

Thank you for your message. Unfortunately, in the controlled environment at my workplace it is not possible for me to update the Nvidia drivers.

I remain at your disposal for anything I am permitted to do.


@Zhangy-ly commented on GitHub (Jul 10, 2024):

I encountered the same problem. I could only solve it by uninstalling Ollama and installing an older version.

curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.1.47 sh

My device:
Tesla V100
Driver Version: 525
CUDA Version: 12.1

Try using an older version temporarily to work around this problem.


@ChenZheChina commented on GitHub (Jul 10, 2024):

I have the same issue. Solved by rolling back to Ollama 0.1.48.

GPU: Tesla V100-PCIE-32GB
Driver Version: 535
CUDA Version: 12.2
Model: qwen2

ERROR: CUDA kernel mul_mat_q has no device code compatible with CUDA arch 700. ggml-cuda.cu was compiled for: __CUDA_ARCH_LIST__
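
That assertion text says the mul_mat_q kernels in the shipped runner contain no device code for CUDA arch 700, i.e. compute capability 7.0 (sm_70), which is what a V100 reports. A small, hypothetical diagnostic (not llama.cpp code) shows how such an architecture mismatch looks at runtime:

```
// Hypothetical diagnostic: report the GPU's compute capability and check
// whether this binary carries device code that can run on it. A binary built
// without sm_70/compute_70 fails the probe launch on a V100 with
// cudaErrorNoKernelImageForDevice, analogous to the mul_mat_q error above.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void probe_kernel() {}

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "no CUDA device found\n");
        return 1;
    }
    // A Tesla V100 reports compute capability 7.0 here.
    printf("device 0: %s, compute capability %d.%d\n", prop.name, prop.major, prop.minor);

    probe_kernel<<<1, 1>>>();
    cudaError_t err = cudaGetLastError();
    if (err == cudaErrorNoKernelImageForDevice) {
        printf("no device code in this binary for arch %d.%d\n", prop.major, prop.minor);
    } else if (err != cudaSuccess) {
        printf("launch error: %s\n", cudaGetErrorString(err));
    } else {
        cudaDeviceSynchronize();
        printf("kernel image is compatible with this device\n");
    }
    return 0;
}
```

Building the sketch with only a newer architecture (for example `nvcc -arch=sm_80 probe.cu`, where probe.cu is the hypothetical file above) reproduces the "no kernel image" case on a V100, while `-arch=sm_70` lets it run, so the fix on Ollama's side presumably needs the V100's architecture included in its CUDA build targets.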


@box9527 commented on GitHub (Jul 10, 2024):

I have the same problem too, and I can confirm that rolling back to 0.1.48 resolves it.

GPU: Tesla V100-PCIE-32GB
Driver Version: 450.248.02
CUDA Version: 11.0

Docker info:
Server version: 19.03.12


@ER-EPR commented on GitHub (Jul 10, 2024):

Installed through the Open WebUI Docker image; rolling back to the last image with v0.1.47 works.


@jmorganca commented on GitHub (Jul 10, 2024):

Fix is incoming - sorry all!

Reference: github-starred/ollama#3483