[GH-ISSUE #7771] CUDA error: unspecified launch failure: current device: 0, in function ggml_backend_cuda_synchronize at ggml-cuda.cu:2508 #51475

Closed
opened 2026-04-28 20:19:06 -05:00 by GiteaMirror · 1 comment

Originally created by @daocoder2 on GitHub (Nov 21, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7771

### What is the issue?

```
2024/11/21 01:22:08 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:3 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:4 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:true OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-21T01:22:08.918Z level=INFO source=images.go:755 msg="total blobs: 50"
time=2024-11-21T01:22:08.919Z level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-21T01:22:08.919Z level=INFO source=routes.go:1240 msg="Listening on [::]:11434 (version 0.4.1)"
time=2024-11-21T01:22:08.919Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11 cuda_v12 cpu]"
time=2024-11-21T01:22:08.919Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-21T01:22:09.494Z level=INFO source=types.go:123 msg="inference compute" id=GPU-cfe28dbd-f61e-acdb-96ed-815caf9afc67 library=cuda variant=v11 compute=8.0 driver=11.7 name="NVIDIA Graphics Device" total="79.3 GiB" available="78.9 GiB"
time=2024-11-21T01:22:09.494Z level=INFO source=types.go:123 msg="inference compute" id=GPU-c5a1deea-f294-f993-7aa4-5386493bad88 library=cuda variant=v11 compute=8.0 driver=11.7 name="NVIDIA Graphics Device" total="79.3 GiB" available="45.0 GiB"
time=2024-11-21T01:22:09.494Z level=INFO source=types.go:123 msg="inference compute" id=GPU-807da1fa-7fac-08aa-4a8c-7c176f72f13b library=cuda variant=v11 compute=8.0 driver=11.7 name="NVIDIA Graphics Device" total="79.3 GiB" available="19.4 GiB"
time=2024-11-21T01:22:09.494Z level=INFO source=types.go:123 msg="inference compute" id=GPU-d2785990-22de-3488-9102-778351cda270 library=cuda variant=v11 compute=8.0 driver=11.7 name="NVIDIA Graphics Device" total="79.3 GiB" available="19.8 GiB"
time=2024-11-21T01:22:18.349Z level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
time=2024-11-21T01:22:18.821Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7 library=cuda parallel=1 required="62.9 GiB"
time=2024-11-21T01:22:19.246Z level=INFO source=server.go:105 msg="system memory" total="2015.3 GiB" free="1895.5 GiB" free_swap="0 B"
time=2024-11-21T01:22:19.249Z level=INFO source=memory.go:343 msg="offload to cuda" projector.weights="1.9 GiB" projector.graph="2.8 GiB" layers.requested=-1 layers.model=101 layers.offload=101 layers.split=26,25,25,25 memory.available="[78.9 GiB 45.0 GiB 19.8 GiB 19.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="62.9 GiB" memory.required.partial="62.9 GiB" memory.required.kv="1.6 GiB" memory.required.allocations="[19.8 GiB 14.3 GiB 14.4 GiB 14.4 GiB]" memory.weights.total="49.3 GiB" memory.weights.repeating="48.5 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-11-21T01:22:19.250Z level=INFO source=server.go:383 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7 --ctx-size 2048 --batch-size 512 --n-gpu-layers 101 --mmproj /root/.ollama/models/blobs/sha256-6b6c374d159e097509b33e9fda648c178c903959fc0c7dbfae487cc8d958093e --threads 64 --parallel 1 --tensor-split 26,25,25,25 --port 38572"
time=2024-11-21T01:22:19.250Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-21T01:22:19.250Z level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
time=2024-11-21T01:22:19.250Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
time=2024-11-21T01:22:19.259Z level=INFO source=runner.go:863 msg="starting go runner"
time=2024-11-21T01:22:19.259Z level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=64
time=2024-11-21T01:22:19.260Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:38572"
llama_model_loader: loaded meta data with 27 key-value pairs and 984 tensors from /root/.ollama/models/blobs/sha256-da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = mllama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Model
llama_model_loader: - kv   3:                         general.size_label str              = 88B
llama_model_loader: - kv   4:                         mllama.block_count u32              = 100
llama_model_loader: - kv   5:                      mllama.context_length u32              = 131072
llama_model_loader: - kv   6:                    mllama.embedding_length u32              = 8192
llama_model_loader: - kv   7:                 mllama.feed_forward_length u32              = 28672
llama_model_loader: - kv   8:                mllama.attention.head_count u32              = 64
llama_model_loader: - kv   9:             mllama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:                      mllama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  11:    mllama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                          general.file_type u32              = 15
llama_model_loader: - kv  13:                          mllama.vocab_size u32              = 128256
llama_model_loader: - kv  14:                mllama.rope.dimension_count u32              = 128
llama_model_loader: - kv  15:    mllama.attention.cross_attention_layers arr[i32,20]      = [3, 8, 13, 18, 23, 28, 33, 38, 43, 48...
llama_model_loader: - kv  16:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  17:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  18:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  19:                      tokenizer.ggml.tokens arr[str,128257]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  20:                  tokenizer.ggml.token_type arr[i32,128257]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  21:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  23:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  24:            tokenizer.ggml.padding_token_id u32              = 128004
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  282 tensors
llama_model_loader: - type q4_K:  611 tensors
llama_model_loader: - type q6_K:   91 tensors
time=2024-11-21T01:22:19.502Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 257
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = mllama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 8192
llm_load_print_meta: n_layer          = 100
llm_load_print_meta: n_head           = 64
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 28672
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 87.67 B
llm_load_print_meta: model size       = 49.08 GiB (4.81 BPW) 
llm_load_print_meta: general.name     = Model
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: PAD token        = 128004 '<|finetune_right_pad_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab mismatch 128256 !- 128257 ...
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 4 CUDA devices:
  Device 0: NVIDIA Graphics Device, compute capability 8.0, VMM: yes
  Device 1: NVIDIA Graphics Device, compute capability 8.0, VMM: yes
  Device 2: NVIDIA Graphics Device, compute capability 8.0, VMM: yes
  Device 3: NVIDIA Graphics Device, compute capability 8.0, VMM: yes
llm_load_tensors: ggml ctx size =    2.25 MiB
llm_load_tensors: offloading 100 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 101/101 layers to GPU
llm_load_tensors:        CPU buffer size =   563.66 MiB
llm_load_tensors:      CUDA0 buffer size = 12886.45 MiB
llm_load_tensors:      CUDA1 buffer size = 12010.76 MiB
llm_load_tensors:      CUDA2 buffer size = 11953.01 MiB
llm_load_tensors:      CUDA3 buffer size = 12848.06 MiB
time=2024-11-21T01:22:29.980Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server not responding"
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   418.16 MiB
llama_kv_cache_init:      CUDA1 KV buffer size =   410.16 MiB
llama_kv_cache_init:      CUDA2 KV buffer size =   410.16 MiB
llama_kv_cache_init:      CUDA3 KV buffer size =   402.16 MiB
llama_new_context_with_model: KV self size  = 1640.62 MiB, K (f16):  820.31 MiB, V (f16):  820.31 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.52 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model:      CUDA0 compute buffer size =   400.01 MiB
llama_new_context_with_model:      CUDA1 compute buffer size =   400.01 MiB
llama_new_context_with_model:      CUDA2 compute buffer size =   400.01 MiB
llama_new_context_with_model:      CUDA3 compute buffer size =   400.02 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    32.02 MiB
llama_new_context_with_model: graph nodes  = 2566
llama_new_context_with_model: graph splits = 5
mllama_model_load: model name:   Llama-3.2-90B-Vision-Instruct
mllama_model_load: description:  vision encoder for Mllama
mllama_model_load: GGUF version: 3
mllama_model_load: alignment:    32
mllama_model_load: n_tensors:    512
mllama_model_load: n_kv:         17
mllama_model_load: ftype:        f16
mllama_model_load: 
mllama_model_load: vision using CUDA backend
time=2024-11-21T01:22:30.230Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
mllama_model_load: compute allocated memory: 2853.34 MB
time=2024-11-21T01:22:30.982Z level=INFO source=server.go:601 msg="llama runner started in 11.73 seconds"
CUDA error: unspecified launch failure
  current device: 0, in function ggml_backend_cuda_synchronize at ggml-cuda.cu:2508
  cudaStreamSynchronize(cuda_ctx->stream())
ggml-cuda.cu:132: CUDA error
SIGBUS: bus error
PC=0x7fe40040db53 m=12 sigcode=2 addr=0x21a403fcc
signal arrived during cgo execution

goroutine 7 gp=0xc0002ac000 m=12 mp=0xc000200808 [syscall]:
runtime.cgocall(0x561ad5f9be90, 0xc000183b60)
	runtime/cgocall.go:157 +0x4b fp=0xc000183b38 sp=0xc000183b00 pc=0x561ad5d1e3cb
github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7fe394006450, {0x6f, 0x561ad7af2fe0, 0x0, 0x0, 0x561ad7af37f0, 0x561ad7af4000, 0x561ad7af4810, 0x561ad79c42e0, 0x0, ...})
	_cgo_gotypes.go:543 +0x52 fp=0xc000183b60 sp=0xc000183b38 pc=0x561ad5e1b952
github.com/ollama/ollama/llama.(*Context).Decode.func1(0x561ad5f97d4b?, 0x7fe394006450?)
	github.com/ollama/ollama/llama/llama.go:167 +0xd8 fp=0xc000183c80 sp=0xc000183b60 pc=0x561ad5e1de78
github.com/ollama/ollama/llama.(*Context).Decode(0xc0000163c0?, 0x1?)
	github.com/ollama/ollama/llama/llama.go:167 +0x17 fp=0xc000183cc8 sp=0xc000183c80 pc=0x561ad5e1dcd7
main.(*Server).processBatch(0xc0001ce120, 0xc0001cc150, 0xc0001cc1c0)
	github.com/ollama/ollama/llama/runner/runner.go:424 +0x29e fp=0xc000183ed0 sp=0xc000183cc8 pc=0x561ad5f96d7e
main.(*Server).run(0xc0001ce120, {0x561ad62d9a40, 0xc0001a40a0})
	github.com/ollama/ollama/llama/runner/runner.go:338 +0x1a5 fp=0xc000183fb8 sp=0xc000183ed0 pc=0x561ad5f96765
main.main.gowrap2()
	github.com/ollama/ollama/llama/runner/runner.go:901 +0x28 fp=0xc000183fe0 sp=0xc000183fb8 pc=0x561ad5f9aec8
runtime.goexit({})
	runtime/asm_amd64.s:1695 +0x1 fp=0xc000183fe8 sp=0xc000183fe0 pc=0x561ad5d86de1
created by main.main in goroutine 1
	github.com/ollama/ollama/llama/runner/runner.go:901 +0xc2b
```

### OS

Linux

### GPU

Nvidia

### CPU

AMD

### Ollama version

0.4.1

GiteaMirror added the bug label 2026-04-28 20:19:07 -05:00

@dhiltgen commented on GitHub (Nov 21, 2024):

Please upgrade to 0.4.2 which has the fix for this defect.
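For anyone hitting this on a Linux install like the one in the report, a minimal upgrade-and-verify sequence (assuming the standard install-script setup; adjust for package-manager or Docker installs) would be:

```sh
# Re-running the official install script upgrades an existing install in place.
curl -fsSL https://ollama.com/install.sh | sh

# If the script set up a systemd service, make sure the new binary is running.
sudo systemctl restart ollama

# Confirm the server now reports 0.4.2 or newer before retrying the model.
ollama -v
```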

Reference: github-starred/ollama#51475