[GH-ISSUE #13363] Cannot run any model bigger than 16G with V100-SXM2-16G and other GPUs #8825

Open
opened 2026-04-12 21:36:30 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @acu715 on GitHub (Dec 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13363

What is the issue?

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 581.57 Driver Version: 581.57 CUDA Version: 13.0 |
+-----------------------------------------+------------------------+----------------------+
| GPU Name Driver-Model | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 Tesla V100-SXM2-16GB TCC | 00000000:02:00.0 Off | Off |
| N/A 38C P0 22W / 300W | 10MiB / 16384MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce GTX 1070 WDDM | 00000000:03:00.0 On | N/A |
| 0% 42C P8 13W / 180W | 667MiB / 8192MiB | 8% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 2 Tesla P4 TCC | 00000000:06:00.0 Off | Off |
| N/A 31C P8 6W / 75W | 9MiB / 8192MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 1 N/A N/A 5784 C+G ...h_cw5n1h2txyewy\SearchApp.exe N/A |
| 1 N/A N/A 8108 C+G ...ram Files\Kryptex\Kryptex.exe N/A |
| 1 N/A N/A 8220 C+G ...IA App\CEF\NVIDIA Overlay.exe N/A |
| 1 N/A N/A 8380 C+G ...8bbwe\PhoneExperienceHost.exe N/A |
| 1 N/A N/A 10064 C+G ...yb3d8bbwe\WindowsTerminal.exe N/A |
| 1 N/A N/A 10196 C+G C:\Windows\explorer.exe N/A |
| 1 N/A N/A 11844 C+G ...hingPcLite\OneThingPcLite.exe N/A |
| 1 N/A N/A 14276 C+G ...h_cw5n1h2txyewy\SearchApp.exe N/A |
| 1 N/A N/A 14812 C+G ...IA App\CEF\NVIDIA Overlay.exe N/A |
| 1 N/A N/A 14988 C+G ...xyewy\ShellExperienceHost.exe N/A |
| 1 N/A N/A 15624 C+G ...5n1h2txyewy\TextInputHost.exe N/A |
| 1 N/A N/A 18344 C+G ...8wekyb3d8bbwe\M365Copilot.exe N/A |
| 1 N/A N/A 20092 C+G ...gram Files\Parsec\parsecd.exe N/A |
| 1 N/A N/A 20616 C+G ....0.3595.94\msedgewebview2.exe N/A |
| 1 N/A N/A 21196 C+G ....0.3595.94\msedgewebview2.exe N/A |
| 1 N/A N/A 23072 C+G D:\Chatbox\Chatbox.exe N/A |
+-----------------------------------------------------------------------------------------+

If I run the model (about 32 GB) without the V100, it works. If I run it with the V100, it does not work.
Windows 10.
Using Ollama 0.13.2-rc2; the same problem occurs on 0.13.0, 0.13.1, and 0.13.2.
Command:
C:\Users\admin>ollama run hf.co/TriadParty/deepsex-34b-gguf:Q8_0
Error: 500 Internal Server Error: llama runner process has terminated: CUDA error

Relevant log output

[GIN] 2025/12/07 - 14:43:16 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/07 - 14:43:16 | 200 |     45.6623ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/12/07 - 14:43:16 | 200 |     38.8862ms |       127.0.0.1 | POST     "/api/show"
time=2025-12-07T14:43:16.683+08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 61815"
time=2025-12-07T14:43:17.849+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=2
time=2025-12-07T14:43:17.849+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=14 efficiency=0 threads=28
time=2025-12-07T14:43:17.849+08:00 level=INFO source=cpu_windows.go:195 msg="" package=1 cores=14 efficiency=0 threads=28
llama_model_loader: loaded meta data with 22 key-value pairs and 543 tensors from D:\AI\LM_models\blobs\sha256-84aa170c16ada0cfc5656e571f8d7890df3b8d778d37b82b32ffc2a9b20865b7 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 7168
llama_model_loader: - kv   4:                          llama.block_count u32              = 60
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 20480
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 56
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 5000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 7
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,64000]   = ["<unk>", "<|startoftext|>", "<|endof...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,64000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,64000]   = [2, 3, 3, 3, 3, 3, 1, 1, 1, 3, 3, 3, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  19:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  20:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  21:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q8_0:  422 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q8_0
print_info: file size   = 34.03 GiB (8.50 BPW) 
load: control-looking token:     17 '<fim_pad>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token:     16 '<fim_suffix>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token:     31 '<reponame>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token:     15 '<fim_middle>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token:     14 '<fim_prefix>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token:      7 '<|im_end|>' was not control-type; this is probably a bug in the model. its type will be overridden
load: printing all EOG tokens:
load:   - 2 ('<|endoftext|>')
load:   - 7 ('<|im_end|>')
load:   - 17 ('<fim_pad>')
load:   - 31 ('<reponame>')
load: special tokens cache size = 17
load: token to piece cache size = 0.3834 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 34.39 B
print_info: general.name     = LLaMA v2
print_info: vocab type       = SPM
print_info: n_vocab          = 64000
print_info: n_merges         = 0
print_info: BOS token        = 1 '<|startoftext|>'
print_info: EOS token        = 2 '<|endoftext|>'
print_info: EOT token        = 2 '<|endoftext|>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 0 '<unk>'
print_info: LF token         = 315 '<0x0A>'
print_info: FIM PRE token    = 14 '<fim_prefix>'
print_info: FIM SUF token    = 16 '<fim_suffix>'
print_info: FIM MID token    = 15 '<fim_middle>'
print_info: FIM PAD token    = 17 '<fim_pad>'
print_info: FIM REP token    = 31 '<reponame>'
print_info: EOG token        = 2 '<|endoftext|>'
print_info: EOG token        = 7 '<|im_end|>'
print_info: EOG token        = 17 '<fim_pad>'
print_info: EOG token        = 31 '<reponame>'
print_info: max token length = 48
llama_model_load: vocab only - skipping tensors
time=2025-12-07T14:43:18.057+08:00 level=WARN source=server.go:167 msg="requested context size too large for model" num_ctx=16384 n_ctx_train=4096
time=2025-12-07T14:43:18.061+08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model D:\\AI\\LM_models\\blobs\\sha256-84aa170c16ada0cfc5656e571f8d7890df3b8d778d37b82b32ffc2a9b20865b7 --port 61827"
time=2025-12-07T14:43:18.073+08:00 level=INFO source=sched.go:443 msg="system memory" total="63.9 GiB" free="54.8 GiB" free_swap="85.2 GiB"
time=2025-12-07T14:43:18.073+08:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-a4958fb8-5db1-420b-4907-6d7c35614cbc library=CUDA available="15.4 GiB" free="15.9 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-07T14:43:18.074+08:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-1d8ecd7f-d33c-0dda-2c04-c704ee580b2c library=CUDA available="7.4 GiB" free="7.8 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-07T14:43:18.074+08:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-40671468-338a-38fb-44e0-5bc537e131cc library=CUDA available="6.7 GiB" free="7.1 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-07T14:43:18.074+08:00 level=INFO source=server.go:459 msg="loading model" "model layers"=61 requested=-1
time=2025-12-07T14:43:18.078+08:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="13.8 GiB"
time=2025-12-07T14:43:18.078+08:00 level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="6.1 GiB"
time=2025-12-07T14:43:18.078+08:00 level=INFO source=device.go:240 msg="model weights" device=CUDA2 size="5.0 GiB"
time=2025-12-07T14:43:18.078+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="8.7 GiB"
time=2025-12-07T14:43:18.078+08:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="400.0 MiB"
time=2025-12-07T14:43:18.078+08:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="176.0 MiB"
time=2025-12-07T14:43:18.078+08:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA2 size="144.0 MiB"
time=2025-12-07T14:43:18.078+08:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="240.0 MiB"
time=2025-12-07T14:43:18.078+08:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="533.6 MiB"
time=2025-12-07T14:43:18.078+08:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="533.6 MiB"
time=2025-12-07T14:43:18.078+08:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA2 size="533.6 MiB"
time=2025-12-07T14:43:18.078+08:00 level=INFO source=device.go:272 msg="total memory" size="36.1 GiB"
time=2025-12-07T14:43:18.169+08:00 level=INFO source=runner.go:963 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
  Device 0: Tesla V100-SXM2-16GB, compute capability 7.0, VMM: no, ID: GPU-a4958fb8-5db1-420b-4907-6d7c35614cbc
  Device 1: Tesla P4, compute capability 6.1, VMM: no, ID: GPU-1d8ecd7f-d33c-0dda-2c04-c704ee580b2c
  Device 2: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes, ID: GPU-40671468-338a-38fb-44e0-5bc537e131cc
load_backend: loaded CUDA backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-12-07T14:43:18.657+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-12-07T14:43:18.658+08:00 level=INFO source=runner.go:999 msg="Server listening on 127.0.0.1:61827"
time=2025-12-07T14:43:18.663+08:00 level=INFO source=runner.go:893 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:28 GPULayers:45[ID:GPU-a4958fb8-5db1-420b-4907-6d7c35614cbc Layers:25(15..39) ID:GPU-1d8ecd7f-d33c-0dda-2c04-c704ee580b2c Layers:11(40..50) ID:GPU-40671468-338a-38fb-44e0-5bc537e131cc Layers:9(51..59)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-07T14:43:18.663+08:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-07T14:43:18.664+08:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_cuda_device_get_memory device GPU-a4958fb8-5db1-420b-4907-6d7c35614cbc utilizing NVML memory reporting free: 17021403136 total: 17179869184
llama_model_load_from_file_impl: using device CUDA0 (Tesla V100-SXM2-16GB) (0000:02:00.0) - 16232 MiB free
ggml_backend_cuda_device_get_memory device GPU-1d8ecd7f-d33c-0dda-2c04-c704ee580b2c utilizing NVML memory reporting free: 8384151552 total: 8589934592
llama_model_load_from_file_impl: using device CUDA1 (Tesla P4) (0000:06:00.0) - 7995 MiB free
ggml_backend_cuda_device_get_memory device GPU-40671468-338a-38fb-44e0-5bc537e131cc utilizing NVML memory reporting free: 7651250176 total: 8589934592
llama_model_load_from_file_impl: using device CUDA2 (NVIDIA GeForce GTX 1070) (0000:03:00.0) - 7296 MiB free
llama_model_loader: loaded meta data with 22 key-value pairs and 543 tensors from D:\AI\LM_models\blobs\sha256-84aa170c16ada0cfc5656e571f8d7890df3b8d778d37b82b32ffc2a9b20865b7 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 7168
llama_model_loader: - kv   4:                          llama.block_count u32              = 60
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 20480
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 56
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 5000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 7
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,64000]   = ["<unk>", "<|startoftext|>", "<|endof...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,64000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,64000]   = [2, 3, 3, 3, 3, 3, 1, 1, 1, 3, 3, 3, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  19:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  20:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  21:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q8_0:  422 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q8_0
print_info: file size   = 34.03 GiB (8.50 BPW) 
load: control-looking token:     17 '<fim_pad>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token:     16 '<fim_suffix>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token:     31 '<reponame>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token:     15 '<fim_middle>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token:     14 '<fim_prefix>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token:      7 '<|im_end|>' was not control-type; this is probably a bug in the model. its type will be overridden
load: printing all EOG tokens:
load:   - 2 ('<|endoftext|>')
load:   - 7 ('<|im_end|>')
load:   - 17 ('<fim_pad>')
load:   - 31 ('<reponame>')
load: special tokens cache size = 17
load: token to piece cache size = 0.3834 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 4096
print_info: n_embd           = 7168
print_info: n_embd_inp       = 7168
print_info: n_layer          = 60
print_info: n_head           = 56
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 7
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 20480
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: n_expert_groups  = 0
print_info: n_group_used     = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 5000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 4096
print_info: rope_finetuned   = unknown
print_info: model type       = 30B
print_info: model params     = 34.39 B
print_info: general.name     = LLaMA v2
print_info: vocab type       = SPM
print_info: n_vocab          = 64000
print_info: n_merges         = 0
print_info: BOS token        = 1 '<|startoftext|>'
print_info: EOS token        = 2 '<|endoftext|>'
print_info: EOT token        = 2 '<|endoftext|>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 0 '<unk>'
print_info: LF token         = 315 '<0x0A>'
print_info: FIM PRE token    = 14 '<fim_prefix>'
print_info: FIM SUF token    = 16 '<fim_suffix>'
print_info: FIM MID token    = 15 '<fim_middle>'
print_info: FIM PAD token    = 17 '<fim_pad>'
print_info: FIM REP token    = 31 '<reponame>'
print_info: EOG token        = 2 '<|endoftext|>'
print_info: EOG token        = 7 '<|im_end|>'
print_info: EOG token        = 17 '<fim_pad>'
print_info: EOG token        = 31 '<reponame>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 45 repeating layers to GPU
load_tensors: offloaded 45/61 layers to GPU
load_tensors:        CUDA0 model buffer size = 14132.62 MiB
load_tensors:        CUDA1 model buffer size =  6218.35 MiB
load_tensors:        CUDA2 model buffer size =  5087.74 MiB
load_tensors:    CUDA_Host model buffer size =  9409.29 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_seq     = 4096
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = disabled
llama_context: kv_unified    = false
llama_context: freq_base     = 5000000.0
llama_context: freq_scale    = 1
llama_context:        CPU  output buffer size =     0.27 MiB
llama_kv_cache:        CPU KV buffer size =   240.00 MiB
llama_kv_cache:      CUDA0 KV buffer size =   400.00 MiB
llama_kv_cache:      CUDA1 KV buffer size =   176.00 MiB
llama_kv_cache:      CUDA2 KV buffer size =   144.00 MiB
llama_kv_cache: size =  960.00 MiB (  4096 cells,  60 layers,  1/1 seqs), K (f16):  480.00 MiB, V (f16):  480.00 MiB
CUDA error: an unsupported value or parameter was passed to the function
  current device: 0, in function ggml_cuda_op_mul_mat_cublas at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:1463
  cublasGemmEx(ctx.cublas_handle(id), CUBLAS_OP_T, CUBLAS_OP_N, row_diff, src1_ncols, ne10, &alpha_f16, src0_ptr, CUDA_R_16F, ne00, src1_ptr, CUDA_R_16F, ne10, &beta_f16, dst_f16.get(), CUDA_R_16F, ldc, CUBLAS_COMPUTE_16F, CUBLAS_GEMM_DEFAULT_TENSOR_OP)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:90: CUDA error
time=2025-12-07T14:43:54.465+08:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server not responding"
time=2025-12-07T14:43:55.278+08:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server error"
time=2025-12-07T14:43:56.154+08:00 level=ERROR source=server.go:265 msg="llama runner terminated" error="exit status 1"
time=2025-12-07T14:43:56.281+08:00 level=INFO source=sched.go:470 msg="Load failed" model=D:\AI\LM_models\blobs\sha256-84aa170c16ada0cfc5656e571f8d7890df3b8d778d37b82b32ffc2a9b20865b7 error="llama runner process has terminated: CUDA error"
[GIN] 2025/12/07 - 14:43:56 | 500 |   39.7006171s |       127.0.0.1 | POST     "/api/generate"

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

ollama version is 0.13.2-rc2

GiteaMirror added the bug label 2026-04-12 21:36:30 -05:00
Author
Owner

@acu715 commented on GitHub (Dec 7, 2025):

I can run models under 16G on the V100 only, or take the V100 out and run with the 1070 and P4 only.
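
For reference, a way to get the "take away V100" case without physically removing the card is to hide it from Ollama via CUDA_VISIBLE_DEVICES in the server environment. A rough, untested sketch using the GPU UUIDs reported in the log above:

```
REM Hide the V100 from the Ollama server; only the Tesla P4 and the GTX 1070 stay visible.
REM UUIDs are copied from the "found 3 CUDA devices" lines in the log above.
set CUDA_VISIBLE_DEVICES=GPU-1d8ecd7f-d33c-0dda-2c04-c704ee580b2c,GPU-40671468-338a-38fb-44e0-5bc537e131cc
ollama serve
```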

Author
Owner

@rick-github commented on GitHub (Dec 8, 2025):

Set OLLAMA_DEBUG=2 in the server environment and then get a log for two runs, one where you load a model with all cards and one where you run with 1070 and P4 only.
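
For reference, one way to do this on Windows (untested sketch; assumes quitting the tray app and starting the server from a cmd prompt so the variable applies to the server process):

```
REM Start the server with debug logging enabled.
set OLLAMA_DEBUG=2
ollama serve
REM In a second terminal, reproduce the failure:
REM   ollama run hf.co/TriadParty/deepsex-34b-gguf:Q8_0
REM The debug output appears in the first terminal; when Ollama is run via the tray app,
REM it is written to %LOCALAPPDATA%\Ollama\server.log instead.
```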

Author
Owner

@acu715 commented on GitHub (Dec 14, 2025):

Here are the logs of running with the V100 and the 1070. (I removed the P4 because it was broken, and the model still cannot run with the V100 + 1070.)
(The 1070 is used in every test.)

[server_with_v100.log](https://github.com/user-attachments/files/24147731/server_with_v100.log)

[server_without_v100.log](https://github.com/user-attachments/files/24147762/server_without_v100.log)

Author
Owner

@rick-github commented on GitHub (Dec 14, 2025):

```
graph_reserve: failed to allocate compute buffers
```

This model is being loaded with the old engine, which suffers from inaccurate memory estimation. In this case the server estimates that it can load 30 of 41 layers into the GPU, but the failure indicates otherwise. You can try some of the OOM mitigations described [here](https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288).
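
For illustration only (the exact suggestions are in the linked comment), mitigations of this kind usually mean giving the scheduler more VRAM headroom or capping the offloaded layers yourself. A hedged sketch, assuming the OLLAMA_GPU_OVERHEAD variable and the num_gpu parameter apply to this build:

```
REM Option A: reserve extra VRAM per GPU so fewer layers get scheduled (value in bytes, here 1 GiB).
set OLLAMA_GPU_OVERHEAD=1073741824
ollama serve

REM Option B: cap the offload explicitly from the ollama REPL:
REM   ollama run hf.co/TriadParty/deepsex-34b-gguf:Q8_0
REM   /set parameter num_gpu 30
```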

Author
Owner

@dhiltgen commented on GitHub (Dec 17, 2025):

It looks like the model is a llama architecture, so it should work on the new engine. You can try setting OLLAMA_NEW_ENGINE=1 and loading the model with the benefit of its better memory management.
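
For reference, on Windows this could look roughly like the following (the variable has to be set in the server's environment, so quit the tray app first):

```
REM Enable the new engine for the server, then reload the model.
set OLLAMA_NEW_ENGINE=1
ollama serve
REM In a second terminal:
REM   ollama run hf.co/TriadParty/deepsex-34b-gguf:Q8_0
```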

Author
Owner

@acu715 commented on GitHub (Dec 20, 2025):

It can be used with OLLAMA_NEW_ENGINE=1.
However, the HDD read speed becomes very slow: before setting it, the model loads at about 125 MB/s; after, at about 27 MB/s.

Reference: github-starred/ollama#8825