[GH-ISSUE #11842] the runner process fails to pick up GPUs with SLURM sbatch or srun with singularity #54372

Closed
opened 2026-04-29 05:49:44 -05:00 by GiteaMirror · 4 comments

Originally created by @hwang2006 on GitHub (Aug 10, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11842

What is the issue?

What I observed (symptoms)
Under srun/sbatch + Singularity, Ollama would start and detect the A100 (“inference compute… library=cuda”), but when the runner spawned it would load:

load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so

…and never show “loaded CUDA backend / using device CUDA0”.

As a result, generation was CPU-only, so Gradio sat “waiting” forever or felt painfully slow.

When I ran the same script on a login node for testing purposes, the runner did pick up CUDA (the log showed “using device CUDA0 … runner started”).
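
A quick way to see the environment difference between the two cases is to print the GPU-visibility variables from inside the container in each context. This is a diagnostic sketch, not part of the attached script; the variable names come from the "server config" line in the log below, and the partition and image names are the ones used in this report.

```bash
# Diagnostic sketch: compare GPU-visibility variables inside the container
# on a compute node (via srun) vs. on a login node. In the failing case the
# server config log shows CUDA_VISIBLE_DEVICES, GPU_DEVICE_ORDINAL and
# ROCR_VISIBLE_DEVICES all set to 0 by the SLURM GRES plugin.
srun -p amd_a100nv_8 --gres=gpu:1 \
  singularity exec --nv ollama_latest.sif \
  env | grep -E 'CUDA_VISIBLE_DEVICES|GPU_DEVICE_ORDINAL|ROCR_VISIBLE_DEVICES|HIP_VISIBLE_DEVICES'

# Same check on a login node (where the runner picked up CUDA):
singularity exec --nv ollama_latest.sif \
  env | grep -E 'CUDA_VISIBLE_DEVICES|GPU_DEVICE_ORDINAL|ROCR_VISIBLE_DEVICES|HIP_VISIBLE_DEVICES'
```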

$ srun -p amd_a100nv_8 --comment=pytorch --gres=gpu:1 ./chatgpt_slurm_script.new.sh
srun: job 540347 queued and waiting for resources
srun: job 540347 has been allocated resources
========================================
Starting Ollama + Gradio
Date: Sun Aug 10 18:24:57 KST 2025
Server: gpu34
SLURM Job ID: 540347
Gradio Port (requested): 7860
Ollama Port: 11434
Default Model: 0
========================================
load module-environment
🔍 Python / GPU:
/scratch/qualis/miniconda3/envs/deepseek/bin/python
Python executable: /scratch/qualis/miniconda3/envs/deepseek/bin/python
fastapi                   0.116.1
gradio                    5.41.1
gradio_client             1.11.0
uvicorn                   0.35.0
NVIDIA A100-SXM4-80GB, 81920, 81038
Using CUDA toolkit at: /apps/cuda/12.1
🚀 Starting Ollama server…
Ollama PID: 125982
✅ Ollama API is up!
📋 Available models:
- mistral:7b (7.2B, Q4_K_M)
- qwen3:8b (8.2B, Q4_K_M)
- gpt-oss:latest (20.9B, MXFP4)
- gpt-oss:120b (116.8B, MXFP4)
- tinyllama:latest (1B, Q4_0)
- phi3:latest (3.8B, Q4_0)
- gemma:latest (9B, Q4_0)
- llama3:latest (8.0B, Q4_0)
🌐 Starting Gradio web interface...
Gradio PID: 126331
⏳ Waiting for Gradio UI at http://127.0.0.1:7860/ ...
  ... still waiting (10s)
  ... still waiting (20s)
  ... still waiting (30s)
  ... still waiting (40s)
  ... still waiting (50s)
  ... still waiting (60s)
  ... still waiting (70s)
  ... still waiting (80s)
  ... still waiting (90s)
  ... still waiting (100s)
  ... still waiting (110s)
  ... still waiting (120s)
  ... still waiting (130s)
  ... still waiting (140s)
  ... still waiting (150s)
  ... still waiting (160s)
  ... still waiting (170s)
  ... still waiting (180s)
  ... still waiting (190s)
  ... still waiting (200s)
  ... still waiting (210s)
  ... still waiting (220s)
  ... still waiting (230s)
  ... still waiting (240s)
  ... still waiting (250s)
  ... still waiting (260s)
  ... still waiting (270s)
  ... still waiting (280s)
  ... still waiting (290s)
  ... still waiting (300s)
  ... still waiting (310s)
  ... still waiting (320s)
  ... still waiting (330s)
  ... still waiting (340s)
  ... still waiting (350s)
  ... still waiting (360s)
  ... still waiting (370s)
  ... still waiting (380s)
  ... still waiting (390s)
  ... still waiting (400s)
  ... still waiting (410s)
  ... still waiting (420s)
  ... still waiting (430s)
  ... still waiting (440s)
  ... still waiting (450s)
  ... still waiting (460s)
  ... still waiting (470s)
  ... still waiting (480s)
  ... still waiting (490s)
  ... still waiting (500s)
  ... still waiting (510s)
  ... still waiting (520s)
  ... still waiting (530s)
  ... still waiting (540s)
  ... still waiting (550s)
✅ Gradio UI is up!
=========================================
🎉 All services started successfully!
Gradio URL: http://gpu34:7860
Local access (tunnel): http://localhost:7860  → use:
ssh -N -L 7860:gpu34:7860 ......
Ollama API: http://gpu34:11434
Logs:
  Ollama: /scratch/qualis/deepseek/ollama_server_540347.log
  Gradio: /scratch/qualis/deepseek/gradio_server_540347.log
=========================================
^Csrun: interrupt (one more within 1 sec to abort)
srun: StepId=540347.0 task 0: running
[Sun Aug 10 18:40:18 KST 2025] 💓 Heartbeat: services running
🔍 GPU Status:
  GPU0 ( NVIDIA A100-SXM4-80GB): 4MB/81920MB (0%) | Util: 0% | Temp: 28°C
✅ Ollama API responsive
✅ Gradio UI responsive
----------------------------------------
[Sun Aug 10 18:45:20 KST 2025] 💓 Heartbeat: services running
🔍 GPU Status:
  GPU0 ( NVIDIA A100-SXM4-80GB): 4MB/81920MB (0%) | Util: 0% | Temp: 28°C
✅ Ollama API responsive
✅ Gradio UI responsive
----------------------------------------
[Sun Aug 10 18:50:22 KST 2025] 💓 Heartbeat: services running
🔍 GPU Status:
  GPU0 ( NVIDIA A100-SXM4-80GB): 4MB/81920MB (0%) | Util: 0% | Temp: 28°C
✅ Ollama API responsive
✅ Gradio UI responsive
----------------------------------------

[ollama_gradio_run.sh.txt](https://github.com/user-attachments/files/21704785/ollama_gradio_run.sh.txt)

Relevant log output

$ cat ollama_server_540347.log
time=2025-08-10T18:24:59.300+09:00 level=INFO source=routes.go:1304 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0 GPU_DEVICE_ORDINAL:0 HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:10m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY:cuda OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:2 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/scratch/qualis/ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:4 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:0 http_proxy: https_proxy: no_proxy:]"
time=2025-08-10T18:24:59.309+09:00 level=INFO source=images.go:477 msg="total blobs: 37"
time=2025-08-10T18:24:59.310+09:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-08-10T18:24:59.310+09:00 level=INFO source=routes.go:1357 msg="Listening on 127.0.0.1:11434 (version 0.11.4)"
time=2025-08-10T18:24:59.310+09:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-10T18:24:59.709+09:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-445370ee-f575-0c73-0fc5-fe679c197b55 library=cuda variant=v12 compute=8.0 driver=12.4 name="NVIDIA A100-SXM4-80GB" total="79.1 GiB" available="78.7 GiB"
[GIN] 2025/08/10 - 18:25:03 | 200 |   145.81907ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/08/10 - 18:25:03 | 200 |    1.822454ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/08/10 - 18:25:13 | 200 |     1.84725ms |       127.0.0.1 | GET      "/api/tags"
time=2025-08-10T18:25:13.864+09:00 level=INFO source=sched.go:786 msg="new model will fit in available VRAM in single GPU, loading" model=/scratch/qualis/ollama/models/blobs/sha256-f5074b1221da0f5a2910d33b642efa5b9eb58cfdddca1c79e16d7ad28aa2b31f gpu=GPU-445370ee-f575-0c73-0fc5-fe679c197b55 parallel=4 available=84530692096 required="7.7 GiB"
time=2025-08-10T18:25:14.063+09:00 level=INFO source=server.go:135 msg="system memory" total="1007.4 GiB" free="946.5 GiB" free_swap="4.0 GiB"
time=2025-08-10T18:25:14.265+09:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[78.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="7.7 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.0 GiB" memory.required.allocations="[7.7 GiB]" memory.weights.total="4.0 GiB" memory.weights.repeating="3.9 GiB" memory.weights.nonrepeating="105.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-08-10T18:25:14.265+09:00 level=INFO source=server.go:218 msg="enabling flash attention"
time=2025-08-10T18:25:14.265+09:00 level=WARN source=server.go:226 msg="kv cache type not supported by model" type=""
llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from /scratch/qualis/ollama/models/blobs/sha256-f5074b1221da0f5a2910d33b642efa5b9eb58cfdddca1c79e16d7ad28aa2b31f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Mistral-7B-Instruct-v0.3
llama_model_loader: - kv   2:                          llama.block_count u32              = 32
llama_model_loader: - kv   3:                       llama.context_length u32              = 32768
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 15
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 32768
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32768]   = ["<unk>", "<s>", "</s>", "[INST]", "[...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32768]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32768]   = [2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  20:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  22:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.07 GiB (4.83 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 771
load: token to piece cache size = 0.1731 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 7.25 B
print_info: general.name     = Mistral-7B-Instruct-v0.3
print_info: vocab type       = SPM
print_info: n_vocab          = 32768
print_info: n_merges         = 0
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 0 '<unk>'
print_info: LF token         = 781 '<0x0A>'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 48
llama_model_load: vocab only - skipping tensors
time=2025-08-10T18:25:14.323+09:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/usr/bin/ollama runner --model /scratch/qualis/ollama/models/blobs/sha256-f5074b1221da0f5a2910d33b642efa5b9eb58cfdddca1c79e16d7ad28aa2b31f --ctx-size 16384 --batch-size 512 --n-gpu-layers 33 --threads 64 --flash-attn --parallel 4 --port 37057"
time=2025-08-10T18:25:14.323+09:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
time=2025-08-10T18:25:14.323+09:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-08-10T18:25:14.323+09:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
time=2025-08-10T18:25:14.331+09:00 level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
time=2025-08-10T18:25:14.453+09:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-08-10T18:25:14.454+09:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:37057"
llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from /scratch/qualis/ollama/models/blobs/sha256-f5074b1221da0f5a2910d33b642efa5b9eb58cfdddca1c79e16d7ad28aa2b31f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Mistral-7B-Instruct-v0.3
llama_model_loader: - kv   2:                          llama.block_count u32              = 32
llama_model_loader: - kv   3:                       llama.context_length u32              = 32768
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 15
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 32768
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32768]   = ["<unk>", "<s>", "</s>", "[INST]", "[...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32768]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32768]   = [2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  20:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  22:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.07 GiB (4.83 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 771
load: token to piece cache size = 0.1731 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 32768
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 14336
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 32768
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 7B
print_info: model params     = 7.25 B
print_info: general.name     = Mistral-7B-Instruct-v0.3
print_info: vocab type       = SPM
print_info: n_vocab          = 32768
print_info: n_merges         = 0
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 0 '<unk>'
print_info: LF token         = 781 '<0x0A>'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = true)
time=2025-08-10T18:25:14.611+09:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
load_tensors:   CPU_Mapped model buffer size =  4169.52 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 4
llama_context: n_ctx         = 16384
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 2048
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 1
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
llama_context:        CPU  output buffer size =     0.56 MiB
llama_kv_cache_unified: kv_size = 16384, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1, padding = 256
llama_kv_cache_unified:        CPU KV buffer size =  2048.00 MiB
llama_kv_cache_unified: KV self size  = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_context:        CPU compute buffer size =   112.01 MiB
llama_context: graph nodes  = 967
llama_context: graph splits = 1
time=2025-08-10T18:25:16.116+09:00 level=INFO source=server.go:637 msg="llama runner started in 1.79 seconds"
[GIN] 2025/08/10 - 18:35:13 | 500 |         10m0s |       127.0.0.1 | POST     "/api/generate"

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.11.4, from running `singularity exec --nv ollama_latest.sif ollama --version`

GiteaMirror added the bug label 2026-04-29 05:49:44 -05:00

@rick-github commented on GitHub (Aug 10, 2025):

Unset ROCR_VISIBLE_DEVICES. #11723
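
In the context of this report, that suggestion amounts to clearing the ROCm device mask in the launch script before the server starts. A minimal sketch follows; the singularity invocation is an assumption about how the attached script starts `ollama serve` (the script itself isn't reproduced here).

```bash
# Sketch of the suggested fix: drop the ROCm device mask (exported by the
# SLURM GRES plugin) so backend selection is driven by CUDA_VISIBLE_DEVICES
# alone. Singularity passes the host environment into the container by
# default, so unsetting it on the host keeps it unset inside the container.
unset ROCR_VISIBLE_DEVICES
singularity exec --nv ollama_latest.sif ollama serve \
  > "ollama_server_${SLURM_JOB_ID}.log" 2>&1 &
```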


@hwang2006 commented on GitHub (Aug 10, 2025):

Great! It worked. It seems that unsetting ROCR_VISIBLE_DEVICES forces Ollama to rely only on NVIDIA’s CUDA_VISIBLE_DEVICES, letting it pick up the CUDA backend properly without getting confused by the AMD ROCm environment.

$ cat ollama_server_540405.log
time=2025-08-10T21:57:19.996+09:00 level=INFO source=routes.go:1304 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0 GPU_DEVICE_ORDINAL:0 HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:209715200 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:30m0s OLLAMA_KV_CACHE_TYPE:f16 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:3 OLLAMA_MAX_QUEUE:128 OLLAMA_MODELS:/scratch/qualis/ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:6 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false **ROCR_VISIBLE_DEVICES:** http_proxy: https_proxy: no_proxy:]"
.....
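
An alternative to unsetting ROCm variables one by one is to strip the host environment entirely and forward only what the server needs. This is a sketch assuming standard Singularity/Apptainer behavior (`--cleanenv` plus the `SINGULARITYENV_` prefix); it is not taken from the attached script.

```bash
# Alternative sketch: start from a clean container environment and forward
# only the variables Ollama needs. The OLLAMA_MODELS path is the one from
# the logs above; other OLLAMA_* settings would be forwarded the same way.
export SINGULARITYENV_CUDA_VISIBLE_DEVICES="${CUDA_VISIBLE_DEVICES}"
export SINGULARITYENV_OLLAMA_MODELS=/scratch/qualis/ollama/models
singularity exec --nv --cleanenv ollama_latest.sif ollama serve \
  > "ollama_server_${SLURM_JOB_ID}.log" 2>&1 &
```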

@pdevine commented on GitHub (Aug 11, 2025):

cc @jessegross


@jessegross commented on GitHub (Aug 18, 2025):

This should be fixed in 0.11.5.
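
For anyone hitting this in the same setup, a quick way to verify is to rebuild the container image against the fixed release and re-run the job without unsetting anything. The Docker tag below is an assumption (the report's `ollama_latest.sif` naming suggests the image was pulled from `docker://ollama/ollama`).

```bash
# Verification sketch (image tag assumed): pull the fixed release and check
# the version before resubmitting the SLURM job.
singularity pull ollama_0.11.5.sif docker://ollama/ollama:0.11.5
singularity exec --nv ollama_0.11.5.sif ollama --version
```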

Reference: github-starred/ollama#54372