[GH-ISSUE #11025] GPU is not used during inference, even though the GPU is detected #7272

Closed
opened 2026-04-12 19:19:22 -05:00 by GiteaMirror · 5 comments

Originally created by @ROGERDJQ on GitHub (Jun 9, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11025

What is the issue?

I suspect the bug is caused by how I installed ollama. I cannot get root/admin access to the machine (our admin does not allow it), so I could only install ollama through https://anaconda.org/conda-forge/ollama. After installation, ollama works, but the GPU is never used.
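
For context, a minimal sketch of the rootless conda-forge install described above (the environment name is an assumption; the package and channel come from the linked page):

# Create an isolated env and install the conda-forge ollama build;
# no root is required. The env name "ollama" is illustrative.
conda create -y -n ollama -c conda-forge ollama
conda activate ollama
ollama serve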

Relevant log output

time=2025-06-09T19:22:25.589+08:00 level=INFO source=routes.go:1234 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0,1 GPU_DEVICE_ORDINAL:0,1 HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/remote-home1//.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:0,1 http_proxy: https_proxy: no_proxy:]"
time=2025-06-09T19:22:25.589+08:00 level=INFO source=images.go:479 msg="total blobs: 4"
time=2025-06-09T19:22:25.589+08:00 level=INFO source=images.go:486 msg="total unused blobs removed: 0"
time=2025-06-09T19:22:25.590+08:00 level=INFO source=routes.go:1287 msg="Listening on 127.0.0.1:11434 (version 0.9.0)"
time=2025-06-09T19:22:25.590+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-06-09T19:22:25.803+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-cade84b2-bdbd-4411-e9b2-54370c916892 library=cuda variant=v11 compute=8.0 driver=12.0 name="NVIDIA A800-SXM4-80GB" total="79.3 GiB" available="78.9 GiB"
time=2025-06-09T19:22:25.803+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-e2154281-9a88-fe6c-c129-93c8a1d3ec92 library=cuda variant=v11 compute=8.0 driver=12.0 name="NVIDIA A800-SXM4-80GB" total="79.3 GiB" available="78.9 GiB"
^C(ollama) @slurmd-6:~/hallucination_of_vlm/vl_r1/utils/webui/text-generation-webui$ export OLLAMA_HOST=0.0.0.0:30435
(ollama) @slurmd-6:~/hallucination_of_vlm/vl_r1/utils/webui/text-generation-webui$ ollama serve
time=2025-06-09T19:23:45.304+08:00 level=INFO source=routes.go:1234 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0,1 GPU_DEVICE_ORDINAL:0,1 HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:30435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/remote-home1//.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:0,1 http_proxy: https_proxy: no_proxy:]"
time=2025-06-09T19:23:45.305+08:00 level=INFO source=images.go:479 msg="total blobs: 4"
time=2025-06-09T19:23:45.305+08:00 level=INFO source=images.go:486 msg="total unused blobs removed: 0"
time=2025-06-09T19:23:45.305+08:00 level=INFO source=routes.go:1287 msg="Listening on [::]:30435 (version 0.9.0)"
time=2025-06-09T19:23:45.305+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-06-09T19:23:45.524+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-cade84b2-bdbd-4411-e9b2-54370c916892 library=cuda variant=v11 compute=8.0 driver=12.0 name="NVIDIA A800-SXM4-80GB" total="79.3 GiB" available="78.9 GiB"
time=2025-06-09T19:23:45.524+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-e2154281-9a88-fe6c-c129-93c8a1d3ec92 library=cuda variant=v11 compute=8.0 driver=12.0 name="NVIDIA A800-SXM4-80GB" total="79.3 GiB" available="78.9 GiB"
[GIN] 2025/06/09 - 19:24:08 | 200 |      68.647µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/06/09 - 19:24:08 | 200 |     537.687µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/06/09 - 19:24:14 | 200 |      19.083µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/06/09 - 19:24:14 | 200 |   39.503864ms |       127.0.0.1 | POST     "/api/show"
time=2025-06-09T19:24:14.710+08:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=/remote-home1//.ollama/models/blobs/sha256-9b9ee32e11cd0300ee6493c052e243ff5ddb0ad23a98676482725adb97722d83 gpu=GPU-cade84b2-bdbd-4411-e9b2-54370c916892 parallel=2 available=84737064960 required="15.0 GiB"
time=2025-06-09T19:24:14.852+08:00 level=INFO source=server.go:135 msg="system memory" total="1007.5 GiB" free="960.4 GiB" free_swap="6.8 GiB"
time=2025-06-09T19:24:14.853+08:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[78.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="15.0 GiB" memory.required.partial="15.0 GiB" memory.required.kv="448.0 MiB" memory.required.allocations="[15.0 GiB]" memory.weights.total="13.2 GiB" memory.weights.repeating="12.2 GiB" memory.weights.nonrepeating="1.0 GiB" memory.graph.full="478.0 MiB" memory.graph.partial="730.4 MiB"
llama_model_loader: loaded meta data with 23 key-value pairs and 339 tensors from /remote-home1//.ollama/models/blobs/sha256-9b9ee32e11cd0300ee6493c052e243ff5ddb0ad23a98676482725adb97722d83 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                          general.file_type u32              = 1
llama_model_loader: - kv   2:               general.quantization_version u32              = 2
llama_model_loader: - kv   3:                 qwen2.attention.head_count u32              = 28
llama_model_loader: - kv   4:              qwen2.attention.head_count_kv u32              = 4
llama_model_loader: - kv   5:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   6:                          qwen2.block_count u32              = 28
llama_model_loader: - kv   7:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv   8:                     qwen2.embedding_length u32              = 3584
llama_model_loader: - kv   9:                  qwen2.feed_forward_length u32              = 18944
llama_model_loader: - kv  10:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  11:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  12:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  13:           tokenizer.ggml.add_padding_token bool             = false
llama_model_loader: - kv  14:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  15:               tokenizer.ggml.eos_token_ids arr[i32,2]       = [151645, 151643]
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  17:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  18:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  19:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  20:                      tokenizer.ggml.scores arr[f32,152064]  = [0.000000, 1.000000, 2.000000, 3.0000...
llama_model_loader: - kv  21:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - type  f32:  141 tensors
llama_model_loader: - type  f16:  198 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = F16
print_info: file size   = 14.19 GiB (16.00 BPW) 
load: control-looking token: 151664 '<|file_sep|>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token: 151660 '<|fim_middle|>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token: 151659 '<|fim_prefix|>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token: 151662 '<|fim_pad|>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token: 151663 '<|repo_name|>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token: 151661 '<|fim_suffix|>' was not control-type; this is probably a bug in the model. its type will be overridden
load: special tokens cache size = 421
load: token to piece cache size = 0.9340 MB
print_info: arch             = qwen2
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 7.62 B
print_info: general.name     = n/a
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 11 ','
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-06-09T19:24:15.135+08:00 level=INFO source=server.go:431 msg="starting llama server" cmd="/remote-home1//anaconda3/envs/ollama/bin/ollama runner --model /remote-home1//.ollama/models/blobs/sha256-9b9ee32e11cd0300ee6493c052e243ff5ddb0ad23a98676482725adb97722d83 --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 64 --parallel 2 --port 38859"
time=2025-06-09T19:24:15.135+08:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-09T19:24:15.135+08:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-09T19:24:15.135+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
time=2025-06-09T19:24:15.147+08:00 level=INFO source=runner.go:815 msg="starting go runner"
time=2025-06-09T19:24:15.151+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-06-09T19:24:15.153+08:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:38859"
llama_model_loader: loaded meta data with 23 key-value pairs and 339 tensors from /remote-home1//.ollama/models/blobs/sha256-9b9ee32e11cd0300ee6493c052e243ff5ddb0ad23a98676482725adb97722d83 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                          general.file_type u32              = 1
llama_model_loader: - kv   2:               general.quantization_version u32              = 2
llama_model_loader: - kv   3:                 qwen2.attention.head_count u32              = 28
llama_model_loader: - kv   4:              qwen2.attention.head_count_kv u32              = 4
llama_model_loader: - kv   5:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   6:                          qwen2.block_count u32              = 28
llama_model_loader: - kv   7:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv   8:                     qwen2.embedding_length u32              = 3584
llama_model_loader: - kv   9:                  qwen2.feed_forward_length u32              = 18944
llama_model_loader: - kv  10:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  11:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  12:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  13:           tokenizer.ggml.add_padding_token bool             = false
llama_model_loader: - kv  14:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  15:               tokenizer.ggml.eos_token_ids arr[i32,2]       = [151645, 151643]
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  17:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  18:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  19:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  20:                      tokenizer.ggml.scores arr[f32,152064]  = [0.000000, 1.000000, 2.000000, 3.0000...
llama_model_loader: - kv  21:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - type  f32:  141 tensors
llama_model_loader: - type  f16:  198 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = F16
print_info: file size   = 14.19 GiB (16.00 BPW) 
load: control-looking token: 151664 '<|file_sep|>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token: 151660 '<|fim_middle|>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token: 151659 '<|fim_prefix|>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token: 151662 '<|fim_pad|>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token: 151663 '<|repo_name|>' was not control-type; this is probably a bug in the model. its type will be overridden
load: control-looking token: 151661 '<|fim_suffix|>' was not control-type; this is probably a bug in the model. its type will be overridden
load: special tokens cache size = 421
time=2025-06-09T19:24:15.386+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
load: token to piece cache size = 0.9340 MB
print_info: arch             = qwen2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 32768
print_info: n_embd           = 3584
print_info: n_layer          = 28
print_info: n_head           = 28
print_info: n_head_kv        = 4
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 7
print_info: n_embd_k_gqa     = 512
print_info: n_embd_v_gqa     = 512
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 18944
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = -1
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 32768
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 7B
print_info: model params     = 7.62 B
print_info: general.name     = n/a
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 11 ','
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors:   CPU_Mapped model buffer size = 14526.27 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 2
llama_context: n_ctx         = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 1024
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
llama_context:        CPU  output buffer size =     1.19 MiB
llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1, padding = 32
llama_kv_cache_unified:        CPU KV buffer size =   448.00 MiB
llama_kv_cache_unified: KV self size  =  448.00 MiB, K (f16):  224.00 MiB, V (f16):  224.00 MiB
llama_context:        CPU compute buffer size =   492.01 MiB
llama_context: graph nodes  = 1042
llama_context: graph splits = 1
time=2025-06-09T19:24:17.390+08:00 level=INFO source=server.go:630 msg="llama runner started in 2.26 seconds"
[GIN] 2025/06/09 - 19:24:17 | 200 |  2.884388552s |       127.0.0.1 | POST     "/api/generate"

OS

Linux

GPU

Nvidia

CPU

Other

Ollama version

0.9.0

GiteaMirror added the bug label 2026-04-12 19:19:22 -05:00

@rick-github commented on GitHub (Jun 9, 2025):

time=2025-06-09T19:24:15.147+08:00 level=INFO source=runner.go:815 msg="starting go runner"
time=2025-06-09T19:24:15.151+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.LLAMAFILE=1 compiler=cgo(gcc)

No CPU- or GPU-enabled backends found. See https://github.com/ollama/ollama/issues/8532#issuecomment-2616281903
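
A rough way to check what that log line implies, assuming backend shared libraries sit in a lib/ollama directory next to the binary as in the official builds (the exact layout of a conda env is a guess):

# List the compute backend libraries shipped with this install.
# A GPU-capable build typically includes CUDA ggml libraries here;
# their absence would match the CPU-only system line in the log.
ls "$(dirname "$(which ollama)")/../lib/ollama" 2>/dev/null \
  || echo "no backend library directory found"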


@rick-github commented on GitHub (Jun 9, 2025):

Doing a manual install might get better results: https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install


@ROGERDJQ commented on GitHub (Jun 9, 2025):

I cannot do a manual install because sudo is not permitted for my group. Is it possible to install ollama without sudo permission?


@rick-github commented on GitHub (Jun 9, 2025):

curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
tar -C ~ -xzf ollama-linux-amd64.tgz
~/bin/ollama serve

Open another terminal, then

~/bin/ollama run llama3.1
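
Once a model is loaded this way, GPU use can be confirmed with standard tooling (not part of the original comment):

# "ollama ps" reports where the loaded model is resident; a working
# GPU setup shows e.g. "100% GPU" in the PROCESSOR column.
~/bin/ollama ps
# Cross-check that VRAM is actually allocated via the NVIDIA driver tool.
nvidia-smi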

@ROGERDJQ commented on GitHub (Jun 9, 2025):

curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
tar -C ~ -xzf ollama-linux-amd64.tgz
~/bin/ollama serve

Open another terminal, then

~/bin/ollama run llama3.1

works well!
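
A follow-up convenience, not from the thread: put the extracted binary on PATH so new shells find it without the ~/bin/ prefix:

# Append once to the shell profile; adjust for non-bash shells.
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc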

Reference: github-starred/ollama#7272