[GH-ISSUE #6353] Very slow API generate endpoint #3986

Closed
opened 2026-04-12 14:51:16 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @mann1x on GitHub (Aug 14, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6353

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

As already reported numerous times on Discord, something is wrong with the API generate endpoint: it is extremely slow.

This is the same prompt:

Aug 14 08:18:46 solidpc ollama[588934]: [GIN] 2024/08/14 - 08:18:46 | 200 |   16.263276ms |       127.0.0.1 | POST     "/api/show"
Aug 14 08:18:49 solidpc ollama[588934]: [GIN] 2024/08/14 - 08:18:49 | 200 |  3.131155144s |       127.0.0.1 | POST     "/api/generate"
Aug 14 08:20:30 solidpc ollama[588934]: [GIN] 2024/08/14 - 08:20:30 | 200 |         1m40s |             ::1 | POST     "/api/generate"
Aug 14 08:22:10 solidpc ollama[588934]: [GIN] 2024/08/14 - 08:22:10 | 200 |         1m39s |             ::1 | POST     "/api/generate"
Aug 14 08:23:51 solidpc ollama[588934]: [GIN] 2024/08/14 - 08:23:51 | 200 |         1m40s |             ::1 | POST     "/api/generate"
Aug 14 08:25:31 solidpc ollama[588934]: [GIN] 2024/08/14 - 08:25:31 | 200 |         1m40s |             ::1 | POST     "/api/generate"

The first /api/generate call is from `ollama run`; the other four are from curl:

ollama run --verbose mannix/$Modelname "Einstein was widely known with a first name, not necessarily his legally registered first name. Which one was it? Be brief and concise."
curl http://localhost:11434/api/generate -s --connect-timeout 5 -m 180 -d "{\"model\": \"mannix/$Modelname\", \"stream\": false, \"options\": { \"temperature\": 0.1, \"seed\": 74},\"prompt\": \"Einstein was widely known with a first name, not necessarily his legally registered first name. Which one was it? Be brief and concise.\"}" -o test1json

The difference is massive: from 3 seconds to 100 seconds to execute the same prompt.
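As a side note, when `stream` is false the /api/generate response body includes nanosecond-resolution timing fields (`total_duration`, `load_duration`, `prompt_eval_duration`, `eval_duration`, `eval_count`), which can show whether the 100 s is spent (re)loading the model or actually evaluating tokens. A minimal sketch of splitting them out — the sample numbers below are made up for illustration:

```python
def summarize_generate_metrics(resp: dict) -> dict:
    """Convert the nanosecond timing fields of a non-streaming
    /api/generate response into seconds, plus a tokens/sec figure."""
    ns = 1e9
    eval_s = resp.get("eval_duration", 0) / ns
    return {
        "total_s": resp.get("total_duration", 0) / ns,
        "load_s": resp.get("load_duration", 0) / ns,
        "prompt_eval_s": resp.get("prompt_eval_duration", 0) / ns,
        "eval_s": eval_s,
        "tokens_per_s": resp.get("eval_count", 0) / eval_s if eval_s else 0.0,
    }

# Made-up sample (values in nanoseconds, as the API reports them):
sample = {"total_duration": 100_000_000_000, "load_duration": 60_000_000_000,
          "prompt_eval_duration": 2_000_000_000, "eval_duration": 38_000_000_000,
          "eval_count": 76}
print(summarize_generate_metrics(sample))
```

If `load_s` dominates `total_s`, the slow calls are model loads/reloads rather than slow inference.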

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.3.6

GiteaMirror added the nvidia, bug, needs more info labels 2026-04-12 14:51:16 -05:00
Author
Owner

@rick-github commented on GitHub (Aug 14, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) would help in debugging. What's the value of `$Modelname`?

Author
Owner

@mann1x commented on GitHub (Aug 14, 2024):

@rick-github
There's not much in the server logs as far as I can see; the execution times of the API requests above are the relevant part.

`$Modelname` is a bash variable.

The issue is not 100% reproducible; it happens randomly.
It seems to happen more frequently with big models, at least on my system.
I see it happen very frequently with llama3.1 70b, with some quantizations much more problematic than others for no obvious reason.
Also, the delta between a slow and a fast response is bigger and more easily detectable with bigger models.
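Since the slowdowns are intermittent, timing the same non-streaming request in a loop makes the outliers visible. A hypothetical harness along the lines of the curl call above (endpoint and options mirror that command; the model name and run count are placeholders, stdlib only):

```python
import json
import time
import urllib.request

ENDPOINT = "http://localhost:11434/api/generate"  # same endpoint as the curl above

def spread(durations):
    """Summarize repeated run times; a large max/min ratio flags the
    intermittent slow responses described above."""
    return {"min_s": min(durations), "max_s": max(durations),
            "ratio": max(durations) / min(durations)}

def time_generate(model, prompt, runs=5):
    # Repeats the identical non-streaming request and wall-clock times it.
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False,
                          "options": {"temperature": 0.1, "seed": 74}}).encode()
    durations = []
    for _ in range(runs):
        t0 = time.monotonic()
        req = urllib.request.Request(ENDPOINT, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=180).read()
        durations.append(time.monotonic() - t0)
    return spread(durations)
```

A `ratio` far above 1 across identical runs reproduces the 3 s vs 100 s gap reported here.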

Aug 14 08:07:12 solidpc systemd[1]: Started Ollama Service.
Aug 14 08:07:12 solidpc ollama[588934]: 2024/08/14 08:07:12 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
Aug 14 08:07:12 solidpc ollama[588934]: time=2024-08-14T08:07:12.556+02:00 level=INFO source=images.go:782 msg="total blobs: 170"
Aug 14 08:07:12 solidpc ollama[588934]: time=2024-08-14T08:07:12.557+02:00 level=INFO source=images.go:790 msg="total unused blobs removed: 0"
Aug 14 08:07:12 solidpc ollama[588934]: time=2024-08-14T08:07:12.557+02:00 level=INFO source=routes.go:1172 msg="Listening on [::]:11434 (version 0.3.6)"
Aug 14 08:07:12 solidpc ollama[588934]: time=2024-08-14T08:07:12.558+02:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2551813417/runners
Aug 14 08:07:15 solidpc ollama[588934]: time=2024-08-14T08:07:15.443+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60102]"
Aug 14 08:07:15 solidpc ollama[588934]: time=2024-08-14T08:07:15.443+02:00 level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
Aug 14 08:07:15 solidpc ollama[588934]: time=2024-08-14T08:07:15.469+02:00 level=INFO source=gpu.go:560 msg="no nvidia devices detected" library=/usr/lib/x86_64-linux-gnu/libcuda.so.550.54.15
Aug 14 08:07:15 solidpc ollama[588934]: time=2024-08-14T08:07:15.476+02:00 level=WARN source=amd_linux.go:59 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Aug 14 08:07:15 solidpc ollama[588934]: time=2024-08-14T08:07:15.477+02:00 level=WARN source=amd_linux.go:201 msg="amdgpu too old gfx000" gpu=0
Aug 14 08:07:15 solidpc ollama[588934]: time=2024-08-14T08:07:15.477+02:00 level=WARN source=amd_linux.go:201 msg="amdgpu too old gfx000" gpu=1
Aug 14 08:07:15 solidpc ollama[588934]: time=2024-08-14T08:07:15.477+02:00 level=INFO source=amd_linux.go:360 msg="no compatible amdgpu devices detected"
Aug 14 08:07:15 solidpc ollama[588934]: time=2024-08-14T08:07:15.477+02:00 level=INFO source=gpu.go:350 msg="no compatible GPUs were discovered"
Aug 14 08:07:15 solidpc ollama[588934]: time=2024-08-14T08:07:15.477+02:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="125.2 GiB" available="111.7 GiB"
Aug 14 08:07:23 solidpc ollama[588934]: [GIN] 2024/08/14 - 08:07:23 | 200 |       32.77µs |       127.0.0.1 | HEAD     "/"
Aug 14 08:09:51 solidpc ollama[588934]: [GIN] 2024/08/14 - 08:09:51 | 201 |  59.99198631s |       127.0.0.1 | POST     "/api/blobs/sha256:29271b233c8960ca11e5e94a3de505337c057ccd92f0b46c36f183359b405a7f"
Aug 14 08:09:51 solidpc ollama[588934]: [GIN] 2024/08/14 - 08:09:51 | 200 |   33.337905ms |       127.0.0.1 | POST     "/api/create"
Aug 14 08:09:51 solidpc ollama[588934]: time=2024-08-14T08:09:51.889+02:00 level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=81 layers.offload=0 layers.split="" memory.available="[111.7 GiB]" memory.required.full="54.7 GiB" memory.required.partial="0 B" memory.required.kv="640.0 MiB" memory.required.allocations="[54.7 GiB]" memory.weights.total="52.9 GiB" memory.weights.repeating="52.1 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="324.0 MiB" memory.graph.partial="1.1 GiB"
Aug 14 08:09:51 solidpc ollama[588934]: time=2024-08-14T08:09:51.889+02:00 level=INFO source=server.go:393 msg="starting llama server" cmd="/tmp/ollama2551813417/runners/cpu_avx2/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-29271b233c8960ca11e5e94a3de505337c057ccd92f0b46c36f183359b405a7f --ctx-size 2048 --batch-size 512 --embedding --log-disable --no-mmap --parallel 1 --port 7365"
Aug 14 08:09:51 solidpc ollama[588934]: time=2024-08-14T08:09:51.890+02:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
Aug 14 08:09:51 solidpc ollama[588934]: time=2024-08-14T08:09:51.890+02:00 level=INFO source=server.go:593 msg="waiting for llama runner to start responding"
Aug 14 08:09:51 solidpc ollama[588934]: time=2024-08-14T08:09:51.890+02:00 level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server error"
Aug 14 08:09:51 solidpc ollama[590872]: INFO [main] build info | build=1 commit="1e6f655" tid="140213752113024" timestamp=1723615791
Aug 14 08:09:51 solidpc ollama[590872]: INFO [main] system info | n_threads=6 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140213752113024" timestamp=1723615791 total_threads=12
Aug 14 08:09:51 solidpc ollama[590872]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="11" port="7365" tid="140213752113024" timestamp=1723615791
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: loaded meta data with 32 key-value pairs and 724 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-29271b233c8960ca11e5e94a3de505337c057ccd92f0b46c36f183359b405a7f (version GGUF V3 (latest))
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv   0:                       general.architecture str              = llama
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv   1:                               general.type str              = model
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv   2:                               general.name str              = ..
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv   3:                           general.finetune str              = ..
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv   4:                         general.size_label str              = 71B
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv   5:                            general.license str              = llama3.1
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv   6:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv   7:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv   8:                          llama.block_count u32              = 80
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv   9:                       llama.context_length u32              = 131072
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  10:                     llama.embedding_length u32              = 8192
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  11:                  llama.feed_forward_length u32              = 28672
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  12:                 llama.attention.head_count u32              = 64
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  13:              llama.attention.head_count_kv u32              = 8
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 500000.000000
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  15:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  16:                          general.file_type u32              = 18
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  17:                           llama.vocab_size u32              = 128256
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  18:                 llama.rope.dimension_count u32              = 128
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  19:                       tokenizer.ggml.model str              = gpt2
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  20:                         tokenizer.ggml.pre str              = llama-bpe
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  21:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  22:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  23:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  24:                tokenizer.ggml.bos_token_id u32              = 128000
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  25:                tokenizer.ggml.eos_token_id u32              = 128009
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  27:               general.quantization_version u32              = 2
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  28:                      quantize.imatrix.file str              = imatrix.dat
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  29:                   quantize.imatrix.dataset str              = /shared/opt/work_models/_imatrix/cali...
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  30:             quantize.imatrix.entries_count i32              = 560
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - kv  31:              quantize.imatrix.chunks_count i32              = 62
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - type  f32:  162 tensors
Aug 14 08:09:51 solidpc ollama[588934]: llama_model_loader: - type q6_K:  562 tensors
Aug 14 08:09:52 solidpc ollama[588934]: time=2024-08-14T08:09:52.141+02:00 level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_vocab: special tokens cache size = 256
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_vocab: token to piece cache size = 0.7999 MB
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: format           = GGUF V3 (latest)
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: arch             = llama
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: vocab type       = BPE
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_vocab          = 128256
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_merges         = 280147
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: vocab_only       = 0
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_ctx_train      = 131072
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_embd           = 8192
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_layer          = 80
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_head           = 64
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_head_kv        = 8
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_rot            = 128
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_swa            = 0
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_embd_head_k    = 128
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_embd_head_v    = 128
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_gqa            = 8
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_embd_k_gqa     = 1024
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_embd_v_gqa     = 1024
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_ff             = 28672
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_expert         = 0
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_expert_used    = 0
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: causal attn      = 1
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: pooling type     = 0
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: rope type        = 0
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: rope scaling     = linear
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: freq_base_train  = 500000.0
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: freq_scale_train = 1
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: n_ctx_orig_yarn  = 131072
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: rope_finetuned   = unknown
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: ssm_d_conv       = 0
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: ssm_d_inner      = 0
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: ssm_d_state      = 0
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: ssm_dt_rank      = 0
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: model type       = 70B
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: model ftype      = Q6_K
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: model params     = 70.55 B
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: model size       = 53.91 GiB (6.56 BPW)
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: general.name     = ..
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: LF token         = 128 'Ä'
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_print_meta: max token length = 256
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_tensors: ggml ctx size =    0.34 MiB
Aug 14 08:09:52 solidpc ollama[588934]: llm_load_tensors:        CPU buffer size = 55198.96 MiB
Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model: n_ctx      = 2048
Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model: n_batch    = 512
Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model: n_ubatch   = 512
Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model: flash_attn = 0
Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model: freq_base  = 500000.0
Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model: freq_scale = 1
Aug 14 08:10:17 solidpc ollama[588934]: llama_kv_cache_init:        CPU KV buffer size =   640.00 MiB
Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model: KV self size  =  640.00 MiB, K (f16):  320.00 MiB, V (f16):  320.00 MiB
Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model:        CPU  output buffer size =     0.52 MiB
Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model:        CPU compute buffer size =   324.01 MiB
Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model: graph nodes  = 2566
Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model: graph splits = 1
Aug 14 08:10:19 solidpc ollama[590872]: INFO [main] model loaded | tid="140213752113024" timestamp=1723615819
Aug 14 08:10:19 solidpc ollama[588934]: time=2024-08-14T08:10:19.647+02:00 level=INFO source=server.go:632 msg="llama runner started in 27.76 seconds"
ollama[588934]: llama_new_context_with_model: freq_scale = 1 Aug 14 08:10:17 solidpc ollama[588934]: llama_kv_cache_init: CPU KV buffer size = 640.00 MiB Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model: KV self size = 640.00 MiB, K (f16): 320.00 MiB, V (f16): 320.00 MiB Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model: CPU output buffer size = 0.52 MiB Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model: CPU compute buffer size = 324.01 MiB Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model: graph nodes = 2566 Aug 14 08:10:17 solidpc ollama[588934]: llama_new_context_with_model: graph splits = 1 Aug 14 08:10:19 solidpc ollama[590872]: INFO [main] model loaded | tid="140213752113024" timestamp=1723615819 Aug 14 08:10:19 solidpc ollama[588934]: time=2024-08-14T08:10:19.647+02:00 level=INFO source=server.go:632 msg="llama runner started in 27.76 seconds" ```
Author
Owner

@rick-github commented on GitHub (Aug 14, 2024):

If you add `OLLAMA_DEBUG=1` to the server environment, the runner will print slot processing details, which may give insight into what's causing the long processing times.

Just to verify, the log above is from https://ollama.com/mannix/llama3.1-70b:q6_k?

<!-- gh-comment-id:2288475847 -->
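The suggestion above can be sketched as follows (assuming ollama is managed by systemd, as the journal lines in this issue suggest; adjust if you start the server by hand):

```shell
# Add OLLAMA_DEBUG=1 to the service environment via a systemd override:
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_DEBUG=1"
sudo systemctl restart ollama
journalctl -u ollama -f        # slot-processing details now appear in the runner log

# Alternatively, when running the server manually:
OLLAMA_DEBUG=1 ollama serve
```

This is an environment-configuration fragment, not a definitive procedure; the service name `ollama.service` matches the default Linux install but may differ on your system.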

@mann1x commented on GitHub (Aug 14, 2024):

Yes, it's the q6_k quant.

For the next model I quantize, I'll enable the debug logs to see if they give more insight.

<!-- gh-comment-id:2288502176 -->

@charsleysa commented on GitHub (Aug 16, 2024):

@mann1x from your logs, it looks like the NVIDIA GPUs aren't being detected and it's running on the CPU:

```
Aug 14 08:07:15 ... msg="looking for compatible GPUs"
Aug 14 08:07:15 ... msg="no nvidia devices detected" library=/usr/lib/x86_64-linux-gnu/libcuda.so.550.54.15
Aug 14 08:07:15 ... msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amd gpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Aug 14 08:07:15 ... msg="amdgpu too old gfx000" gpu=0
Aug 14 08:07:15 ... msg="amdgpu too old gfx000" gpu=1
Aug 14 08:07:15 ... msg="no compatible amdgpu devices detected"
```
<!-- gh-comment-id:2293582451 -->

@mann1x commented on GitHub (Aug 16, 2024):

@charsleysa
Thanks, I missed it. That happens sometimes after computing the imatrix,
but as you can see later in the log, it loads the model with CUDA without any issue.
I have to pick a better example, but right now I have some real hardware issues and I can't switch the machine off remotely.

<!-- gh-comment-id:2293847205 -->

@dhiltgen commented on GitHub (Aug 18, 2024):

@mann1x you should be able to use `ollama ps` to see whether it loaded on GPU or CPU. It sounds like you're landing on CPU sometimes, and perhaps that correlates with the slower responses?

<!-- gh-comment-id:2295315637 -->
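A quick check along those lines (the output shape below is hypothetical and may vary between ollama versions; the PROCESSOR column is what reports the CPU/GPU split):

```shell
ollama ps
# NAME                       ID           SIZE     PROCESSOR          UNTIL
# mannix/llama3.1-70b:q6_k   <model id>   57 GB    28%/72% CPU/GPU    4 minutes from now
```

Anything other than "100% GPU" in the PROCESSOR column means part of the model is being evaluated on the CPU.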

@mann1x commented on GitHub (Aug 18, 2024):

@dhiltgen
I'm suffering the usual NVIDIA Linux driver issues: running llama-imatrix for a while causes the driver to start acting up. I get this "no nvidia gpu found" error after a while, but then the model clearly loads and runs on the GPU.
I will provide a better example once I have another big model to upload.
I have seen this behavior dozens of times while uploading. I usually run 22 tests for each quant: 2 rounds, each of 1 `ollama run` plus 10 curl generate calls.
At some point I had to reduce that to 1+4 for the big models; it was getting incredibly slow, a massive difference from a while ago.

Regarding the execution-time delta: whether the model is fully loaded on the GPU shouldn't matter. The first test with `ollama run` is the same as the curl calls, except for seed and temperature.
When I see this issue the delta is very big, too big; the different temperature and seed don't justify it.
There's always some difference, but not that much.
GPU or not, the execution time for the tests should be more or less the same; that's how it behaves when it works.

I suspect one trigger is the model not running fully on the GPU; at least for me, I've seen it most often with models that don't fit fully on the GPU.

<!-- gh-comment-id:2295334965 -->

@mann1x commented on GitHub (Aug 22, 2024):

@dhiltgen
I will close the issue; at this point I'm pretty sure it was the usual NVIDIA Linux driver mess.
I also don't see complaints in Discord anymore, so probably everyone else solved it by updating their drivers as well.

Luckily the 560 release finally works on Debian. I managed to update everything and I'm back to normal; I can test even 70b models with 10 iterations without issues, as before.

For reference: I was using the 550 release runfile and it worked for a long time without any issue.
The only component outside the runfile installer was the NVIDIA container support for Docker.
It kept updating to the latest version with apt, and indeed it completely stopped working when I installed the very latest update.
Now everything is installed via apt with the open driver and works flawlessly.

<!-- gh-comment-id:2304972166 -->

@marcnaweb commented on GitHub (May 23, 2025):

I am seeing similar behavior using the API endpoint. https://stackoverflow.com/questions/79635491/ollama-total-duration-is-bigger-than-load-duration-prompt-eval-duration-eval Apparently there is an issue with large contexts.

<!-- gh-comment-id:2904796318 -->
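The duration fields discussed in that StackOverflow post make the gap measurable. A minimal sketch with made-up sample values; the field names (`total_duration`, `load_duration`, `prompt_eval_duration`, `eval_duration`, all in nanoseconds) are the real ones from the `/api/generate` response:

```shell
# Hypothetical /api/generate response fields (values invented; names and units real):
resp='{"total_duration":100000000000,"load_duration":2000000000,"prompt_eval_duration":5000000000,"eval_duration":3000000000}'

# Seconds of total_duration NOT accounted for by load + prompt eval + eval:
echo "$resp" | jq '(.total_duration - (.load_duration + .prompt_eval_duration + .eval_duration)) / 1e9'
# → 90   (a gap this large points at time spent outside model execution)
```

With a live server, pipe the actual `curl .../api/generate` response into the same `jq` filter to see where the slow requests spend their time.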
Reference: github-starred/ollama#3986