[GH-ISSUE #3426] Ollama serve API response returns nonsense only on subsequent calls or times out #48622

Closed
opened 2026-04-28 08:57:04 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @YanWittmann on GitHub (Mar 31, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3426

What is the issue?

Upon making more than one request to the ollama serve API server, the server responds with seemingly unrelated garbage text:

curl -X POST http://localhost:11434/api/generate -d "{\"model\":\"mixtral:8x7b-instruct-v0.1-q3_K_S\",\"prompt\":\"Why is the sky blue? Just a short one-sentance description will be enough.\",\"stream\":false}" -H "Content-Type: application/json"
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:43:56.3540523Z","response":"\n Question: Let q = -26137985 + 45303185. What is q rounded to the nearest 1000000?\nAnswer: 19000000","done":true,"context":[28705,...],"total_duration":12775145300,"load_duration":528600,"prompt_eval_duration":248002000,"eval_count":56,"eval_duration":12522777000}

The first request after startup, or after switching models, works fine, but every request after that returns only nonsense. Turning streaming on or off makes no difference.

Sometimes it is even worse and the API does not respond at all to subsequent calls.

The Ollama WebUI works fine without problems.

Here's the server log
time=2024-03-31T16:47:31.235+02:00 level=INFO source=routes.go:79 msg="changing loaded model"
time=2024-03-31T16:47:32.501+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-31T16:47:32.501+02:00 level=INFO source=gpu.go:119 msg="CUDA Compute Capability detected: 8.6"
time=2024-03-31T16:47:32.502+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-31T16:47:32.502+02:00 level=INFO source=gpu.go:119 msg="CUDA Compute Capability detected: 8.6"
time=2024-03-31T16:47:32.502+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library C:\Users\yan20\AppData\Local\Temp\ollama113621716\runners\cpu_avx2\ext_server.dll
time=2024-03-31T16:47:32.507+02:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\yan20\\AppData\\Local\\Temp\\ollama113621716\\runners\\cpu_avx2\\ext_server.dll"
time=2024-03-31T16:47:32.508+02:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
llama_model_loader: loaded meta data with 19 key-value pairs and 543 tensors from C:\Users\yan20\.ollama\models\blobs\sha256-83b45bda27326a4e5402e61f7ceb67f735729332ae2714f5fe857f117fb63445 (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = lmsys
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 6656
llama_model_loader: - kv   4:                          llama.block_count u32              = 60
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 17920
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 52
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 52
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q4_0:  421 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 2048
llm_load_print_meta: n_embd           = 6656
llm_load_print_meta: n_head           = 52
llm_load_print_meta: n_head_kv        = 52
llm_load_print_meta: n_layer          = 60
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 6656
llm_load_print_meta: n_embd_v_gqa     = 6656
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 17920
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 2048
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 30B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 32.53 B
llm_load_print_meta: model size       = 17.09 GiB (4.51 BPW)
llm_load_print_meta: general.name     = lmsys
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.21 MiB
llm_load_tensors:        CPU buffer size = 17504.89 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =  3120.00 MiB
llama_new_context_with_model: KV self size  = 3120.00 MiB, K (f16): 1560.00 MiB, V (f16): 1560.00 MiB
llama_new_context_with_model:        CPU input buffer size   =    18.02 MiB
llama_new_context_with_model:        CPU compute buffer size =   260.00 MiB
llama_new_context_with_model: graph splits (measure): 1
{"function":"initialize","level":"INFO","line":440,"msg":"initializing slots","n_slots":1,"tid":"19792","timestamp":1711896456}
{"function":"initialize","level":"INFO","line":452,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"19792","timestamp":1711896456}
time=2024-03-31T16:47:36.418+02:00 level=INFO source=dyn_ext_server.go:162 msg="Starting llama main loop"
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"1856","timestamp":1711896456}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1828,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":58,"slot_id":0,"task_id":0,"tid":"1856","timestamp":1711896456}
{"function":"update_slots","level":"INFO","line":1852,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"1856","timestamp":1711896456}
{"function":"print_timings","level":"INFO","line":264,"msg":"prompt eval time     =   16803.92 ms /    58 tokens (  289.72 ms per token,     3.45 tokens per second)","n_prompt_tokens_processed":58,"n_tokens_second":3.4515749685356214,"slot_id":0,"t_prompt_processing":16803.923,"t_token":289.72281034482756,"task_id":0,"tid":"1856","timestamp":1711896498}
{"function":"print_timings","level":"INFO","line":278,"msg":"generation eval time =   25254.01 ms /    41 runs   (  615.95 ms per token,     1.62 tokens per second)","n_decoded":41,"n_tokens_second":1.623504222991869,"slot_id":0,"t_token":615.9515853658536,"t_token_generation":25254.015,"task_id":0,"tid":"1856","timestamp":1711896498}
{"function":"print_timings","level":"INFO","line":287,"msg":"          total time =   42057.94 ms","slot_id":0,"t_prompt_processing":16803.923,"t_token_generation":25254.015,"t_total":42057.937999999995,"task_id":0,"tid":"1856","timestamp":1711896498}
{"function":"update_slots","level":"INFO","line":1660,"msg":"slot released","n_cache_tokens":99,"n_ctx":2048,"n_past":98,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"1856","timestamp":1711896498,"truncated":false}
{"function":"update_slots","level":"INFO","line":1590,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"1856","timestamp":1711896498}
[GIN] 2024/03/31 - 16:48:18 | 200 |   47.2460699s |       127.0.0.1 | POST     "/api/generate"
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":44,"tid":"1856","timestamp":1711896509}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1828,"msg":"slot progression","n_past":58,"n_past_se":0,"n_prompt_tokens_processed":0,"slot_id":0,"task_id":44,"tid":"1856","timestamp":1711896509}
{"function":"update_slots","level":"INFO","line":1839,"msg":"we have to evaluate at least 1 token to generate logits","slot_id":0,"task_id":44,"tid":"1856","timestamp":1711896509}
{"function":"update_slots","level":"INFO","line":1852,"msg":"kv cache rm [p0, end)","p0":57,"slot_id":0,"task_id":44,"tid":"1856","timestamp":1711896509}
[GIN] 2024/03/31 - 16:49:05 | 200 |   36.1949878s |       127.0.0.1 | POST     "/api/generate"
{"function":"update_slots","level":"INFO","line":1660,"msg":"slot released","n_cache_tokens":117,"n_ctx":2048,"n_past":116,"n_system_tokens":0,"slot_id":0,"task_id":44,"tid":"1856","timestamp":1711896545,"truncated":false}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":106,"tid":"1856","timestamp":1711896546}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1828,"msg":"slot progression","n_past":58,"n_past_se":0,"n_prompt_tokens_processed":0,"slot_id":0,"task_id":106,"tid":"1856","timestamp":1711896546}
{"function":"update_slots","level":"INFO","line":1839,"msg":"we have to evaluate at least 1 token to generate logits","slot_id":0,"task_id":106,"tid":"1856","timestamp":1711896546}
{"function":"update_slots","level":"INFO","line":1852,"msg":"kv cache rm [p0, end)","p0":57,"slot_id":0,"task_id":106,"tid":"1856","timestamp":1711896546}
{"function":"print_timings","level":"INFO","line":264,"msg":"prompt eval time     =     610.37 ms /     0 tokens (     inf ms per token,     0.00 tokens per second)","n_prompt_tokens_processed":0,"n_tokens_second":0.0,"slot_id":0,"t_prompt_processing":610.374,"t_token":null,"task_id":106,"tid":"1856","timestamp":1711896547}
{"function":"print_timings","level":"INFO","line":278,"msg":"generation eval time =     602.92 ms /     2 runs   (  301.46 ms per token,     3.32 tokens per second)","n_decoded":2,"n_tokens_second":3.3171676695570254,"slot_id":0,"t_token":301.462,"t_token_generation":602.924,"task_id":106,"tid":"1856","timestamp":1711896547}
{"function":"print_timings","level":"INFO","line":287,"msg":"          total time =    1213.30 ms","slot_id":0,"t_prompt_processing":610.374,"t_token_generation":602.924,"t_total":1213.298,"task_id":106,"tid":"1856","timestamp":1711896547}
{"function":"update_slots","level":"INFO","line":1660,"msg":"slot released","n_cache_tokens":60,"n_ctx":2048,"n_past":59,"n_system_tokens":0,"slot_id":0,"task_id":106,"tid":"1856","timestamp":1711896547,"truncated":false}
[GIN] 2024/03/31 - 16:49:07 | 200 |    1.2161599s |       127.0.0.1 | POST     "/api/generate"

What did you expect to see?

A response that uses the context provided in the prompt.

C:\Users\user>curl -X POST http://localhost:11434/api/generate -d "{\"model\":\"mixtral:8x7b-instruct-v0.1-q3_K_S\",\"prompt\":\"Why is the sky blue? Just a short one-sentance description will be enough.\",\"stream\":false}" -H "Content-Type: application/json"
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:39:56.8355783Z","response":" The sky appears blue because molecules in the Earth's atmosphere scatter sunlight in all directions and blue light is scattered more than other colors due to its shorter wavelength.","done":true,"context":[28705,...],"total_duration":15124835300,"load_duration":3531055100,"prompt_eval_count":28,"prompt_eval_duration":4215357000,"eval_count":35,"eval_duration":7373269000}

Steps to reproduce

  • Start the server using ollama serve
  • Make a first request to any model; you will get a meaningful response here.
C:\Users\user>curl -X POST http://localhost:11434/api/generate -d "{\"model\":\"mixtral:8x7b-instruct-v0.1-q3_K_S\",\"prompt\":\"Why is the sky blue? Just a short one-sentance description will be enough.\",\"stream\":false}" -H "Content-Type: application/json"
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:39:56.8355783Z","response":" The sky appears blue because molecules in the Earth's atmosphere scatter sunlight in all directions and blue light is scattered more than other colors due to its shorter wavelength.","done":true,"context":[28705,...],"total_duration":15124835300,"load_duration":3531055100,"prompt_eval_count":28,"prompt_eval_duration":4215357000,"eval_count":35,"eval_duration":7373269000}
  • Make a second request to the same model; this time the response is nonsense. Which model you use does not matter, as long as the second request goes to the same model as the first.
C:\Users\user>curl -X POST http://localhost:11434/api/generate -d "{\"model\":\"mixtral:8x7b-instruct-v0.1-q3_K_S\",\"prompt\":\"Why is the sky blue? Just a short one-sentance description will be enough.\",\"stream\":false}" -H "Content-Type: application/json"
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:37:20.5472248Z","response":"\n Username: Administrator\n Password: {873f6431-a015-4e98-b86d-93aed073cfc6}\n```\n\nAfter I entered the above information, it works. But the error still appears if I use a new computer without these settings.\n\nAny ideas?\n\nComment: Have you tried installing it from an elevated command prompt?\n\nComment: Yes, I have installed it by using admin user account.\n\n## Answer (0)\n\nThe \"Access is denied\" error is thrown when there is insufficient permissions to access the registry key `HKEY_CURRENT_USER\\Control Panel\\International`. This can happen because of two reasons:\n\n1. The current user doesn't have read/write access to this registry key\n2....","done":true,"context":[28705,...],"total_duration":28009589800,"load_duration":523300,"prompt_eval_duration":79120000,"eval_count":820,"eval_duration":27926267000}

The same happens with another model:

C:\Users\user>curl -X POST http://localhost:11434/api/generate -d "{\"model\":\"vicuna:33b\",\"prompt\":\"Why is the sky blue? Just a short one-sentance description will be enough.\",\"stream\":false}" -H "Content-Type: application/json"
{"model":"vicuna:33b","created_at":"2024-03-31T14:32:43.4994468Z","response":"\n1. Name: \"Battle Chef Brigade\"\n2. Genre: Cooking Competition / Action Adventure\n3. Platform: PC, Nintendo Switch, Xbox One, and PlayStation 4\n4. Release Date: May 2018 (Nintendo","done":true,"context":[319,...],"total_duration":5752960100,"load_duration":1083700,"prompt_eval_count":16,"prompt_eval_duration":867577000,"eval_count":62,"eval_duration":4880727000}
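The two-step repro above can also be scripted. A minimal stdlib-only Python sketch (the model name and prompt are just the ones used in this report; the endpoint is the default ollama serve address):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default ollama serve endpoint

def build_payload(model: str, prompt: str) -> bytes:
    """Build the non-streaming /api/generate request body."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Two identical back-to-back requests; on an affected setup the second
    # response is unrelated to the prompt.
    for i in (1, 2):
        print(f"--- request {i} ---")
        try:
            print(generate("mixtral:8x7b-instruct-v0.1-q3_K_S",
                           "Why is the sky blue? One sentence is enough."))
        except OSError as e:
            print(f"(ollama serve not reachable: {e})")
```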

Are there any recent changes that introduced the issue?

No response

OS

Windows

Architecture

amd64

Platform

No response

Ollama version

0.1.29

GPU

Nvidia

GPU info

NVIDIA GeForce RTX 3090

CPU

AMD

Other software

AMD Ryzen 7 3800XT 8-Core Processor

Originally created by @YanWittmann on GitHub (Mar 31, 2024). Original GitHub issue: https://github.com/ollama/ollama/issues/3426 ### What is the issue? Upon making more than one request to the `ollama serve` API server, the server will respond with seemingly garbage text: ```batch curl -X POST http://localhost:11434/api/generate -d "{\"model\":\"mixtral:8x7b-instruct-v0.1-q3_K_S\",\"prompt\":\"Why is the sky blue? Just a short one-sentance description will be enough.\",\"stream\":false}" -H "Content-Type: application/json" {"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:43:56.3540523Z","response":"\n Question: Let q = -26137985 + 45303185. What is q rounded to the nearest 1000000?\nAnswer: 19000000","done":true,"context":[28705,...],"total_duration":12775145300,"load_duration":528600,"prompt_eval_duration":248002000,"eval_count":56,"eval_duration":12522777000} ``` The first time the request is made or upon switching models, everything works fine the first time, but the times after that, only nonsense is returned. Turning on or off streaming does not matter. Sometimes, it is even worse and the API does not respond at all for subsequent calls. The Ollama WebUI works fine without problems. 
<details> <summary>Here's the server log</summary> ``` time=2024-03-31T16:47:31.235+02:00 level=INFO source=routes.go:79 msg="changing loaded model" time=2024-03-31T16:47:32.501+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2" time=2024-03-31T16:47:32.501+02:00 level=INFO source=gpu.go:119 msg="CUDA Compute Capability detected: 8.6" time=2024-03-31T16:47:32.502+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2" time=2024-03-31T16:47:32.502+02:00 level=INFO source=gpu.go:119 msg="CUDA Compute Capability detected: 8.6" time=2024-03-31T16:47:32.502+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2" loading library C:\Users\yan20\AppData\Local\Temp\ollama113621716\runners\cpu_avx2\ext_server.dll time=2024-03-31T16:47:32.507+02:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\yan20\\AppData\\Local\\Temp\\ollama113621716\\runners\\cpu_avx2\\ext_server.dll"time=2024-03-31T16:47:32.508+02:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server" llama_model_loader: loaded meta data with 19 key-value pairs and 543 tensors from C:\Users\yan20\.ollama\models\blobs\sha256-83b45bda27326a4e5402e61f7ceb67f735729332ae2714f5fe857f117fb63445 (version GGUF V2) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str = lmsys llama_model_loader: - kv 2: llama.context_length u32 = 2048 llama_model_loader: - kv 3: llama.embedding_length u32 = 6656 llama_model_loader: - kv 4: llama.block_count u32 = 60 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 17920 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 7: llama.attention.head_count u32 = 52 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 52 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001 llama_model_loader: - kv 10: general.file_type u32 = 2 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 2 llama_model_loader: - kv 17: tokenizer.ggml.padding_token_id u32 = 0 llama_model_loader: - kv 18: general.quantization_version u32 = 2 llama_model_loader: - type f32: 121 tensors llama_model_loader: - type q4_0: 421 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: special tokens definition check successful ( 259/32000 ). 
llm_load_print_meta: format = GGUF V2 llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32000 llm_load_print_meta: n_merges = 0 llm_load_print_meta: n_ctx_train = 2048 llm_load_print_meta: n_embd = 6656 llm_load_print_meta: n_head = 52 llm_load_print_meta: n_head_kv = 52 llm_load_print_meta: n_layer = 60 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 1 llm_load_print_meta: n_embd_k_gqa = 6656 llm_load_print_meta: n_embd_v_gqa = 6656 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-06 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: n_ff = 17920 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 10000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_yarn_orig_ctx = 2048 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: model type = 30B llm_load_print_meta: model ftype = Q4_0 llm_load_print_meta: model params = 32.53 B llm_load_print_meta: model size = 17.09 GiB (4.51 BPW) llm_load_print_meta: general.name = lmsys llm_load_print_meta: BOS token = 1 '<s>' llm_load_print_meta: EOS token = 2 '</s>' llm_load_print_meta: UNK token = 0 '<unk>' llm_load_print_meta: PAD token = 0 '<unk>' llm_load_print_meta: LF token = 13 '<0x0A>' llm_load_tensors: ggml ctx size = 0.21 MiB llm_load_tensors: CPU buffer size = 17504.89 MiB .................................................................................................... 
llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: freq_base = 10000.0 llama_new_context_with_model: freq_scale = 1 llama_kv_cache_init: CPU KV buffer size = 3120.00 MiB llama_new_context_with_model: KV self size = 3120.00 MiB, K (f16): 1560.00 MiB, V (f16): 1560.00 MiB llama_new_context_with_model: CPU input buffer size = 18.02 MiB llama_new_context_with_model: CPU compute buffer size = 260.00 MiB llama_new_context_with_model: graph splits (measure): 1 {"function":"initialize","level":"INFO","line":440,"msg":"initializing slots","n_slots":1,"tid":"19792","timestamp":1711896456} {"function":"initialize","level":"INFO","line":452,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"19792","timestamp":1711896456} time=2024-03-31T16:47:36.418+02:00 level=INFO source=dyn_ext_server.go:162 msg="Starting llama main loop" {"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"1856","timestamp":1711896456} {"function":"update_slots","ga_i":0,"level":"INFO","line":1828,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":58,"slot_id":0,"task_id":0,"tid":"1856","timestamp":1711896456} {"function":"update_slots","level":"INFO","line":1852,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"1856","timestamp":1711896456} {"function":"print_timings","level":"INFO","line":264,"msg":"prompt eval time = 16803.92 ms / 58 tokens ( 289.72 ms per token, 3.45 tokens per second)","n_prompt_tokens_processed":58,"n_tokens_second":3.4515749685356214,"slot_id":0,"t_prompt_processing":16803.923,"t_token":289.72281034482756,"task_id":0,"tid":"1856","timestamp":1711896498} {"function":"print_timings","level":"INFO","line":278,"msg":"generation eval time = 25254.01 ms / 41 runs ( 615.95 ms per token, 1.62 tokens per 
second)","n_decoded":41,"n_tokens_second":1.623504222991869,"slot_id":0,"t_token":615.9515853658536,"t_token_generation":25254.015,"task_id":0,"tid":"1856","timestamp":1711896498} {"function":"print_timings","level":"INFO","line":287,"msg":" total time = 42057.94 ms","slot_id":0,"t_prompt_processing":16803.923,"t_token_generation":25254.015,"t_total":42057.937999999995,"task_id":0,"tid":"1856","timestamp":1711896498} {"function":"update_slots","level":"INFO","line":1660,"msg":"slot released","n_cache_tokens":99,"n_ctx":2048,"n_past":98,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"1856","timestamp":1711896498,"truncated":false} {"function":"update_slots","level":"INFO","line":1590,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"1856","timestamp":1711896498} [GIN] 2024/03/31 - 16:48:18 | 200 | 47.2460699s | 127.0.0.1 | POST "/api/generate" {"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":44,"tid":"1856","timestamp":1711896509} {"function":"update_slots","ga_i":0,"level":"INFO","line":1828,"msg":"slot progression","n_past":58,"n_past_se":0,"n_prompt_tokens_processed":0,"slot_id":0,"task_id":44,"tid":"1856","timestamp":1711896509} {"function":"update_slots","level":"INFO","line":1839,"msg":"we have to evaluate at least 1 token to generate logits","slot_id":0,"task_id":44,"tid":"1856","timestamp":1711896509} {"function":"update_slots","level":"INFO","line":1852,"msg":"kv cache rm [p0, end)","p0":57,"slot_id":0,"task_id":44,"tid":"1856","timestamp":1711896509} [GIN] 2024/03/31 - 16:49:05 | 200 | 36.1949878s | 127.0.0.1 | POST "/api/generate" {"function":"update_slots","level":"INFO","line":1660,"msg":"slot released","n_cache_tokens":117,"n_ctx":2048,"n_past":116,"n_system_tokens":0,"slot_id":0,"task_id":44,"tid":"1856","timestamp":1711896545,"truncated":false} {"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing 
task","slot_id":0,"task_id":106,"tid":"1856","timestamp":1711896546} {"function":"update_slots","ga_i":0,"level":"INFO","line":1828,"msg":"slot progression","n_past":58,"n_past_se":0,"n_prompt_tokens_processed":0,"slot_id":0,"task_id":106,"tid":"1856","timestamp":1711896546} {"function":"update_slots","level":"INFO","line":1839,"msg":"we have to evaluate at least 1 token to generate logits","slot_id":0,"task_id":106,"tid":"1856","timestamp":1711896546} {"function":"update_slots","level":"INFO","line":1852,"msg":"kv cache rm [p0, end)","p0":57,"slot_id":0,"task_id":106,"tid":"1856","timestamp":1711896546} {"function":"print_timings","level":"INFO","line":264,"msg":"prompt eval time = 610.37 ms / 0 tokens ( inf ms per token, 0.00 tokens per second)","n_prompt_tokens_processed":0,"n_tokens_second":0.0,"slot_id":0,"t_prompt_processing":610.374,"t_token":null,"task_id":106,"tid":"1856","timestamp":1711896547} {"function":"print_timings","level":"INFO","line":278,"msg":"generation eval time = 602.92 ms / 2 runs ( 301.46 ms per token, 3.32 tokens per second)","n_decoded":2,"n_tokens_second":3.3171676695570254,"slot_id":0,"t_token":301.462,"t_token_generation":602.924,"task_id":106,"tid":"1856","timestamp":1711896547} {"function":"print_timings","level":"INFO","line":287,"msg":" total time = 1213.30 ms","slot_id":0,"t_prompt_processing":610.374,"t_token_generation":602.924,"t_total":1213.298,"task_id":106,"tid":"1856","timestamp":1711896547} {"function":"update_slots","level":"INFO","line":1660,"msg":"slot released","n_cache_tokens":60,"n_ctx":2048,"n_past":59,"n_system_tokens":0,"slot_id":0,"task_id":106,"tid":"1856","timestamp":1711896547,"truncated":false} [GIN] 2024/03/31 - 16:49:07 | 200 | 1.2161599s | 127.0.0.1 | POST "/api/generate" ``` </details> ### What did you expect to see? A response that uses the context provided in the prompt. 
### Steps to reproduce

- Start up the server using `ollama serve`.
- Make a first request to any model; you will get a meaningful response:

```batch
C:\Users\user>curl -X POST http://localhost:11434/api/generate -d "{\"model\":\"mixtral:8x7b-instruct-v0.1-q3_K_S\",\"prompt\":\"Why is the sky blue? Just a short one-sentance description will be enough.\",\"stream\":false}" -H "Content-Type: application/json"
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:39:56.8355783Z","response":" The sky appears blue because molecules in the Earth's atmosphere scatter sunlight in all directions and blue light is scattered more than other colors due to its shorter wavelength.","done":true,"context":[28705,...],"total_duration":15124835300,"load_duration":3531055100,"prompt_eval_count":28,"prompt_eval_duration":4215357000,"eval_count":35,"eval_duration":7373269000}
```

- Make a second request to the same model; this time the response is nonsense. It does not matter which model you use, as long as the second request goes to the same model as the first:

```batch
C:\Users\user>curl -X POST http://localhost:11434/api/generate -d "{\"model\":\"mixtral:8x7b-instruct-v0.1-q3_K_S\",\"prompt\":\"Why is the sky blue? Just a short one-sentance description will be enough.\",\"stream\":false}" -H "Content-Type: application/json"
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:37:20.5472248Z","response":"\n Username: Administrator\n Password: {873f6431-a015-4e98-b86d-93aed073cfc6}\n```\n\nAfter I entered the above information, it works. But the error still appears if I use a new computer without these settings.\n\nAny ideas?\n\nComment: Have you tried installing it from an elevated command prompt?\n\nComment: Yes, I have installed it by using admin user account.\n\n## Answer (0)\n\nThe \"Access is denied\" error is thrown when there is insufficient permissions to access the registry key `HKEY_CURRENT_USER\\Control Panel\\International`. This can happen because of two reasons:\n\n1. The current user doesn't have read/write access to this registry key\n2....","done":true,"context":[28705,...],"total_duration":28009589800,"load_duration":523300,"prompt_eval_duration":79120000,"eval_count":820,"eval_duration":27926267000}
```

or, for another model, the same happens:

```batch
C:\Users\user>curl -X POST http://localhost:11434/api/generate -d "{\"model\":\"vicuna:33b\",\"prompt\":\"Why is the sky blue? Just a short one-sentance description will be enough.\",\"stream\":false}" -H "Content-Type: application/json"
{"model":"vicuna:33b","created_at":"2024-03-31T14:32:43.4994468Z","response":"\n1. Name: \"Battle Chef Brigade\"\n2. Genre: Cooking Competition / Action Adventure\n3. Platform: PC, Nintendo Switch, Xbox One, and PlayStation 4\n4. Release Date: May 2018 (Nintendo","done":true,"context":[319,...],"total_duration":5752960100,"load_duration":1083700,"prompt_eval_count":16,"prompt_eval_duration":867577000,"eval_count":62,"eval_duration":4880727000}
```

### Are there any recent changes that introduced the issue?

_No response_

### OS

Windows

### Architecture

amd64

### Platform

_No response_

### Ollama version

0.1.29

### GPU

Nvidia

### GPU info

NVIDIA GeForce RTX 3090

### CPU

AMD

### Other software

AMD Ryzen 7 3800XT 8-Core Processor
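For repeated testing, the two-request reproduction in the steps above can be scripted. The sketch below is a minimal helper, not part of the report; it assumes the default `localhost:11434` address and a locally pulled model, and only builds and sends the same non-streaming `/api/generate` request documented in the curl examples:

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434"  # default Ollama address; adjust if needed

def build_payload(prompt, model="mixtral:8x7b-instruct-v0.1-q3_K_S", stream=False):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(prompt, **kw):
    """POST a non-streaming generate request and return the 'response' field."""
    body = json.dumps(build_payload(prompt, **kw)).encode()
    req = urllib.request.Request(
        f"{OLLAMA}/api/generate", data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage against a running server (commented out so the module imports cleanly):
#   prompt = "Why is the sky blue? Just a short one-sentence description will be enough."
#   print("first: ", generate(prompt))
#   print("second:", generate(prompt))  # per this report, the second call returns nonsense
```

Running the two calls back to back makes it easy to diff the first (sensible) and second (garbage) responses while bisecting versions or settings.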
GiteaMirror added the bug label 2026-04-28 08:57:04 -05:00

@YanWittmann commented on GitHub (Mar 31, 2024):

Note that chat completion behaves in the same way:

```batch
C:\Users\user>curl -X POST http://localhost:11434/api/chat -H "Content-Type: application/json" -d "{\"model\":\"mixtral:8x7b-instruct-v0.1-q3_K_S\",\"messages\":[{\"role\":\"user\",\"content\":\"Why is the sky blue? Just a short one-sentence description will be enough.\"}],\"stream\":false}"
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:55:30.9278435Z","message":{"role":"assistant","content":" The protagonist, Alice, falls down a rabbit hole and enters a strange world filled with anthropomorphic animals where she participates in numerous illogical adventures."},"done":true,"total_duration":9592204200,"load_duration":1003900,"prompt_eval_count":14,"prompt_eval_duration":2047014000,"eval_count":35,"eval_duration":7539512000}
```

or with streaming enabled (the default):

```batch
C:\Users\user>curl -X POST http://localhost:11434/api/chat -H "Content-Type: application/json" -d "{\"model\":\"mixtral:8x7b-instruct-v0.1-q3_K_S\",\"messages\":[{\"role\":\"user\",\"content\":\"Why is the sky blue? Just a short one-sentence description will be enough.\"}]}"
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:56:31.5547871Z","message":{"role":"assistant","content":" The"},"done":false}
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:56:31.774966Z","message":{"role":"assistant","content":" book"},"done":false}
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:56:31.9895566Z","message":{"role":"assistant","content":" \""},"done":false}
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:56:32.201079Z","message":{"role":"assistant","content":"The"},"done":false}
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:56:32.4202749Z","message":{"role":"assistant","content":" G"},"done":false}
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:56:32.6377171Z","message":{"role":"assistant","content":"uns"},"done":false}
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:56:32.8568363Z","message":{"role":"assistant","content":" of"},"done":false}
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:56:33.0696926Z","message":{"role":"assistant","content":" August"},"done":false}
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:56:33.2861858Z","message":{"role":"assistant","content":"\""},"done":false}
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:56:33.5022013Z","message":{"role":"assistant","content":" by"},"done":false}
{"model":"mixtral:8x7b-instruct-v0.1-q3_K_S","created_at":"2024-03-31T14:56:33.7204166Z","message":{"role":"assistant","content":" Barbara"},"done":false}
```
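Each streamed line above is a standalone JSON object (newline-delimited JSON). A minimal sketch of reassembling the full assistant message from such lines, using a hypothetical helper not part of Ollama:

```python
import json

def assemble_stream(lines):
    """Join the 'content' fragments of streaming /api/chat responses
    (one JSON object per line) into the complete assistant message."""
    parts = []
    for line in lines:
        if not line.strip():
            continue  # skip blank keep-alive lines, if any
        chunk = json.loads(line)
        parts.append(chunk["message"]["content"])
        if chunk.get("done"):
            break  # final chunk reached
    return "".join(parts)

# Two fragments taken from this report's streaming output:
sample = [
    '{"message":{"role":"assistant","content":" The"},"done":false}',
    '{"message":{"role":"assistant","content":" book"},"done":false}',
]
print(assemble_stream(sample))  # prints " The book"
```

Reassembling the fragments this way makes it obvious that the streamed answer is off-topic as a whole, not merely garbled token by token.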

@YanWittmann commented on GitHub (Mar 31, 2024):

Reinstalling Ollama (which took quite a while with all my models...) seems to have solved the issue for now. I'll keep an eye on it; if it reappears, I will reopen this issue.

Reference: github-starred/ollama#48622