[GH-ISSUE #5604] Error while running mixtral 8x7b q6 with 3x 7900 XTX #65541

Closed
opened 2026-05-03 21:37:45 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @darwinvelez58 on GitHub (Jul 10, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5604

What is the issue?

It was working fine with 2x 7900 XTX, but after I added a third graphics card the output looks like this:

![image](https://github.com/ollama/ollama/assets/118543481/cb80249b-5974-4d43-b904-a15b147f255f)

I have tried different Ollama versions from 0.1.29 up to the latest, and none of them work.

This is my model file:

Modelfile
FROM mixtral:8x7b-instruct-v0.1-q6_K
PARAMETER num_ctx 20480

Logs:

ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 3 ROCm devices:
  Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
  Device 1: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
  Device 2: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
llm_load_tensors: ggml ctx size =    1.67 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size = 10500.53 MiB
llm_load_tensors:      ROCm1 buffer size = 10500.53 MiB
llm_load_tensors:      ROCm2 buffer size =  9648.49 MiB
llm_load_tensors:  ROCm_Host buffer size =    85.94 MiB
time=2024-07-10T15:33:57.486Z level=DEBUG source=server.go:578 msg="model load progress 0.05"
time=2024-07-10T15:33:57.737Z level=DEBUG source=server.go:578 msg="model load progress 0.06"
time=2024-07-10T15:33:57.988Z level=DEBUG source=server.go:578 msg="model load progress 0.08"
time=2024-07-10T15:33:58.239Z level=DEBUG source=server.go:578 msg="model load progress 0.12"
time=2024-07-10T15:33:58.490Z level=DEBUG source=server.go:578 msg="model load progress 0.16"
[GIN] 2024/07/10 - 15:33:58 | 200 |      568.97µs |      172.24.0.3 | GET      "/api/tags"
time=2024-07-10T15:33:58.741Z level=DEBUG source=server.go:578 msg="model load progress 0.21"
time=2024-07-10T15:33:58.991Z level=DEBUG source=server.go:578 msg="model load progress 0.25"
time=2024-07-10T15:33:59.242Z level=DEBUG source=server.go:578 msg="model load progress 0.29"
time=2024-07-10T15:33:59.496Z level=DEBUG source=server.go:578 msg="model load progress 0.34"
time=2024-07-10T15:33:59.746Z level=DEBUG source=server.go:578 msg="model load progress 0.35"
time=2024-07-10T15:33:59.997Z level=DEBUG source=server.go:578 msg="model load progress 0.40"
time=2024-07-10T15:34:00.248Z level=DEBUG source=server.go:578 msg="model load progress 0.44"
time=2024-07-10T15:34:00.750Z level=DEBUG source=server.go:578 msg="model load progress 0.49"
time=2024-07-10T15:34:01.001Z level=DEBUG source=server.go:578 msg="model load progress 0.54"
time=2024-07-10T15:34:01.251Z level=DEBUG source=server.go:578 msg="model load progress 0.58"
time=2024-07-10T15:34:01.502Z level=DEBUG source=server.go:578 msg="model load progress 0.63"
time=2024-07-10T15:34:01.753Z level=DEBUG source=server.go:578 msg="model load progress 0.67"
time=2024-07-10T15:34:02.003Z level=DEBUG source=server.go:578 msg="model load progress 0.68"
time=2024-07-10T15:34:02.254Z level=DEBUG source=server.go:578 msg="model load progress 0.70"
time=2024-07-10T15:34:02.505Z level=DEBUG source=server.go:578 msg="model load progress 0.72"
time=2024-07-10T15:34:02.755Z level=DEBUG source=server.go:578 msg="model load progress 0.74"
time=2024-07-10T15:34:03.006Z level=DEBUG source=server.go:578 msg="model load progress 0.76"
time=2024-07-10T15:34:03.257Z level=DEBUG source=server.go:578 msg="model load progress 0.78"
time=2024-07-10T15:34:03.508Z level=DEBUG source=server.go:578 msg="model load progress 0.80"
time=2024-07-10T15:34:03.759Z level=DEBUG source=server.go:578 msg="model load progress 0.82"
time=2024-07-10T15:34:04.009Z level=DEBUG source=server.go:578 msg="model load progress 0.83"
time=2024-07-10T15:34:04.260Z level=DEBUG source=server.go:578 msg="model load progress 0.85"
time=2024-07-10T15:34:04.511Z level=DEBUG source=server.go:578 msg="model load progress 0.87"
time=2024-07-10T15:34:04.762Z level=DEBUG source=server.go:578 msg="model load progress 0.89"
time=2024-07-10T15:34:05.013Z level=DEBUG source=server.go:578 msg="model load progress 0.91"
time=2024-07-10T15:34:05.263Z level=DEBUG source=server.go:578 msg="model load progress 0.93"
time=2024-07-10T15:34:05.514Z level=DEBUG source=server.go:578 msg="model load progress 0.95"
time=2024-07-10T15:34:05.765Z level=DEBUG source=server.go:578 msg="model load progress 0.96"
time=2024-07-10T15:34:06.015Z level=DEBUG source=server.go:578 msg="model load progress 0.98"
time=2024-07-10T15:34:06.266Z level=DEBUG source=server.go:578 msg="model load progress 1.00"
time=2024-07-10T15:34:06.516Z level=DEBUG source=server.go:581 msg="model load completed, waiting for server to become available" status="llm server loading model"
time=2024-07-10T15:34:07.218Z level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server not responding"
time=2024-07-10T15:34:07.636Z level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx      = 16384
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =   704.00 MiB
llama_kv_cache_init:      ROCm1 KV buffer size =   704.00 MiB
llama_kv_cache_init:      ROCm2 KV buffer size =   640.00 MiB
llama_new_context_with_model: KV self size  = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_new_context_with_model:  ROCm_Host  output buffer size =     0.14 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model:      ROCm0 compute buffer size =  1216.01 MiB
llama_new_context_with_model:      ROCm1 compute buffer size =  1216.01 MiB
llama_new_context_with_model:      ROCm2 compute buffer size =  1216.02 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =   136.02 MiB
llama_new_context_with_model: graph nodes  = 1510
llama_new_context_with_model: graph splits = 4
[GIN] 2024/07/10 - 15:34:11 | 200 |     384.163µs |      172.24.0.3 | GET      "/api/tags"
DEBUG [initialize] initializing slots | n_slots=1 tid="140245709655104" timestamp=1720625653
DEBUG [initialize] new slot | n_ctx_slot=16384 slot_id=0 tid="140245709655104" timestamp=1720625653
INFO [main] model loaded | tid="140245709655104" timestamp=1720625653
DEBUG [update_slots] all slots are idle and system prompt is empty, clear the KV cache | tid="140245709655104" timestamp=1720625653
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=0 tid="140245709655104" timestamp=1720625654
time=2024-07-10T15:34:14.028Z level=INFO source=server.go:572 msg="llama runner started in 26.07 seconds"
time=2024-07-10T15:34:14.028Z level=DEBUG source=sched.go:351 msg="finished setting up runner" model=/root/.ollama/models/blobs/sha256-aab5de7eb918ab6fd04bb1ff4e4de33745919de3832e817f5c68112ac0d100ac
time=2024-07-10T15:34:14.028Z level=DEBUG source=prompt.go:172 msg="prompt now fits in context window" required=1 window=16384
[GIN] 2024/07/10 - 15:34:14 | 200 | 26.814452514s |       127.0.0.1 | POST     "/api/chat"
time=2024-07-10T15:34:14.028Z level=DEBUG source=sched.go:355 msg="context for request finished"
time=2024-07-10T15:34:14.028Z level=DEBUG source=sched.go:237 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-aab5de7eb918ab6fd04bb1ff4e4de33745919de3832e817f5c68112ac0d100ac duration=5m0s
time=2024-07-10T15:34:14.028Z level=DEBUG source=sched.go:255 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-aab5de7eb918ab6fd04bb1ff4e4de33745919de3832e817f5c68112ac0d100ac refCount=0
time=2024-07-10T15:34:18.398Z level=DEBUG source=sched.go:446 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-aab5de7eb918ab6fd04bb1ff4e4de33745919de3832e817f5c68112ac0d100ac
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=1 tid="140245709655104" timestamp=1720625658
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=2 tid="140245709655104" timestamp=1720625658
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=33722 status=200 tid="140241595713280" timestamp=1720625658
time=2024-07-10T15:34:18.486Z level=DEBUG source=prompt.go:172 msg="prompt now fits in context window" required=9 window=16384
time=2024-07-10T15:34:18.486Z level=DEBUG source=routes.go:1305 msg="chat handler" prompt="[INST] hi [/INST]" images=0
time=2024-07-10T15:34:18.486Z level=DEBUG source=server.go:668 msg="setting token limit to 10x num_ctx" num_ctx=16384 num_predict=163840
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=3 tid="140245709655104" timestamp=1720625658
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=4 tid="140245709655104" timestamp=1720625658
DEBUG [update_slots] slot progression | ga_i=0 n_past=0 n_past_se=0 n_prompt_tokens_processed=9 slot_id=0 task_id=4 tid="140245709655104" timestamp=1720625658
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=4 tid="140245709655104" timestamp=1720625658
[GIN] 2024/07/10 - 15:34:24 | 200 |     362.291µs |      172.24.0.3 | GET      "/api/tags"
time=2024-07-10T15:34:26.140Z level=DEBUG source=sched.go:304 msg="context for request finished"
time=2024-07-10T15:34:26.140Z level=DEBUG source=sched.go:237 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-aab5de7eb918ab6fd04bb1ff4e4de33745919de3832e817f5c68112ac0d100ac duration=5m0s
time=2024-07-10T15:34:26.140Z level=DEBUG source=sched.go:255 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-aab5de7eb918ab6fd04bb1ff4e4de33745919de3832e817f5c68112ac0d100ac refCount=0
[GIN] 2024/07/10 - 15:34:26 | 200 |  7.743778357s |       127.0.0.1 | POST     "/api/chat"
DEBUG [log_server_request] request | method="POST" params={} path="/completion" remote_addr="127.0.0.1" remote_port=33726 status=200 tid="140241587320576" timestamp=1720625666
DEBUG [update_slots] slot released | n_cache_tokens=320 n_ctx=16384 n_past=319 n_system_tokens=0 slot_id=0 task_id=4 tid="140245709655104" timestamp=1720625666 truncated=false

Output:

[root@e3a07a10bb37 ~]# ollama create custom-mixtral -f Modelf
transferring model data 
using existing layer sha256:aab5de7eb918ab6fd04bb1ff4e4de33745919de3832e817f5c68112ac0d100ac 
using existing layer sha256:43070e2d4e532684de521b885f385d0841030efa2b1a20bafb76133a5e1379c1 
using existing layer sha256:c43332387573e98fdfad4a606171279955b53d891ba2500552c2984a6560ffb4 
creating new layer sha256:bcb4d835d0b5f4a6ef6e75985185c1dcc5fad0d0a2702f0ced1e650f87ea77ce 
creating new layer sha256:31859fd30e61c459884125f8ba87a3889a6d993128a7412aed3f2b7111ea3044 
writing manifest 
success 
[root@e3a07a10bb37 ~]# ollama run custom-mixtral
>>> hi
      .                                                                   1         90   11 ...                ,                    32    
            ....                1..3   
3               .62     1       1                35   ..>            .1                   9                .   3      4                        
        .1 1                   1
1.                              /.032.. .     .
4.     21                     . .11.       4      1                                               5.                .,    09           
162.0             .                21.
   .         7.                14 ,1..                ../.2   9   .3                .      139   2    140                 ..3        1    
            .11914,.1   1.     932                   .....                2       9                             
02 227      
           .                        911                2    1      1
.                     .                         1                            .   1    4 .0 .                 
13                                                                    
                             2..                1.....           0.313   ..1., 8.   /.190.      732          3                 2        1 ..^C
                             
                             
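For anyone hitting this, one way to isolate the new card is to restrict Ollama to the two GPUs that previously worked and retest. A sketch, assuming `ROCR_VISIBLE_DEVICES` is honored by your ROCm setup and that device indices 0 and 1 correspond to the first two devices in the "found 3 ROCm devices" log above:

```shell
# Limit the ROCm runtime to the first two GPUs (indices assumed from the log),
# then restart the server and retest. If output is clean again, the problem
# is specific to the 3-GPU split.
export ROCR_VISIBLE_DEVICES=0,1
ollama serve &
ollama run custom-mixtral "hi"
```

If the two-GPU run produces normal text, that would point at the multi-GPU tensor split rather than the model itself.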

@jmorganca

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.1.44-rocm

GiteaMirror added the bug label 2026-05-03 21:37:45 -05:00

@OliverStutz commented on GitHub (Jul 10, 2024):

I have the same issue, anyone has an idea what could be done?


@darwinvelez58 commented on GitHub (Jul 11, 2024):

any update for this issue? @jmorganca


@dhiltgen commented on GitHub (Jul 23, 2024):

Consolidating all the ROCm 3x issues into #5629

Reference: github-starred/ollama#65541