[GH-ISSUE #3158] ROCm setup with two 7900 XTX GPUs generates irrelevant content #1943

Closed
opened 2026-04-12 12:04:57 -05:00 by GiteaMirror · 8 comments

Originally created by @wizd on GitHub (Mar 15, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3158

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

ROCm Docker container produces gibberish output.
Screenshot 2024-03-15 082145
Screenshot 2024-03-15 082121

What did you expect to see?

Normal content

Steps to reproduce

No response

Are there any recent changes that introduced the issue?

No response

OS

Linux

Architecture

amd64

Platform

Docker

Ollama version

ollama/ollama:0.1.29-rocm

GPU

AMD

GPU info

$ rocminfo
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version:         1.1
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE
System Endianness:       LITTLE
Mwaitx:                  DISABLED
DMAbuf Support:          YES

==========
HSA Agents
==========
*******
Agent 1
*******
  Name:                    AMD Ryzen 9 7950X3D 16-Core Processor
  Uuid:                    CPU-XX
  Marketing Name:          AMD Ryzen 9 7950X3D 16-Core Processor
  Vendor Name:             CPU
  Feature:                 None specified
  Profile:                 FULL_PROFILE
  Float Round Mode:        NEAR
  Max Queue Number:        0(0x0)
  Queue Min Size:          0(0x0)
  Queue Max Size:          0(0x0)
  Queue Type:              MULTI
  Node:                    0
  Device Type:             CPU
  Cache Info:
    L1:                      32768(0x8000) KB
  Chip ID:                 0(0x0)
  ASIC Revision:           0(0x0)
  Cacheline Size:          64(0x40)
  Max Clock Freq. (MHz):   4200
  BDFID:                   0
  Internal Node ID:        0
  Compute Unit:            32
  SIMDs per CU:            0
  Shader Engines:          0
  Shader Arrs. per Eng.:   0
  WatchPts on Addr. Ranges:1
  Features:                None
  Pool Info:
    Pool 1
      Segment:                 GLOBAL; FLAGS: FINE GRAINED
      Size:                    131086376(0x7d03828) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       TRUE
    Pool 2
      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
      Size:                    131086376(0x7d03828) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       TRUE
    Pool 3
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED
      Size:                    131086376(0x7d03828) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       TRUE
  ISA Info:
*******
Agent 2
*******
  Name:                    gfx1100
  Uuid:                    GPU-aee456bdb1c699e6
  Marketing Name:          Radeon RX 7900 XTX
  Vendor Name:             AMD
  Feature:                 KERNEL_DISPATCH
  Profile:                 BASE_PROFILE
  Float Round Mode:        NEAR
  Max Queue Number:        128(0x80)
  Queue Min Size:          64(0x40)
  Queue Max Size:          131072(0x20000)
  Queue Type:              MULTI
  Node:                    1
  Device Type:             GPU
  Cache Info:
    L1:                      32(0x20) KB
    L2:                      6144(0x1800) KB
    L3:                      98304(0x18000) KB
  Chip ID:                 29772(0x744c)
  ASIC Revision:           0(0x0)
  Cacheline Size:          64(0x40)
  Max Clock Freq. (MHz):   2526
  BDFID:                   768
  Internal Node ID:        1
  Compute Unit:            96
  SIMDs per CU:            2
  Shader Engines:          6
  Shader Arrs. per Eng.:   2
  WatchPts on Addr. Ranges:4
  Coherent Host Access:    FALSE
  Features:                KERNEL_DISPATCH
  Fast F16 Operation:      TRUE
  Wavefront Size:          32(0x20)
  Workgroup Max Size:      1024(0x400)
  Workgroup Max Size per Dimension:
    x                        1024(0x400)
    y                        1024(0x400)
    z                        1024(0x400)
  Max Waves Per CU:        32(0x20)
  Max Work-item Per CU:    1024(0x400)
  Grid Max Size:           4294967295(0xffffffff)
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)
    y                        4294967295(0xffffffff)
    z                        4294967295(0xffffffff)
  Max fbarriers/Workgrp:   32
  Packet Processor uCode:: 550
  SDMA engine uCode::      19
  IOMMU Support::          None
  Pool Info:
    Pool 1
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED
      Size:                    25149440(0x17fc000) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       FALSE
    Pool 2
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    25149440(0x17fc000) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       FALSE
    Pool 3
      Segment:                 GROUP
      Size:                    64(0x40) KB
      Allocatable:             FALSE
      Alloc Granule:           0KB
      Alloc Alignment:         0KB
      Accessible by all:       FALSE
  ISA Info:
    ISA 1
      Name:                    amdgcn-amd-amdhsa--gfx1100
      Machine Models:          HSA_MACHINE_MODEL_LARGE
      Profiles:                HSA_PROFILE_BASE
      Default Rounding Mode:   NEAR
      Default Rounding Mode:   NEAR
      Fast f16:                TRUE
      Workgroup Max Size:      1024(0x400)
      Workgroup Max Size per Dimension:
        x                        1024(0x400)
        y                        1024(0x400)
        z                        1024(0x400)
      Grid Max Size:           4294967295(0xffffffff)
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)
        y                        4294967295(0xffffffff)
        z                        4294967295(0xffffffff)
      FBarrier Max Size:       32
*******
Agent 3
*******
  Name:                    gfx1100
  Uuid:                    GPU-398a3f843a146602
  Marketing Name:          Radeon RX 7900 XTX
  Vendor Name:             AMD
  Feature:                 KERNEL_DISPATCH
  Profile:                 BASE_PROFILE
  Float Round Mode:        NEAR
  Max Queue Number:        128(0x80)
  Queue Min Size:          64(0x40)
  Queue Max Size:          131072(0x20000)
  Queue Type:              MULTI
  Node:                    2
  Device Type:             GPU
  Cache Info:
    L1:                      32(0x20) KB
    L2:                      6144(0x1800) KB
    L3:                      98304(0x18000) KB
  Chip ID:                 29772(0x744c)
  ASIC Revision:           0(0x0)
  Cacheline Size:          64(0x40)
  Max Clock Freq. (MHz):   2526
  BDFID:                   2304
  Internal Node ID:        2
  Compute Unit:            96
  SIMDs per CU:            2
  Shader Engines:          6
  Shader Arrs. per Eng.:   2
  WatchPts on Addr. Ranges:4
  Coherent Host Access:    FALSE
  Features:                KERNEL_DISPATCH
  Fast F16 Operation:      TRUE
  Wavefront Size:          32(0x20)
  Workgroup Max Size:      1024(0x400)
  Workgroup Max Size per Dimension:
    x                        1024(0x400)
    y                        1024(0x400)
    z                        1024(0x400)
  Max Waves Per CU:        32(0x20)
  Max Work-item Per CU:    1024(0x400)
  Grid Max Size:           4294967295(0xffffffff)
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)
    y                        4294967295(0xffffffff)
    z                        4294967295(0xffffffff)
  Max fbarriers/Workgrp:   32
  Packet Processor uCode:: 550
  SDMA engine uCode::      19
  IOMMU Support::          None
  Pool Info:
    Pool 1
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED
      Size:                    25149440(0x17fc000) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       FALSE
    Pool 2
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    25149440(0x17fc000) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       FALSE
    Pool 3
      Segment:                 GROUP
      Size:                    64(0x40) KB
      Allocatable:             FALSE
      Alloc Granule:           0KB
      Alloc Alignment:         0KB
      Accessible by all:       FALSE
  ISA Info:
    ISA 1
      Name:                    amdgcn-amd-amdhsa--gfx1100
      Machine Models:          HSA_MACHINE_MODEL_LARGE
      Profiles:                HSA_PROFILE_BASE
      Default Rounding Mode:   NEAR
      Default Rounding Mode:   NEAR
      Fast f16:                TRUE
      Workgroup Max Size:      1024(0x400)
      Workgroup Max Size per Dimension:
        x                        1024(0x400)
        y                        1024(0x400)
        z                        1024(0x400)
      Grid Max Size:           4294967295(0xffffffff)
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)
        y                        4294967295(0xffffffff)
        z                        4294967295(0xffffffff)
      FBarrier Max Size:       32
*******
Agent 4
*******
  Name:                    gfx1036
  Uuid:                    GPU-XX
  Marketing Name:          AMD Radeon Graphics
  Vendor Name:             AMD
  Feature:                 KERNEL_DISPATCH
  Profile:                 BASE_PROFILE
  Float Round Mode:        NEAR
  Max Queue Number:        128(0x80)
  Queue Min Size:          64(0x40)
  Queue Max Size:          131072(0x20000)
  Queue Type:              MULTI
  Node:                    3
  Device Type:             GPU
  Cache Info:
    L1:                      16(0x10) KB
    L2:                      256(0x100) KB
  Chip ID:                 5710(0x164e)
  ASIC Revision:           1(0x1)
  Cacheline Size:          64(0x40)
  Max Clock Freq. (MHz):   2200
  BDFID:                   4352
  Internal Node ID:        3
  Compute Unit:            2
  SIMDs per CU:            2
  Shader Engines:          1
  Shader Arrs. per Eng.:   1
  WatchPts on Addr. Ranges:4
  Coherent Host Access:    FALSE
  Features:                KERNEL_DISPATCH
  Fast F16 Operation:      TRUE
  Wavefront Size:          32(0x20)
  Workgroup Max Size:      1024(0x400)
  Workgroup Max Size per Dimension:
    x                        1024(0x400)
    y                        1024(0x400)
    z                        1024(0x400)
  Max Waves Per CU:        32(0x20)
  Max Work-item Per CU:    1024(0x400)
  Grid Max Size:           4294967295(0xffffffff)
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)
    y                        4294967295(0xffffffff)
    z                        4294967295(0xffffffff)
  Max fbarriers/Workgrp:   32
  Packet Processor uCode:: 20
  SDMA engine uCode::      9
  IOMMU Support::          None
  Pool Info:
    Pool 1
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED
      Size:                    524288(0x80000) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       FALSE
    Pool 2
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    524288(0x80000) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       FALSE
    Pool 3
      Segment:                 GROUP
      Size:                    64(0x40) KB
      Allocatable:             FALSE
      Alloc Granule:           0KB
      Alloc Alignment:         0KB
      Accessible by all:       FALSE
  ISA Info:
    ISA 1
      Name:                    amdgcn-amd-amdhsa--gfx1036
      Machine Models:          HSA_MACHINE_MODEL_LARGE
      Profiles:                HSA_PROFILE_BASE
      Default Rounding Mode:   NEAR
      Default Rounding Mode:   NEAR
      Fast f16:                TRUE
      Workgroup Max Size:      1024(0x400)
      Workgroup Max Size per Dimension:
        x                        1024(0x400)
        y                        1024(0x400)
        z                        1024(0x400)
      Grid Max Size:           4294967295(0xffffffff)
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)
        y                        4294967295(0xffffffff)
        z                        4294967295(0xffffffff)
      FBarrier Max Size:       32
*** Done ***
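
A quick way to summarize the agent list above and spot the gfx1036 iGPU alongside the two discrete cards (a sketch, assuming grep is available on the host):

$ rocminfo | grep -E "Agent [0-9]|Marketing Name"
Agent 1
  Marketing Name:          AMD Ryzen 9 7950X3D 16-Core Processor
Agent 2
  Marketing Name:          Radeon RX 7900 XTX
Agent 3
  Marketing Name:          Radeon RX 7900 XTX
Agent 4
  Marketing Name:          AMD Radeon Graphics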

CPU

AMD

Other software

docker-compose.yml

version: '3.8'

services:
  ollama:
    volumes:
      - ./ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    privileged: true
    group_add:
      - video
    cap_add:
      - SYS_PTRACE
    security_opt:
      - seccomp:unconfined
    ipc: host
    restart: unless-stopped
    image: ollama/ollama:rocm
    devices:
      - /dev/kfd:/dev/kfd
      - /dev/dri:/dev/dri
    ports:
      - "11434:11434"
    environment:
      - HIP_VISIBLE_DEVICES=0,1

  open-webui:
    build:
      context: .
      args:
        OLLAMA_BASE_URL: '/ollama'
      dockerfile: Dockerfile
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    volumes:
      - ./open-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - 7000:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
      - 'WEBUI_SECRET_KEY='
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

volumes:
  ollama: {}
  open-webui: {}
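
Note: since the host also exposes a gfx1036 iGPU (Agent 4 in the rocminfo dump above), the device-index mapping behind HIP_VISIBLE_DEVICES=0,1 is worth double-checking; a sketch using rocm-smi on the host (the assumption that its indices match HIP's is something to verify on your own setup):

$ rocm-smi --showproductname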

No response

GiteaMirror added the bug and amd labels 2026-04-12 12:04:57 -05:00

@dhiltgen commented on GitHub (Mar 21, 2024):

If you run on a single GPU does it generate valid responses?

Can you share the server log so we can see which GPU(s) it's attaching to? I'm wondering if the "gfx1036" GPU is somehow getting used and causing problems? It may also be helpful to set OLLAMA_DEBUG=1
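
For anyone reproducing the single-GPU test, a minimal sketch based on the compose file above (the container name ollama-gpu0 is made up here; repeat with HIP_VISIBLE_DEVICES=1 for the other card):

$ docker run -d --name ollama-gpu0 \
    --device /dev/kfd --device /dev/dri \
    -v "$PWD/ollama:/root/.ollama" \
    -p 11434:11434 \
    -e HIP_VISIBLE_DEVICES=0 \
    -e OLLAMA_DEBUG=1 \
    ollama/ollama:0.1.29-rocm
$ docker logs -f ollama-gpu0    # check which GPU(s) the runner attaches to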


@wizd commented on GitHub (Mar 24, 2024):

> If you run on a single GPU does it generate valid responses?
>
> Can you share the server log so we can see which GPU(s) it's attaching to? I'm wondering if the "gfx1036" GPU is somehow getting used and causing problems? It may also be helpful to set OLLAMA_DEBUG=1

Running 2 instances, everything is good. The HIP_VISIBLE_DEVICES setting is correct.

$ cat docker-compose.yml
version: '3.8'

services:
  ollama:
    volumes:
      - ./ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    privileged: true
    group_add:
      - video
    cap_add:
      - SYS_PTRACE
    security_opt:
      - seccomp:unconfined
    ipc: host
    restart: unless-stopped
    image: ollama/ollama:rocm
    devices:
      - /dev/kfd:/dev/kfd
      - /dev/dri:/dev/dri
    ports:
      - "11434:11434"
    environment:
      - HIP_VISIBLE_DEVICES=0

  open-webui:
    build:
      context: .
      args:
        OLLAMA_BASE_URL: '/ollama'
      dockerfile: Dockerfile
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    volumes:
      - ./open-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - 7000:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
      - 'WEBUI_SECRET_KEY='
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

  ollama2:
    volumes:
      - ./ollama:/root/.ollama
    container_name: ollama2
    pull_policy: always
    privileged: true
    group_add:
      - video
    cap_add:
      - SYS_PTRACE
    security_opt:
      - seccomp:unconfined
    ipc: host
    restart: unless-stopped
    image: ollama/ollama:rocm
    devices:
      - /dev/kfd:/dev/kfd
      - /dev/dri:/dev/dri
    ports:
      - "11435:11434"
    environment:
      - HIP_VISIBLE_DEVICES=1

  open-webui2:
    build:
      context: .
      args:
        OLLAMA_BASE_URL: '/ollama'
      dockerfile: Dockerfile
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui2
    volumes:
      - ./open-webui2:/app/backend/data
    depends_on:
      - ollama2
    ports:
      - 7001:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama2:11434'
      - 'WEBUI_SECRET_KEY='
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

volumes:
  ollama: {}
  open-webui: {}
  ollama2: {}
  open-webui2: {}
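
One way to confirm each container really is pinned to a single card is to check the discovery lines in each startup log (a sketch, assuming the container names above; if HIP_VISIBLE_DEVICES is honored by discovery, each container should report a single amdgpu device rather than [0 1]):

$ docker logs ollama 2>&1 | grep "amdgpu devices"
$ docker logs ollama2 2>&1 | grep "amdgpu devices"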

@wizd commented on GitHub (Mar 24, 2024):

This is the log with both 7900 XTX cards selected and OLLAMA_DEBUG=1 set:

$ docker logs -f ollama
time=2024-03-24T14:39:32.931Z level=INFO source=images.go:806 msg="total blobs: 60"
time=2024-03-24T14:39:32.931Z level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-24T14:39:32.932Z level=INFO source=routes.go:1110 msg="Listening on [::]:11434 (version 0.1.29)"
time=2024-03-24T14:39:32.932Z level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to /tmp/ollama2408392975/runners ..."
time=2024-03-24T14:39:34.749Z level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 rocm_v60000 cpu cuda_v11]"
time=2024-03-24T14:39:34.750Z level=DEBUG source=payload_common.go:140 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-03-24T14:39:34.750Z level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-24T14:39:34.750Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-24T14:39:34.750Z level=DEBUG source=gpu.go:209 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /opt/rocm/lib/libnvidia-ml.so* /usr/local/lib/libnvidia-ml.so* /opt/rh/devtoolset-7/root/libnvidia-ml.so*]"
time=2024-03-24T14:39:34.750Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: []"
time=2024-03-24T14:39:34.750Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-24T14:39:34.750Z level=INFO source=amd_linux.go:50 msg="AMD Driver: 6.3.6"
time=2024-03-24T14:39:34.750Z level=DEBUG source=amd_linux.go:152 msg="discovering VRAM for amdgpu devices"
time=2024-03-24T14:39:34.750Z level=DEBUG source=amd_linux.go:171 msg="amdgpu devices [0 1]"
time=2024-03-24T14:39:34.750Z level=INFO source=amd_linux.go:246 msg="[0] amdgpu totalMemory 24560M"
time=2024-03-24T14:39:34.750Z level=INFO source=amd_linux.go:247 msg="[0] amdgpu freeMemory  24560M"
time=2024-03-24T14:39:34.750Z level=INFO source=amd_linux.go:246 msg="[1] amdgpu totalMemory 24560M"
time=2024-03-24T14:39:34.750Z level=INFO source=amd_linux.go:247 msg="[1] amdgpu freeMemory  24533M"
time=2024-03-24T14:39:34.750Z level=DEBUG source=gpu.go:180 msg="rocm detected 2 devices with 44184M available memory"
time=2024-03-24T14:39:46.219Z level=INFO source=routes.go:843 msg="skipping file: registry.ollama.ai/library/qwen-4b-chat-v1.5-q6_k:latest"
time=2024-03-24T14:39:46.219Z level=INFO source=routes.go:843 msg="skipping file: registry.ollama.ai/library/qwen-7b-chat-v1.5-q5_k_m:latest"
[GIN] 2024/03/24 - 14:39:46 | 200 |     1.61832ms |      172.19.0.3 | GET      "/api/tags"
time=2024-03-24T14:39:46.223Z level=INFO source=routes.go:843 msg="skipping file: registry.ollama.ai/library/qwen-4b-chat-v1.5-q6_k:latest"
time=2024-03-24T14:39:46.223Z level=INFO source=routes.go:843 msg="skipping file: registry.ollama.ai/library/qwen-7b-chat-v1.5-q5_k_m:latest"
[GIN] 2024/03/24 - 14:39:46 | 200 |    1.135124ms |      172.19.0.3 | GET      "/api/tags"
time=2024-03-24T14:39:47.271Z level=INFO source=routes.go:843 msg="skipping file: registry.ollama.ai/library/qwen-4b-chat-v1.5-q6_k:latest"
time=2024-03-24T14:39:47.271Z level=INFO source=routes.go:843 msg="skipping file: registry.ollama.ai/library/qwen-7b-chat-v1.5-q5_k_m:latest"
[GIN] 2024/03/24 - 14:39:47 | 200 |    1.573496ms |      172.19.0.3 | GET      "/api/tags"
time=2024-03-24T14:39:53.949Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-24T14:39:53.949Z level=INFO source=amd_linux.go:50 msg="AMD Driver: 6.3.6"
time=2024-03-24T14:39:53.949Z level=DEBUG source=amd_linux.go:152 msg="discovering VRAM for amdgpu devices"
time=2024-03-24T14:39:53.949Z level=DEBUG source=amd_linux.go:171 msg="amdgpu devices [0 1]"
time=2024-03-24T14:39:53.949Z level=INFO source=amd_linux.go:246 msg="[0] amdgpu totalMemory 24560M"
time=2024-03-24T14:39:53.949Z level=INFO source=amd_linux.go:247 msg="[0] amdgpu freeMemory  24560M"
time=2024-03-24T14:39:53.949Z level=INFO source=amd_linux.go:246 msg="[1] amdgpu totalMemory 24560M"
time=2024-03-24T14:39:53.949Z level=INFO source=amd_linux.go:247 msg="[1] amdgpu freeMemory  24533M"
time=2024-03-24T14:39:53.949Z level=DEBUG source=gpu.go:180 msg="rocm detected 2 devices with 44184M available memory"
time=2024-03-24T14:39:53.949Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-24T14:39:53.949Z level=INFO source=amd_linux.go:50 msg="AMD Driver: 6.3.6"
time=2024-03-24T14:39:53.949Z level=DEBUG source=amd_linux.go:152 msg="discovering VRAM for amdgpu devices"
time=2024-03-24T14:39:53.949Z level=DEBUG source=amd_linux.go:171 msg="amdgpu devices [0 1]"
time=2024-03-24T14:39:53.949Z level=INFO source=amd_linux.go:246 msg="[0] amdgpu totalMemory 24560M"
time=2024-03-24T14:39:53.949Z level=INFO source=amd_linux.go:247 msg="[0] amdgpu freeMemory  24560M"
time=2024-03-24T14:39:53.950Z level=INFO source=amd_linux.go:246 msg="[1] amdgpu totalMemory 24560M"
time=2024-03-24T14:39:53.950Z level=INFO source=amd_linux.go:247 msg="[1] amdgpu freeMemory  24533M"
time=2024-03-24T14:39:53.950Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-24T14:39:53.950Z level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [/tmp/ollama2408392975/runners/rocm_v60000/libext_server.so /tmp/ollama2408392975/runners/cpu_avx2/libext_server.so]"
time=2024-03-24T14:39:53.979Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama2408392975/runners/rocm_v60000/libext_server.so"
time=2024-03-24T14:39:53.979Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
time=2024-03-24T14:39:53.979Z level=DEBUG source=dyn_ext_server.go:151 msg="server params: {model:0x7f33b4907890 n_ctx:16000 n_batch:512 n_threads:0 n_parallel:1 rope_freq_base:0 rope_freq_scale:0 memory_f16:true n_gpu_layers:33 main_gpu:0 use_mlock:false use_mmap:true numa:0 embedding:true lora_adapters:<nil> mmproj:<nil> verbose_logging:true _:[0 0 0 0 0 0 0]}"
[1711291193] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
[1711291193] Performing pre-initialization of GPU
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 2 ROCm devices:
  Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
  Device 1: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
llama_model_loader: loaded meta data with 24 key-value pairs and 995 tensors from /root/.ollama/models/blobs/sha256:be19f5e0f8312849231ed0ec21af482c8769f80c7acfa44ea425f4147e3bbcf7 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = cognitivecomputations
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 16
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32002]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32002]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32002]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q5_K:  833 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 261/32002 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32002
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q5_K - Small
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 30.02 GiB (5.52 BPW)
llm_load_print_meta: general.name     = cognitivecomputations
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 32000 '<|im_end|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    1.14 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size = 16228.09 MiB
llm_load_tensors:      ROCm1 buffer size = 14421.47 MiB
llm_load_tensors:        CPU buffer size =    85.94 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 16000
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =  1062.50 MiB
llama_kv_cache_init:      ROCm1 KV buffer size =   937.50 MiB
llama_new_context_with_model: KV self size  = 2000.00 MiB, K (f16): 1000.00 MiB, V (f16): 1000.00 MiB
llama_new_context_with_model:  ROCm_Host input buffer size   =    40.38 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =  1083.28 MiB
llama_new_context_with_model:      ROCm1 compute buffer size =  1091.26 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =     8.00 MiB
llama_new_context_with_model: graph splits (measure): 3
[1711291202] warming up the model with an empty run
loading library /tmp/ollama2408392975/runners/rocm_v60000/libext_server.so
{"function":"initialize","level":"INFO","line":440,"msg":"initializing slots","n_slots":1,"tid":"139860291606272","timestamp":1711291203}
{"function":"initialize","level":"INFO","line":452,"msg":"new slot","n_ctx_slot":16000,"slot_id":0,"tid":"139860291606272","timestamp":1711291203}
time=2024-03-24T14:40:03.772Z level=INFO source=dyn_ext_server.go:162 msg="Starting llama main loop"
[1711291203] llama server main loop starting
{"function":"update_slots","level":"INFO","line":1590,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"139850942510848","timestamp":1711291203}
time=2024-03-24T14:40:03.772Z level=DEBUG source=prompt.go:172 msg="prompt now fits in context window" required=37 window=16000
time=2024-03-24T14:40:03.772Z level=DEBUG source=routes.go:1316 msg="chat handler" prompt="<|im_start|>system\nYou are Dolphin, a helpful AI assistant.\n<|im_end|>\n<|im_start|>user\nTell me a random fun fact about the Roman Empire<|im_end|>\n<|im_start|>assistant\n" images=0
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"139850942510848","timestamp":1711291203}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1828,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":37,"slot_id":0,"task_id":0,"tid":"139850942510848","timestamp":1711291203}
{"function":"update_slots","level":"INFO","line":1852,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"139850942510848","timestamp":1711291203}
[GIN] 2024/03/24 - 14:40:05 | 200 | 11.426820925s |      172.19.0.3 | POST     "/api/chat"
time=2024-03-24T14:40:05.285Z level=DEBUG source=dyn_ext_server.go:271 msg="prediction aborted, token repeat limit reached"
time=2024-03-24T14:40:05.321Z level=INFO source=routes.go:79 msg="changing loaded model"
[1711291205]
initiating shutdown - draining remaining tasks...
[1711291205]
llama server shutting down
{"function":"update_slots","level":"INFO","line":1660,"msg":"slot released","n_cache_tokens":70,"n_ctx":16000,"n_past":69,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"139850942510848","timestamp":1711291205,"truncated":false}
[1711291205] llama server shutdown complete
time=2024-03-24T14:40:05.419Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-24T14:40:05.419Z level=INFO source=amd_linux.go:50 msg="AMD Driver: 6.3.6"
time=2024-03-24T14:40:05.419Z level=DEBUG source=amd_linux.go:152 msg="discovering VRAM for amdgpu devices"
time=2024-03-24T14:40:05.419Z level=DEBUG source=amd_linux.go:171 msg="amdgpu devices [0 1]"
time=2024-03-24T14:40:05.419Z level=INFO source=amd_linux.go:246 msg="[0] amdgpu totalMemory 24560M"
time=2024-03-24T14:40:05.419Z level=INFO source=amd_linux.go:247 msg="[0] amdgpu freeMemory  24560M"
time=2024-03-24T14:40:05.419Z level=INFO source=amd_linux.go:246 msg="[1] amdgpu totalMemory 24560M"
time=2024-03-24T14:40:05.419Z level=INFO source=amd_linux.go:247 msg="[1] amdgpu freeMemory  7243M"
time=2024-03-24T14:40:05.419Z level=DEBUG source=gpu.go:180 msg="rocm detected 2 devices with 28622M available memory"
time=2024-03-24T14:40:05.419Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-24T14:40:05.419Z level=INFO source=amd_linux.go:50 msg="AMD Driver: 6.3.6"
time=2024-03-24T14:40:05.419Z level=DEBUG source=amd_linux.go:152 msg="discovering VRAM for amdgpu devices"
time=2024-03-24T14:40:05.419Z level=DEBUG source=amd_linux.go:171 msg="amdgpu devices [0 1]"
time=2024-03-24T14:40:05.419Z level=INFO source=amd_linux.go:246 msg="[0] amdgpu totalMemory 24560M"
time=2024-03-24T14:40:05.419Z level=INFO source=amd_linux.go:247 msg="[0] amdgpu freeMemory  24560M"
time=2024-03-24T14:40:05.419Z level=INFO source=amd_linux.go:246 msg="[1] amdgpu totalMemory 24560M"
time=2024-03-24T14:40:05.419Z level=INFO source=amd_linux.go:247 msg="[1] amdgpu freeMemory  7243M"
time=2024-03-24T14:40:05.419Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-24T14:40:05.419Z level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [/tmp/ollama2408392975/runners/rocm_v60000/libext_server.so /tmp/ollama2408392975/runners/cpu_avx2/libext_server.so]"
time=2024-03-24T14:40:05.419Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama2408392975/runners/rocm_v60000/libext_server.so"
time=2024-03-24T14:40:05.419Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
time=2024-03-24T14:40:05.419Z level=DEBUG source=dyn_ext_server.go:151 msg="server params: {model:0x7f30480536f0 n_ctx:2048 n_batch:512 n_threads:0 n_parallel:1 rope_freq_base:0 rope_freq_scale:0 memory_f16:true n_gpu_layers:29 main_gpu:0 use_mlock:false use_mmap:true numa:0 embedding:true lora_adapters:<nil> mmproj:<nil> verbose_logging:true _:[0 0 0 0 0 0 0]}"
[1711291205] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
[1711291205] Performing pre-initialization of GPU
llama_model_loader: loaded meta data with 24 key-value pairs and 995 tensors from /root/.ollama/models/blobs/sha256:be19f5e0f8312849231ed0ec21af482c8769f80c7acfa44ea425f4147e3bbcf7 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = cognitivecomputations
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 16
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32002]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32002]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32002]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q5_K:  833 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 261/32002 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32002
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q5_K - Small
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 30.02 GiB (5.52 BPW)
llm_load_print_meta: general.name     = cognitivecomputations
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 32000 '<|im_end|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    1.14 MiB
llm_load_tensors: offloading 29 repeating layers to GPU
llm_load_tensors: offloaded 29/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size = 14318.91 MiB
llm_load_tensors:      ROCm1 buffer size = 13364.31 MiB
llm_load_tensors:        CPU buffer size = 30735.50 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 2 ROCm devices:
  Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
  Device 1: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
llama_kv_cache_init:      ROCm0 KV buffer size =   120.00 MiB
llama_kv_cache_init:      ROCm1 KV buffer size =   112.00 MiB
llama_kv_cache_init:  ROCm_Host KV buffer size =    24.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  ROCm_Host input buffer size   =    13.02 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =   192.01 MiB
llama_new_context_with_model:      ROCm1 compute buffer size =   192.01 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =   188.03 MiB
llama_new_context_with_model: graph splits (measure): 4
[1711291209] warming up the model with an empty run
loading library /tmp/ollama2408392975/runners/rocm_v60000/libext_server.so
{"function":"initialize","level":"INFO","line":440,"msg":"initializing slots","n_slots":1,"tid":"139860529432320","timestamp":1711291210}
{"function":"initialize","level":"INFO","line":452,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"139860529432320","timestamp":1711291210}
time=2024-03-24T14:40:10.020Z level=INFO source=dyn_ext_server.go:162 msg="Starting llama main loop"
time=2024-03-24T14:40:10.020Z level=DEBUG source=routes.go:249 msg="generate handler" prompt="Create a concise, 3-5 word phrase as a header for the following query, strictly adhering to the 3-5 word limit and avoiding the use of the word 'title': Tell me a random fun fact about the Roman Empire"
time=2024-03-24T14:40:10.020Z level=DEBUG source=routes.go:250 msg="generate handler" template="<|im_start|>system\n{{ .System }}<|im_end|>\n<|im_start|>user\n{{ .Prompt }}<|im_end|>\n<|im_start|>assistant\n"
time=2024-03-24T14:40:10.020Z level=DEBUG source=routes.go:251 msg="generate handler" system="You are Dolphin, a helpful AI assistant.\n"
[1711291210] llama server main loop starting
{"function":"update_slots","level":"INFO","line":1590,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"139842566473472","timestamp":1711291210}
time=2024-03-24T14:40:10.020Z level=DEBUG source=routes.go:282 msg="generate handler" prompt="<|im_start|>system\nYou are Dolphin, a helpful AI assistant.\n<|im_end|>\n<|im_start|>user\nCreate a concise, 3-5 word phrase as a header for the following query, strictly adhering to the 3-5 word limit and avoiding the use of the word 'title': Tell me a random fun fact about the Roman Empire<|im_end|>\n<|im_start|>assistant\n"
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"139842566473472","timestamp":1711291210}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1828,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":78,"slot_id":0,"task_id":0,"tid":"139842566473472","timestamp":1711291210}
{"function":"update_slots","level":"INFO","line":1852,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"139842566473472","timestamp":1711291210}
@awz commented on GitHub (Apr 14, 2024):

Super interested in this thread since I'd also like to try using multiple (consumer) AMD GPUs with Ollama.

@dhiltgen commented on GitHub (Apr 28, 2024):

The [0.1.33](https://github.com/ollama/ollama/releases) release is now available as a pre-release. It also includes updates to the llama.cpp component, which may resolve the multi-GPU gibberish problem.

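Assuming the Docker setup from earlier in the thread, picking up the pre-release looks roughly like this; the `0.1.33-rocm` tag is inferred from the `0.1.29-rocm` naming used above, so treat it as an assumption:

```
# Sketch: pull the pre-release ROCm image and re-create the container with it.
docker pull ollama/ollama:0.1.33-rocm
docker rm -f ollama
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:0.1.33-rocm
```
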
@dhiltgen commented on GitHub (Jun 1, 2024):

I have a similar setup with dual 6800s (2x16G), and with the latest release, 0.1.40, it crashes attempting to load this model due to OOM. It looks like my PR #4517 fixes the prediction logic in this case: the model does load and seems to render reasonable responses.

```
% ./ollama-linux-amd64 run dolphin-mixtral:8x7b-v2.7-q5_K_S --verbose why is the sky blue
 The sky appears blue due to a phenomenon called Rayleigh scattering. Light from the sun enters Earth's atmosphere and gets scattered by molecules of air, such as
nitrogen and oxygen. Blue light has shorter wavelengths and higher frequency than other colors in sunlight, so it scatters more easily. As a result, we see a blue
sky when looking away from the Sun at the zenith or during daytime hours because the blue light is scattered everywhere around us by air molecules.

total duration:       26.94763327s
load duration:        18.867764679s
prompt eval count:    33 token(s)
prompt eval duration: 983.595ms
prompt eval rate:     33.55 tokens/s
eval count:           103 token(s)
eval duration:        7.052919s
eval rate:            14.60 tokens/s
% ./ollama-linux-amd64 ps
NAME                            	ID          	SIZE 	PROCESSOR      	UNTIL
dolphin-mixtral:8x7b-v2.7-q5_K_S	9f4dc7178a19	37 GB	11%/89% CPU/GPU	4 minutes from now
```
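
The `ps` line shows a partial offload (11% of the model on CPU), which matches a ~37 GB model on 2x16G of VRAM. The verbose rates are simply token count divided by duration; a quick sanity check:

```
# rate = count / duration, from the --verbose summary above
awk 'BEGIN { printf "prompt: %.2f tokens/s  eval: %.2f tokens/s\n", 33 / 0.983595, 103 / 7.052919 }'
# prompt: 33.55 tokens/s  eval: 14.60 tokens/s
```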
@darwinvelez58 commented on GitHub (Jul 12, 2024):

Still the same issue for AMD...

@dhiltgen commented on GitHub (Jul 22, 2024):

@darwinvelez58 can you share more information about your setup? Are you running the latest Ollama version? Which GPUs? Can you share your server log? If it's still rendering gibberish on the latest release, I'll reopen the issue.

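For anyone hitting this, a sketch of how to capture that server log with the Docker setup used earlier in the thread (the container name `ollama` and the `rocm` image tag are assumptions; `OLLAMA_DEBUG=1` is the same debug switch used in the log above):

```
# Re-create the container with debug logging, reproduce the gibberish, then dump the log.
docker rm -f ollama
docker run -d --device /dev/kfd --device /dev/dri \
  -e OLLAMA_DEBUG=1 \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm
docker logs ollama > server.log 2>&1
```
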
Reference: github-starred/ollama#1943