[GH-ISSUE #3304] Bug found: ROCm docker (ver 0.1.29) didn't support dual CPU, but 0.1.28 is fine w/ dual CPU #27792

Closed
opened 2026-04-22 05:23:09 -05:00 by GiteaMirror · 7 comments

Originally created by @MorrisLu-Taipei on GitHub (Mar 23, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3304

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

ROCm docker (ver 0.1.29) started well
time=2024-03-23T04:49:45.409Z level=INFO source=amd_linux.go:50 msg="AMD Driver: 6.3.6"
time=2024-03-23T04:49:45.409Z level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx000 gfx1100]"
time=2024-03-23T04:49:45.423Z level=WARN source=amd_linux.go:114 msg="amdgpu [0] gfx000 is not supported by /tmp/ollama1698492372/rocm [gfx1030 gfx1100 gfx1101 gfx1102 gfx900 gfx906 gfx908 gfx90a gfx940 gfx941 gfx942]"
time=2024-03-23T04:49:45.423Z level=WARN source=amd_linux.go:116 msg="See https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md for HSA_OVERRIDE_GFX_VERSION usage"
time=2024-03-23T04:49:45.423Z level=INFO source=amd_linux.go:119 msg="amdgpu [1] gfx1100 is supported"
time=2024-03-23T04:49:45.423Z level=INFO source=amd_linux.go:246 msg="[1] amdgpu totalMemory 20464M"
time=2024-03-23T04:49:45.423Z level=INFO source=amd_linux.go:247 msg="[1] amdgpu freeMemory 20464M"
time=2024-03-23T04:49:45.423Z level=INFO source=amd_common.go:54 msg="Setting HIP_VISIBLE_DEVICES=1"

BUT running a model always uses the CPU.

llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CPU input buffer size = 13.02 MiB
llama_new_context_with_model: CPU compute buffer size = 160.00 MiB
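
One way to double-check whether inference is actually landing on the GPU (a rough sketch; "ollama" is assumed to be the name given to the ROCm container):

# Watch GPU utilization and VRAM on the host while the model is generating
watch -n 1 rocm-smi

# Look for llama.cpp buffer/offload lines in the server log; if every buffer
# says "CPU", as in the snippet above, nothing was offloaded to the GPU
docker logs ollama 2>&1 | grep -iE "offload|buffer size"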

What did you expect to see?

It should use the ROCm GPU to run the model (ROCm Docker), not the CPU.

Steps to reproduce

No response

Are there any recent changes that introduced the issue?

No response

OS

Linux

Architecture

x86

Platform

Docker

Ollama version

0.1.29

GPU

AMD

GPU info

ROCk module is loaded

=====================
HSA System Attributes
=====================

Runtime Version: 1.1
System Timestamp Freq.: 1000.000000MHz
Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model: LARGE
System Endianness: LITTLE
Mwaitx: DISABLED
DMAbuf Support: YES

==========
HSA Agents
==========
*******
Agent 1
*******
Name: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
Uuid: CPU-XX
Marketing Name: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 0
Device Type: CPU
Cache Info:
L1: 32768(0x8000) KB
Chip ID: 0(0x0)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 3100
BDFID: 0
Internal Node ID: 0
Compute Unit: 20
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
WatchPts on Addr. Ranges:1
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 3942924(0x3c2a0c) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 3942924(0x3c2a0c) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 3942924(0x3c2a0c) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:


*******
Agent 2
*******
Name: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
Uuid: CPU-XX
Marketing Name: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 1
Device Type: CPU
Cache Info:
L1: 32768(0x8000) KB
Chip ID: 0(0x0)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 3100
BDFID: 0
Internal Node ID: 1
Compute Unit: 20
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
WatchPts on Addr. Ranges:1
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 4072064(0x3e2280) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 4072064(0x3e2280) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 4072064(0x3e2280) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:


*******
Agent 3
*******
Name: gfx1100
Uuid: GPU-26f6b21ec442090e
Marketing Name: Radeon RX 7900 XT
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 64(0x40)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 2
Device Type: GPU
Cache Info:
L1: 32(0x20) KB
L2: 6144(0x1800) KB
L3: 81920(0x14000) KB
Chip ID: 29772(0x744c)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 2129
BDFID: 1024
Internal Node ID: 2
Compute Unit: 84
SIMDs per CU: 2
Shader Engines: 6
Shader Arrs. per Eng.: 2
WatchPts on Addr. Ranges:4
Coherent Host Access: FALSE
Features: KERNEL_DISPATCH
Fast F16 Operation: TRUE
Wavefront Size: 32(0x20)
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Max Waves Per CU: 32(0x20)
Max Work-item Per CU: 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
Max fbarriers/Workgrp: 32
Packet Processor uCode:: 550
SDMA engine uCode:: 19
IOMMU Support:: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 20955136(0x13fc000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 20955136(0x13fc000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 3
Segment: GROUP
Size: 64(0x40) KB
Allocatable: FALSE
Alloc Granule: 0KB
Alloc Alignment: 0KB
Accessible by all: FALSE
ISA Info:
ISA 1
Name: amdgcn-amd-amdhsa--gfx1100
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
*** Done ***

CPU

Intel

Other software

No response

GiteaMirror added the amd and bug labels 2026-04-22 05:23:09 -05:00

@MorrisLu-Taipei commented on GitHub (Mar 23, 2024):

Running a model w/ the N-card was perfect, but something is wrong w/ the A-card.

llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: yes
ggml_init_cublas: CUDA_USE_TENSOR_CORES: no
ggml_init_cublas: found 2 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes


@dhiltgen commented on GitHub (Mar 23, 2024):

Thanks for the rocminfo output. It looks like we have a bug properly handling multi-socket server CPUs. As a temporary workaround until I get this fixed, you should be able to set HSA_OVERRIDE_GFX_VERSION=11.0.0 on your system to bypass the GPU compatibility check.
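
For anyone trying this in Docker: exporting the variable on the host shell will not reach the container, so it has to be passed at docker run time. A rough sketch, based on the ROCm run command from the Ollama Docker docs (volume and container names are the defaults from those docs):

docker run -d --device /dev/kfd --device /dev/dri \
  -e HSA_OVERRIDE_GFX_VERSION=11.0.0 \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm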


@MorrisLu-Taipei commented on GitHub (Mar 23, 2024):

Thanks for the information, I will try it and give feedback soon.


@MorrisLu-Taipei commented on GitHub (Mar 23, 2024):

Hi dhiltgen,

thanks for your help.
After export HSA_OVERRIDE_GFX_VERSION=11.0.0 and export HIP_VISIBLE_DEVICES=x it is still not working for me.

Do you mean the issue is dual CPU? The ROCm docker will work fine if I use a single CPU, right?

> Thanks for the rocminfo output. It looks like we have a bug properly handling multi-socket server CPUs. As a temporary workaround until I get this fixed, you should be able to set HSA_OVERRIDE_GFX_VERSION=11.0.0 on your system to bypass the GPU compatibility check.


@MorrisLu-Taipei commented on GitHub (Mar 23, 2024):

Bug found: ROCm docker ver 0.1.29 didn't support dual CPU w/ RX7900x4.
BUT after rolling back to 0.1.28, it supported it and ran pretty well w/ dual CPU.
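
A hedged sketch of that rollback, assuming a versioned ROCm image tag (e.g. 0.1.28-rocm) is published on Docker Hub and the container is named "ollama":

# Assumes ollama/ollama:0.1.28-rocm exists as a versioned tag on Docker Hub
docker pull ollama/ollama:0.1.28-rocm
docker rm -f ollama
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:0.1.28-rocm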


@xvbingbing commented on GitHub (Mar 27, 2024):

Excuse me, could you give me Python code to run ollama using the GPU? I set the options gpu_num and main_gpu, but got no response. I tried for a long time and couldn't figure out how to use the GPU.


@dhiltgen commented on GitHub (Mar 27, 2024):

@MorrisLu-Taipei there's an Ollama bug where ROCm returns information about each CPU socket, and we only handled the first CPU correctly; the second CPU was mistakenly interpreted as a discrete GPU. I was hoping the gfx override workaround would get past that, but it sounds like that didn't work. Sorry about that. Running the older version from before we changed the ROCm discovery may be the only option for multi-CPU Radeon servers until we get this fixed.
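
The per-socket agents described above are visible in the rocminfo dump earlier in this issue; a quick way to line them up against the real GPU, using nothing Ollama-specific (the gfx_target_version sysfs file is assumed present, which it should be on reasonably recent amdgpu/KFD kernels):

# List each HSA agent's node, name and device type; in the dump above the two
# Xeon sockets appear as Node 0/1 (Device Type: CPU) and the 7900 XT as Node 2 (GPU)
rocminfo | grep -E "Marketing Name|Device Type|^ *Node:"

# CPU-only topology nodes report gfx_target_version 0, which lines up with the
# unsupported "gfx000" entry in the startup log above
grep gfx_target_version /sys/class/kfd/kfd/topology/nodes/*/properties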

@xvbingbing python client usage is off-topic for this issue. Head on over to Discord (https://discord.gg/ollama) and folks can help you out.

Reference: github-starred/ollama#27792