[GH-ISSUE #4826] Model request: GLM-4 9B #65087

Closed
opened 2026-05-03 19:43:40 -05:00 by GiteaMirror · 22 comments
Owner

Originally created by @mywwq on GitHub (Jun 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4826

Add GLM-4 9B model

| Model | Type | Seq Length | Download |
| -- | -- | -- | -- |
| GLM-4-9B | Base | 8K | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-9b) |
| GLM-4-9B-Chat | Chat | 128K | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-9b-chat) |
| GLM-4-9B-Chat-1M | Chat | 1M | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-9b-chat-1m) |
| GLM-4V-9B | Chat | 8K | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4v-9b) |

When will the glm4-9b model be introduced?

GiteaMirror added the model label 2026-05-03 19:43:40 -05:00

@mili-tan commented on GitHub (Jun 5, 2024):

Support for GLM hasn't been added to ollama's upstream llama.cpp yet; maybe you should take a look at https://github.com/li-plus/chatglm.cpp


@io-q commented on GitHub (Jun 21, 2024):

The feature seems ready to land in llama.cpp soon:
https://github.com/ggerganov/llama.cpp/pull/8031


@GHOST1834 commented on GitHub (Jun 27, 2024):

It seems that there are already GGUF-format GLM4 models on Hugging Face:
https://hf-mirror.com/models?search=glm
Is it possible to add these models now?


@cvanelteren commented on GitHub (Jun 27, 2024):

If the GGUFs are available, we can add them manually 🫡


@GHOST1834 commented on GitHub (Jun 27, 2024):

We can, but it would be more convenient if glm4 were added officially.


@cleverpig commented on GitHub (Jun 29, 2024):

> If the GGUFs are available, we can add them manually 🫡

Error: llama runner process has terminated: signal: aborted (core dumped)


@GHOST1834 commented on GitHub (Jun 29, 2024):

It failed?


@cvanelteren commented on GitHub (Jun 29, 2024):

Same issue. The format apparently is not compatible.


@V-I-C-T-O-R commented on GitHub (Jul 2, 2024):

> It failed?

It throws an "unknown model architecture" error.


@zalyoung commented on GitHub (Jul 7, 2024):

llama.cpp now supports GLM3 and GLM4, since https://github.com/ggerganov/llama.cpp/pull/8031 has been merged.

I used [convert_hf_to_gguf.py](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py) to convert the [THUDM/glm-4-9b-chat](https://huggingface.co/THUDM/glm-4-9b-chat) model to GGUF format, then [imported](https://github.com/ollama/ollama/blob/main/docs/import.md) the GGUF file into Ollama using a Modelfile:

```Modelfile
FROM ./glm-4-9b-chat.gguf
# sets the temperature to 0.5 [higher is more creative, lower is more coherent]
PARAMETER temperature 0.5
# sets the context window size to 4096, this controls how many tokens the LLM can use as context to generate the next token
PARAMETER num_ctx 4096

# sets a custom system message to specify the behavior of the chat assistant
SYSTEM You are a helpful AI assistant
```

When I try to run it, Ollama throws an `unknown model architecture: 'chatglm'` error:

```
(base) ➜  ~ ollama ls |grep glm
glm-4-9b-chat:latest                  	54b1c4ad3aec	10.0 GB	16 minutes ago

(base) ➜  ~ ollama run glm-4-9b-chat:latest
Error: llama runner process has terminated: signal: abort trap error:error loading model architecture: unknown model architecture: 'chatglm'
```
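For anyone hitting this, it can help to confirm which architecture string a converted file actually carries, since that is exactly what the `unknown model architecture` check rejects. A minimal sketch of a GGUF header reader, assuming the GGUF v2/v3 layout (the helper name `read_gguf_architecture` is illustrative, not part of any library):

```python
import struct

GGUF_MAGIC = 0x46554747  # b"GGUF" read as a little-endian uint32


def read_gguf_architecture(path):
    """Return the `general.architecture` string from a GGUF file, or None.

    Minimal header walk: it only understands string-valued metadata
    (type 8), which is usually enough because `general.architecture`
    is normally the first key written by convert_hf_to_gguf.py.
    """
    with open(path, "rb") as f:
        magic, _version = struct.unpack("<II", f.read(8))
        if magic != GGUF_MAGIC:
            raise ValueError("not a GGUF file")
        _n_tensors, n_kv = struct.unpack("<QQ", f.read(16))
        for _ in range(n_kv):
            (klen,) = struct.unpack("<Q", f.read(8))
            key = f.read(klen).decode()
            (vtype,) = struct.unpack("<I", f.read(4))
            if vtype != 8:  # 8 = string in the GGUF spec
                break       # give up once we hit a non-string value
            (vlen,) = struct.unpack("<Q", f.read(8))
            value = f.read(vlen).decode()
            if key == "general.architecture":
                return value
    return None
```

If this prints `chatglm` for your file, the conversion worked; the failure above is just that the installed Ollama build does not recognize that architecture yet.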

@kiradzS commented on GitHub (Jul 8, 2024):

> llama.cpp now supports GLM3 and GLM4, since [ggerganov/llama.cpp#8031](https://github.com/ggerganov/llama.cpp/pull/8031) has been merged.
>
> I used [convert_hf_to_gguf.py](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py) to convert the model to GGUF format, then [imported](https://github.com/ollama/ollama/blob/main/docs/import.md) the GGUF file into Ollama using a Modelfile:
>
> ```Modelfile
> FROM ./glm-4-9b-chat.gguf
> # sets the temperature to 0.5 [higher is more creative, lower is more coherent]
> PARAMETER temperature 0.5
> # sets the context window size to 4096 tokens
> PARAMETER num_ctx 4096
>
> # sets a custom system message to specify the behavior of the chat assistant
> SYSTEM You are a helpful AI assistant
> ```
>
> When I try it, Ollama throws an `unknown model architecture: 'chatglm'` error:
>
> ```
> (base) ➜  ~ ollama ls |grep glm
> glm-4-9b-chat:latest    54b1c4ad3aec  10.0 GB  16 minutes ago
>
> (base) ➜  ~ ollama run glm-4-9b-chat:latest
> Error: llama runner process has terminated: signal: abort trap error:error loading model architecture: unknown model architecture: 'chatglm'
> ```

Me too.


@Forevery1 commented on GitHub (Jul 9, 2024):

https://github.com/ollama/ollama/issues/5529


@pdevine commented on GitHub (Jul 9, 2024):

This is now supported. Make sure you upgrade to version 0.2.1, and then you can `ollama run glm4`.
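Besides the CLI, a model pulled this way can also be called over Ollama's local HTTP API. A minimal sketch in Python, assuming the default server on `localhost:11434` (the helper names `generate_request` and `generate` are illustrative, not part of any library):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint


def generate_request(prompt, model="glm4"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})


def generate(prompt, model="glm4"):
    """POST the prompt to a locally running Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=generate_request(prompt, model).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream` set to `False` the server returns a single JSON object whose `response` field holds the full completion, which keeps the client to a few lines.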


@cvanelteren commented on GitHub (Jul 10, 2024):

Great. Sometimes it responds in Chinese though ;-)


@pdevine commented on GitHub (Jul 10, 2024):

@cvanelteren you could try telling it not to :-D
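One persistent way of "telling it not to" is a `SYSTEM` directive baked into a derived model via a Modelfile. A sketch only (whether the model actually complies varies; `glm4` here is the library model from `ollama run glm4`):

```Modelfile
FROM glm4
# bias the model toward English output regardless of the prompt language
SYSTEM Always respond in English, even when the prompt is written in another language.
```

Then `ollama create glm4-en -f Modelfile` and `ollama run glm4-en` uses the English-biased variant without touching the base model.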


@cvanelteren commented on GitHub (Jul 10, 2024):

I did but sometimes it switches out of the blue.


@VarLad commented on GitHub (Jul 11, 2024):

@pdevine what about glm4v-9b (the multimodal one) though? Is that supported too?


@cvanelteren commented on GitHub (Jul 11, 2024):

![image](https://github.com/ollama/ollama/assets/19485143/3276fb84-f869-4514-a349-f604160fda94)
The Chinese is pretty prevalent in the text... any workaround for this?


@pdevine commented on GitHub (Jul 11, 2024):

@cvanelteren that doesn't seem like a problem w/ chinese, but there was an issue w/ the graph calculation w/ glm4 where we weren't calculating the memory correctly. There's a fix for that in 0.2.2 which is coming out imminently.


@cvanelteren commented on GitHub (Jul 11, 2024):

Ah ok! I thought it was ASCII encoding for Chinese 😂


@pdevine commented on GitHub (Jul 11, 2024):

well, I'm not 100% sure, but the ASCII seems suspicious to me 😅. What platform are you running on?


@cvanelteren commented on GitHub (Jul 11, 2024):

Linux (arch)

Reference: github-starred/ollama#65087