[GH-ISSUE #3160] When will the ChatGLM model be supported? #27704

Open
opened 2026-04-22 05:14:38 -05:00 by GiteaMirror · 22 comments

Originally created by @yongxingMa on GitHub (Mar 15, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3160

What model would you like?

When will the ChatGLM model be supported?

GiteaMirror added the model label 2026-04-22 05:14:38 -05:00

@34892002 commented on GitHub (Mar 15, 2024):

Converting the ChatGLM3 model fails with an error:
```sh
(.venv) (base) [root@10-9-159-200 ollama]# python llm/llama.cpp/convert-hf-to-gguf.py /data/chatglm3 --outtype f16 --outfile chatglm3.bin
Loading model: chatglm3
Traceback (most recent call last):
  File "/data/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 1938, in <module>
    main()
  File "/data/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 1918, in main
    model_class = Model.from_model_architecture(hparams["architectures"][0])
  File "/data/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 204, in from_model_architecture
    raise NotImplementedError(f'Architecture {arch!r} not supported!') from None
NotImplementedError: Architecture 'ChatGLMModel' not supported!
(.venv) (base) [root@10-9-159-200 ollama]# python llm/llama.cpp/convert.py /data/chatglm3 --outtype f16 --outfile chatglm3.bin
Loading model file /data/chatglm3/model-00001-of-00007.safetensors
Traceback (most recent call last):
  File "/data/ollama/llm/llama.cpp/convert.py", line 1466, in <module>
    main()
  File "/data/ollama/llm/llama.cpp/convert.py", line 1402, in main
    model_plus = load_some_model(args.model)
  File "/data/ollama/llm/llama.cpp/convert.py", line 1278, in load_some_model
    models_plus.append(lazy_load_file(path))
  File "/data/ollama/llm/llama.cpp/convert.py", line 888, in lazy_load_file
    elif struct.unpack('<Q', first8)[0] < 16 * 1024 * 1024:
struct.error: unpack requires a buffer of 8 bytes
(.venv) (base) [root@10-9-159-200 ollama]# unpack requires a buffer of 8 bytes
```

https://github.com/ollama/ollama/blob/main/docs/import.md
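
For reference, the import flow that doc describes boils down to pointing a Modelfile at a converted GGUF and building it with `ollama create`. A minimal sketch, assuming the conversion above had succeeded and produced `chatglm3.bin` (it does not, while the architecture is unsupported):

```sh
# Hypothetical import flow per docs/import.md, assuming a valid GGUF named
# chatglm3.bin already exists (the conversion above fails, so this is only
# illustrative).
cat > Modelfile <<'EOF'
FROM ./chatglm3.bin
EOF

# Build a local model from the Modelfile, then run it.
ollama create chatglm3 -f Modelfile
ollama run chatglm3
```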


@BruceMacD commented on GitHub (Mar 15, 2024):

Hi @yongxingMa, we haven't added any ChatGLM models to our main library repo yet, but some community members have uploaded them:
https://ollama.com/search?q=chatglm&p=1


@34892002 commented on GitHub (Mar 18, 2024):

No models have been pushed.


@LukeCara commented on GitHub (Mar 19, 2024):

No model has been pushed. Looking forward to having some ChatGLM3 models soon!


@yaoice commented on GitHub (Mar 29, 2024):

+1


@EaglePPP commented on GitHub (Apr 10, 2024):

> Hi @yongxingMa, we haven't added any ChatGLM models to our main library repo yet, but some community members have uploaded them: https://ollama.com/search?q=chatglm&p=1

It's gone.


@yongxingMa commented on GitHub (Apr 12, 2024):

> > Hi @yongxingMa, we haven't added any ChatGLM models to our main library repo yet, but some community members have uploaded them: https://ollama.com/search?q=chatglm&p=1
>
> It's gone.

No ChatGLM models show up in the search yet.


@DUZHUJUN commented on GitHub (Apr 18, 2024):

I can't convert the ChatGLM model to GGUF format. How can I run a ChatGLM model locally?


@flyfox666 commented on GitHub (Apr 20, 2024):

Yes, I've run into this problem as well, and I sincerely request that ChatGLM support be added!


@shinyzhu commented on GitHub (Apr 20, 2024):

I updated to the latest Ollama and llama.cpp source, but got this error:

```sh
(.venv) shiny@ubuntuinr2:/data/llmbuild/ollama$ python llm/llama.cpp/convert.py ./model --outtype f16 --outfile converted.bin
Loading model file model/model-00001-of-00007.safetensors
Loading model file model/model-00001-of-00007.safetensors
Loading model file model/model-00002-of-00007.safetensors
Loading model file model/model-00003-of-00007.safetensors
Loading model file model/model-00004-of-00007.safetensors
Loading model file model/model-00005-of-00007.safetensors
Loading model file model/model-00006-of-00007.safetensors
Loading model file model/model-00007-of-00007.safetensors
Traceback (most recent call last):
  File "/data/llmbuild/ollama/llm/llama.cpp/convert.py", line 1548, in <module>
    main()
  File "/data/llmbuild/ollama/llm/llama.cpp/convert.py", line 1480, in main
    model_plus = load_some_model(args.model)
  File "/data/llmbuild/ollama/llm/llama.cpp/convert.py", line 1371, in load_some_model
    model_plus = merge_multifile_models(models_plus)
  File "/data/llmbuild/ollama/llm/llama.cpp/convert.py", line 804, in merge_multifile_models
    model = merge_sharded([mp.model for mp in models_plus])
  File "/data/llmbuild/ollama/llm/llama.cpp/convert.py", line 783, in merge_sharded
    return {name: convert(name) for name in names}
  File "/data/llmbuild/ollama/llm/llama.cpp/convert.py", line 783, in <dictcomp>
    return {name: convert(name) for name in names}
  File "/data/llmbuild/ollama/llm/llama.cpp/convert.py", line 758, in convert
    lazy_tensors = [model[name] for model in models]
  File "/data/llmbuild/ollama/llm/llama.cpp/convert.py", line 758, in <listcomp>
    lazy_tensors = [model[name] for model in models]
KeyError: 'transformer.embedding.word_embeddings.weight'
```
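
The KeyError suggests convert.py cannot map the ChatGLM tensor layout across the sharded safetensors files. A rough way to see which tensor names actually live in each shard (only a diagnostic sketch; it assumes the `safetensors` package is installed and uses the `model/` path from the log above):

```sh
# Hypothetical diagnostic: list the tensor names stored in each safetensors
# shard to see the ChatGLM naming that convert.py does not recognize.
python - <<'EOF'
import glob
from safetensors import safe_open

for shard in sorted(glob.glob("model/model-*-of-*.safetensors")):
    print(shard)
    with safe_open(shard, framework="pt") as f:
        for name in f.keys():
            print("  ", name)
EOF
```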

@alexsworld commented on GitHub (Apr 28, 2024):

Is ChatGLM3 currently not supported?

![screenshot-20240428-174054](https://github.com/ollama/ollama/assets/58976423/ef9069ea-e9f5-4f2e-a5dc-9786c933a04c)


@evnydd0f commented on GitHub (Apr 30, 2024):

ChatGLM is not supported.


@darkknight0007 commented on GitHub (May 11, 2024):

When will the ChatGLM model be supported?


@mufenzhimi commented on GitHub (May 16, 2024):

+1 looking forward to this


@shengbox commented on GitHub (May 16, 2024):

+1


@xukecheng commented on GitHub (May 20, 2024):

+1


@aboutmydreams commented on GitHub (May 27, 2024):

+1


@ztsweet commented on GitHub (Jun 3, 2024):

+1


@weirdo-adam commented on GitHub (Jun 7, 2024):

+1


@zhangzhiqiangcs commented on GitHub (Jun 11, 2024):

+1


@funkytaco commented on GitHub (Jul 9, 2024):

Hi guys, I just happen to be the first one to reply with a successful use of ChatGLM:

https://ollama.com/library/codegeex4

I noticed it said Ollama 0.2 is needed. I had to restart for it to work.

```
ollama run codegeex4
Error: llama runner process has terminated: signal: abort trap error:error loading model architecture: unknown model architecture: 'chatglm'
(I restarted ollama to update)

$ ollama run codegeex4
>>> hi
Hello! How can I assist you today?

>>> Send a message (/? for help)
```

ChatGLM arch seems to work now
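
If anyone else still hits the "unknown model architecture" error, a quick sanity check (just a sketch, not an official fix) is to confirm the installed version is 0.2 or newer after restarting, then retry:

```sh
# Hypothetical sanity check: an Ollama build older than 0.2 rejects the
# chatglm architecture, so verify the version before retrying the model.
ollama --version
ollama run codegeex4
```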


@TasmeTime commented on GitHub (Jul 9, 2024):

EDIT: I updated to Ollama v0.2.1 and it's working now, as @funkytaco mentioned earlier.

```
C:\Users\Tasmetime>ollama run codegeex4
Error: llama runner process has terminated: exit status 0xc0000409
```

app.log

```
time=2024-07-09T15:09:04.106+03:30 level=INFO source=logging.go:50 msg="ollama app started"
time=2024-07-09T15:09:04.125+03:30 level=INFO source=server.go:176 msg="unable to connect to server"
time=2024-07-09T15:09:04.125+03:30 level=INFO source=server.go:135 msg="starting server..."
time=2024-07-09T15:09:04.128+03:30 level=INFO source=server.go:121 msg="started ollama server with pid 27128"
time=2024-07-09T15:09:04.128+03:30 level=INFO source=server.go:123 msg="ollama server logs C:\\Users\\Tasmetime\\AppData\\Local\\Ollama\\server.log"
time=2024-07-09T15:09:07.520+03:30 level=INFO source=updater.go:91 msg="check update error 403 - \n<html><head>\n<meta http-equiv=\"content-type\" content=\"text/html;charset=utf-8\">\n<title>403 Forb"
```

server.log

```
2024/07/09 15:09:04 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:F:\AI\ollama_models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\Users\Tasmetime\AppData\Local\Programs\Ollama\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-09T15:09:04.178+03:30 level=INFO source=images.go:730 msg="total blobs: 24"
time=2024-07-09T15:09:04.178+03:30 level=INFO source=images.go:737 msg="total unused blobs removed: 0"
time=2024-07-09T15:09:04.179+03:30 level=INFO source=routes.go:1111 msg="Listening on 127.0.0.1:11434 (version 0.1.48)"
time=2024-07-09T15:09:04.179+03:30 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11.3 rocm_v5.7 cpu cpu_avx cpu_avx2]"
time=2024-07-09T15:09:04.287+03:30 level=INFO source=types.go:98 msg="inference compute" id=GPU-b3ce8207-e375-8cea-dcd3-f475728acad1 library=cuda compute=8.6 driver=12.4 name="NVIDIA GeForce RTX 3070" total="8.0 GiB" available="6.9 GiB"
[GIN] 2024/07/09 - 15:09:04 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/07/09 - 15:09:04 | 200 | 21.4613ms | 127.0.0.1 | POST "/api/show"
time=2024-07-09T15:09:04.653+03:30 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[7.3 GiB]" memory.required.full="5.6 GiB" memory.required.partial="5.6 GiB" memory.required.kv="80.0 MiB" memory.required.allocations="[5.6 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="3.9 GiB" memory.weights.nonrepeating="485.6 MiB" memory.graph.full="213.3 MiB" memory.graph.partial="213.3 MiB"
time=2024-07-09T15:09:04.661+03:30 level=INFO source=server.go:368 msg="starting llama server" cmd="C:\Users\Tasmetime\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3\ollama_llama_server.exe --model F:\AI\ollama_models\blobs\sha256-816441b33390807d429fbdb1de7e33bb4d569ac68e2203bdbca5d8d79b5c7266 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 41 --no-mmap --parallel 1 --port 57883"
time=2024-07-09T15:09:04.663+03:30 level=INFO source=sched.go:382 msg="loaded runners" count=1
time=2024-07-09T15:09:04.663+03:30 level=INFO source=server.go:556 msg="waiting for llama runner to start responding"
time=2024-07-09T15:09:04.664+03:30 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3171 commit="7c26775a" tid="26740" timestamp=1720525144
INFO [wmain] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="26740" timestamp=1720525144 total_threads=16
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="57883" tid="26740" timestamp=1720525144
llama_model_loader: loaded meta data with 23 key-value pairs and 283 tensors from F:\AI\ollama_models\blobs\sha256-816441b33390807d429fbdb1de7e33bb4d569ac68e2203bdbca5d8d79b5c7266 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = chatglm
llama_model_loader: - kv 1: general.name str = codegeex4-all-9b
llama_model_loader: - kv 2: chatglm.context_length u32 = 131072
llama_model_loader: - kv 3: chatglm.embedding_length u32 = 4096
llama_model_loader: - kv 4: chatglm.feed_forward_length u32 = 13696
llama_model_loader: - kv 5: chatglm.block_count u32 = 40
llama_model_loader: - kv 6: chatglm.attention.head_count u32 = 32
llama_model_loader: - kv 7: chatglm.attention.head_count_kv u32 = 2
llama_model_loader: - kv 8: chatglm.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 9: general.file_type u32 = 2
llama_model_loader: - kv 10: chatglm.rope.dimension_count u32 = 64
llama_model_loader: - kv 11: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 12: chatglm.rope.freq_base f32 = 5000000.000000
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = chatglm-bpe
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,151552] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,151552] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,151073] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 18: tokenizer.ggml.padding_token_id u32 = 151329
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 151329
llama_model_loader: - kv 20: tokenizer.ggml.eot_token_id u32 = 151336
llama_model_loader: - kv 21: tokenizer.ggml.unknown_token_id u32 = 151329
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 121 tensors
llama_model_loader: - type q4_0: 161 tensors
llama_model_loader: - type q6_K: 1 tensors
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'chatglm'
llama_load_model_from_file: exception loading model
time=2024-07-09T15:09:04.921+03:30 level=ERROR source=sched.go:388 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 "
[GIN] 2024/07/09 - 15:09:04 | 500 | 302.0822ms | 127.0.0.1 | POST "/api/chat"

```
Not working for me.
Is there a way I can install the ChatGLM arch, or is it just not supported?
