[GH-ISSUE #12026] Ollama Service Killed During "converting model" When Adding Fine-Tuned Model on Linux #33745

Closed
opened 2026-04-22 16:43:11 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @Paul-Grant2000 on GitHub (Aug 22, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12026

What is the issue?

Hello everyone, I ran into a strange issue while using Ollama on a Linux server. When I used `ollama create model` to add a Safetensors-format model that I had fine-tuned myself, the Ollama service was unexpectedly killed while the console was printing "converting model". The same fine-tuned model can be added without problems on the Windows version of Ollama. Strangely, when I used the same command to add a downloaded GGUF-format model, it was added successfully. I am not sure where the problem lies and would appreciate any insights from anyone who knows about this issue.
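For reference, a Safetensors import like this one is driven by a Modelfile whose `FROM` line points at the checkpoint directory; a minimal sketch (the path below is a placeholder for the fine-tuned output directory, and the conversion to GGUF happens inside the server process, which is where the crash occurs):

```
# Minimal Modelfile sketch for importing a local Safetensors checkpoint.
# The path is a placeholder; `ollama create` sends these files to the
# server, which converts the weights to GGUF during "converting model".
FROM /path/to/fine-tuned/safetensors/directory
```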

Relevant log output

(base) root:~/merge_output/Qwen_coder_7B_2100# ollama create qwencoder
gathering model components 
copying file sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa 100% 
copying file sha256:88512721805f73b6e813946b9fc056170ffde78ef1738a6f9a283170d6be10f9 100% 
copying file sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa 100% 
copying file sha256:58b54bbe36fc752f79a24a271ef66a0a0830054b4dfad94bde757d851968060b 100% 
copying file sha256:5aa6e5cbe642377fd441fb4e60e83cca96b2bcd9820e245b9ea06d94653f17f2 100% 
copying file sha256:58b54bbe36fc752f79a24a271ef66a0a0830054b4dfad94bde757d851968060b 100% 
copying file sha256:0962ca6631df3dbf813431f098c5baa203b1b870b0ff158796740719208065c2 100% 
copying file sha256:ca10d7e9fb3ed18575dd1e277a2579c16d108e32f27439684afa0e10b1440910 100% 
copying file sha256:20db9a065d1a2e4ed9e1158bf6bcbeb37b7fb5b739db536f62a649b54a7be16e 100% 
copying file sha256:ca10d7e9fb3ed18575dd1e277a2579c16d108e32f27439684afa0e10b1440910 100% 
copying file sha256:998a078123ffc97763690de7f2a677eb89168af5eaf8a5e12e6bc24d18e25bdb 100% 
copying file sha256:fcdef8bc02ee7f055a80503a0b57074013fc697c7cc6ca6d38457cd6f05e0a58 100% 
copying file sha256:14428f164ce20feb1ede5e3ecdcb302edf7e7763c44fa3977d319870af6b0710 100% 
copying file sha256:76862e765266b85aa9459767e33cbaf13970f327a0e88d1c65846c2ddd3a1ecd 100% 
copying file sha256:d63855fa35c986f08a50bd48a500668d1e896917f0966a45e9a5168d715dab6f 100% 
copying file sha256:fcdef8bc02ee7f055a80503a0b57074013fc697c7cc6ca6d38457cd6f05e0a58 100% 
copying file sha256:76862e765266b85aa9459767e33cbaf13970f327a0e88d1c65846c2ddd3a1ecd 100% 
copying file sha256:998a078123ffc97763690de7f2a677eb89168af5eaf8a5e12e6bc24d18e25bdb 100% 
copying file sha256:20db9a065d1a2e4ed9e1158bf6bcbeb37b7fb5b739db536f62a649b54a7be16e 100% 
copying file sha256:14428f164ce20feb1ede5e3ecdcb302edf7e7763c44fa3977d319870af6b0710 100% 
converting model 
(base) root:~/merge_output/Qwen_coder_7B_2100# 


(base) root:~# ollama serve
time=2025-08-22T14:18:20.884+08:00 level=INFO source=routes.go:1318 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NEW_ESTIMATES:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-08-22T14:18:20.885+08:00 level=INFO source=images.go:477 msg="total blobs: 14"
time=2025-08-22T14:18:20.886+08:00 level=INFO source=images.go:484 msg="total unused blobs removed: 12"
time=2025-08-22T14:18:20.886+08:00 level=INFO source=routes.go:1371 msg="Listening on 127.0.0.1:11434 (version 0.11.6)"
time=2025-08-22T14:18:20.886+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-22T14:18:21.096+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-d4046a26-9f69-36f8-3fa5-3fabd5c52864 library=cuda variant=v12 compute=8.0 driver=12.2 name="NVIDIA A800-SXM4-80GB" total="79.3 GiB" available="78.9 GiB"
[GIN] 2025/08/22 - 14:19:44 | 200 |      39.641µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/08/22 - 14:19:50 | 201 |    1.430265ms |       127.0.0.1 | POST     "/api/blobs/sha256:76862e765266b85aa9459767e33cbaf13970f327a0e88d1c65846c2ddd3a1ecd"
[GIN] 2025/08/22 - 14:19:50 | 201 |     420.744µs |       127.0.0.1 | POST     "/api/blobs/sha256:58b54bbe36fc752f79a24a271ef66a0a0830054b4dfad94bde757d851968060b"
[GIN] 2025/08/22 - 14:19:50 | 201 |    1.504626ms |       127.0.0.1 | POST     "/api/blobs/sha256:58b54bbe36fc752f79a24a271ef66a0a0830054b4dfad94bde757d851968060b"
[GIN] 2025/08/22 - 14:19:50 | 201 |     930.731µs |       127.0.0.1 | POST     "/api/blobs/sha256:76862e765266b85aa9459767e33cbaf13970f327a0e88d1c65846c2ddd3a1ecd"
[GIN] 2025/08/22 - 14:19:50 | 200 |     826.309µs |       127.0.0.1 | POST     "/api/blobs/sha256:20db9a065d1a2e4ed9e1158bf6bcbeb37b7fb5b739db536f62a649b54a7be16e"
[GIN] 2025/08/22 - 14:19:50 | 201 |     661.029µs |       127.0.0.1 | POST     "/api/blobs/sha256:20db9a065d1a2e4ed9e1158bf6bcbeb37b7fb5b739db536f62a649b54a7be16e"
[GIN] 2025/08/22 - 14:19:50 | 201 |     1.73941ms |       127.0.0.1 | POST     "/api/blobs/sha256:fcdef8bc02ee7f055a80503a0b57074013fc697c7cc6ca6d38457cd6f05e0a58"
[GIN] 2025/08/22 - 14:19:50 | 201 |    1.729821ms |       127.0.0.1 | POST     "/api/blobs/sha256:14428f164ce20feb1ede5e3ecdcb302edf7e7763c44fa3977d319870af6b0710"
[GIN] 2025/08/22 - 14:19:50 | 201 |    1.664407ms |       127.0.0.1 | POST     "/api/blobs/sha256:14428f164ce20feb1ede5e3ecdcb302edf7e7763c44fa3977d319870af6b0710"
[GIN] 2025/08/22 - 14:19:50 | 201 |     914.385µs |       127.0.0.1 | POST     "/api/blobs/sha256:fcdef8bc02ee7f055a80503a0b57074013fc697c7cc6ca6d38457cd6f05e0a58"
[GIN] 2025/08/22 - 14:19:50 | 201 |     2.26517ms |       127.0.0.1 | POST     "/api/blobs/sha256:998a078123ffc97763690de7f2a677eb89168af5eaf8a5e12e6bc24d18e25bdb"
[GIN] 2025/08/22 - 14:19:50 | 201 |    2.146527ms |       127.0.0.1 | POST     "/api/blobs/sha256:998a078123ffc97763690de7f2a677eb89168af5eaf8a5e12e6bc24d18e25bdb"
[GIN] 2025/08/22 - 14:19:50 | 201 |   17.842599ms |       127.0.0.1 | POST     "/api/blobs/sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa"
[GIN] 2025/08/22 - 14:19:50 | 201 |    5.070885ms |       127.0.0.1 | POST     "/api/blobs/sha256:ca10d7e9fb3ed18575dd1e277a2579c16d108e32f27439684afa0e10b1440910"
[GIN] 2025/08/22 - 14:19:50 | 201 |    4.795032ms |       127.0.0.1 | POST     "/api/blobs/sha256:ca10d7e9fb3ed18575dd1e277a2579c16d108e32f27439684afa0e10b1440910"
[GIN] 2025/08/22 - 14:19:50 | 201 |   17.843664ms |       127.0.0.1 | POST     "/api/blobs/sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa"
[GIN] 2025/08/22 - 14:19:51 | 201 |  1.465328174s |       127.0.0.1 | POST     "/api/blobs/sha256:5aa6e5cbe642377fd441fb4e60e83cca96b2bcd9820e245b9ea06d94653f17f2"
[GIN] 2025/08/22 - 14:19:55 | 201 |  5.775598735s |       127.0.0.1 | POST     "/api/blobs/sha256:88512721805f73b6e813946b9fc056170ffde78ef1738a6f9a283170d6be10f9"
[GIN] 2025/08/22 - 14:19:56 | 201 |  6.454938488s |       127.0.0.1 | POST     "/api/blobs/sha256:d63855fa35c986f08a50bd48a500668d1e896917f0966a45e9a5168d715dab6f"
[GIN] 2025/08/22 - 14:19:56 | 201 |  6.538105052s |       127.0.0.1 | POST     "/api/blobs/sha256:0962ca6631df3dbf813431f098c5baa203b1b870b0ff158796740719208065c2"
Killed
(base) root:~#
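The bare `Killed` at the end of the server log is the usual signature of the kernel OOM killer rather than an Ollama error. One way to confirm this (assuming root access on the box) is to check the kernel log right after the crash:

```shell
# Look for OOM-killer activity in the kernel ring buffer.
# A line like "Out of memory: Killed process <pid> (ollama)" confirms
# the server was killed for exceeding available memory.
dmesg -T | grep -iE 'out of memory|oom-kill|killed process'

# On systemd hosts the same records are available in the journal:
journalctl -k --no-pager | grep -i oom | tail -n 20
```
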

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.11.2

GiteaMirror added the bug label 2026-04-22 16:43:11 -05:00

@rick-github commented on GitHub (Sep 3, 2025):

This is likely the server being killed by the kernel OOM killer while spinning up multiple goroutines to process the tensors. You can set `GOMAXPROCS=1` in the server environment to force the server to launch only one worker goroutine. The conversion may take a little longer, but it should prevent the kernel from killing the server due to memory pressure.
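On a standard Linux install where Ollama runs under systemd, the server environment is set through a unit override rather than the shell; a sketch (service name and paths assume the default install layout):

```shell
# Create a systemd drop-in that sets GOMAXPROCS=1 for the Ollama server.
# Assumes the default install where the server runs as ollama.service.
sudo mkdir -p /etc/systemd/system/ollama.service.d
printf '[Service]\nEnvironment="GOMAXPROCS=1"\n' \
  | sudo tee /etc/systemd/system/ollama.service.d/override.conf

# Reload unit files and restart so the new environment takes effect.
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Confirm the variable is part of the service environment.
systemctl show ollama -p Environment
```

Note that exporting the variable in an interactive shell only affects the `ollama` CLI client; the conversion runs inside the server process, so the variable has to reach the service itself.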


@hcaicai commented on GitHub (Sep 20, 2025):

> This is likely the server being killed by the kernel OOM killer while spinning up multiple goroutines to process the tensors. You can set `GOMAXPROCS=1` in the server environment to force the server to launch only one worker goroutine. The conversion may take a little longer, but it should prevent the kernel from killing the server due to memory pressure.

After setting GOMAXPROCS=1, the model conversion still fails. How can this issue be resolved?

(base) user1@Server:/model/Qwen2.5-7B-lora$ GOMAXPROCS=1 ollama create qwen2.5-7b-lora -f Modefile
gathering model components
copying file sha256:d10d4ad57348e7bf9b899b6d9b3b9cf8209776b47803e754541ca275e03f2dd3 100%
copying file sha256:ca10d7e9fb3ed18575dd1e277a2579c16d108e32f27439684afa0e10b1440910 100%
copying file sha256:887c09423c3a29bda09175e95c056e6ec1fd36c265cdd589a7db12c80eac085e 100%
copying file sha256:c592a0ce7d8c1553d27cedab02cb200d610faecabd38038363c71ff98f3d4f53 100%
copying file sha256:58b54bbe36fc752f79a24a271ef66a0a0830054b4dfad94bde757d851968060b 100%
copying file sha256:14428f164ce20feb1ede5e3ecdcb302edf7e7763c44fa3977d319870af6b0710 100%
copying file sha256:56f7a098fe2da207676326ab02a0564a544aefd0ba309208f033cc3b72c501a0 100%
copying file sha256:76862e765266b85aa9459767e33cbaf13970f327a0e88d1c65846c2ddd3a1ecd 100%
copying file sha256:d6fc3e28ea0bebb696b08b5ce047b640c1420493bd3baaf167ae1a2e29eb1b76 100%
copying file sha256:06006972c3be88e8a44fe21cfe2b0472b130780c781a741f8f90f1fe5ba3aae2 100%
copying file sha256:998a078123ffc97763690de7f2a677eb89168af5eaf8a5e12e6bc24d18e25bdb 100%
copying file sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa 100%
converting model
(base) user1@Server:/model/Qwen2.5-7B-lora$


@rick-github commented on GitHub (Sep 20, 2025):

Set `GOMAXPROCS=1` in the **server** environment.


@mjyang04 commented on GitHub (Nov 5, 2025):

I still have the same problem:

(base) ymj@cics:~/resource$ export GOMAXPROCS=1
(base) ymj@cics:~/resource$ ollama create DeepSeek-R1-70B -f /home/ymj/dev/DeepSeek-R1-70B/Modelfile
gathering model components
copying file sha256:3b91e78c60e2708c9354d46fe4fc20520d0a12713e13d5ffab60118305c96620 100%
copying file sha256:b1d4326d0d4187830fcea956d8ccfafbf6871fa7baed1b759c1ed843cafe92c0 100%
copying file sha256:62a83ba7af95d532065e46322ae2c17ee020c3c8e92b7285420c09ca0d6f68f0 100%
copying file sha256:8189753d14d5f9b3c31466d1f5a185bb7ffc7fc346cbc06a5453e48e07c2b97b 100%
copying file sha256:928bea3ae97c3962b3eb3f51a66f674be4cd5858caae541becd3e175f3305403 100%
copying file sha256:1db6d4f8387897b0f40e505310a1542f6d0308370526fa86c42cdc99451d2200 100%
copying file sha256:572ed974eb237aacf3de9ddfeaa373c62d39038a96ffd7f2beec6ea8f8a7b16e 100%
copying file sha256:cd5194726d1e8f7361a8c8425fc11d33ade5e69de1fd7615eb23fae5601af68b 100%
copying file sha256:8ac8c85fb242563c2260baec0909debd69d718af6a0b3d90e6cab62b4d341cd5 100%
copying file sha256:4f57a2538539d61f9f22785d3c48f089227829948ee66572808dd13d5b08d63e 100%
copying file sha256:3f948c2df3e4f3e678e1b4c75b345ce646c4843a23e7c9089462cf3995d2af45 100%
copying file sha256:31a5eb90d0773f1514ae7b240ed62961412e48f9eb160fdac6226221b87fe61f 100%
copying file sha256:d01c2823a0393ded7f97556995b69965dad78665664853f1c8a2be901e643a44 100%
copying file sha256:e5a581377b62f0a918bf515d05f2514281eec031ea2748e7ce8123b910403bba 100%
copying file sha256:077f51bc9eb56b50868cb95c7cf0d6d0f0a524b824ba3af8b248c78a46593e97 100%
copying file sha256:e826381017b2d8a9d8704be34ee0e0ceeb0a3a9abca27beaa4555d370d5a4f10 100%
copying file sha256:0f2ce0194b017ae009448602d7fd4ac24d7f25059c0a87d42fc496000f7bdacc 100%
copying file sha256:95ef9768e4741543dbfaf0c274f101855883ff338b235c99eca2b6a4f4abee12 100%
copying file sha256:f888421726665e8a84b738eed42a64875aed79de8be7daade851ac8bf4c0cef9 100%
copying file sha256:b9c9eb63a8e03059914880f918cd28a880dec8b6e15e4461e1ff677e3743dbb8 100%
copying file sha256:aa4b0afa70d26873dabc927e59b7dbc24a0e8eda323d4fdba17927d316d3516c 100%
copying file sha256:3c43532c1b128f0135edc587ace540dfa53759916836960db06b73ab718e5eca 100%
copying file sha256:605b29e86ec80aaa3a38bb3aab4bc859e4df26928fe49bbca00f127f55897e8d 100%
converting model

@rick-github commented on GitHub (Nov 5, 2025):

Set `GOMAXPROCS=1` in the [**server environment**](https://github.com/ollama/ollama/blob/main/docs/faq.mdx#setting-environment-variables-on-linux).


Reference: github-starred/ollama#33745