[GH-ISSUE #4572] Error: llama runner process has terminated: exit status 0xc0000409 #28627

Closed
opened 2026-04-22 07:04:59 -05:00 by GiteaMirror · 35 comments

Originally created by @NeoFii on GitHub (May 22, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4572

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I encountered issues while deploying my fine-tuned model using Ollama. I have successfully created my own model locally.
[screenshot: 2024-05-22_154533]

When I used the command ollama run legalassistant, an error occurred.
Error: llama runner process has terminated: exit status 0xc0000409
I don't know what's wrong, could you help me?

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.1.38

GiteaMirror added the nvidia, bug, windows labels 2026-04-22 07:04:59 -05:00

@Just0Focus commented on GitHub (May 22, 2024):

Same here, tryna run yarn-llama2:

PS C:\Users\1p-A_Win11> ollama run yarn-llama2:7b-64k-q6_K
pulling manifest
pulling c85a99000ec1... 100% ▕████████████████████████████▏ 5.5 GB
pulling e9d3a814cdd6... 100% ▕████████████████████████████▏   17 B
pulling 824800269a73... 100% ▕████████████████████████████▏  307 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: llama runner process has terminated: exit status 0xc0000409 error:failed to create context with model 'C:\Users\1p-A_Win11\.ollama\models\blobs\sha256-c85a99000ec1bc847530adb9aff086fb7c16d028d2470e74f41a61244bb56aef'
PS C:\Users\1p-A_Win11> ollama run yarn-llama2:7b-128k-q6_K
pulling manifest
pulling 7600d8f4045d... 100% ▕████████████████████████████▏ 5.5 GB
pulling 1639d5c1f004... 100% ▕████████████████████████████▏   18 B
pulling abfa203235cd... 100% ▕████████████████████████████▏  307 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: llama runner process has terminated: exit status 0xc0000409 error:failed to create context with model 'C:\Users\1p-A_Win11\.ollama\models\blobs\sha256-7600d8f4045d1b48d84fd301e8bb74cdc046f7de586fa1964a186b39052c33be'
PS C:\Users\1p-A_Win11>

I tried normal releases with the same result.


@dhiltgen commented on GitHub (May 22, 2024):

Can you share your server log?

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md


@HarrisonMulderSkyboxLabs commented on GitHub (May 24, 2024):

I'm in a similar boat. My steps:

  • Download Llama-3-8B-Instruct from Meta's HF
  • Fine-tune the model on my dataset w/ LLaMA-Factory
  • Merge the LoRA into the model w/ LLaMA-Factory
  • Quantize the merged model into f16 using llama.cpp
  • Quantize the f16 GGUF into Q8_0, also using llama.cpp
  • Created a Modelfile (copy/pasted the template from the relevant ollama base model, https://ollama.com/library/llama3:8b-instruct-q8_0/blobs/8ab4849b038c; a sketch Modelfile is shown after this comment):

[image: screenshot of the Modelfile]

  • Run "ollama create testingImport -f Modelfile"
  • Observe success message
  • Run "ollama run testingImport"
  • Observe failure message:
    "Error: llama runner process has terminated: exit status 0xc0000409"

Summary:

  • Downloaded Llama3 8B Instruct from Meta and fine-tuned it with LLaMA-Factory, merging the LoRA into the model
  • Quantized the fine-tuned model into f16, then Q8_0 with llama.cpp
  • Followed ollama's docs page on importing GGUFs and made a Modelfile
  • Created the model and attempted to run it

Hope this helps narrow down the issue :)
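
A minimal sketch of such a Modelfile: the GGUF filename is a made-up example, and the TEMPLATE and PARAMETER lines are roughly what the llama3:8b-instruct-q8_0 library blob linked above contains, so verify them against your own base model's page.

# FROM points at the quantized GGUF produced above; the filename is an example
FROM ./llama3-8b-instruct-finetuned-q8_0.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"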


@kozuch commented on GitHub (May 25, 2024):

Similar issue:

error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'qwen2'
https://github.com/ollama/ollama/issues/4457


@kozuch commented on GitHub (May 27, 2024):

Is this a duplicate of https://github.com/ollama/ollama/issues/4457?


@HarrisonMulderSkyboxLabs commented on GitHub (May 28, 2024):

(Quoting their earlier comment from May 24, above.)

I ended up solving my issue.
When quantizing with llama.cpp previously, I had gone into 'convert-hf-to-gguf-update.py' and added the Llama 3 8B Instruct model to the 'models' list on Line 64 and gave it a name. However, I failed to notice that the Llama 3 8B (base model, non-instruct) was already in that list, and that I shouldn't have touched the file.

After reverting my changes to that file, running 'convert-hf-to-gguf-update.py' followed by 'convert-hf-to-gguf.py' solved my issue.

For OP, I suggest checking the 'models' list in 'convert-hf-to-gguf-update.py' in llama.cpp and making sure your base (untrained) model is in there; if it isn't, file a ticket to request support for the model you want. Alternatively, you could try fine-tuning a different base model that is already in that list, like Meta's Llama 3, or Phi 3 (if you're on a less performant machine).

Hope this helps!
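
For reference, a rough sketch of that conversion path in llama.cpp. This is a sketch only: paths and output filenames are examples, the Hugging Face token placeholder must be replaced with your own, and the quantize binary is named llama-quantize in newer builds.

python convert-hf-to-gguf-update.py <hf_read_token>   # regenerates the pre-tokenizer list; don't edit it by hand
python convert-hf-to-gguf.py ./merged-model --outtype f16 --outfile merged-f16.gguf
./quantize merged-f16.gguf merged-q8_0.gguf Q8_0

The resulting merged-q8_0.gguf is then what the Modelfile's FROM line should point at.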


@kozuch commented on GitHub (May 30, 2024):

The 0xc0000409 error has been fixed for me in 0.1.39. See https://github.com/ollama/ollama/issues/4457 for more info.


@freshlesh3 commented on GitHub (Jun 11, 2024):

I get this error when trying to download the Qwen2 models on Windows 11. Any suggestions?


@LiuMingfeng0 commented on GitHub (Jun 12, 2024):

Maybe it's because your Ollama version is too old; try downloading a newer Ollama and trying again.


@itay1551 commented on GitHub (Jun 12, 2024):

> I get this error when trying to download the Qwen2 models on Windows 11. Any suggestions?

The solution for me was to update Ollama to version 0.1.42, which solved the problem.
To do it, click on the Ollama tray icon and press the "restart to update" button.


@godwincod3s commented on GitHub (Jun 15, 2024):

Facing the same issue: when I download ollama.exe and try to run "ollama run llama2", it gives me the error "Error: llama runner process has terminated: exit status 0xc0000409 CUDA error" on my Windows machine.


@MNeMoNiCuZ commented on GitHub (Jun 16, 2024):

I also have the same error on the qwen2 models. Both 1.5b and 7b.


@godwincod3s commented on GitHub (Jun 17, 2024):

This is my log file.

When I run the command in cmd: ollama run llama2
I get the response: Error: llama runner process has terminated: exit status 0xc0000409 CUDA error

Please see the attached log file: server.log (https://github.com/user-attachments/files/15877298/server.log)


@dhiltgen commented on GitHub (Jun 18, 2024):

@godwinjs it looks like you have a 2G card, so only a small amount of llama2 will fit, and unfortunately our memory prediction algorithm overshot the available memory, leading to an out-of-memory crash. As a workaround until we get that fixed, you can force the ollama server to use a smaller amount of VRAM by setting OLLAMA_MAX_VRAM to something like 1610612736 (1.5G). First quit the tray app, then in a PowerShell terminal:

$env:OLLAMA_MAX_VRAM="1610612736"
& "ollama app"

then try running llama2 again.

@MNeMoNiCuZ please make sure to upgrade to the latest version for qwen2 support.


@godwincod3s commented on GitHub (Jun 18, 2024):

I get the response:

env:OLLAMA_MAX_VRAM=1610612736 : The term 'env:OLLAMA_MAX_VRAM=1610612736' is not recognized as the name of a
cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included,
verify that the path is correct and try again.
At line:1 char:1
+ env:OLLAMA_MAX_VRAM="1610612736"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (env:OLLAMA_MAX_VRAM=1610612736:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

In the meantime I'll search how to do this manually, but does anyone know the set of lines to run that doesn't throw an error?
    

@godwincod3s commented on GitHub (Jun 18, 2024):

thank you @dhiltgen


@godwincod3s commented on GitHub (Jun 18, 2024):

My bad @dhiltgen, I didn't include the "$" in the command. I did it accordingly and got no errors, but the original error still persists.

ollama run llama2

Error: llama runner process has terminated: exit status 0xc0000409 CUDA error"


@godwincod3s commented on GitHub (Jun 18, 2024):

@dhiltgen this is my new log file:
serverLog.txt (https://github.com/user-attachments/files/15893195/serverLog.txt)


@godwincod3s commented on GitHub (Jun 18, 2024):

I used cmd with the command:

set OLLAMA_MAX_VRAM=4096

Everything works fine now.
Thank you @dhiltgen.


@dhiltgen commented on GitHub (Jun 18, 2024):

@godwinjs 4096 bytes will cause us to run on CPU and not load any layers into the GPU. I'd try setting a value below 1.5G, but as large as you can get it to run, for the best performance.
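
A small sketch of setting a larger cap: the value is in bytes, 1468006400 is just an example for roughly 1.4 GiB, and the server has to be started from the same terminal after quitting the tray app so it inherits the variable.

# PowerShell
$env:OLLAMA_MAX_VRAM = "1468006400"
ollama serve

:: cmd.exe
set OLLAMA_MAX_VRAM=1468006400
ollama serve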


@dhiltgen commented on GitHub (Jul 3, 2024):

Are you still seeing the failure with the latest version?


@LiuMingfeng0 commented on GitHub (Jul 4, 2024):

No. I have had the same problem twice. The solution was to update Ollama. I mean with new models like qwen2 and gemma2, not a self-fine-tuned model.



@keno-log commented on GitHub (Jul 9, 2024):

ollama 0.2.1,
ollama run gemma2:27b-instruct-q8_0
got the same issue.


@alzubitariq commented on GitHub (Jul 19, 2024):

C:\Users\tmmz>ollama run akuldatta/mistral-nemo-instruct-12b:q5km
pulling manifest
pulling 913722d032f3... 100% ▕████████████████████████████████████████████████████████▏ 8.7 GB
pulling b9e4f1ee84fe... 100% ▕████████████████████████████████████████████████████████▏ 266 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: llama runner process has terminated: exit status 0xc0000409 error loading model: check_tensor_dims: tensor 'blk.0.attn_q.weight' has wrong shape; expected 5120, 5120, got 5120, 4096, 1, 1

C:\Users\tmmz>ollama run akuldatta/mistral-nemo-instruct-12b:q5km
Error: llama runner process has terminated: exit status 0xc0000409 error loading model: check_tensor_dims: tensor 'blk.0.attn_q.weight' has wrong shape; expected 5120, 5120, got 5120, 4096, 1, 1

C:\Users\tmmz>ollama --version
ollama version is 0.2.5

C:\Users\tmmz>


@alzubitariq commented on GitHub (Jul 19, 2024):

@dhiltgen Hi, I'm having the same issue.


@ArashIranfar commented on GitHub (Jul 21, 2024):

Updating Ollama solved the issue for me. I was trying to run Gemma2.


@dexmac221 commented on GitHub (Jul 22, 2024):

Hi, is this solved for Nemo? Here using Ollama 0.2.7:
ollama run akuldatta/mistral-nemo-instruct-12b:q5km
Error: llama runner process has terminated: signal: aborted error loading model: check_tensor_dims: tensor 'blk.0.attn_q.weight' has wrong shape; expected 5120, 5120, got 5120, 4096, 1, 1


@GabriIT commented on GitHub (Jul 22, 2024):

Same error here, Ollama version 0.2.7.
ollama run akuldatta/mistral-nemo-instruct-12b:q5km Error: llama runner process has terminated: signal: aborted (core dumped) error loading model: check_tensor_dims: tensor 'blk.0.attn_q.weight' has wrong shape; expected 5120, 5120, got 5120, 4096, 1, 1


@dhiltgen commented on GitHub (Jul 23, 2024):

Support for mistral-nemo was added in v0.2.8

This issue has drifted a bit from the original submission, so I'm going to close it now. In the latest release we've refined the error logging to remove the "noise" of the 0xc0000409 exit status when we're able to tell why it exited since the status code isn't unique. This should help make sure we don't wind up with unrelated problems piling into existing issues and getting lost in the noise.

@NeoFii if you're still having difficulty with converting a model after upgrading to the latest version, please file a model request issue with more details on the architecture, location, etc. so we can reproduce.


@gacekk commented on GitHub (Aug 5, 2024):

Hi,

I have been getting the same issue on Win 11. I have a 3090 with 24 GB VRAM and set the env to use only 20 GB.

Trying to run my own fine-tuned 7b model.
Below is the log from the server:

2024/08/05 10:43:02 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\kosia\\.ollama\\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\kosia\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-05T10:43:02.400+02:00 level=INFO source=images.go:781 msg="total blobs: 3"
time=2024-08-05T10:43:02.401+02:00 level=INFO source=images.go:788 msg="total unused blobs removed: 1"
time=2024-08-05T10:43:02.402+02:00 level=INFO source=routes.go:1155 msg="Listening on 127.0.0.1:11434 (version 0.3.3)"
time=2024-08-05T10:43:02.402+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v6.1]"
time=2024-08-05T10:43:02.402+02:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-08-05T10:43:02.522+02:00 level=INFO source=types.go:105 msg="inference compute" id=GPU-f6df223a-c036-ffc1-282e-7cf3f48550de library=cuda compute=8.6 driver=12.4 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"
[GIN] 2024/08/05 - 10:43:34 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/08/05 - 10:44:06 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/08/05 - 10:44:06 | 200 |      3.6112ms |       127.0.0.1 | POST     "/api/show"
time=2024-08-05T10:44:06.159+02:00 level=INFO source=sched.go:710 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\kosia\.ollama\models\blobs\sha256-6a3cce23caa117ce5f7394ae5fb1274eb1ccd2a8d70302bd41c20c34cc37b408 gpu=GPU-f6df223a-c036-ffc1-282e-7cf3f48550de parallel=4 available=24412684288 required="15.7 GiB"
time=2024-08-05T10:44:06.159+02:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[22.7 GiB]" memory.required.full="15.7 GiB" memory.required.partial="15.7 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[15.7 GiB]" memory.weights.total="14.0 GiB" memory.weights.repeating="13.8 GiB" memory.weights.nonrepeating="250.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="585.0 MiB"
time=2024-08-05T10:44:06.164+02:00 level=INFO source=server.go:384 msg="starting llama server" cmd="C:\\Users\\kosia\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model C:\\Users\\kosia\\.ollama\\models\\blobs\\sha256-6a3cce23caa117ce5f7394ae5fb1274eb1ccd2a8d70302bd41c20c34cc37b408 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --no-mmap --parallel 4 --port 54768"
time=2024-08-05T10:44:06.166+02:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-05T10:44:06.166+02:00 level=INFO source=server.go:584 msg="waiting for llama runner to start responding"
time=2024-08-05T10:44:06.166+02:00 level=INFO source=server.go:618 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3485 commit="6eeaeba1" tid="19428" timestamp=1722847446
INFO [wmain] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="19428" timestamp=1722847446 total_threads=16
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="54768" tid="19428" timestamp=1722847446
llama_model_loader: loaded meta data with 28 key-value pairs and 291 tensors from C:\Users\kosia\.ollama\models\blobs\sha256-6a3cce23caa117ce5f7394ae5fb1274eb1ccd2a8d70302bd41c20c34cc37b408 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                          general.file_type u32              = 1
llama_model_loader: - kv   2:                               general.name str              = llama
llama_model_loader: - kv   3:               general.quantization_version u32              = 2
llama_model_loader: - kv   4:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   5:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   6:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv   7:                          llama.block_count u32              = 32
llama_model_loader: - kv   8:                       llama.context_length u32              = 32768
llama_model_loader: - kv   9:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  10:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  11:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  13:                           llama.vocab_size u32              = 32000
llama_model_loader: - kv  14:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  15:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  16:           tokenizer.ggml.add_padding_token bool             = false
llama_model_loader: - kv  17:           tokenizer.ggml.add_unknown_token bool             = false
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  20:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  22:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  23:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  24:                      tokenizer.ggml.scores arr[f32,32003]   = [0.000000, 0.000000, 1.000000, 1.0000...
llama_model_loader: - kv  25:                  tokenizer.ggml.token_type arr[i32,32003]   = [3, 1, 3, 1, 1, 3, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  26:                      tokenizer.ggml.tokens arr[str,32003]   = ["<unk>", "<unk>", "<s>", "<s>", "</s...
llama_model_loader: - kv  27:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:  226 tensors
C:\a\ollama\ollama\llm\llama.cpp\src\llama.cpp:5511: GGML_ASSERT(vocab.id_to_token.size() == vocab.token_to_id.size()) failed
time=2024-08-05T10:44:06.625+02:00 level=INFO source=server.go:618 msg="waiting for server to become available" status="llm server not responding"
time=2024-08-05T10:44:08.717+02:00 level=INFO source=server.go:618 msg="waiting for server to become available" status="llm server error"
time=2024-08-05T10:44:09.223+02:00 level=ERROR source=sched.go:451 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409"
[GIN] 2024/08/05 - 10:44:09 | 500 |    3.0914469s |       127.0.0.1 | POST     "/api/chat"

@dhiltgen commented on GitHub (Aug 5, 2024):

@gacekk I'll refine our error parsing code to better catch this, but your failure looks like something isn't quite right in your fine-tune:

GGML_ASSERT(vocab.id_to_token.size() == vocab.token_to_id.size())
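
One way to sanity-check a GGUF for this kind of vocab mismatch before importing it is to dump its metadata and compare llama.vocab_size against the tokenizer arrays. A sketch, assuming the gguf Python package that ships with llama.cpp and its gguf-dump helper (the script name and output format have varied across versions):

pip install gguf
gguf-dump path\to\your-model.gguf | findstr /i "vocab_size tokenizer.ggml"

In the log above, llama.vocab_size is 32000 while the tokenizer token/type/score arrays are declared with 32003 entries, which is consistent with a tokenizer metadata problem rather than the weights.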

@ash1ni commented on GitHub (Oct 2, 2024):

If you are on Windows and facing this error, try updating Ollama. Mine got resolved by updating.


@respwill commented on GitHub (Nov 11, 2024):

I am a Windows user and faced the same issue. It was resolved by updating Ollama.


@medazizktata commented on GitHub (Nov 29, 2024):

C:\Users\PC>ollama run llama3.2
Error: llama runner process has terminated: exit status 0xc0000409 error loading model: done_getting_tensors: wrong number of tensors; expected 255, got 254
What should I do?


@jessegross commented on GitHub (Dec 3, 2024):

@medazizktata Can you please file a new bug and include server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues)? This is unlikely to be related to the original issue.

Reference: github-starred/ollama#28627