[GH-ISSUE #3774] Error: llama runner process no longer running: 3221225785 #2330

Closed
opened 2026-04-12 12:39:10 -05:00 by GiteaMirror · 29 comments

Originally created by @pheonixravi on GitHub (Apr 20, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3774

Originally assigned to: @dhiltgen on GitHub.

[server.log](https://github.com/ollama/ollama/files/15047891/server.log)

Unable to run mistral or any other model locally using Ollama

```
C:\Users\ravik>ollama list
NAME              ID              SIZE    MODIFIED
mistral:latest    61e88e884507    4.1 GB  About an hour ago

C:\Users\ravik>ollama run mistral
Error: llama runner process no longer running: 3221225785
```
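For reference, the numeric status in that error is a raw Windows NTSTATUS code; converting it to hex makes it recognizable (the code meanings below are from the Windows NTSTATUS tables, not from this thread):

```python
# Windows reports process exit statuses as NTSTATUS codes; in hex they
# can be looked up directly in Microsoft's NTSTATUS reference.
print(hex(3221225785))  # 0xc0000139 = STATUS_ENTRYPOINT_NOT_FOUND (a DLL is missing an expected export)
print(hex(3221225781))  # 0xc0000135 = STATUS_DLL_NOT_FOUND (a required DLL could not be located)
```

Both decode to DLL-loading failures, which matches the missing/disappearing runner files discussed later in the thread.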

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

No response

GiteaMirror added the bug and windows labels 2026-04-12 12:39:10 -05:00

@carsonfeng commented on GitHub (Apr 21, 2024):

similar case:

```
ollama run llama3:70b
Error: llama runner process no longer running: 1 error:failed to create context with model '/root/.ollama/models/blobs/sha256-4fe022a8902336d3c452c88f7aca5590f5b5b02ccfd06320fdefab02412e1f0b'
```
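For scale, a back-of-envelope sketch of what llama3:70b needs just to hold its weights (assuming roughly 4.5 bits per parameter for Q4-style quantization — an estimate, not a figure from this thread):

```python
# Rough memory needed just for the quantized weights of a 70B model,
# assuming ~4.5 bits per parameter (Q4-style quantization plus overhead).
params = 70e9
weight_bytes = params * 4.5 / 8
print(round(weight_bytes / 2**30, 1))  # ~36.7 GiB, before KV cache and compute buffers
```

"failed to create context" at this size very often means the host simply ran out of memory while allocating on top of the weights.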


@fearnworks commented on GitHub (Apr 21, 2024):

Same issue here.


@diegocaumont commented on GitHub (Apr 22, 2024):

Same case here.


@hunt-47 commented on GitHub (Apr 22, 2024):

same issue here


@ecsfu commented on GitHub (Apr 22, 2024):

same issue here


@anishchhaparwal commented on GitHub (Apr 23, 2024):

same issue


@kiririnou commented on GitHub (Apr 23, 2024):

Same issue with llama3 on Windows:

```
time=2024-04-23T21:00:09.249+03:00 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
time=2024-04-23T21:00:09.305+03:00 level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: 3221225781 "
[GIN] 2024/04/23 - 21:00:09 | 500 |    1.6587976s |       127.0.0.1 | POST     "/api/chat"
```

@hammad93 commented on GitHub (Apr 23, 2024):

Hello, I was testing this on a Raspberry Pi CM4 with the aarch64 (ARM) architecture and also received the same error. As a workaround, I was able to run it through Docker:

https://hub.docker.com/r/ollama/ollama

The commands I used:

```
docker pull --platform linux/arm64 ollama/ollama
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run phi
```

Remove the `--platform` flag for other setups. I appreciated that the Docker image was only ~320 MB.


@HughieCN commented on GitHub (Apr 24, 2024):

Same issue here.


@ironicviking commented on GitHub (Apr 24, 2024):

Same issue on Windows 10 with an AMD 7800 XT. I uninstalled Ollama, downloaded 1.30, and installed it as administrator; that got it working. I had to fight the updater, since it tries to update in place, but installing as administrator again fixed it. So something has changed in the last release.


@hunt-47 commented on GitHub (Apr 24, 2024):

I think it's a problem with the model. I encountered this issue when I tried to use a fine-tuned variant of the official version, but quantized models from TheBloke work fine.


@dhiltgen commented on GitHub (Apr 24, 2024):

There may be a few different possible root causes, but I believe this should be resolved in 0.1.33 based on #3850


@yhd12138 commented on GitHub (Apr 25, 2024):

> There may be a few different possible root causes, but I believe this should be resolved in 0.1.33 based on #3850

Can we get the 0.1.33 version of the "ollama-windows-amd64.exe"? Thanks.


@Sarolta commented on GitHub (Apr 27, 2024):

Had the same issue running a model that hadn't finished downloading; it failed on the last step. Did a pull and it succeeded. Then the run command worked.


@abhishekmyway commented on GitHub (Apr 28, 2024):

Still the same issue on Linux:
![Screenshot 2024-04-28 195741](https://github.com/ollama/ollama/assets/51020286/70ef267b-9f14-4e36-b5d6-c53065860117)


@dhiltgen commented on GitHub (Apr 28, 2024):

The pre-release for [0.1.33](https://github.com/ollama/ollama/releases) is available now, which should resolve these exe missing/disappearing problems on Windows.


@aaronjrod commented on GitHub (Apr 28, 2024):

Hey @dhiltgen! I've just installed the pre-release, since I was having this issue. The model now runs, but not via GPU: `msg="[0] CUDA GPU is too old. Compute Capability detected: 1.0"`

In the 0.1.32 version, it would detect my GPU (GeForce GTX 970): `CUDART CUDA Compute Capability detected: 5.2`

Any recommendations?


@dhiltgen commented on GitHub (Apr 28, 2024):

@aaronjrod that's unexpected. Can you share the server log? We may need debug turned on, so quit the app, then in a PowerShell terminal run

```powershell
$env:OLLAMA_DEBUG="1"
ollama serve
```
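For readers on plain cmd.exe rather than PowerShell, the equivalent (assuming the same `ollama` binary on PATH) would be:

```shell
:: cmd.exe equivalent of the PowerShell snippet above;
:: the variable only applies to this console session.
set OLLAMA_DEBUG=1
ollama serve
```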

@aaronjrod commented on GitHub (Apr 28, 2024):

@dhiltgen sure! For context, I am using an Intel i5-6400 CPU and GeForce GTX 970 GPU:

PS C:\Users\Aaron> $env:OLLAMA_DEBUG="1"
PS C:\Users\Aaron> ollama serve
time=2024-04-28T16:56:46.238-04:00 level=INFO source=images.go:821 msg="total blobs: 5"
time=2024-04-28T16:56:46.240-04:00 level=INFO source=images.go:828 msg="total unused blobs removed: 0"
time=2024-04-28T16:56:46.241-04:00 level=INFO source=routes.go:1074 msg="Listening on 127.0.0.1:11434 (version 0.1.33-rc5)"
time=2024-04-28T16:56:46.241-04:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\Aaron\AppData\Local\Programs\Ollama\ollama_runners\cpu
time=2024-04-28T16:56:46.241-04:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\Aaron\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx
time=2024-04-28T16:56:46.241-04:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\Aaron\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2
time=2024-04-28T16:56:46.241-04:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\Aaron\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3
time=2024-04-28T16:56:46.241-04:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\Aaron\AppData\Local\Programs\Ollama\ollama_runners\rocm_v5.7
time=2024-04-28T16:56:46.241-04:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7 cpu]"
time=2024-04-28T16:56:46.242-04:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-04-28T16:56:46.242-04:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-04-28T16:56:46.242-04:00 level=INFO source=gpu.go:96 msg="Detecting GPUs"
time=2024-04-28T16:56:46.243-04:00 level=DEBUG source=gpu.go:203 msg="Searching for GPU library" name=cudart64_*.dll
time=2024-04-28T16:56:46.243-04:00 level=DEBUG source=gpu.go:221 msg="gpu library search" globs="[C:\\Users\\Aaron\\AppData\\Local\\Programs\\Ollama\\cudart64_*.dll c:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v*\\bin\\cudart64_*.dll C:\\Python38\\Scripts\\cudart64_*.dll* C:\\Python38\\cudart64_*.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_*.dll* C:\\Program Files (x86)\\Intel\\iCLS Client\\cudart64_*.dll* C:\\Program Files\\Intel\\iCLS Client\\cudart64_*.dll* C:\\WINDOWS\\system32\\cudart64_*.dll* C:\\WINDOWS\\cudart64_*.dll* C:\\WINDOWS\\System32\\Wbem\\cudart64_*.dll* C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\cudart64_*.dll* C:\\Program Files (x86)\\Intel\\Intel(R) Management Engine Components\\DAL\\cudart64_*.dll* C:\\Program Files\\Intel\\Intel(R) Management Engine Components\\DAL\\cudart64_*.dll* C:\\Program Files (x86)\\Intel\\Intel(R) Management Engine Components\\IPT\\cudart64_*.dll* C:\\Program Files\\Intel\\Intel(R) Management Engine Components\\IPT\\cudart64_*.dll* C:\\Program Files\\TortoiseGit\\bin\\cudart64_*.dll* C:\\Program Files\\nodejs\\cudart64_*.dll* C:\\UnxTools\\cudart64_*.dll* C:\\WINDOWS\\System32\\OpenSSH\\cudart64_*.dll* C:\\Program Files\\dotnet\\cudart64_*.dll* C:\\Program Files\\Microsoft SQL Server\\130\\Tools\\Binn\\cudart64_*.dll* C:\\Program Files\\Microsoft SQL Server\\Client SDK\\ODBC\\170\\Tools\\Binn\\cudart64_*.dll* C:\\Program Files (x86)\\Yarn\\bin\\cudart64_*.dll* C:\\ProgramData\\chocolatey\\bin\\cudart64_*.dll* c:\\k\\cudart64_*.dll* C:\\tools\\java\\jdk1.8.0_221\\bin\\cudart64_*.dll* C:\\Program Files\\Git\\cmd\\cudart64_*.dll* C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.1.1\\cudart64_*.dll* C:\\Users\\Aaron\\AppData\\Local\\Microsoft\\WindowsApps\\cudart64_*.dll* C:\\Users\\Aaron\\AppData\\Local\\atom\\bin\\cudart64_*.dll* C:\\Users\\Aaron\\AppData\\Local\\Programs\\Git\\cmd\\cudart64_*.dll* C:\\Users\\Aaron\\AppData\\Roaming\\npm\\cudart64_*.dll* 
C:\\Users\\Aaron\\AppData\\Local\\Microsoft\\WindowsApps\\cudart64_*.dll* C:\\Users\\Aaron\\cudart64_*.dll* C:\\Users\\Aaron\\AppData\\Local\\Programs\\Microsoft VS Code\\bin\\cudart64_*.dll* C:\\Users\\Aaron\\AppData\\Local\\Programs\\Ollama\\cudart64_*.dll*]"
time=2024-04-28T16:56:46.262-04:00 level=DEBUG source=gpu.go:249 msg="discovered GPU libraries" paths="[C:\\Users\\Aaron\\AppData\\Local\\Programs\\Ollama\\cudart64_110.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_60.dll]"
cudaSetDevice err: 3
time=2024-04-28T16:56:46.462-04:00 level=DEBUG source=gpu.go:261 msg="Unable to load cudart" library=C:\Users\Aaron\AppData\Local\Programs\Ollama\cudart64_110.dll error="cudart init failure: 3"
CUDA driver version: 9-1
time=2024-04-28T16:56:46.469-04:00 level=INFO source=gpu.go:101 msg="detected GPUs" library="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_60.dll" count=1
time=2024-04-28T16:56:46.470-04:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[GPU-00000000-0100-0000-00c0-000000000000] CUDA totalMem 0
[GPU-00000000-0100-0000-00c0-000000000000] CUDA freeMem 3540602060
[GPU-00000000-0100-0000-00c0-000000000000] Compute Capability 1.0
time=2024-04-28T16:56:46.552-04:00 level=INFO source=gpu.go:148 msg="[0] CUDA GPU is too old. Compute Capability detected: 1.0"
time=2024-04-28T16:56:46.554-04:00 level=DEBUG source=amd_windows.go:32 msg="unable to load amdhip64.dll: The specified module could not be found."

@xavernsk commented on GitHub (Apr 30, 2024):

Facing this issue on my Windows laptop when trying to use the Ollama API (already on the pre-release install):


C:\Users\Xaver>ollama serve
Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.

C:\Users\Xaver>ollama serve
time=2024-04-30T10:12:12.263+07:00 level=INFO source=images.go:821 msg="total blobs: 5"
time=2024-04-30T10:12:12.303+07:00 level=INFO source=images.go:828 msg="total unused blobs removed: 0"
time=2024-04-30T10:12:12.305+07:00 level=INFO source=routes.go:1074 msg="Listening on 127.0.0.1:11434 (version 0.1.33-rc5)"
time=2024-04-30T10:12:12.306+07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7]"
time=2024-04-30T10:12:12.306+07:00 level=INFO source=gpu.go:96 msg="Detecting GPUs"
time=2024-04-30T10:12:12.336+07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-30T10:12:12.405+07:00 level=INFO source=amd_windows.go:39 msg="AMD Driver: 50422971"
time=2024-04-30T10:12:12.409+07:00 level=INFO source=amd_windows.go:68 msg="detected hip devices" count=3
time=2024-04-30T10:12:12.409+07:00 level=INFO source=amd_windows.go:88 msg="hip device" id=0 name="AMD Radeon(TM) Graphics" gfx=gfx90c:xnack-
time=2024-04-30T10:12:12.409+07:00 level=INFO source=amd_windows.go:99 msg="iGPU detected skipping" id=0
time=2024-04-30T10:12:12.409+07:00 level=INFO source=amd_windows.go:88 msg="hip device" id=1 name="AMD Radeon(TM) Graphics" gfx=gfx90c:xnack-
time=2024-04-30T10:12:12.410+07:00 level=INFO source=amd_windows.go:99 msg="iGPU detected skipping" id=1
time=2024-04-30T10:12:12.410+07:00 level=INFO source=amd_windows.go:88 msg="hip device" id=2 name="AMD Radeon(TM) Graphics" gfx=gfx90c:xnack-
time=2024-04-30T10:12:12.410+07:00 level=INFO source=amd_windows.go:99 msg="iGPU detected skipping" id=2
time=2024-04-30T10:12:35.752+07:00 level=INFO source=gpu.go:96 msg="Detecting GPUs"
time=2024-04-30T10:12:35.793+07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-30T10:12:35.856+07:00 level=INFO source=amd_windows.go:39 msg="AMD Driver: 50422971"
time=2024-04-30T10:12:35.860+07:00 level=INFO source=amd_windows.go:68 msg="detected hip devices" count=3
time=2024-04-30T10:12:35.860+07:00 level=INFO source=amd_windows.go:88 msg="hip device" id=0 name="AMD Radeon(TM) Graphics" gfx=gfx90c:xnack-
time=2024-04-30T10:12:35.861+07:00 level=INFO source=amd_windows.go:99 msg="iGPU detected skipping" id=0
time=2024-04-30T10:12:35.861+07:00 level=INFO source=amd_windows.go:88 msg="hip device" id=1 name="AMD Radeon(TM) Graphics" gfx=gfx90c:xnack-
time=2024-04-30T10:12:35.861+07:00 level=INFO source=amd_windows.go:99 msg="iGPU detected skipping" id=1
time=2024-04-30T10:12:35.861+07:00 level=INFO source=amd_windows.go:88 msg="hip device" id=2 name="AMD Radeon(TM) Graphics" gfx=gfx90c:xnack-
time=2024-04-30T10:12:35.861+07:00 level=INFO source=amd_windows.go:99 msg="iGPU detected skipping" id=2
time=2024-04-30T10:12:41.419+07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-30T10:12:41.454+07:00 level=INFO source=server.go:290 msg="starting llama server" cmd="C:\\Users\\Xaver\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cpu_avx2\\ollama_llama_server.exe --model C:\\Users\\Xaver\\.ollama\\models\\blobs\\sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 --ctx-size 2048 --batch-size 512 --embedding --log-disable --mlock --parallel 1 --port 53577"
time=2024-04-30T10:12:41.556+07:00 level=INFO source=sched.go:327 msg="loaded runners" count=1
time=2024-04-30T10:12:41.557+07:00 level=INFO source=server.go:439 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"16228","timestamp":1714446761}
{"build":2737,"commit":"46e12c4","function":"wmain","level":"INFO","line":2820,"msg":"build info","tid":"16228","timestamp":1714446761}
{"function":"wmain","level":"INFO","line":2827,"msg":"system info","n_threads":6,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LAMMAFILE = 1 | ","tid":"16228","timestamp":1714446761,"total_threads":12}
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from C:\Users\Xaver\.ollama\models\blobs\sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv   2:                          llama.block_count u32              = 32
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name     = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_tensors: ggml ctx size =    0.15 MiB
llm_load_tensors:        CPU buffer size =  4437.80 MiB
.......................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.50 MiB
ggml_backend_cpu_buffer_type_alloc_buffer: failed to allocate buffer of size 271056928
ggml_gallocr_reserve_n: failed to allocate CPU buffer of size 271056896
llama_new_context_with_model: failed to allocate compute buffers
llama_init_from_gpt_params: error: failed to create context with model 'C:\Users\Xaver\.ollama\models\blobs\sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29'
{"function":"load_model","level":"ERR","line":410,"model":"C:\\Users\\Xaver\\.ollama\\models\\blobs\\sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29","msg":"unable to load model","tid":"16228","timestamp":1714446773}
time=2024-04-30T10:12:54.007+07:00 level=ERROR source=sched.go:333 msg="error loading llama server" error="llama runner process no longer running: 1 error:failed to create context with model 'C:\\Users\\Xaver\\.ollama\\models\\blobs\\sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29'"
[GIN] 2024/04/30 - 10:12:54 | 500 |   18.2617974s |       127.0.0.1 | POST     "/api/generate"
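The tail of that log pinpoints the failure: the weights loaded (≈4437 MiB into a CPU buffer), but the final compute-buffer allocation failed, so the runner exited with status 1. A quick conversion of the size the log reports:

```python
# Size of the allocation that failed, taken from the log line
# "failed to allocate buffer of size 271056928", converted to MiB.
print(round(271056928 / 2**20, 1))  # 258.5 MiB
```

With `--mlock` pinning the whole model in RAM, a machine without that much free memory on top of the weights and KV cache can fail even this allocation; freeing RAM or loading a smaller model/context is the usual workaround.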

@abluejay-piyo commented on GitHub (May 1, 2024):

same with mixtral:8x22b


@dhiltgen commented on GitHub (May 1, 2024):

@aaronjrod's issue with the PhysX version of cudart is being tracked in #4008 with a workaround to adjust your PATH to point to a different CUDA directory first.
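As a side note, the numeric exit code in the original report (3221225785) is much easier to look up once converted to hex; this is a generic trick for Windows exit codes, not something taken from the logs above:

```shell
# Convert the decimal Windows exit code to its NTSTATUS hex form
printf '0x%x\n' 3221225785
# -> 0xc0000139, i.e. STATUS_ENTRYPOINT_NOT_FOUND (a missing or mismatched
#    DLL export, consistent with a wrong cudart DLL being picked up from PATH)
```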

@xavernsk I could be misreading the logs, but it seems like you may be trying to load a model that's larger than your physical memory. failed to allocate CPU buffer of size 271056896
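For reference, the failed allocation itself is modest; adding up the buffer sizes llama.cpp reports in the log above gives a rough lower bound on the RAM the load needs (a back-of-the-envelope sketch, not an exact accounting):

```shell
# Size of the compute buffer that failed to allocate, in MiB
awk 'BEGIN { printf "compute buffer: %.1f MiB\n", 271056896/1048576 }'

# Rough total from the log: weights + KV cache + output + compute buffer (MiB)
awk 'BEGIN { printf "approx. total: %.0f MiB\n", 4437.80 + 256.00 + 0.50 + 271056896/1048576 }'
```

That comes to roughly 5 GiB before OS overhead, so the failure points at overall memory pressure rather than the ~258 MiB buffer itself.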

@anjia1991 can you share your server log?


@dylan-sh commented on GitHub (May 2, 2024):

Same with mixtral:8x22b on Ubuntu with an RTX 4090.


@kiririnou commented on GitHub (May 5, 2024):

After updating to version 0.1.33, I was able to successfully run llama3 on Windows.


@benlodz commented on GitHub (May 6, 2024):

I was having the exact same issue on Arch and I can confirm it's gone away for me too.


@pheonixravi commented on GitHub (May 7, 2024):

Yes, same for me; it's been resolved after the updates.


@dhiltgen commented on GitHub (May 7, 2024):

Sounds like we can close this issue out as resolved now. The PhysX bug should also be resolved in the next release (0.1.34) when it comes out.


@colorfuldarkgray commented on GitHub (Aug 4, 2024):

I was trying to get llama3.1 to assist me with some academic writing. First it tried to write a little bit. Then it refused to help, writing a message saying literally that it couldn't help me. Then this:

ValueError: Ollama call failed with status code 500. Details: {"error":"llama runner process no longer running: -1 "}

Is there a disclaimer about not assisting with academic writing or something?

I am trying to stop and disable ollama but I get

Failed to stop/disable ollama.service: Unit ollama.service not loaded.
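For what it's worth, that systemd message usually just means ollama was not installed as a service on that machine (for example, it was started manually). A quick way to check what is actually running (this sketch assumes systemd and standard procps tools are available):

```shell
# Is there an ollama systemd unit installed at all?
systemctl list-unit-files 2>/dev/null | grep -i ollama || echo "no ollama unit installed"

# Is ollama running as a plain process instead?
pgrep -a ollama || echo "no ollama process found"
```

If only the second command finds something, stopping the process directly (or however it was started) is the way to go, not `systemctl`.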


@dhiltgen commented on GitHub (Aug 6, 2024):

@colorfuldarkgray your problem sounds unrelated to this issue, and no, it shouldn't crash like that. Please go ahead and submit a new issue reporting your crash, and share the server logs so we can investigate.

Reference: github-starred/ollama#2330