[GH-ISSUE #2945] Error: Post "http://127.0.0.1:11434/api/generate": EOF / CUDA errors when trying to run ollama in terminal #48320

Closed
opened 2026-04-28 07:43:22 -05:00 by GiteaMirror · 2 comments

Originally created by @jferments on GitHub (Mar 6, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2945

I am using Ollama version 0.1.20 and am getting CUDA errors when running Ollama in the terminal or from Python scripts. When I run either of these in the terminal:

ollama run mistral
ollama run orca-mini

They fail, and the only message printed is:

Error: Post "http://127.0.0.1:11434/api/generate": EOF

The failures are caused by CUDA errors, as the logs below show, but nothing about CUDA appears in the terminal output.
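
For reference, the same request can be made directly with curl (a minimal sketch; the JSON body here assumes the documented /api/generate fields, and when the bug is active curl just reports an empty reply because the connection is dropped):

# Minimal direct request to the local Ollama server.
# Expect an empty reply / dropped connection while the bug is active.
curl -v http://127.0.0.1:11434/api/generate \
  -d '{"model": "mistral", "prompt": "Hello", "stream": false}'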

Here is output from journalctl for ollama:
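
(Captured with something like the following; the time window just brackets the failure:)

# Show ollama service logs around the time of the crash
journalctl -u ollama --no-pager --since "2024-03-05 11:00" --until "2024-03-05 11:01"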

Mar 05 11:00:25 jesse-MS-7C02 ollama[74384]: llm_load_tensors: ggml ctx size =    0.11 MiB
Mar 05 11:00:25 jesse-MS-7C02 ollama[74384]: llm_load_tensors: mem required  = 3917.98 MiB
Mar 05 11:00:25 jesse-MS-7C02 ollama[74384]: llm_load_tensors: offloading 32 repeating layers to GPU
Mar 05 11:00:25 jesse-MS-7C02 ollama[74384]: llm_load_tensors: offloading non-repeating layers to GPU
Mar 05 11:00:25 jesse-MS-7C02 ollama[74384]: llm_load_tensors: offloaded 33/33 layers to GPU
Mar 05 11:00:25 jesse-MS-7C02 ollama[74384]: llm_load_tensors: VRAM used: 0.00 MiB
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: ...................................................................................................
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: llama_new_context_with_model: n_ctx      = 2048
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: llama_new_context_with_model: freq_base  = 1000000.0
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: llama_new_context_with_model: freq_scale = 1
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: CUDA error 999 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:495: unknown error
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: current device: -1809317920
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: Lazy loading /tmp/ollama801692426/cuda/libext_server.so library
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:495: !"CUDA error"
Mar 05 11:00:26 jesse-MS-7C02 ollama[74962]: Could not attach to process.  If your uid matches the uid of the target
Mar 05 11:00:26 jesse-MS-7C02 ollama[74962]: process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
Mar 05 11:00:26 jesse-MS-7C02 ollama[74962]: again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
Mar 05 11:00:26 jesse-MS-7C02 ollama[74962]: ptrace: Inappropriate ioctl for device.
Mar 05 11:00:26 jesse-MS-7C02 ollama[74962]: No stack.
Mar 05 11:00:26 jesse-MS-7C02 ollama[74962]: The program is not being run.
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: SIGABRT: abort
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: PC=0x7fc01a899a1b m=14 sigcode=18446744073709551610
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: signal arrived during cgo execution
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: goroutine 41 [syscall]:
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: runtime.cgocall(0x9c3170, 0xc00033a608)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]:         /usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc00033a5e0 sp=0xc00033a5a8 pc=0x4291cb
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/llm._Cfunc_dynamic_shim_llama_server_init({0x7fbf94001d40, 0x7fbf70dfa410, 0x7fbf70d>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]:         _cgo_gotypes.go:287 +0x45 fp=0xc00033a608 sp=0xc00033a5e0 pc=0x7cf965
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/llm.(*shimExtServer).llama_server_init.func1(0x45973b?, 0x80?, 0x80?)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]:         /go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:40 +0xec fp=0xc00033a6f8 sp=0xc00033a608 pc=0>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/llm.(*shimExtServer).llama_server_init(0xc00010a2d0?, 0x0?, 0x43a2e8?)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]:         /go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:40 +0x13 fp=0xc00033a720 sp=0xc00033a6f8 pc=0>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/llm.newExtServer({0x17845038, 0xc0004327e0}, {0xc000190af0, _}, {_, _, _}, {0x0, 0x0>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]:         /go/src/github.com/jmorganca/ollama/llm/ext_server_common.go:139 +0x70e fp=0xc00033a8e0 sp=0xc00033a720 >
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/llm.newDynamicShimExtServer({0xc0000be000, 0x2a}, {0xc000190af0, _}, {_, _, _}, {0x0>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]:         /go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:93 +0x547 fp=0xc00033aaf8 sp=0xc00033a8e0 pc=>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/llm.newLlmServer({0xc3fc44, 0x4}, {0xc000190af0, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]:         /go/src/github.com/jmorganca/ollama/llm/llm.go:125 +0x149 fp=0xc00033ac78 sp=0xc00033aaf8 pc=0x7ceac9
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/llm.New({0xc00048e240?, 0x0?}, {0xc000190af0, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]:         /go/src/github.com/jmorganca/ollama/llm/llm.go:115 +0x628 fp=0xc00033aef0 sp=0xc00033ac78 pc=0x7ce608
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/server.load(0xc000002f00?, 0xc000002f00, {{0x0, 0x800, 0x200, 0x1, 0xfffffffffffffff>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]:         /go/src/github.com/jmorganca/ollama/server/routes.go:84 +0x425 fp=0xc00033b0a0 sp=0xc00033aef0 pc=0x99ef>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/server.GenerateHandler(0xc000466600)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]:         /go/src/github.com/jmorganca/ollama/server/routes.go:191 +0x8c8 fp=0xc00033b748 sp=0xc00033b0a0 pc=0x99f>
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/gin-gonic/gin.(*Context).Next(...)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]:         /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]: github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func1(0xc000466600)
Mar 05 11:00:26 jesse-MS-7C02 ollama[74384]:         /go/src/github.com/jmorganca/ollama/server/routes.go:877 +0x68 fp=0xc00033b780 sp=0xc00033b748 pc=0x9a91>

You can see that the CUDA error is occurring inside llama.cpp. The same CUDA errors also happen when I call Ollama from Python/llama-index scripts.

This happens even with very small models like tinyllama, when there is barely any GPU usage:

(venv) jesse@jesse-MS-7C02:~/code/obot/obot/extractor$ nvidia-smi 
Tue Mar  5 22:18:49 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.07             Driver Version: 535.161.07   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 2060        Off | 00000000:26:00.0  On |                  N/A |
|  0%   33C    P8               7W / 170W |    786MiB /  6144MiB |      3%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      1184      G   /usr/lib/xorg/Xorg                          220MiB |
|    0   N/A  N/A      1493      G   /usr/bin/kwalletd5                            2MiB |
|    0   N/A  N/A      1730      G   /usr/bin/ksmserver                            2MiB |
|    0   N/A  N/A      1732      G   /usr/bin/kded5                                2MiB |
|    0   N/A  N/A      1733      G   /usr/bin/kwin_x11                           157MiB |
|    0   N/A  N/A      1764      G   /usr/bin/plasmashell                         54MiB |
|    0   N/A  N/A      1787      G   ...c/polkit-kde-authentication-agent-1        2MiB |
|    0   N/A  N/A      1968      G   ...86_64-linux-gnu/libexec/kdeconnectd        2MiB |
|    0   N/A  N/A      1981      G   /usr/bin/kaccess                              2MiB |
|    0   N/A  N/A      2003      G   ...irefox/3836/usr/lib/firefox/firefox      316MiB |
|    0   N/A  N/A      2007      G   ...-linux-gnu/libexec/DiscoverNotifier        2MiB |
|    0   N/A  N/A      2478      G   ...-gnu/libexec/xdg-desktop-portal-kde        2MiB |
|    0   N/A  N/A     11747      G   /usr/bin/konsole                              2MiB |
|    0   N/A  N/A     36012      G   /usr/bin/kate                                 2MiB |
|    0   N/A  N/A     86749      G   /usr/bin/dolphin                              2MiB |
+---------------------------------------------------------------------------------------+
(venv) jesse@jesse-MS-7C02:~/code/obot/obot/extractor$ ollama run tinyllama
Error: Post "http://127.0.0.1:11434/api/generate": EOF

I don't know why, but rebooting seems to magically fix everything. Simply stopping the ollama service or killing the ollama processes and starting them again does not help, though.
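
For reference, this is roughly what I tried before resorting to a reboot (commands from memory; the unit name assumes the standard Linux systemd install):

# Restart the systemd service
sudo systemctl restart ollama

# Or stop it, kill any leftover ollama processes, and start it again
sudo systemctl stop ollama
pkill -f ollama
sudo systemctl start ollama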

The problem is intermittent, so it is hard to figure out exactly what is causing it. I can usually run the commands above with no issue, but this EOF/CUDA error randomly pops up every couple of days, and then I have to reboot to fix it.

I am using Ubuntu Linux 23.10 and an RTX 2060 with 6GB VRAM.

Any suggestions would be very welcome!


@igorschlum commented on GitHub (Mar 6, 2024):

Hi @jferments, could you try with version 0.1.28?
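
(For example, re-running the official Linux install script pulls the latest release, and the version can be confirmed afterwards; this assumes the standard install method:)

# Upgrade via the official install script, then confirm the installed version
curl -fsSL https://ollama.com/install.sh | sh
ollama --version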


@dhiltgen commented on GitHub (Mar 6, 2024):

You likely hit #1877 which should be fixed with the latest release. If you're able to repro with the latest version, please provide updated logs and I'll reopen the issue.
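
(If it happens again on the newer version, something along these lines is enough to grab the server-side logs to attach here; the exact time window doesn't matter:)

# Capture recent ollama service logs around the failure
journalctl -u ollama --no-pager --since "30 minutes ago" > ollama-crash.log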

Reference: github-starred/ollama#48320