[GH-ISSUE #2558] Issue on Windows 10 ENT. wsarecv: An existing connection was forcibly closed by the remote host. #1499

Closed
opened 2026-04-12 11:24:40 -05:00 by GiteaMirror · 20 comments

Originally created by @wilkinsabane on GitHub (Feb 17, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2558

I've successfully installed the Ollama Preview for Windows.
My NVIDIA graphics driver is fully up to date.
But every time I run a model and write a prompt, I get the following error:

C:\Users\User>ollama run mistral
>>> hi
Error: Post "http://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:51644->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.

Please help.
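
In case it helps with triage: the connection appears to be dropped on the server side, so I've also been checking whether the server is still up and reading its log after each failure, roughly like this (the log path is my assumption based on the Ollama Windows troubleshooting notes):

# confirm the API is still reachable; the root endpoint normally answers "Ollama is running"
(Invoke-WebRequest http://127.0.0.1:11434/).Content
# open the server log written by the Windows build
notepad "$env:LOCALAPPDATA\Ollama\server.log"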

GiteaMirror added the bug label 2026-04-12 11:24:40 -05:00

@tamacrea commented on GitHub (Feb 17, 2024):

Same here. Can somebody help?


@thdevai commented on GitHub (Feb 18, 2024):

I commented about it here: https://github.com/ollama/ollama/issues/2560#issuecomment-1950690705

maybe that could be it.


@Cybervet commented on GitHub (Feb 18, 2024):

> I commented about it here: #2560 (comment)
>
> maybe that could be it.

Nope, that's not the problem.


@Pey-crypto commented on GitHub (Feb 20, 2024):

Check whether these ports are being used by another executable. Type the following command into an admin-privileged cmd window:
netstat -a -b
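
If the full netstat -a -b listing is too noisy, narrowing it to Ollama's default port and then looking up the owning process should also work from the same elevated window (the PID is whatever the first command reports):

# show only sockets on Ollama's default port, with the owning PID in the last column
netstat -ano | findstr :11434
# resolve that PID to a process name
tasklist /FI "PID eq <PID from the previous line>"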


@wilkinsabane commented on GitHub (Feb 20, 2024):

> I've successfully installed the Ollama Preview for Windows. My NVIDIA graphics driver is fully up to date. But every time I run a model and write a prompt, I get the following error:
>
> C:\Users\User>ollama run mistral
> >>> hi
> Error: Post "http://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:51644->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.
>
> Please help.

So, after analyzing the ollama server logs below:

[GIN] 2024/02/19 - 15:45:25 | 200 |      0s |    127.0.0.1 | HEAD   "/"
[GIN] 2024/02/19 - 15:45:25 | 200 |   1.1469ms |    127.0.0.1 | POST   "/api/show"
[GIN] 2024/02/19 - 15:45:25 | 200 |   1.6631ms |    127.0.0.1 | POST   "/api/show"
time=2024-02-19T15:45:27.335+01:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-19T15:45:27.335+01:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library nvml.dll"
time=2024-02-19T15:45:27.336+01:00 level=DEBUG source=gpu.go:280 msg="gpu management search paths: [c:\Windows\System32\nvml.dll C:\Windows\system32\nvml.dll* C:\Windows\nvml.dll* C:\Windows\System32\Wbem\nvml.dll* C:\Windows\System32\WindowsPowerShell\v1.0\nvml.dll* C:\Windows\System32\OpenSSH\nvml.dll* C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvml.dll* C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL\nvml.dll* C:\Program Files\Intel\Intel(R) Management Engine Components\DAL\nvml.dll* C:\Program Files\dotnet\nvml.dll* C:\Users\Wilkins\AppData\Roaming\Python\Scripts\poetry\nvml.dll* C:\Users\Wilkins\AppData\Roaming\Python\Scripts\nvml.dll* C:\Program Files\Git\cmd\nvml.dll* C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR\nvml.dll* C:\Users\Wilkins\AppData\Local\Programs\Ollama\nvml.dll* C:\Users\Wilkins\AppData\Local\Programs\Python\Python312\Scripts\nvml.dll* C:\Users\Wilkins\AppData\Local\Programs\Python\Python312\nvml.dll* C:\Users\Wilkins\AppData\Local\Microsoft\WindowsApps\nvml.dll* C:\Users\Wilkins\.dotnet\tools\nvml.dll* C:\Users\Wilkins\AppData\Local\Programs\Microsoft VS Code\bin\nvml.dll*]"
time=2024-02-19T15:45:27.340+01:00 level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [c:\Windows\System32\nvml.dll C:\Windows\system32\nvml.dll]"
time=2024-02-19T15:45:27.893+01:00 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
time=2024-02-19T15:45:27.894+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-19T15:45:27.911+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 5.0"
time=2024-02-19T15:45:27.911+01:00 level=DEBUG source=gpu.go:251 msg="cuda detected 1 devices with 977M available memory"
time=2024-02-19T15:45:27.911+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-19T15:45:27.911+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 5.0"
time=2024-02-19T15:45:27.911+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-19T15:45:27.911+01:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [C:\Users\Wilkins\AppData\Local\Temp\ollama617761062\cuda_v11.3\ext_server.dll C:\Users\Wilkins\AppData\Local\Temp\ollama617761062\cpu_avx2\ext_server.dll]"
time=2024-02-19T15:45:27.911+01:00 level=INFO source=dyn_ext_server.go:380 msg="Updating PATH to C:\Users\Wilkins\AppData\Local\Temp\ollama617761062\cuda_v11.3;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\dotnet\;C:\Users\Wilkins\AppData\Roaming\Python\Scripts\poetry;C:\Users\Wilkins\AppData\Roaming\Python\Scripts;C:\Program Files\Git\cmd;C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR;C:\Users\Wilkins\AppData\Local\Programs\Ollama\;C:\Users\Wilkins\AppData\Local\Programs\Python\Python312\Scripts\;C:\Users\Wilkins\AppData\Local\Programs\Python\Python312\;C:\Users\Wilkins\AppData\Local\Microsoft\WindowsApps;C:\Users\Wilkins\.dotnet\tools;C:\Users\Wilkins\AppData\Local\Programs\Microsoft VS Code\bin"
time=2024-02-19T15:45:28.113+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\Users\Wilkins\AppData\Local\Temp\ollama617761062\cuda_v11.3\ext_server.dll"
time=2024-02-19T15:45:28.114+01:00 level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
[1708353928] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | 
[1708353928] Performing pre-initialization of GPU
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:  no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
 Device 0: NVIDIA GeForce 940MX, compute capability 5.0, VMM: yes
llama_model_loader: loaded meta data with 24 key-value pairs and 273 tensors from C:\Users\Wilkins\.ollama\models\blobs\sha256-4f72bdefa815fd730f8982f6118a090dc041f1f0af433a4ed5c2fd2663c171e2 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv  0:            general.architecture str       = llama
llama_model_loader: - kv  1:                general.name str       = LLaMA v2
llama_model_loader: - kv  2:            llama.context_length u32       = 4096
llama_model_loader: - kv  3:           llama.embedding_length u32       = 4096
llama_model_loader: - kv  4:             llama.block_count u32       = 30
llama_model_loader: - kv  5:         llama.feed_forward_length u32       = 11008
llama_model_loader: - kv  6:         llama.rope.dimension_count u32       = 128
llama_model_loader: - kv  7:         llama.attention.head_count u32       = 32
llama_model_loader: - kv  8:       llama.attention.head_count_kv u32       = 32
llama_model_loader: - kv  9:   llama.attention.layer_norm_rms_epsilon f32       = 0.000001
llama_model_loader: - kv 10:            llama.rope.freq_base f32       = 10000.000000
llama_model_loader: - kv 11:             general.file_type u32       = 14
llama_model_loader: - kv 12:            tokenizer.ggml.model str       = gpt2
llama_model_loader: - kv 13:           tokenizer.ggml.tokens arr[str,102400] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 14:           tokenizer.ggml.scores arr[f32,102400] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15:         tokenizer.ggml.token_type arr[i32,102400] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16:           tokenizer.ggml.merges arr[str,99757]  = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e...
llama_model_loader: - kv 17:        tokenizer.ggml.bos_token_id u32       = 100000
llama_model_loader: - kv 18:        tokenizer.ggml.eos_token_id u32       = 100015
llama_model_loader: - kv 19:      tokenizer.ggml.padding_token_id u32       = 100001
llama_model_loader: - kv 20:        tokenizer.ggml.add_bos_token bool       = true
llama_model_loader: - kv 21:        tokenizer.ggml.add_eos_token bool       = false
llama_model_loader: - kv 22:          tokenizer.chat_template str       = {% if not add_generation_prompt is de...
llama_model_loader: - kv 23:        general.quantization_version u32       = 2
llama_model_loader: - type f32:  61 tensors
llama_model_loader: - type q4_K: 204 tensors
llama_model_loader: - type q5_K:  7 tensors
llama_model_loader: - type q6_K:  1 tensors
llm_load_vocab: mismatch in special tokens definition ( 2387/102400 vs 2400/102400 ).
llm_load_print_meta: format      = GGUF V3 (latest)
llm_load_print_meta: arch       = llama
llm_load_print_meta: vocab type    = BPE
llm_load_print_meta: n_vocab     = 102400
llm_load_print_meta: n_merges     = 99757
llm_load_print_meta: n_ctx_train   = 4096
llm_load_print_meta: n_embd      = 4096
llm_load_print_meta: n_head      = 32
llm_load_print_meta: n_head_kv    = 32
llm_load_print_meta: n_layer     = 30
llm_load_print_meta: n_rot      = 128
llm_load_print_meta: n_embd_head_k  = 128
llm_load_print_meta: n_embd_head_v  = 128
llm_load_print_meta: n_gqa      = 1
llm_load_print_meta: n_embd_k_gqa   = 4096
llm_load_print_meta: n_embd_v_gqa   = 4096
llm_load_print_meta: f_norm_eps    = 0.0e+00
llm_load_print_meta: f_norm_rms_eps  = 1.0e-06
llm_load_print_meta: f_clamp_kqv   = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff       = 11008
llm_load_print_meta: n_expert     = 0
llm_load_print_meta: n_expert_used  = 0
llm_load_print_meta: rope scaling   = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 4096
llm_load_print_meta: rope_finetuned  = unknown
llm_load_print_meta: model type    = ?B
llm_load_print_meta: model ftype   = Q4_K - Small
llm_load_print_meta: model params   = 6.91 B
llm_load_print_meta: model size    = 3.75 GiB (4.66 BPW) 
llm_load_print_meta: general.name   = LLaMA v2
llm_load_print_meta: BOS token    = 100000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token    = 100015 '<|EOT|>'
llm_load_print_meta: PAD token    = 100001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token     = 30 '?'
llm_load_tensors: ggml ctx size =  0.21 MiB
llm_load_tensors: offloading 5 repeating layers to GPU
llm_load_tensors: offloaded 5/31 layers to GPU
llm_load_tensors:    CPU buffer size = 3835.08 MiB
llm_load_tensors:   CUDA0 buffer size =  542.97 MiB
.........................................................................................
llama_new_context_with_model: n_ctx   = 2048
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size =  800.00 MiB
llama_kv_cache_init:   CUDA0 KV buffer size =  160.00 MiB
llama_new_context_with_model: KV self size = 960.00 MiB, K (f16): 480.00 MiB, V (f16): 480.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size  =  13.01 MiB
llama_new_context_with_model:   CUDA0 compute buffer size =  164.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size =  224.00 MiB
llama_new_context_with_model: graph splits (measure): 5
[1708353930] warming up the model with an empty run
[1708353931] Available slots:
[1708353931] -> Slot 0 - max context: 2048
time=2024-02-19T15:45:31.397+01:00 level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
[1708353931] llama server main loop starting
[1708353931] all slots are idle and system prompt is empty, clear the KV cache
time=2024-02-19T15:45:31.398+01:00 level=DEBUG source=prompt.go:175 msg="prompt now fits in context window" required=30 window=2048
[GIN] 2024/02/19 - 15:45:31 | 200 |  5.5332454s |    127.0.0.1 | POST   "/api/chat"
[GIN] 2024/02/19 - 15:45:40 | 200 |      0s |    127.0.0.1 | GET   "/"
[GIN] 2024/02/19 - 15:45:41 | 404 |      0s |    127.0.0.1 | GET   "/favicon.ico"
time=2024-02-19T15:46:40.842+01:00 level=DEBUG source=routes.go:243 msg="generate handler" prompt="Why is the sky blue?"
time=2024-02-19T15:46:40.842+01:00 level=DEBUG source=routes.go:244 msg="generate handler" template="{{ if .System }}<|im_start|>system\r\n{{ .System }}<|im_end|>\r\n{{ end }}{{ if .Prompt }}<|im_start|>user\r\n{{ .Prompt }}<|im_end|>\r\n{{ end }}<|im_start|>assistant\r\n"
time=2024-02-19T15:46:40.842+01:00 level=DEBUG source=routes.go:245 msg="generate handler" system=""
time=2024-02-19T15:46:40.842+01:00 level=DEBUG source=routes.go:275 msg="generate handler" prompt="<|im_start|>system\r\n<|im_end|>\r\n<|im_start|>user\r\nWhy is the sky blue?<|im_end|>\r\n<|im_start|>assistant\r\n"
[1708354000] slot 0 is processing [task id: 0]
[1708354000] slot 0 : in cache: 0 tokens | to process: 53 tokens
[1708354000] slot 0 : kv cache rm - [0, end)
CUDA error: out of memory
 current device: 0, in function ggml_cuda_pool_malloc_vmm at C:\Users\jeff\git\ollama\llm\llama.cpp\ggml-cuda.cu:7834
 cuMemSetAccess(g_cuda_pool_addr[device] + g_cuda_pool_size[device], reserve_size, &access, 1)
GGML_ASSERT: C:\Users\jeff\git\ollama\llm\llama.cpp\ggml-cuda.cu:241: !"CUDA error"

Error Message:
CUDA error: out of memory occurs during prompt processing in Slot 0.
This indicates insufficient memory on my NVIDIA GeForce 940MX GPU to handle the current request.

Possible Causes:
Limited GPU Memory: My GPU with 2GB memory might not be sufficient for large models and complex prompts.
Resource Conflicts: Other applications running in the background could be consuming GPU resources.
Ollama Configuration: Specific settings within Ollama might place further demands on the available GPU memory.

  1. I've checked background applications, and none is consuming GPU resources when I run Ollama.
  2. Ollama runs smoothly on Linux in my WSL environment, so I don't think limited GPU memory is the cause, contrary to my earlier deduction, unless this is specific to running Ollama natively on Windows.

Hopefully, the team gets on top of this issue for the beta release of Ollama for Windows. That would be most appreciated. For now, I'll keep running on WSL.
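
One more thing I plan to try, to see whether the 2GB of VRAM is really the culprit or whether this is Windows-specific: forcing a CPU-only run. I haven't verified this on the Windows preview, but assuming the API honours the num_gpu option (the number of layers to offload) the same way the Modelfile parameter does, a request like the following should keep all layers off the GPU:

# num_gpu = 0 asks the server not to offload any layers, i.e. CPU-only inference
Invoke-RestMethod -Method Post -Uri http://127.0.0.1:11434/api/generate `
  -ContentType 'application/json' `
  -Body '{"model":"mistral","prompt":"hi","stream":false,"options":{"num_gpu":0}}'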


@wilkinsabane commented on GitHub (Feb 20, 2024):

> Check whether these ports are being used by another executable. Type the following command into an admin-privileged cmd window: netstat -a -b

Thanks for your tip. Unfortunately, no service was running on that port. Only Ollama has access to it, but as the error shows, the connection is closed as soon as a request (question) is sent to a loaded model.


@Pey-crypto commented on GitHub (Feb 20, 2024):

Could you run nvidia-smi and post the output?


@wilkinsabane commented on GitHub (Feb 20, 2024):

> Could you run nvidia-smi and post the output?

Sure.

PS C:\Windows\system32> nvidia-smi
Tue Feb 20 17:53:25 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 551.52 Driver Version: 551.52 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce 940MX WDDM | 00000000:01:00.0 Off | N/A |
| N/A 0C P8 N/A / 200W | 0MiB / 2048MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+


@hjvogel commented on GitHub (Feb 21, 2024):

Similar issue here on W11. It runs fine with a few models like mistral, and I can even switch between them, but trying the new Google gemma model throws that error, even after a fresh reboot to rule out anything holding on to the GPU.


@hjvogel commented on GitHub (Feb 22, 2024):

seems similar to https://github.com/ollama/ollama/issues/1436


@Pey-crypto commented on GitHub (Feb 26, 2024):

Could you try increasing the size of the page file on your device? I was able to reproduce this error when I reduced my page file size.


@wilkinsabane commented on GitHub (Feb 26, 2024):

> Could you try increasing the size of the page file on your device? I was able to reproduce this error when I reduced my page file size.

I'm sorry, but what do you mean by increasing the size of the page file? Can you explain further?


@Pey-crypto commented on GitHub (Feb 26, 2024):

https://www.ibm.com/docs/en/openpages/8.1.0?topic=tuning-optional-increasing-paging-file-size-windows-computers. This might explain it better.
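
Before changing anything, the current configuration can be checked from an elevated PowerShell window (sizes are reported in MB); the size itself is then adjusted under System Properties > Advanced > Performance Settings > Advanced > Virtual memory, as described in that link:

# show the configured page file(s) and their initial/maximum sizes in MB
# (InitialSize/MaximumSize of 0 usually means the size is system-managed)
wmic pagefileset get Name,InitialSize,MaximumSize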


@wilkinsabane commented on GitHub (Feb 26, 2024):

OK, I've increased the page file as you suggested, but unfortunately that has not resolved the issue. I have 16GB of RAM. I changed the page file to a minimum of 16384 MB and a maximum of 24576 MB, but I encounter the same issue.
Uploading page file.PNG…


@davidkanaga commented on GitHub (Feb 28, 2024):

Updating Ollama to the latest version fixed the issue for me. Please also make sure Windows Defender is not removing any Ollama files by mistake.
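
If Defender does turn out to be interfering, something along these lines should help (run from an elevated PowerShell window; the path assumes the default per-user install location):

# exclude the Ollama install directory from real-time scanning
Add-MpPreference -ExclusionPath "$env:LOCALAPPDATA\Programs\Ollama"
# list recent Defender detections to check whether any Ollama files were flagged
Get-MpThreatDetection | Select-Object -First 5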


@hoyyeva commented on GitHub (Mar 11, 2024):

Hey @shersoni610, thanks for bringing this to our attention, and thank you @davidkanaga for stepping in to assist! Please let us know if you are still running into the issue after updating Ollama to the latest version.

We're going to mark this as resolved for the moment, but if you're still facing any troubles, please feel free to reopen the issue.


@LucaBianco commented on GitHub (Mar 23, 2024):

[EDIT: Now I updated to AMD graphic drivers 24.3.1 and it works.]

I downloaded and installed the latest version of ollama and am still running into this issue.
System:
CPU: Ryzen 7 5800X3D
GPU: AMD RX 6800 XT (23.12.1)
OS: Windows 10 22H2 (OS Build 19045.4170)
I read that it works with AMD too now, so I wanted to give ollama a try.

Console Output:
ollama run codellama:7b
pulling manifest
[...]
success
Error: Post "http://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:62096->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.

Server.log:
Attached the file as it was too long:
server - Copy.log


@saminbj commented on GitHub (Apr 10, 2024):

I have the same problem while running qwen:14b (7b is fine) and gemma. It seems the VRAM runs out of memory, as the server logs show:
{"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":33,"slot_id":0,"task_id":199,"tid":"7004","timestamp":1712779028}
CUDA error: out of memory
current device: 0, in function alloc at C:\a\ollama\ollama\llm\llama.cpp\ggml-cuda.cu:532
cuMemSetAccess(pool_addr + pool_size, reserve_size, &access, 1)
GGML_ASSERT: C:\a\ollama\ollama\llm\llama.cpp\ggml-cuda.cu:193: !"CUDA error"
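
A workaround I'm considering, but have not yet verified for qwen:14b, is shrinking the context window so the KV cache fits into VRAM; assuming the interactive /set command accepts the num_ctx parameter the same way a Modelfile does, it would look like:

ollama run qwen:14b
>>> /set parameter num_ctx 1024
>>> why is the sky blue?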


@QiuQiuzh commented on GitHub (Apr 16, 2025):

> [EDIT: Now I updated to AMD graphic drivers 24.3.1 and it works.]
>
> I downloaded and installed the latest version of ollama, still running into this issue: System: CPU: Ryzen 7 5800X3D GPU: AMD RX 6800 XT (23.12.1) OS: Windows 10 22H2 (OS Build 19045.4170) I read that it works with amd too now, so I wanted to give ollama a try.
>
> Console Output: ollama run codellama:7b pulling manifest [...] success Error: Post "http://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:62096->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.
>
> Server.log: Attached the file as it was too long: server - Copy.log

Hello, I have the same problem now. How did you solve it?


@architectonicus commented on GitHub (Nov 5, 2025):

The problem, as I experienced and understood it: when Ollama is behind a proxy and fails while fetching a model (for example because of a missing https_proxy variable), it leaves a process listening on port 11434. All further attempts to fetch models will then fail.

How I fixed the problem (on Windows):

  1. Quit Ollama (only the process ;-). I did it by finding the small icon in the Windows taskbar (aka notification area), right-clicking on it and selecting "Quit".
  2. Find any process still listening on port 11434: netstat -ano | findstr :11434
  3. Kill it: taskkill /PID <PID> /F
  4. Set the https_proxy variable (I ended up doing it directly in a shell).
  5. Start Ollama and download models. It should work like a charm. (The consolidated commands are sketched below.)
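
Putting steps 2 to 5 together, the consolidated commands look roughly like this in PowerShell (the proxy URL is only a placeholder for whatever your environment actually uses):

# find the process still bound to Ollama's default port and kill it
netstat -ano | findstr :11434
taskkill /PID <PID from the previous line> /F
# set the proxy for this session only (placeholder URL), then restart the server from the same shell so it inherits the variable
$env:HTTPS_PROXY = "http://your-proxy.example:8080"
ollama serve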
Reference: github-starred/ollama#1499