[GH-ISSUE #8035] NOT ABLE TO INSTALL "llama 3.2 model" #67195

Closed
opened 2026-05-04 09:36:12 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @PriyeshGit on GitHub (Dec 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8035

What is the issue?

After installing Ollama on my PC, when I try to install the llama3.2 model I always get the same error, and I am not able to comprehend what is wrong.

Welcome to Ollama!
Run your first model:
ollama run llama3.2

PS C:\Windows\System32> ollama run llama3.2
Error: something went wrong, please see the ollama server logs for details

Here is what my server log looks like:

2024/12/10 17:17:36 routes.go:1195: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY:https://proxy.example.com:8080 HTTP_PROXY:http://proxy.example.com:8080 NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Path\To\Models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-12-10T17:17:36.214+09:00 level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-12-10T17:17:36.214+09:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-12-10T17:17:36.218+09:00 level=INFO source=routes.go:1246 msg="Listening on [::]:11434 (version 0.5.1)"
time=2024-12-10T17:17:36.222+09:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[rocm cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
time=2024-12-10T17:17:36.222+09:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-12-10T17:17:36.223+09:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2024-12-10T17:17:36.223+09:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2024-12-10T17:17:36.223+09:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=12 efficiency=10 threads=14
time=2024-12-10T17:17:36.246+09:00 level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
time=2024-12-10T17:17:36.246+09:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="31.5 GiB" available="12.5 GiB"

OS

Windows

GPU

No response

CPU

Intel

Ollama version

No response

GiteaMirror added the bug and windows labels 2026-05-04 09:36:12 -05:00
Author
Owner

@rick-github commented on GitHub (Dec 11, 2024):

You have HTTP_PROXY set; the client is unable to connect to the server because its requests are being routed through the proxy. Unset HTTP_PROXY in the client environment.
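
For example, in a cmd.exe window, assigning an empty value clears a variable for that session; the model name is the one from this thread, and the client command must run from the same window:

:: cmd.exe: "set VAR=" removes VAR from this session's environment
C:\>set HTTP_PROXY=
C:\>set HTTPS_PROXY=
C:\>ollama run llama3.2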

Author
Owner

@PriyeshGit commented on GitHub (Dec 11, 2024):

Thank you for your response.
I have unset HTTP_PROXY, but the same error still shows up.

C:\Users\[USERNAME]>set HTTP_PROXY=

C:\Users\[USERNAME]>set HTTPS_PROXY=

C:\Users\[USERNAME]>ollama serve
2024/12/11 10:19:37 routes.go:1195: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\[USERNAME]\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-12-11T10:19:37.500+09:00 level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-12-11T10:19:37.506+09:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-12-11T10:19:37.519+09:00 level=INFO source=routes.go:1246 msg="Listening on [::]:11434 (version 0.5.1)"
time=2024-12-11T10:19:37.537+09:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12 rocm]"
time=2024-12-11T10:19:37.538+09:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-12-11T10:19:37.539+09:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2024-12-11T10:19:37.539+09:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2024-12-11T10:19:37.539+09:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=12 efficiency=10 threads=14
time=2024-12-11T10:19:37.639+09:00 level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
time=2024-12-11T10:19:37.640+09:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="31.5 GiB" available="16.9 GiB"

C:\Users\[USERNAME]>ollama pull llama3.2
Error: something went wrong, please see the ollama server logs for details

Author
Owner

@rick-github commented on GitHub (Dec 11, 2024):

You unset HTTP_PROXY in the server environment. You need to unset HTTP_PROXY in the client environment.

C:\Users\[USERNAME]>set HTTP_PROXY=
C:\Users\[USERNAME]>ollama pull llama3.2
Author
Owner

@PriyeshGit commented on GitHub (Dec 11, 2024):

Thank you, this time that error was not shown, but a new error appeared.

PS C:\Windows\System32> set HTTP_PROXY=
PS C:\Windows\System32> ollama pull llama3.2
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama3.2/manifests/latest": dial tcp 104.21.75.227:443: i/o timeout
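
A side note on the prompt: in PowerShell, set is an alias for Set-Variable and does not change environment variables, so clearing the proxy in a PowerShell client session needs the Env: drive instead. A minimal sketch:

# PowerShell: Set-Variable does not touch $env:, so remove the Env: items
PS C:\> Remove-Item Env:HTTP_PROXY -ErrorAction SilentlyContinue
PS C:\> Remove-Item Env:HTTPS_PROXY -ErrorAction SilentlyContinue
PS C:\> ollama pull llama3.2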

Author
Owner

@rick-github commented on GitHub (Dec 11, 2024):

You still need to set HTTPS_PROXY in the server environment.
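
For example, when starting the server by hand, set it in the same window first (the proxy address below is the placeholder from the log above, not a real endpoint):

:: server session: the pull is performed by the server, so it needs the proxy
C:\>set HTTPS_PROXY=http://proxy.example.com:8080
C:\>ollama serve

To make it persist for the tray app, setx writes the value to the user environment; quit and restart Ollama afterwards so the server picks it up:

C:\>setx HTTPS_PROXY http://proxy.example.com:8080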

Author
Owner

@athmanar commented on GitHub (Dec 11, 2024):

Hello, I can confirm that setting HTTPS_PROXY on Windows does not seem to work.

Author
Owner

@rick-github commented on GitHub (Dec 11, 2024):

A SOCKS5 proxy with ollama-0.5.1 works fine here:

C:\Users\bill>set https_proxy=socks5://z1:1080
C:\Users\bill>ollama serve
2024/12/11 20:56:53 routes.go:1195: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY:socks5://z1:1080 HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\bill\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[chrome-extension://* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-12-11T20:57:07.715-08:00 level=INFO source=images.go:753 msg="total blobs: 10"
time=2024-12-11T20:57:07.721-08:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-12-11T20:57:07.745-08:00 level=INFO source=routes.go:1246 msg="Listening on [::]:11435 (version 0.5.1)"
time=2024-12-11T20:57:07.746-08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx2 cuda_v11 cuda_v12 rocm cpu cpu_avx]"
time=2024-12-11T20:57:07.747-08:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-12-11T20:57:07.748-08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2024-12-11T20:57:07.749-08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=1 efficiency=0 threads=1
time=2024-12-11T20:57:07.782-08:00 level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
time=2024-12-11T20:57:07.789-08:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="4.0 GiB" available="1.7 GiB"                                                       
[GIN] 2024/12/11 - 20:57:29 | 200 |            0s |       127.0.0.1 | HEAD     "/"                                      
time=2024-12-11T20:57:31.682-08:00 level=INFO source=download.go:175 msg="downloading 74701a8c35f6 in 14 100 MB part(s)"
time=2024-12-11T20:57:55.414-08:00 level=INFO source=download.go:175 msg="downloading 966de95ca8a6 in 1 1.4 KB part(s)"
time=2024-12-11T20:57:56.806-08:00 level=INFO source=download.go:175 msg="downloading fcc5a6bec9da in 1 7.7 KB part(s)"
time=2024-12-11T20:57:58.245-08:00 level=INFO source=download.go:175 msg="downloading a70ff7e570d9 in 1 6.0 KB part(s)" 
time=2024-12-11T20:57:59.624-08:00 level=INFO source=download.go:175 msg="downloading 4f659a1e86d7 in 1 485 B part(s)"  
[GIN] 2024/12/11 - 20:58:08 | 200 |   38.5621367s |       127.0.0.1 | POST     "/api/pull"   

If I set the proxy incorrectly, it fails.

C:\Users\bill>set https_proxy=socks5://z1:1081
C:\Users\bill>ollama serve
...
time=2024-12-11T21:07:29.665-08:00 level=INFO source=images.go:990 msg="request failed: Get \"https://registry.ollama.ai/v2/library/llama3.2/manifests/1b\": proxyconnect tcp: dial tcp 10.0.3.1:1081: connectex: No connection could be made because the target machine actively refused it."
[GIN] 2024/12/11 - 21:07:29 | 200 |    2.1723186s |       127.0.0.1 | POST     "/api/pull" 

If you could add some details, such as the exact error message and your proxy configuration, it would greatly improve the chances of diagnosing your problem.
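
One way to separate a proxy problem from an Ollama problem is to fetch the registry through the proxy directly. Windows 10 and 11 ship a curl build (SOCKS support can vary by build); using the example proxy address from above:

:: HEAD request to the registry via the SOCKS proxy
C:\>curl -I -x socks5://z1:1080 https://registry.ollama.ai/v2/

If this also times out or is refused, the proxy configuration is the thing to fix rather than Ollama.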

Reference: github-starred/ollama#67195