[GH-ISSUE #5672] ollama._types.ResponseError #50049

Closed
opened 2026-04-28 13:56:57 -05:00 by GiteaMirror · 2 comments

Originally created by @Lena-Van on GitHub (Jul 13, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5672

What is the issue?

I set the model configuration as follows:

```python
llama3_70b_ollama_model_configuration = {
    "config_name": "ollama_llama3_70b",
    "model_type": "ollama_chat",
    "model_name": "example",
    "options": {
        "temperature": 0.5,
        "seed": 123
    },
    "keep_alive": "5m"
}
```

the "example" model was downloaded from the Huggingface(https://huggingface.co/bartowski/Smaug-Llama-3-70B-Instruct-32K-GGUF), it's the llama3_70_instruct's 4-bit quantized version.
I've successfully run it in the past, got the responses. But today, I got the "ollama._types.ResponseError"
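For anyone trying to reproduce this outside of whatever framework consumes the configuration above, here is a minimal sketch of the equivalent direct call with the ollama Python client. The model name "example" and the prompt are taken as assumptions from the configuration; the server is assumed to be running locally:

```python
import ollama

# Minimal reproduction sketch. Assumes the "example" model has already been
# created/pulled into the local Ollama server, as described above.
response = ollama.chat(
    model="example",
    messages=[{"role": "user", "content": "Hello"}],
    options={"temperature": 0.5, "seed": 123},
    keep_alive="5m",
)
print(response["message"]["content"])
```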

The latest traceback is:

```
File "/home/wenlu/anaconda3/envs/angent/lib/python3.10/site-packages/ollama/_client.py", line 180, in chat
    return self._request_stream(
File "/home/wenlu/anaconda3/envs/angent/lib/python3.10/site-packages/ollama/_client.py", line 98, in _request_stream
    return self._stream(*args, **kwargs) if stream else self._request(*args, **kwargs).json()
File "/home/wenlu/anaconda3/envs/angent/lib/python3.10/site-packages/ollama/_client.py", line 74, in _request
    raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError
```

I tried unsetting the proxy, but that didn't work for me.
![WechatIMG312](https://github.com/user-attachments/assets/60ab2da5-9759-49a8-a776-4e6c3e095926)
![WechatIMG313](https://github.com/user-attachments/assets/6a4dfe3b-88bc-4d79-9500-80f335c7b895)
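If a proxy is suspected, one way to rule it out is to point the client at the server explicitly and confirm the model is still registered. This is a sketch only; the default host below is an assumption:

```python
import ollama

# Connect to the local server explicitly (default host is an assumption).
client = ollama.Client(host="http://127.0.0.1:11434")

# If "example" is missing from this list, the chat call will fail with a 404.
models = client.list()
print(models)
```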

OS

Linux

GPU

Nvidia

CPU

No response

Ollama version

0.2.1

GiteaMirror added the bug label 2026-04-28 13:56:57 -05:00

@rick-github commented on GitHub (Jul 13, 2024):

Might be the same issue as https://github.com/ollama/ollama/issues/5671


@Lena-Van commented on GitHub (Jul 14, 2024):

> Might be the same issue as #5671

Thank you very much~ I've downgraded to 0.2.1. It works for me!
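To confirm which server build is actually running after a downgrade, one can query Ollama's `/api/version` endpoint. A minimal sketch using httpx (a dependency of the ollama client); the default host is an assumption:

```python
import httpx

# Ask the running Ollama server which version it is (default host assumed).
resp = httpx.get("http://127.0.0.1:11434/api/version")
resp.raise_for_status()
print(resp.json()["version"])  # e.g. "0.2.1"
```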
