github-starred / ollama-ollama
Mirror of https://github.com/ollama/ollama.git (synced 2026-03-11 20:23:55 -05:00)

Issues: 789 open, 3,604 closed
Labels: amd, api, app, bug, build, cli, client2, cloud, compatibility, context-length, create, docker, documentation, embeddings, engine, feature request, feedback wanted, good first issue, gpt-oss, gpu, harmony, help wanted, install, integration, intel, js, linux, macos, memory, model, needs more info, networking, nvidia, ollama.com, performance, pull-request, python, question, registry, rendering, thinking, tools, top, windows, wsl
#8657 · Ollama 0.12.10 embedding crash (nomic-embed-text-v1.5 on macOS) [bug] · opened 2025-11-12 14:48:25 -06:00 by GiteaMirror · 17 comments
#8655 · Ollama 0.12.10 fails to find CUDA compiler [bug] · opened 2025-11-12 14:48:22 -06:00 by GiteaMirror · 3 comments
#8654 · When will minimax-m2 be supported? [bug, model] · opened 2025-11-12 14:48:22 -06:00 by GiteaMirror · 1 comment
#8653 · Error 500 Internal Server Error: unmarshal: invalid character 'I' looking for beginning of value. [bug] · opened 2025-11-12 14:48:21 -06:00 by GiteaMirror · 1 comment
#8650 · qwen3:30b Severe bug found [bug, needs more info] · opened 2025-11-12 14:48:15 -06:00 by GiteaMirror · 4 comments
#8649 · Ollama hangs in infinite loop during code update requests, requires service restart [bug] · opened 2025-11-12 14:48:14 -06:00 by GiteaMirror · 2 comments
#8646 · ollama run in Windows 0.12.4+ doesn't start the server. [bug, windows] · opened 2025-11-12 14:48:10 -06:00 by GiteaMirror · 5 comments
#8643 · Issue: Ollama 0.12.10 fails on NVIDIA Jetson Thor (Regression from 0.12.9) [bug] · opened 2025-11-12 14:48:05 -06:00 by GiteaMirror · 9 comments
#8642 · Error: 500 Internal Server Error: do load request: Post "http://127.0.0.1:11680/load": read tcp 127.0.0.1:11685->127.0.0.1:11680: wsarecv: An existing connection was forcibly closed by the remote host. [bug] · opened 2025-11-12 14:48:04 -06:00 by GiteaMirror · 4 comments
#8641 · Broken example in api.md [bug] · opened 2025-11-12 14:48:03 -06:00 by GiteaMirror
#8639 · Vulkan fails to allocate memory buffer [bug] · opened 2025-11-12 14:48:01 -06:00 by GiteaMirror
#8637 · 500 Internal Server Error: [bug] · opened 2025-11-12 14:47:59 -06:00 by GiteaMirror · 2 comments
#8635 · Apertus-70B-Instruct-2509 Full GPU layer allocation fails on multi-GPU setup works only when at least one layer is offloaded [bug] · opened 2025-11-12 14:47:57 -06:00 by GiteaMirror · 3 comments
#8634 · after selecting Deepseek-V3.1 in the ollama app - it will NOT allow you to change to a different model. [bug] · opened 2025-11-12 14:47:56 -06:00 by GiteaMirror · 1 comment
#8633 · Intel Iris Xe Graphics (16GB) not detected by Ollama v0.12.10 on Windows 11 despite Vulkan/DXGI+PDH support [bug, needs more info] · opened 2025-11-12 14:47:54 -06:00 by GiteaMirror · 3 comments
#8628 · too much available memory reported [amd, bug, gpu] · opened 2025-11-12 14:47:43 -06:00 by GiteaMirror · 10 comments
#8625 · Ollama 0.12.10 Error: 500 Internal Server Error: do load request: Post EOF [bug] · opened 2025-11-12 14:47:36 -06:00 by GiteaMirror · 5 comments
#8623 · ollama version is 0.12.10 error : CUDA error: the resource allocation failed [bug] · opened 2025-11-12 14:47:35 -06:00 by GiteaMirror
#8622 · Ollama 0.12.10: After starting the service, my computer hangs when trying to suspend [bug] · opened 2025-11-12 14:47:34 -06:00 by GiteaMirror
#8620 · ollama loads nearly the entire model on CPU after unloading from GPU [bug] · opened 2025-11-12 14:47:30 -06:00 by GiteaMirror · 8 comments