[GH-ISSUE #2230] Ollama (llama2) running in VM Box on Ubuntu but /api/generate not working #1278

Closed
opened 2026-04-12 11:04:55 -05:00 by GiteaMirror · 1 comment

Originally created by @Marvin-VW on GitHub (Jan 27, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2230

Hey, I pulled llama2 as described and I'm running it with 'ollama run llama2'.
It works inside the terminal with no errors, but as soon as I try to reach it via

curl http://localhost:11434/api/generate -d '{ "model": "llama2", "prompt":"Why is the sky blue?" }'

it just says:

{"error":"model "llama2" not found, try pulling it first"}


@doanaktar commented on GitHub (Jun 25, 2024):

Hi, I have the same issue. Can I ask how you solved it?

Reference: github-starred/ollama#1278