[GH-ISSUE #158] Build fails with server/routes.go:53:20: undefined: llama.New #60

Closed
opened 2026-04-12 09:35:40 -05:00 by GiteaMirror · 1 comment

Originally created by @gbro3n on GitHub (Jul 21, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/158

Originally assigned to: @BruceMacD on GitHub.

Running the build-from-source command:

```
go build .
```

results in the following error:

```
# github.com/jmorganca/ollama/server
server/routes.go:53:20: undefined: llama.New
```

Context: building on Ubuntu 22.04 using go1.20.6 linux/amd64.
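As an aside (not part of the original report), a quick diagnostic for this failure mode: `go env` reports the effective cgo setting, and a value of `0` is consistent with the undefined-symbol error above. A minimal check, assuming a POSIX shell:

```shell
# Print the effective cgo setting for this Go environment.
# "1" means cgo is enabled; "0" means cgo-backed packages (such as
# ollama's llama bindings) are excluded from the build, which produces
# "undefined" symbol errors like the one in this report.
if command -v go >/dev/null 2>&1; then
  go env CGO_ENABLED
else
  echo "go toolchain not found"
fi
```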


@BruceMacD commented on GitHub (Jul 21, 2023):

Hi @gbro3n,

I was able to reproduce this. The error occurs because CGO is not enabled in your Go environment. We package llama.cpp along with ollama, so the `CGO_ENABLED` flag must be set.

Here are the steps to get ollama working from source on Linux:

```
# enable CGO
export CGO_ENABLED=1

# you may also need the gcc compiler
sudo apt-get install build-essential

# now go build will work
go build .

# start the ollama server in the background
./ollama serve &

# now you can run models
./ollama run llama2
```
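A small note on the `export` step (my addition, not from the thread): the variable can also be set inline for a single command, e.g. `CGO_ENABLED=1 go build .`, which avoids leaving the setting in the shell session. A minimal demonstration of the scoping:

```shell
# An inline assignment applies only to the one command it prefixes;
# the variable is not set in the surrounding shell afterwards.
unset CGO_ENABLED
CGO_ENABLED=1 sh -c 'echo "inside: $CGO_ENABLED"'
echo "after: ${CGO_ENABLED:-unset}"
# prints:
#   inside: 1
#   after: unset
```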

Also, be aware that Linux isn't officially supported yet, so some CLI output might be a bit weird. Let me know if you hit any other issues.

<!-- gh-comment-id:1646192533 -->

Reference: github-starred/ollama#60