[GH-ISSUE #1102] Ollama on FreeBSD #47062

Open
opened 2026-04-28 02:55:54 -05:00 by GiteaMirror · 58 comments

Originally created by @eng-alameedi on GitHub (Nov 12, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1102

Hello there:

Is there any chance to get Ollama working on FreeBSD, please?
GiteaMirror added the feature request label 2026-04-28 02:55:54 -05:00

@walterjwhite commented on GitHub (Jan 7, 2024):

I tried briefly:

```
git clone https://github.com/jmorganca/ollama.git
cd ollama
go generate ./...
go build .
```

```
package github.com/jmorganca/ollama
	imports github.com/jmorganca/ollama/cmd
	imports github.com/jmorganca/ollama/server
	imports github.com/jmorganca/ollama/gpu: C source files not allowed when not using cgo or SWIG: gpu_info_cpu.c gpu_info_cuda.c gpu_info_rocm.c
```

```
CGO_ENABLED=0 go build .
# github.com/jmorganca/ollama/llm
llm/llm.go:75:17: undefined: gpu.GetGPUInfo
llm/llm.go:81:9: undefined: nativeInit
llm/llm.go:84:109: undefined: extServer
llm/llm.go:86:15: undefined: newDynamicShimExtServer
llm/llm.go:94:9: undefined: newDefaultExtServer
```

Note that I have gcc13, cmake, and Go installed on FreeBSD 14.

Can you clarify what you tried? Perhaps an additional dependency is required.
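A side note on the two failures above (my reading of the errors, not something stated in the thread): the first error means cgo was disabled, since the gpu package contains C sources, while forcing `CGO_ENABLED=0` strips out the cgo-backed definitions that `llm.go` expects. A minimal sketch of a build with cgo explicitly enabled; this alone won't supply the missing BSD build path upstream, it just addresses these two errors:

```sh
# Hedged sketch: the gpu package ships C sources, so cgo must stay enabled
# and a C compiler must be visible. Assumes base clang (cc) or gcc13 from ports.
export CGO_ENABLED=1
export CC=cc          # or: export CC=gcc13
go generate ./...     # builds the llama.cpp runners the Go binary embeds
go build .
```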


@SoloBSD commented on GitHub (Apr 17, 2024):

@walterjwhite I used:

```
export CGO_ENABLED="0"
go build .
```

And got:

```
# github.com/ollama/ollama/llm
llm/payload.go:143:24: undefined: libEmbed
llm/payload.go:163:17: undefined: libEmbed
llm/server.go:59:28: undefined: gpu.CheckVRAM
llm/server.go:60:14: undefined: gpu.GetGPUInfo
```

I don't have a GPU, so I think we need some build flag to skip GPU support at build time.
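Worth noting (my reading of the errors, not something stated in the thread): `libEmbed` and the `gpu` helpers are declared in GOOS-gated source files, and at this point upstream had none for FreeBSD, so no build flag alone could fix it. A quick hedged way to inspect which files the toolchain selects per platform:

```sh
# Hedged sketch: list the source files Go actually compiles for ./llm on this
# platform vs. on Linux; the FreeBSD selection lacks the file defining libEmbed,
# hence the undefined-symbol errors above.
go list -f '{{.GoFiles}}' ./llm
GOOS=linux go list -f '{{.GoFiles}}' ./llm
```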


@walterjwhite commented on GitHub (Apr 29, 2024):

Ok, I believe I had this running on my old system (hard drive died) by simply doing:

- download https://ollama.ai/download/ollama-linux-$_ARCHITECTURE and move it into `$PATH`
- `chmod +x ollama`

When I try to run it now, I end up with 'Abort trap'. I have the Linux compatibility kernel module loaded.

I should also note that while I had it running on FreeBSD, my hardware is a bit dated, so performance was abysmal.


@danielrpfeiffer commented on GitHub (May 9, 2024):

The strongest machines we have run FreeBSD. It would be great to have ollama working natively, even in CPU-only mode.


@yurivict commented on GitHub (May 15, 2024):

0.1.38 still has this problem.


@yjqg6666 commented on GitHub (May 21, 2024):

> Ok, I believe I had this running on my old system (hard drive died) by simply doing:
>
> download https://ollama.ai/download/ollama-linux-$_ARCHITECTURE and move it to $PATH
> chmod +x ollama
>
> When I try to run it now, I end up with 'Abort trap'. I have the linux compatibility kernel module loaded.
>
> I should also note that while I had it running on FreeBSD, my hardware is a bit dated, so performance was abysmal.

You may need `brandelf -t Linux ollama`.
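For anyone going the Linuxulator route, a hedged sketch of the usual setup steps (assumes amd64; exact package, binary, and rc names may vary by release):

```sh
# Hedged sketch of the Linuxulator route discussed above.
kldload linux64                          # 64-bit Linux compatibility module
sysrc linux_enable=YES                   # persist across reboots
fetch https://ollama.ai/download/ollama-linux-amd64
chmod +x ollama-linux-amd64
brandelf -t Linux ollama-linux-amd64     # mark the binary as a Linux ELF
./ollama-linux-amd64 serve
```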


@kraileth commented on GitHub (May 27, 2024):

In case you missed it: there's a PR, #4172, which is meant to add support for the four main BSDs. It makes `ollama` buildable on FreeBSD *natively* (without requiring the Linuxulator). After running the application for a couple of days, I can say that it works really well. I hope it gets merged. The next step would then obviously be to create a port in the FreeBSD Ports Collection.

@eng-alameedi @walterjwhite @SoloBSD @danielrpfeiffer @yurivict @yjqg6666


@yurivict commented on GitHub (May 27, 2024):

@kraileth
I am getting this failure with 0.1.39 + #4172 :

```
===>  Building cmd from ./cmd
llm/llm_bsd.go:7:12: pattern build/bsd/*/*/bin/*: no matching files found
```

Go 1.22 is used.


@kraileth commented on GitHub (May 27, 2024):

@yurivict Looks like the additional files introduced by the PR may not be present on your system?

Just redoing it in a fresh jail to document what I was doing:

```
# freebsd-version 
14.0-RELEASE-p6
```

```
# pkg install -y git go122 cmake vulkan-headers vulkan-loader
# git clone https://github.com/prep/ollama.git
# cd ollama && git checkout feature/add-bsd-support
# go122 generate ./...
# go122 build .
```

```
# ./ollama help | head -n 5
Large language model runner

Usage:
  ollama [flags]
  ollama [command]
```

Works fine for me, no problems encountered.


@yurivict commented on GitHub (May 27, 2024):

In order for us to use this PR in the FreeBSD port, it should be merged first, because the clone above comes from another account's fork: https://github.com/prep/ollama

Any idea when it is going to be merged?


@kraileth commented on GitHub (May 27, 2024):

@yurivict So now it works for you, too? We could pick the changes from the PR and patch the ollama source downstream in ports. It would definitely be preferable to have the PR merged, though; that would benefit the other BSDs as well.

Unfortunately, with just short of 180 open PRs, I assume it may take the small team that manages the project a while to get to it. But maybe we'll be lucky?


@yurivict commented on GitHub (May 27, 2024):

It worked for me when I used your instructions.

However, in order to have a working port, we need this PR merged into the upstream repository.

It failed for me when I tried to add patches from the PR into the last ollama release.


@SoloBSD commented on GitHub (May 28, 2024):

I asked on the Discord if they could prioritize the merge of this PR.
It has been forwarded to the appropriate developers.
Let's hope it gets merged soon.


@walterjwhite commented on GitHub (May 28, 2024):

> @yurivict Looks like the additional files introduced by the PR may not be present on your system?
>
> Just redoing it in a fresh jail to document what I was doing:
>
> ```
> # freebsd-version 
> 14.0-RELEASE-p6
> ```
>
> ```
> # pkg install -y git go122 cmake vulkan-headers vulkan-loader
> # git clone https://github.com/prep/ollama.git
> # cd ollama && git checkout feature/add-bsd-support
> # go122 generate ./...
> # go122 build .
> ```
>
> ```
> # ./ollama help | head -n 5
> Large language model runner
>
> Usage:
>   ollama [flags]
>   ollama [command]
> ```
>
> Works fine for me, no problems encountered.

Works for me too, thanks.


@kraileth commented on GitHub (May 28, 2024):

> It worked for me when I used your instructions.
>
> However, in order to have a working port we need this PR to be merged into this account.
>
> It failed for me when I tried to add patches from the PR into the last ollama release.

Seems like we were having quite a bit of bad luck: the version that works well is from Star Wars day (May 4), but on the very next day #4144 introduced changes that evidently broke the build on FreeBSD. I have no idea what exactly, though.


@rmszc81 commented on GitHub (Jul 23, 2024):

Hello guys,
any news on this topic?

It would be great to use ollama on FreeBSD by having it officially in the ports tree.


@xorander00 commented on GitHub (Aug 5, 2024):

I've managed to update the patch for v0.3.3. However, I feel like something isn't right: the output executable built on my FreeBSD 14.1-STABLE system is only 24 MB (35 MB unstripped), whereas the GitHub-released Linux executable is 559 MB.

Is there data that's supposed to be embedded into the executable? If so, is it optional or required?

Mind you, I'm completely new to Ollama. I know nothing about it and the reason I'm building it is so that I can play around with it on my daily driver FreeBSD desktop.


```sh
 ❯ dir
.0755 me wheel   24 MB 2024-08-04T22:27:41 ollama-freebsd-amd64*
.0755 me wheel   35 MB 2024-08-04T22:27:35 ollama-freebsd-amd64-unstripped*
.0755 me wheel  559 MB 2024-08-04T20:57:58 ollama-linux-amd64*

 ❯ file ./ollama-freebsd-amd64
ollama-freebsd-amd64: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/ld-elf.so.1, for FreeBSD 14.1 (1401501), FreeBSD-style, Go BuildID=1BWVjeDeCpPbNDLgQifA/XKF3Bi4iAuSasgt-JekW/2mDw42JgVdmopx3XLMpT/yXWCKYYYXHiNNW9J81lO, stripped

 ❯ file ./ollama-freebsd-amd64-unstripped
./ollama-freebsd-amd64-unstripped: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/ld-elf.so.1, for FreeBSD 14.1 (1401501), FreeBSD-style, Go BuildID=1BWVjeDeCpPbNDLgQifA/XKF3Bi4iAuSasgt-JekW/2mDw42JgVdmopx3XLMpT/yXWCKYYYXHiNNW9J81lO, with debug_info, not stripped

 ❯ file ./ollama-linux-amd64
./ollama-linux-amd64: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=78074a18b46963116991f4e32c8abcec2fb6ee22, for GNU/Linux 2.6.32, stripped

 ❯ ./ollama-freebsd-amd64 --help
Large language model runner

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.
```

@yurivict commented on GitHub (Aug 5, 2024):

@xorander00
Are you able to submit the patch as a pull request for this repository?


@kraileth commented on GitHub (Aug 5, 2024):

@xorander00 The much smaller size is normal, I guess. The self-built Linux binary that I currently run in a Linuxulator jail is 38 MB. I'd also be interested in you sharing your patch.


@xorander00 commented on GitHub (Aug 5, 2024):

@yurivict @kraileth Weird, it wouldn't let me attach the patch to this post; copying and pasting it here for now...

```patch
diff --git a/gpu/gpu_bsd.go b/gpu/gpu_bsd.go
new file mode 100644
index 00000000..17e70677
--- /dev/null
+++ b/gpu/gpu_bsd.go
@@ -0,0 +1,101 @@
+//go:build dragonfly || freebsd || netbsd || openbsd
+
+package gpu
+
+import "github.com/ollama/ollama/format"
+
+/*
+#cgo CFLAGS: -I/usr/local/include
+#cgo LDFLAGS: -L/usr/local/lib -lvulkan
+
+#include <stdbool.h>
+#include <unistd.h>
+#include <vulkan/vulkan.h>
+
+bool hasVulkanSupport(uint64_t *memSize) {
+  VkInstance instance;
+
+	VkApplicationInfo appInfo = { VK_STRUCTURE_TYPE_APPLICATION_INFO };
+	appInfo.pApplicationName = "Ollama";
+	appInfo.apiVersion = VK_API_VERSION_1_0;
+
+	VkInstanceCreateInfo createInfo = { VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
+	createInfo.pApplicationInfo = &appInfo;
+
+	// Create a Vulkan instance
+	if (vkCreateInstance(&createInfo, NULL, &instance) != VK_SUCCESS)
+		return false;
+
+	// Fetch the first physical Vulkan device. Note that numDevices is overwritten with the number of devices found
+	uint32_t numDevices = 1;
+	VkPhysicalDevice device;
+	vkEnumeratePhysicalDevices(instance, &numDevices, &device);
+	if (numDevices == 0) {
+		vkDestroyInstance(instance, NULL);
+		return false;
+	}
+
+	// Fetch the memory information for this device.
+	VkPhysicalDeviceMemoryProperties memProperties;
+	vkGetPhysicalDeviceMemoryProperties(device, &memProperties);
+
+	// Add up all the heaps.
+	VkDeviceSize totalMemory = 0;
+	for (uint32_t i = 0; i < memProperties.memoryHeapCount; ++i) {
+		if (memProperties.memoryHeaps[i].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) {
+			*memSize += memProperties.memoryHeaps[i].size;
+		}
+	}
+
+	vkDestroyInstance(instance, NULL);
+	return true;
+}
+*/
+import "C"
+
+func GetGPUInfo() GpuInfoList {
+	var gpuMem C.uint64_t
+	if C.hasVulkanSupport(&gpuMem) {
+		// Vulkan supported
+		return []GpuInfo{
+			{
+				Library:       "vulkan",
+				ID:            "0",
+				MinimumMemory: 512 * format.MebiByte,
+				memInfo: memInfo{
+					FreeMemory:  uint64(gpuMem),
+					TotalMemory: uint64(gpuMem),
+				},
+			},
+		}
+	}
+
+	// CPU fallback
+	cpuMem, _ := GetCPUMem()
+	return []GpuInfo{
+		{
+			Library: "cpu",
+			memInfo: cpuMem,
+		},
+	}
+}
+
+func GetCPUInfo() GpuInfoList {
+	mem, _ := GetCPUMem()
+	return []GpuInfo{
+		{
+			Library: "cpu",
+			Variant: GetCPUCapability(),
+			memInfo: mem,
+		},
+	}
+}
+
+func GetCPUMem() (memInfo, error) {
+	size := C.sysconf(C._SC_PHYS_PAGES) * C.sysconf(C._SC_PAGE_SIZE)
+	return memInfo{TotalMemory: uint64(size)}, nil
+}
+
+func (l GpuInfoList) GetVisibleDevicesEnv() (string, string) {
+	return "", ""
+}
diff --git a/gpu/gpu_test.go b/gpu/gpu_test.go
index 46d3201e..9d889508 100644
--- a/gpu/gpu_test.go
+++ b/gpu/gpu_test.go
@@ -11,7 +11,7 @@ import (
 func TestBasicGetGPUInfo(t *testing.T) {
 	info := GetGPUInfo()
 	assert.NotEmpty(t, len(info))
-	assert.Contains(t, "cuda rocm cpu metal", info[0].Library)
+	assert.Contains(t, "cuda rocm cpu metal vulkan", info[0].Library)
 	if info[0].Library != "cpu" {
 		assert.Greater(t, info[0].TotalMemory, uint64(0))
 		assert.Greater(t, info[0].FreeMemory, uint64(0))
@@ -24,6 +24,8 @@ func TestCPUMemInfo(t *testing.T) {
 	switch runtime.GOOS {
 	case "darwin":
 		t.Skip("CPU memory not populated on darwin")
+	case "dragonfly", "freebsd", "netbsd", "openbsd":
+	  t.Skip("CPU memory is not populated on *BSD")
 	case "linux", "windows":
 		assert.Greater(t, info.TotalMemory, uint64(0))
 		assert.Greater(t, info.FreeMemory, uint64(0))
diff --git a/llm/generate/gen_bsd.sh b/llm/generate/gen_bsd.sh
new file mode 100755
index 00000000..9fc11d94
--- /dev/null
+++ b/llm/generate/gen_bsd.sh
@@ -0,0 +1,64 @@
+#!/bin/sh
+# This script is intended to run inside the go generate
+# working directory must be ./llm/generate/
+
+set -ex
+set -o pipefail
+echo "Starting BSD generate script"
+. $(dirname $0)/gen_common.sh
+init_vars
+git_module_setup
+apply_patches
+
+COMMON_BSD_DEFS="-DCMAKE_SYSTEM_NAME=$(uname -s)"
+CMAKE_TARGETS="--target llama --target ggml"
+
+case "${GOARCH}" in
+  "amd64")
+    COMMON_CPU_DEFS="${COMMON_BSD_DEFS} -DCMAKE_SYSTEM_PROCESSOR=${ARCH}"
+
+    # Static build for linking into the Go binary
+    init_vars
+    CMAKE_DEFS="${COMMON_CPU_DEFS} -DBUILD_SHARED_LIBS=off -DLLAMA_ACCELERATE=off -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off ${CMAKE_DEFS}"
+    BUILD_DIR="../build/bsd/${ARCH}_static"
+    echo "Building static library"
+    build
+
+    init_vars
+    CMAKE_DEFS="${COMMON_CPU_DEFS} -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off ${CMAKE_DEFS}"
+    BUILD_DIR="../build/bsd/${ARCH}/cpu"
+    echo "Building LCD CPU"
+    build
+    compress
+
+    init_vars
+    CMAKE_DEFS="${COMMON_CPU_DEFS} -DLLAMA_AVX=on -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off ${CMAKE_DEFS}"
+    BUILD_DIR="../build/bsd/${ARCH}/cpu_avx"
+    echo "Building AVX CPU"
+    build
+    compress
+
+    init_vars
+    CMAKE_DEFS="${COMMON_CPU_DEFS} -DLLAMA_AVX=on -DLLAMA_AVX2=on -DLLAMA_AVX512=off -DLLAMA_FMA=on -DLLAMA_F16C=on ${CMAKE_DEFS}"
+    BUILD_DIR="../build/bsd/${ARCH}/cpu_avx2"
+    echo "Building AVX2 CPU"
+    build
+    compress
+
+    init_vars
+    CMAKE_DEFS="${COMMON_CPU_DEFS} -DLLAMA_VULKAN=on ${CMAKE_DEFS}"
+    BUILD_DIR="../build/bsd/${ARCH}/vulkan"
+    echo "Building Vulkan GPU"
+    build
+    compress
+    ;;
+
+  *)
+    echo "GOARCH must be set"
+    echo "this script is meant to be run from within go generate"
+    exit 1
+    ;;
+esac
+
+cleanup
+echo "go generate completed.  LLM runners: $(cd ${BUILD_DIR}/..; echo *)"
diff --git a/llm/generate/gen_common.sh b/llm/generate/gen_common.sh
index da1b0688..51b688ee 100644
--- a/llm/generate/gen_common.sh
+++ b/llm/generate/gen_common.sh
@@ -89,13 +89,13 @@ compress() {
     rm -rf ${BUILD_DIR}/bin/*.gz
     for f in ${BUILD_DIR}/bin/* ; do
         gzip -n --best -f ${f} &
-        pids+=" $!"
+        pids="$pids $!"
     done
     # check for lib directory
     if [ -d ${BUILD_DIR}/lib ]; then
         for f in ${BUILD_DIR}/lib/* ; do
             gzip -n --best -f ${f} &
-            pids+=" $!"
+            pids="$pids $!"
         done
     fi
     echo
diff --git a/llm/generate/generate_bsd.go b/llm/generate/generate_bsd.go
new file mode 100644
index 00000000..540f6115
--- /dev/null
+++ b/llm/generate/generate_bsd.go
@@ -0,0 +1,5 @@
+//go:build dragonfly || freebsd || netbsd || openbsd
+
+package generate
+
+//go:generate sh ./gen_bsd.sh
diff --git a/llm/llm.go b/llm/llm.go
index d24507cc..e49b63d7 100644
--- a/llm/llm.go
+++ b/llm/llm.go
@@ -8,6 +8,10 @@ package llm
 // #cgo windows,arm64 LDFLAGS: -static-libstdc++ -static-libgcc -static -L${SRCDIR}/build/windows/arm64_static -L${SRCDIR}/build/windows/arm64_static/src -L${SRCDIR}/build/windows/arm64_static/ggml/src
 // #cgo linux,amd64 LDFLAGS: -L${SRCDIR}/build/linux/x86_64_static -L${SRCDIR}/build/linux/x86_64_static/src -L${SRCDIR}/build/linux/x86_64_static/ggml/src
 // #cgo linux,arm64 LDFLAGS: -L${SRCDIR}/build/linux/arm64_static -L${SRCDIR}/build/linux/arm64_static/src -L${SRCDIR}/build/linux/arm64_static/ggml/src
+// #cgo dragonfly,amd64 LDFLAGS: ${SRCDIR}/build/bsd/x86_64_static/src/libllama.a -lstdc++ -lm
+// #cgo freebsd,amd64 LDFLAGS: ${SRCDIR}/build/bsd/x86_64_static/src/libllama.a -lstdc++ -lm
+// #cgo netbsd,amd64 LDFLAGS: ${SRCDIR}/build/bsd/x86_64_static/src/libllama.a -lstdc++ -lm
+// #cgo openbsd,amd64 LDFLAGS: ${SRCDIR}/build/bsd/x86_64_static/src/libllama.a -lstdc++ -lm
 // #include <stdlib.h>
 // #include "llama.h"
 import "C"
diff --git a/llm/llm_bsd.go b/llm/llm_bsd.go
new file mode 100644
index 00000000..2d2cbf18
--- /dev/null
+++ b/llm/llm_bsd.go
@@ -0,0 +1,13 @@
+//go:build dragonfly || freebsd || netbsd || openbsd
+
+package llm
+
+import (
+	"embed"
+	"syscall"
+)
+
+//go:embed build/bsd/*/*/bin/*
+var libEmbed embed.FS
+
+var LlamaServerSysProcAttr = &syscall.SysProcAttr{}
diff --git a/scripts/build_bsd.sh b/scripts/build_bsd.sh
new file mode 100755
index 00000000..594ec4b9
--- /dev/null
+++ b/scripts/build_bsd.sh
@@ -0,0 +1,27 @@
+#!/bin/sh
+
+set -e
+
+case "$(uname -s)" in
+  DragonFly)
+    ;;
+  FreeBSD)
+    ;;
+  NetBSD)
+    ;;
+  OpenBSD)
+    ;;
+  *)
+    echo "$(uname -s) is not supported"
+    exit 1
+    ;;
+esac
+
+export VERSION=${VERSION:-$(git describe --tags --first-parent --abbrev=7 --long --dirty --always | sed -e "s/^v//g")}
+export GOFLAGS="'-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=$VERSION\" \"-X=github.com/ollama/ollama/server.mode=release\"'"
+
+mkdir -p dist
+rm -rf llm/llama.cpp/build
+
+go generate ./...
+CGO_ENABLED=1 go build -trimpath -o dist/ollama-bsd
diff --git a/scripts/build_freebsd.sh b/scripts/build_freebsd.sh
new file mode 120000
index 00000000..692c340f
--- /dev/null
+++ b/scripts/build_freebsd.sh
@@ -0,0 +1 @@
+build_bsd.sh
\ No newline at end of file
```

...and then here are the actual commands to build...

```sh
git checkout v0.3.3
git apply ../freebsd.patch
go122 generate ./...
go build -o ./ollama-unstripped
strip -s -o ./ollama ./ollama-unstripped
```

@yurivict commented on GitHub (Aug 6, 2024):

The build fails for me with version 0.3.3 + the above patch:

```
app/store/store.go:50:13: undefined: getStorePath
app/store/store.go:55:28: undefined: getStorePath
app/store/store.go:60:64: undefined: getStorePath
app/store/store.go:68:13: undefined: getStorePath
```

Was anything forgotten?


@yurivict commented on GitHub (Aug 6, 2024):

`app/store/store_linux.go` needs to be copied to `app/store/store_bsd.go`.


@xorander00 commented on GitHub (Aug 6, 2024):

Hmm, strange. I didn't have to do that to get it to compile successfully. Looking at `app/store/`, though, the build would fail if it tried to compile that package. Did you happen to use any build tags during your build? I'm wondering why it didn't fail on mine.

Either way, it should probably be patched. Copying `store_linux.go` to `store_bsd.go` will solve the error, but you'll want to modify line 11 to return `"/usr/local/etc/ollama/config.json"` instead of `"/etc/ollama/config.json"`. I think `store_unix.go` could technically cover both platforms (Linux and FreeBSD) too, so you could just wrap that line in a conditional that checks `GOOS` and returns the platform-specific path or a default path.
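A hedged shell sketch of that copy-and-edit workaround (paths as discussed above; note that a `_bsd` filename suffix is not an implicit Go build constraint, so add a `//go:build freebsd` line if the copied file lacks a usable one):

```sh
# Hedged sketch of the workaround described above.
cp app/store/store_linux.go app/store/store_bsd.go
# Point the config path at the FreeBSD location instead of /etc:
sed -i '' 's|/etc/ollama/config.json|/usr/local/etc/ollama/config.json|' \
    app/store/store_bsd.go
```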


@yurivict commented on GitHub (Aug 6, 2024):

I added the FreeBSD port for ollama: https://cgit.freebsd.org/ports/tree/misc/ollama

Thank you, @xorander00, for your patch.


@xorander00 commented on GitHub (Aug 6, 2024):

> I added the FreeBSD port for ollama: https://cgit.freebsd.org/ports/tree/misc/ollama
>
> Thank you, @xorander00, for your patch.

Where should I message you? I have something like 400+ internal ports I've made over the last couple of years that I've been meaning to upstream. I just haven't had the time, and they're messy. I can work with you to get that going.


@yurivict commented on GitHub (Aug 6, 2024):

> Where should I message you?

yuri at FreeBSD


@yurivict commented on GitHub (Aug 6, 2024):

@xorander00

The port builds but fails to run inference for some reason.

The client fails:

```
$ ollama run mistral
>>> Say something.
Error: an unknown error was encountered while running the model 
```

The server log has this:

```
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=2 tid="0x260df6612000" timestamp=1722972008
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=3 tid="0x260df6612000" timestamp=1722972008
DEBUG [update_slots] slot progression | ga_i=0 n_past=0 n_past_se=0 n_prompt_tokens_processed=8 slot_id=0 task_id=3 tid="0x260df6612000" timestamp=1722972008
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=3 tid="0x260df6612000" timestamp=1722972008
llama_get_logits_ith: invalid logits id 7, reason: no logits
time=2024-08-06T12:20:17.282-07:00 level=DEBUG source=server.go:1048 msg="stopping llama server"
time=2024-08-06T12:20:17.282-07:00 level=DEBUG source=server.go:1054 msg="waiting for llama server to exit"
time=2024-08-06T12:20:17.540-07:00 level=DEBUG source=server.go:1058 msg="llama server stopped"
[GIN] 2024/08/06 - 12:20:17 | 200 |  9.504839083s |       127.0.0.1 | POST     "/api/chat"
time=2024-08-06T12:20:17.540-07:00 level=DEBUG source=sched.go:403 msg="context for request finished"
time=2024-08-06T12:20:17.540-07:00 level=DEBUG source=sched.go:334 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 duration=5m0s
time=2024-08-06T12:20:17.540-07:00 level=DEBUG source=sched.go:352 msg="after processing request finished event" modelPath=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 refCount=0
```

Do you know what might be wrong?


@xorander00 commented on GitHub (Aug 6, 2024):

@yurivict
My first guess, off the top of my head, is that there's no actual model bundled with the executable. I'm guessing that's probably why the Linux release is 559 MB vs. less than 40 MB for the source-built FreeBSD executable.

We'll either have to download model(s) or look at patching the source to embed models into the executable. That is, of course, if that's what is actually happening here. I'll look in a bit and see what I find.


@xorander00 commented on GitHub (Aug 6, 2024):

Oh, and if it's not a model issue, then my second guess is that it's a hardware acceleration issue. It seemed that CPU support was being reworked, or dropped where there was no GPU fallback, though I could very well be wrong.


@xorander00 commented on GitHub (Aug 6, 2024):

@yurivict
See https://github.com/ggerganov/llama.cpp/issues/7386


@kraileth commented on GitHub (Aug 6, 2024):

There seems to be slightly more wrong with the port so far. By chance I had just mailed Yuri before I saw that there are more replies here. One of the things that has changed since the initial OpenBSD patch earlier this year is that some CMake variables were renamed, which is why we see this when building:

```
CMake Warning:
 Manually-specified variables were not used by the project:

   LLAMA_ACCELERATE
   LLAMA_AVX
   LLAMA_AVX2
   LLAMA_AVX512
   LLAMA_F16C
   LLAMA_FMA
```

These should be replaced by their `GGML_*` equivalents (a sketch follows below). The rename leads to a build failure due to a missing required Vulkan component, which can be satisfied by graphics/shaderc. However, I still couldn't build it due to a missing `pthread_create` symbol. I'll have to stop at this point for today, but wanted to share these bits in case it helps somebody else. It would be awesome if we could get ollama working properly on FreeBSD.
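A hedged sketch of the fix described above (the `LLAMA_*` to `GGML_*` option names follow upstream llama.cpp's renaming; the shaderc package satisfies the missing Vulkan component):

```sh
# Hedged sketch: pull in shaderc for the Vulkan build, then rename the stale
# LLAMA_* CMake options to their GGML_* equivalents in the generate script.
pkg install -y shaderc
sed -i '' -E 's/-DLLAMA_(ACCELERATE|AVX512|AVX2|AVX|F16C|FMA|VULKAN)=/-DGGML_\1=/g' \
    llm/generate/gen_bsd.sh
```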


@xorander00 commented on GitHub (Aug 6, 2024):

@kraileth
Looking at it right now in fact, so your comment is helpful. Thanks!

I'll see if I can update the patch to add the renamed CMake variables. I'm also searching through the Go build tags to see what else might be relevant for FreeBSD. I just noticed something about the `gpu` package that might need to be patched, but I'm not sure yet.


@xorander00 commented on GitHub (Aug 6, 2024):

@yurivict
I have some other stuff I have to get back to for the time being, and you may be faster at this piece anyway. If it's trying to build llama.cpp, then we should figure out how to skip it and use the system (port). The pthread_create issue is with building llama.cpp and I haven't yet been able to find where it's setting the linker path for pthreads. Shouldn't have to do any of that though if it's able to rely on the system package instead.


@xorander00 commented on GitHub (Aug 6, 2024):

@yurivict

Here's a snapshot of my WIP patch: [freebsd.txt](https://github.com/user-attachments/files/16515515/freebsd.txt)

Don't be surprised if the build still fails. I added freebsd to gpu/gpu.go as a build tag, which could very well cause the build to fail.


@yurivict commented on GitHub (Aug 6, 2024):

> If it's trying to build llama.cpp, then we should figure out how to skip it and use the system (port).

It builds the patched llama.cpp though.
The patches have to be upstreamed first to use the llama-cpp package.


@abdielsudiro commented on GitHub (Aug 8, 2024):

> @yurivict Looks like the additional files introduced by the PR may not be present on your system?
>
> Just redoing it in a fresh jail to document what I was doing:
>
> ```
> # freebsd-version 
> 14.0-RELEASE-p6
> ```
>
> ```
> # pkg install -y git go122 cmake vulkan-headers vulkan-loader
> # git clone https://github.com/prep/ollama.git
> # cd ollama && git checkout feature/add-bsd-support
> # go122 generate ./...
> # go122 build .
> ```
>
> ```
> # ./ollama help | head -n 5
> Large language model runner
>
> Usage:
>   ollama [flags]
>   ollama [command]
> ```
>
> Works fine for me, no problems encountered.

Awesome, many thanks. This works for me too.


@xorander00 commented on GitHub (Aug 8, 2024):

@yurivict Not sure if you saw it, but in `llm/generate/gen_bsd.sh` the CMake variable prefixes need to be changed from `LLAMA_*` to `GGML_*`. I haven't had a chance to resume working on an updated patch, but my tree has those changes. I'll generate a patch later when I get a chance, to use as a diff reference for what might be worth integrating.


@yurivict commented on GitHub (Aug 8, 2024):

@xorander00

I believe that `libllama.so` from llama-cpp is actually used, so renaming `LLAMA_*` to `GGML_*` shouldn't matter in this case.


@yurivict commented on GitHub (Aug 8, 2024):

> Looks like the additional files introduced by the PR may not be present on your system?

What are these files?


@yurivict commented on GitHub (Aug 8, 2024):

I asked the ollama upstream: https://github.com/ollama/ollama/issues/6259


@yurivict commented on GitHub (Aug 8, 2024):

The latest revision of the misc/ollama port has inference working.
Please update your ports tree and rebuild.

The working package name will be `ollama-0.3.4_2`.
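For reference, a hedged sketch of the two usual ways to pick up the updated port (standard pkg/ports commands, not specific instructions from this thread):

```sh
# Via binary packages, once the package builders have caught up:
pkg update && pkg install -y ollama
# Or from an up-to-date ports tree:
git -C /usr/ports pull
make -C /usr/ports/misc/ollama install clean
```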

In case you find any problems, please report them to me either through e-mail (yuri at FreeBSD) or through the FreeBSD Bugzilla.

I think that this issue can be closed now.


@yurivict commented on GitHub (Aug 9, 2024):

To be precise, inference works on CPU.
I am working on enabling Vulkan.


@yurivict commented on GitHub (Aug 9, 2024):

Vulkan now works.

Please test the port.


@yjqg6666 commented on GitHub (Sep 12, 2024):

How about updating to the newer version, v0.3.10?


@yurivict commented on GitHub (Sep 12, 2024):

I tried to update it, but the large patch needs extensive changes and I haven't been able to make it work yet.


@tingox commented on GitHub (Nov 8, 2024):

I've installed ollama from a package

```
root@locaal:~ # pkg info olla\*
ollama-0.3.6_1
```

on FreeBSD 13.4

```
root@locaal:~ # freebsd-version -ku
13.4-RELEASE-p1
13.4-RELEASE-p2
```

I start the server like this

```
$ OLLAMA_NUM_PARALLEL=1 OLLAMA_DEBUG=1 LLAMA_DEBUG=1 ollama start
```

but quite often the client aborts while trying to start and load the model (I've tried a few models):

```
tingo@locaal:~ $ ollama run mistral
Error: Post "http://127.0.0.1:11434/api/chat": EOF
```

On the next try it works:

```
tingo@locaal:~ $ ollama run mistral
>>> Send a message (/? for help)
```

Is there any other info I can provide to help debug this?
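
One thing worth ruling out (an assumption on my part, not a confirmed diagnosis): the client may be racing the server while the model runner is still coming up, and the server-side log usually says why a request died with EOF. A sketch that waits for the HTTP API before starting the client:

```
# /api/version is a cheap endpoint that answers once the server is up
until curl -sf http://127.0.0.1:11434/api/version >/dev/null; do
    sleep 1
done
ollama run mistral
```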


@yurivict commented on GitHub (Feb 28, 2025):

I maintain the FreeBSD port, which is currently at version 0.3.6.

I am receiving e-mails from users almost every week asking why the port isn't updated.

Therefore I have these questions:

  1. Can ollama be built on Linux for CPU only (see the build sketch below)? Can ollama be built on Linux for Vulkan only?
  2. If the answers are yes, what would prevent such a build from working on FreeBSD? What are the specific Linux-only features used in ollama?
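
As a reference point for question 1: recent ollama versions build the native runner with plain CMake plus a Go build, and GPU backends are only configured when their toolkits are present. A sketch of the CPU-only path on Linux, assuming a recent source tree (details may have shifted between releases); the Go side is portable, so the FreeBSD question largely reduces to whether this CMake/ggml step configures there:

```
# With no CUDA/ROCm toolkits installed, CMake configures only the CPU
# backend of the bundled ggml code.
cmake -B build
cmake --build build

# Build and run the Go binary itself
go build .
./ollama serve
```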

@aleksander-haugas commented on GitHub (Mar 6, 2025):

> @yurivict Looks like the additional files introduced by the PR may not be present on your system?
>
> Just redoing it in a fresh jail to document what I was doing:
>
> ```
> # freebsd-version
> 14.0-RELEASE-p6
> ```
>
> `# pkg install -y git go122 cmake vulkan-headers vulkan-loader`
>
> `# git clone https://github.com/prep/ollama.git`
>
> `# cd ollama && git checkout feature/add-bsd-support`
>
> `# go122 generate ./...`
>
> `# go122 build .`
>
> ```
> # ./ollama help | head -n 5
> Large language model runner
>
> Usage:
>   ollama [flags]
>   ollama [command]
> ```
>
> Works fine for me, no problems encountered.

Works fine on 14.1-RELEASE with CPU only and mistral, super fast! For some reason it is forced to use the Vulkan libs, though...
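
One way to check that Vulkan observation, assuming the freshly built ./ollama from the steps above: inspect the dynamic linkage and the startup log.

```
# Which shared libraries is the binary linked against?
ldd ./ollama | grep -i vulkan

# The backend chosen at runtime is also visible in the server log
./ollama serve 2>&1 | grep -i vulkan
```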


@walterjwhite commented on GitHub (Mar 7, 2025):

Yes, the above steps do work for me. It must be go122, not go123. For me, the CPU is not performant at all. I did try mistral, as I previously used llama3.2.

I'm on an i5-3470 with 16 GB of RAM: ancient stuff with no GPU acceleration. I asked a basic question and gave up waiting. For comparison, I asked the same question on another older computer, but one with an Nvidia RTX 2060 (still old) and dual Xeon processors with 96 GB of RAM, and got a response relatively quickly.

The above steps work, and I can vouch for that; my only question is, what hardware are you running on to get good performance with CPU alone?
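
For comparing machines concretely, ollama can report its own timing: the --verbose flag on ollama run prints token counts and eval rates (tokens/s) after each response, which puts numbers on "not performant":

```
# Prints prompt/generation statistics after the answer
ollama run mistral --verbose "Explain FreeBSD jails in one paragraph."
```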


@yurivict commented on GitHub (Mar 16, 2025):

@aleksander-haugas

The feature/add-bsd-support branch works, but it targets a version that is now a year old; it hasn't been updated since May 5th, 2024.


@aleksander-haugas commented on GitHub (Mar 16, 2025):

> @aleksander-haugas
>
> The feature/add-bsd-support branch works, but it targets a version that is now a year old; it hasn't been updated since May 5th, 2024.

When I try the pkg version, it doesn't load all the models I want; something goes wrong between the tensors and ollama. But compiling and building it yourself works, and it's easy, too...


@yurivict commented on GitHub (Apr 13, 2025):

https://github.com/ollama/ollama/pull/10254


@hckiang commented on GitHub (Jan 25, 2026):

The patch doesn't work anymore with the newly refactored discover/. The GpuInfo struct has disappeared, and now there's

```
uint64_t getRecommendedMaxVRAM();
uint64_t getPhysicalMemory();
uint64_t getFreeMemory();
```

in discover/gpu_info_darwin.m. I don't know where these are called...
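
For what it's worth, the quantities those Darwin functions return have FreeBSD sysctl counterparts, so a port of that file would presumably wrap these (a sketch of where the numbers would come from, not of an actual patch; getRecommendedMaxVRAM has no direct sysctl equivalent and would have to come from the Vulkan device query):

```
# Total physical memory in bytes (getPhysicalMemory)
sysctl -n hw.physmem

# Free memory: free page count times page size (getFreeMemory)
echo $(( $(sysctl -n vm.stats.vm.v_free_count) * $(sysctl -n hw.pagesize) ))
```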


@yurivict commented on GitHub (Jan 25, 2026):

The upstream declined to merge this patch, and it is no longer updated.

Please use the package ollama-0.13.5_1 that is currently available, or the port misc/ollama.
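
That is, either of:

```
# Prebuilt package
pkg install ollama

# Or build from the ports tree
cd /usr/ports/misc/ollama && make install clean
```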


@yurivict commented on GitHub (Jan 25, 2026):

This issue can be closed now to avoid confusion, since it is outdated and will not be updated further.

The FreeBSD patches are in the ollama port (https://cgit.freebsd.org/ports/tree/misc/ollama).

Someone from upstream replied to my Discord post last month and said that they are afraid these patches would be dead code since there is no CI.

I will try to create a CI job.


@hckiang commented on GitHub (Jan 27, 2026):

Thanks for the replies. I reckon they want CI etc., and that's why it was rejected. The port misc/ollama works, but it doesn't seem to use the GPU, even though libvulkan.so etc. are present and llama.cpp works well (enough) with GPU+Vulkan.

Does your GitHub fork support GPU on FreeBSD?
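
Before digging into ollama itself, it may be worth confirming that the Vulkan loader sees the GPU at all; a sketch using the standard Vulkan diagnostic tool (packaged as vulkan-tools in ports):

```
pkg install vulkan-tools
vulkaninfo --summary   # lists the devices the loader can see
```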


@yurivict commented on GitHub (Jan 27, 2026):

I didn't test Vulkan with the ollama port.
I know that in ollama it was added around Oct-Nov 2025, and it used to require (or still requires) some option to enable.

I will try it and will get back to you.
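
If the port follows upstream here, the option in question is most likely the experimental Vulkan switch gated behind an environment variable; to the best of my knowledge that is OLLAMA_VULKAN=1, though whether the FreeBSD port honors it is exactly what needs testing:

```
# Start the server with the experimental Vulkan backend enabled
OLLAMA_VULKAN=1 ollama serve

# Then check the startup log and a test generation for GPU offload
ollama run mistral --verbose "hello"
```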


@spmzt commented on GitHub (Mar 7, 2026):

> I didn't test Vulkan with the ollama port. I know that in ollama it was added around Oct-Nov 2025, and it used to require (or still requires) some option to enable.
>
> I will try it and will get back to you.

Thank you for your work. Any updates?
