[PR #5479] [CLOSED] Add device cmd/api to query the device information #22336

opened 2026-04-19 16:15:41 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/5479
Author: @yeahdongcn
Created: 7/4/2024
Status: Closed

Base: `main` ← Head: `info`


📝 Commits (1)

- `58bd71b` Add device cmd/api

📊 Changes

4 files changed (+84 additions, -0 deletions)


📝 api/client.go (+8 -0)
📝 api/types.go (+12 -0)
📝 cmd/cmd.go (+46 -0)
📝 server/routes.go (+18 -0)

📄 Description

I'd like to use https://github.com/aidatatools/ollama-benchmark to benchmark Ollama on MTGPU (https://github.com/ollama/ollama/pull/5353), and it depends on the GPUtil Python package to check the available VRAM when choosing the correct LLM models.

So I plan to add a new `device` cmd/api to query the device information:

```bash
# On M1 Mac
➜ ollama device
ID      NAME    LIBRARY TOTAL MEMORY    FREE MEMORY
0       Unknown metal   10.7 GiB        10.7 GiB

# API
➜ curl localhost:11434/api/device
{"devices":[{"id":"0","name":"","library":"metal","total_memory":11453251584,"free_memory":11453251584}]}

# On Ubuntu Linux with MTGPU
➜ ollama device
ID      NAME            LIBRARY TOTAL MEMORY    FREE MEMORY
0       1ed5:0323       musa    48.0 GiB        48.0 GiB
```

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-19 16:15:41 -05:00

Reference: github-starred/ollama#22336