[GH-ISSUE #2250] Nvidia Tesla M60 #1290

Closed
opened 2026-04-12 11:06:21 -05:00 by GiteaMirror · 7 comments

Originally created by @nejib1 on GitHub (Jan 29, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2250

Originally assigned to: @bmizerany on GitHub.

Hello,

I would like to inquire whether the Nvidia Tesla M60 is compatible with Ollama's code.
Can someone please provide information or insights regarding this compatibility?

Thank you!

GiteaMirror added the nvidia label 2026-04-12 11:06:21 -05:00

@easp commented on GitHub (Jan 29, 2024):

The compute capability of that card is 5.2. Support for 5.2 was just merged this past weekend, so I'd expect it to show up in the next release. I'd guess that will happen in the next week or two.
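
For anyone who wants to double-check what compute capability a card reports, here is a minimal sketch, assuming the nvidia-ml-py package (imported as pynvml) is installed alongside the Nvidia driver:

```python
# Query each GPU's CUDA compute capability via NVML.
# Assumes the nvidia-ml-py package (imported as pynvml) and an Nvidia driver are present.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        major, minor = pynvml.nvmlDeviceGetCudaComputeCapability(handle)
        print(f"GPU {i}: {name} -> compute capability {major}.{minor}")
finally:
    pynvml.nvmlShutdown()
```

A Tesla M60 should report 5.2 here, i.e. right at the threshold mentioned above.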


@lirc572 commented on GitHub (Feb 3, 2024):

Hi @nejib1, have you tested it out? I am considering getting an M40 or M60 card if it is significantly faster than CPUs for running Ollama.


@orlyandico commented on GitHub (Feb 3, 2024):

I went on an ancient-GPU buying spree in 2022 and ended up with a K80 and an M60. The K80 isn't great because its compute capability is 3.7 (it only recently started working, and you have to build Ollama from source). The M60 is newer, but in many ways weaker than the K80 (only one GPU, and only 8 GB of RAM).

The king of cut-rate GPUs right now has got to be the P40, which you can get on eBay for $200. It's a bit faster than an Nvidia T4 or RTX 3060, but the killer is that it has 24 GB of RAM. It doesn't support float16, however (rather, it does, but it's immensely slow), so any code that can leverage float16 or Tensor Cores would be much faster on a more modern GPU. But at a cost less than a 3060...

I do have a 3060 and the P40, and I've benchmarked them all, CPU as well as an M2 Max. Any GPU is way, way faster than the CPU (10x at least) if the entire model can fit in the GPU's RAM. I haven't managed to get the 13B models working on the P40 yet, though.
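
For a rough sense of whether a given model will fit in a card's VRAM, here is a back-of-the-envelope sketch; the bytes-per-parameter figures and the fixed overhead are ballpark assumptions, not numbers taken from Ollama itself:

```python
# Rough check: quantized model weights + fixed overhead vs. available VRAM.
# The bytes-per-weight values and the overhead figure are ballpark assumptions.

BYTES_PER_PARAM = {
    "q4_0": 0.56,  # ~4.5 bits/weight once per-block scales are included
    "q8_0": 1.06,
    "f16": 2.0,
}

def fits_in_vram(params_billion: float, quant: str, vram_gb: float,
                 overhead_gb: float = 1.5) -> bool:
    """Very rough estimate: weights plus a fixed allowance for the KV cache,
    CUDA context, and scratch buffers must fit in VRAM."""
    weights_gb = params_billion * BYTES_PER_PARAM[quant]
    return weights_gb + overhead_gb <= vram_gb

for params, card, vram in [(7, "M60, 8 GB", 8), (13, "M60, 8 GB", 8), (13, "P40, 24 GB", 24)]:
    verdict = "likely fits" if fits_in_vram(params, "q4_0", vram) else "probably too big"
    print(f"{params}B q4_0 on {card}: {verdict}")
```

By this estimate a 4-bit 7B model squeezes into 8 GB while a 4-bit 13B model does not; whether a specific model actually loads still depends on context length and quantization.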


@lirc572 commented on GitHub (Feb 3, 2024):

@orlyandico Very informative. Thank you!!

I have a 16GB 4060 Ti (around $600) in my PC. IMO it's the best modern Nvidia GPU with enough VRAM and okay-ish performance for people on a budget.

I want to build a cheap always-on server that can run some LLM workloads. The P40 looks like a great option. My only other concern is its power consumption... If it's gonna add $50 to my monthly electricity bill, I would rather get another 4060Ti.


@orlyandico commented on GitHub (Feb 3, 2024):

It consumes 250W when inferencing and 50W when idle. If you were inferencing 10% of the time (2.4 hours/day), daily power consumption would be 2.4 h × 0.25 kW + 21.6 h × 0.05 kW = 1.68 kWh.

I don't know what your $/kWh is, but the UK's is $0.38, which is extortionate; at that rate the electricity cost would be roughly $19/month.

I noticed on my 3060 that it pulls about 60W (out of 170W) when inferencing and 12W when idle. The model I used (falcon-7B) doesn't seem to max it out. I imagine the 4060 Ti is similar, since it has a 165W TDP. Following the same logic as above, the 3060 would consume 2.4 h × 0.06 kW + 21.6 h × 0.012 kW = 0.40 kWh/day, or about 12 kWh per month, roughly $4.60/month.

So the electricity cost delta between the 3060 and the P40 is about $14/month. Whoopee.

(Incidentally, the price of electricity in Singapore is about a third of the UK's, so I don't think electricity will be an issue.)
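
For reference, the same back-of-the-envelope cost math as a small Python sketch; the wattages, the 10% duty cycle, and the $0.38/kWh tariff are simply the figures quoted above, so swap in your own:

```python
# Back-of-the-envelope monthly electricity cost for an always-on GPU box.
# All inputs are the figures quoted in this thread; adjust to your own numbers.

def monthly_cost(load_w: float, idle_w: float, duty_cycle: float = 0.10,
                 price_per_kwh: float = 0.38, days: int = 30) -> float:
    hours_loaded = 24 * duty_cycle          # hours/day spent inferencing
    hours_idle = 24 - hours_loaded
    kwh_per_day = (hours_loaded * load_w + hours_idle * idle_w) / 1000
    return kwh_per_day * days * price_per_kwh

print(f"P40:  ${monthly_cost(250, 50):.2f}/month")   # ~$19 at UK prices
print(f"3060: ${monthly_cost(60, 12):.2f}/month")    # ~$4.60 at UK prices
```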

There are a couple of caveats with the P40. It is a datacenter card, so it has no fans; you'll have to jury-rig some cooling for it (there are lots of 3D-printable shrouds on Thingiverse). It is a full-length card (267mm), so it will require a large case. It uses an EPS 12V connector, but the one I bought on eBay came with the appropriate cable, so you can connect 2x 6- or 8-pin PCIe connectors to the card to provide power.

It will need a 600W power supply. I addressed this by buying an old Lenovo ThinkStation S30 on eBay for $100, so for almost the price of a new 600W power supply I got an entire PC. The only downside is that the case is huge.


@nejib1 commented on GitHub (Feb 9, 2024):

> Hi @nejib1, have you tested it out? I am considering getting an M40 or M60 card if it is significantly faster than CPUs for running Ollama.

Hello,
I bought a new RTX A4000 instead; it's a bad idea to work with old GPUs.


@bmizerany commented on GitHub (Mar 12, 2024):

The Nvidia Tesla M60 should be supported in the current release of Ollama. Closing this, but please reopen and post the server logs from `~/.ollama/logs` if you run into issues.


Reference: github-starred/ollama#1290