[GH-ISSUE #5935] ollama 0.2.8 doesn't support Multiple GPU H100 #3701

Closed
opened 2026-04-12 14:30:56 -05:00 by GiteaMirror · 5 comments

Originally created by @sksdev27 on GitHub (Jul 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5935

Originally assigned to: @dhiltgen on GitHub.

So when I launch the latest Ollama 0.2.8 it uses only one GPU, but when I use Ollama version 0.1.30 it uses all the GPUs. The fix applied in 0.1.30 didn't make it into 0.2.8.
Here are the logs:
[log_ollama.txt](https://github.com/user-attachments/files/16368867/log_ollama.txt)

Originally posted by @sksdev27 in https://github.com/ollama/ollama/issues/5024#issuecomment-2249121012

GiteaMirror added the needs more info label 2026-04-12 14:30:56 -05:00

@dhiltgen commented on GitHub (Jul 26, 2024):

Can you try loading the model with `use_mmap=false`?

From the logs, it looks like the load is stalling for 5m and timing out. Your system has a large amount of system memory, but my suspicion is the storage is slow. Can you elaborate what sort of storage you're storing the models on?
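If it helps, something like this against the HTTP API should set it (the model name and prompt below are just placeholders):

```sh
# Ask the server to load the model with mmap disabled, so the weights are
# read into RAM up front instead of being paged in lazily from disk.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3:70b",
  "prompt": "Hello",
  "options": { "use_mmap": false }
}'
```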


@sksdev27 commented on GitHub (Jul 30, 2024):

I pulled the latest Ollama, 0.3.0; it's just tagged latest.
~/Documents$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 504G 0 504G 0% /dev
tmpfs 101G 3.7M 101G 1% /run
/dev/sda2 1.8T 363G 1.3T 22% /
tmpfs 504G 0 504G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 504G 0 504G 0% /sys/fs/cgroup
/dev/loop0 128K 128K 0 100% /snap/bare/5
/dev/loop1 64M 64M 0 100% /snap/core20/2264
/dev/loop2 64M 64M 0 100% /snap/core20/2318
/dev/loop3 75M 75M 0 100% /snap/core22/1122
/dev/loop5 13M 13M 0 100% /snap/snap-store/959
/dev/loop4 75M 75M 0 100% /snap/core22/1380
/dev/loop6 13M 13M 0 100% /snap/snap-store/1113
/dev/loop7 39M 39M 0 100% /snap/snapd/21465
/dev/loop8 39M 39M 0 100% /snap/snapd/21759
/dev/loop12 92M 92M 0 100% /snap/gtk-common-themes/1535
/dev/loop10 347M 347M 0 100% /snap/gnome-3-38-2004/119
/dev/loop11 350M 350M 0 100% /snap/gnome-3-38-2004/143
/dev/loop13 506M 506M 0 100% /snap/gnome-42-2204/176
/dev/loop9 505M 505M 0 100% /snap/gnome-42-2204/172
/dev/loop15 13M 13M 0 100% /snap/kubectl/3315
/dev/loop14 13M 13M 0 100% /snap/kubectl/3302
/dev/sda1 511M 6.1M 505M 2% /boot/efi
tmpfs 101G 8.0K 101G 1% /run/user/131
/dev/loop16 56M 56M 0 100% /snap/core18/2829
/dev/loop17 37M 37M 0 100% /snap/gh/502
tmpfs 101G 52K 101G 1% /run/user/1000
/dev/sdb 5.8T 216G 5.3T 4% /mnt/disk_2
~/Documents$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 4K 1 loop /snap/bare/5
loop1 7:1 0 64M 1 loop /snap/core20/2264
loop2 7:2 0 64M 1 loop /snap/core20/2318
loop3 7:3 0 74.2M 1 loop /snap/core22/1122
loop4 7:4 0 74.2M 1 loop /snap/core22/1380
loop5 7:5 0 12.3M 1 loop /snap/snap-store/959
loop6 7:6 0 12.9M 1 loop /snap/snap-store/1113
loop7 7:7 0 38.8M 1 loop /snap/snapd/21465
loop8 7:8 0 38.8M 1 loop /snap/snapd/21759
loop9 7:9 0 504.2M 1 loop /snap/gnome-42-2204/172
loop10 7:10 0 346.3M 1 loop /snap/gnome-3-38-2004/119
loop11 7:11 0 349.7M 1 loop /snap/gnome-3-38-2004/143
loop12 7:12 0 91.7M 1 loop /snap/gtk-common-themes/1535
loop13 7:13 0 505.1M 1 loop /snap/gnome-42-2204/176
loop14 7:14 0 12.3M 1 loop /snap/kubectl/3302
loop15 7:15 0 12.3M 1 loop /snap/kubectl/3315
loop16 7:16 0 55.7M 1 loop /snap/core18/2829
loop17 7:17 0 37M 1 loop /snap/gh/502
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
└─sda2 8:2 0 1.8T 0 part /
sdb 8:16 0 5.8T 0 disk /mnt/disk_2

docker volume inspect ollama
[
{
"CreatedAt": "2024-06-18T21:10:47-06:00",
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/ollama/_data",
"Name": "ollama",
"Options": null,
"Scope": "local"
}
]

Seems like I have plenty of storage, but you're right about the system storage. I am not creating special storage for the Ollama models; it seems like they are part of the storage within the container. I can create a special volume folder for it, if that helps. Also, it worked with the curl command the second time, which is awesome, thanks.
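Something like this is what I had in mind for a dedicated model store on the bigger disk (the path and container name below are just an example, not what I'm running today):

```sh
# Bind-mount a directory on /mnt/disk_2 as Ollama's model store instead of
# keeping the models inside the default docker volume on /.
docker run -d --gpus=all \
  -v /mnt/disk_2/ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama
```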

Logs:
[docker ollama input.txt](https://github.com/user-attachments/files/16430551/docker.ollama.input.txt)
[docker ollama logs.txt](https://github.com/user-attachments/files/16430552/docker.ollama.logs.txt)


@dhiltgen commented on GitHub (Jul 30, 2024):

Space probably isn't the problem; low IOPS leading to slow loads is more likely. Is this local SSD storage, EBS (or equivalent), iSCSI, FC SAN, etc.?

On the host, you can try something like `iostat -dmx 5` while trying to load a model and see the utilization and read throughput of the drive where the models are stored.
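Roughly like this, in a second terminal on the host while the model loads (sda/sdb below are taken from your lsblk output):

```sh
# Extended per-device stats every 5 seconds; watch the rMB/s and %util
# columns on the sda / sdb rows while the model is loading.
iostat -dmx 5
```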


@sksdev27 commented on GitHub (Jul 30, 2024):

[iostat_log.txt](https://github.com/user-attachments/files/16431646/iostat_log.txt)
[vmstat_log.txt](https://github.com/user-attachments/files/16431652/vmstat_log.txt)
[docker ollama input latest.txt](https://github.com/user-attachments/files/16431654/docker.ollama.input.latest.txt)
[log_ollama_latest.txt](https://github.com/user-attachments/files/16431655/log_ollama_latest.txt)

For the first `ollama run llama3:70b`, I tried capturing it but it didn't write to the text file correctly. However, the iostat for the curl command is probably what got captured correctly. Then I did it without the curl and it just worked.


@dhiltgen commented on GitHub (Jul 30, 2024):

That's great to hear that it's running on all 4 of the H100s.

Most likely what happened is that the model got warmed up in the Linux filesystem cache, so subsequent loads were faster because the data came from RAM instead of waiting on disk I/O. We're still working to improve the behavior on slow load times; I'm tracking that under #5494.
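If you want to re-test a genuinely cold load rather than a cache-warmed one, dropping the page cache first is a generic Linux way to do it (run on the host as root; it just forces the next load to come from disk again):

```sh
# Flush dirty pages, then drop the page cache so the next model load
# has to read the weights from disk instead of RAM.
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
```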

It sounds like we can close this one.
