[GH-ISSUE #1949] bad generation on multi-GPU setup #63162

Closed
opened 2026-05-03 12:19:48 -05:00 by GiteaMirror · 9 comments

Originally created by @jerzydziewierz on GitHub (Jan 12, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/1949

When using vast.ai with the image nvidia/cuda:12.3.1-devel-ubuntu22.04 and 4x RTX 3090 on an AMD EPYC 7302P 16-Core Processor, and trying any "small model" (I have not tried large models yet), I get either an outright crash or a bad generation like the following, and I quote:

############################

Screenshot of my desktop, showing btop in the top-right, nvtop in the bottom-right, ollama serve in the top-left, and ollama run in the bottom-left:

![image](https://github.com/jmorganca/ollama/assets/1606347/3c8b888d-b4fa-4731-9c60-a39d6680c7e0)

output of nvidia-smi :

root@C.8226224:~$ nvidia-smi
Fri Jan 12 12:09:43 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08              Driver Version: 545.23.08    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3090        On  | 00000000:01:00.0 Off |                  N/A |
| 30%   26C    P8              37W / 350W |   2005MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce RTX 3090        On  | 00000000:41:00.0 Off |                  N/A |
| 30%   24C    P8              32W / 350W |   1591MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce RTX 3090        On  | 00000000:81:00.0 Off |                  N/A |
| 30%   25C    P8              30W / 350W |   1591MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   3  NVIDIA GeForce RTX 3090        On  | 00000000:C1:00.0 Off |                  N/A |
| 30%   26C    P8              40W / 350W |   1591MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|

Any ideas? Maybe I should try a different image (CUDA version)?

Please advise what else I can try or report for this.

My eventual target is to run the new model, megadolphin (https://ollama.ai/library/megadolphin), on a multi-GPU setup.
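
One thing worth trying to narrow the problem down (a sketch added here for illustration, not part of the original report; it assumes ollama is on the PATH inside the container, and the later comments in this thread suggest single-GPU operation works): expose only one GPU to the server and check whether generation recovers.

```bash
# Expose only GPU 0 to ollama; if output is correct with one GPU but garbled
# with all four, the fault is likely in the multi-GPU split rather than the image.
CUDA_VISIBLE_DEVICES=0 ollama serve
```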

GiteaMirror added the bug, nvidia labels 2026-05-03 12:19:48 -05:00

@jerzydziewierz commented on GitHub (Jan 12, 2024):

Update: using the image nvidia/cuda:12.0.1-devel-ubuntu20.04 on 4x Tesla V100, it appears to work correctly, so maybe this has something to do with the nvidia/cuda:12.3.1-devel-ubuntu22.04 image being incompatible.
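
If the CUDA base image is the variable, one way to A/B test it (a hedged sketch; the install-script URL and model name are assumptions, and the exact steps depend on how the vast.ai template is set up) is to start each base image directly and install ollama inside it:

```bash
# Launch the suspect (or known-good) base image with all GPUs visible.
docker run --rm -it --gpus all nvidia/cuda:12.0.1-devel-ubuntu20.04 bash

# Inside the container: install curl, then install and exercise ollama.
apt-get update && apt-get install -y curl
curl -fsSL https://ollama.ai/install.sh | sh   # install script location as of early 2024 (assumption)
ollama serve &
ollama run llama2 "hello"                      # any small model works as a smoke test
```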


@dcasota commented on GitHub (Jan 12, 2024):

For Multi-Instance GPU (MIG) support, see https://docs.nvidia.com/datacenter/tesla/mig-user-guide/index.html#supported-gpus.
For Tesla V100: "MIG is supported on systems that include the supported products above such as DGX, DGX Station and HGX."


@fpreiss commented on GitHub (Jan 14, 2024):

I am observing something similar on another multi-GPU setup (2 x RTX 4090). Until the v0.1.17 release I was able to run a number of models on dual GPUs.

More recent releases most of the time just crash (quite drastically, see logs below from just before I lost the network connection) or generate output like in the example given above.

I get normal (GPU-accelerated) output on a system with a single RTX 2070, or on the dual-GPU setup when blacklisting one of the GPUs:

CUDA_VISIBLE_DEVICES=1 ./ollama serve
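
When ollama runs as a systemd service instead of from a shell, the same single-GPU restriction can be applied through a drop-in override (a sketch; the unit name ollama.service matches the coredump below, the rest is an assumption):

```bash
# Create a drop-in that limits the service to one GPU, then restart it.
sudo systemctl edit ollama.service
#   [Service]
#   Environment="CUDA_VISIBLE_DEVICES=1"
sudo systemctl restart ollama.service
```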

The following log is from a recent Arch Linux installation with ollama compiled
at commit 565f8a3c441b2af51da7277be1b07e6a6d3cfc09 (initially reported as 288ef8ff952e44eb86ae1471437543e8aa29651d; see Edit 2 below).

Jan 14 02:45:43 ws-1 kernel: BUG: kernel NULL pointer dereference, address: 0000000000000000
Jan 14 02:45:43 ws-1 kernel: #PF: supervisor instruction fetch in kernel mode
Jan 14 02:45:43 ws-1 kernel: #PF: error_code(0x0010) - not-present page
...
Jan 14 02:46:12 ws-1 kernel: watchdog: Watchdog detected hard LOCKUP on cpu 11
Jan 14 02:46:12 ws-1 kernel: Modules linked in: veth xt_nat xt_tcpudp xt_conntrack nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack_netlink nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype nft_compat nf_tables wireguard curve25519_x86_64 libchacha20poly1305 chacha_x86_64 poly1305_x86_64 libcurve25519_generic libchacha ip6_udp_tunnel udp_tunnel cfg80211 rfkill 8021q garp mrp overlay nvidia_drm(POE) nvidia_modeset(POE) nvidia_uvm(POE) intel_rapl_msr intel_rapl_common snd_sof_pci_intel_tgl snd_sof_intel_hda_common intel_uncore_frequency intel_uncore_frequency_common soundwire_intel snd_sof_intel_hda_mlink soundwire_cadence snd_sof_intel_hda snd_sof_pci snd_sof_xtensa_dsp snd_sof snd_sof_utils snd_soc_hdac_hda snd_hda_ext_core snd_soc_acpi_intel_match snd_soc_acpi soundwire_generic_allocation soundwire_bus x86_pkg_temp_thermal intel_powerclamp snd_soc_core snd_compress coretemp ac97_bus snd_hda_codec_hdmi snd_pcm_dmaengine snd_hda_intel kvm_intel i915 snd_intel_dspcfg snd_usb_audio uvcvideo snd_intel_sdw_acpi kvm videobuf2_vmalloc
Jan 14 02:46:12 ws-1 kernel:  snd_usbmidi_lib snd_hda_codec uvc snd_ump videobuf2_memops snd_hda_core snd_rawmidi videobuf2_v4l2 snd_hwdep snd_seq_device drm_buddy irqbypass iTCO_wdt videodev intel_pmc_bxt vfat snd_pcm i2c_algo_bit pmt_telemetry rapl videobuf2_common iTCO_vendor_support pmt_class nvidia(POE) mei_hdcp fat mei_pxp spi_nor ttm snd_timer intel_cstate intel_uncore pcspkr wmi_bmof mtd mxm_wmi mc drm_display_helper mei_me snd i2c_i801 igc cec mei i2c_smbus soundcore intel_gtt intel_vsec serial_multi_instantiate mousedev joydev acpi_tad acpi_pad mac_hid br_netfilter bridge stp llc i2c_dev crypto_user fuse loop nfnetlink ip_tables x_tables btrfs blake2b_generic libcrc32c crc32c_generic xor raid6_pq dm_crypt cbc encrypted_keys trusted asn1_encoder tee usbhid crct10dif_pclmul crc32_pclmul dm_mod crc32c_intel polyval_clmulni polyval_generic gf128mul ghash_clmulni_intel sha512_ssse3 sha256_ssse3 sha1_ssse3 aesni_intel nvme crypto_simd spi_intel_pci cryptd nvme_core spi_intel xhci_pci nvme_common xhci_pci_renesas video wmi
Jan 14 02:46:12 ws-1 kernel: CPU: 11 PID: 118634 Comm: ollama Tainted: P      D W  OE      6.6.10-arch1-1 #1 1c4c0f23a3d2aa9ceff1bccbbfb5902f421e2288
Jan 14 02:46:12 ws-1 kernel: Hardware name: Micro-Star International Co., Ltd. MS-7D32/MAG Z690 TORPEDO (MS-7D32), BIOS A.10 12/02/2021
Jan 14 02:46:12 ws-1 kernel: RIP: 0010:native_queued_spin_lock_slowpath+0x6e/0x2e0
Jan 14 02:46:12 ws-1 kernel: Code: 77 7f f0 0f ba 2b 08 0f 92 c2 8b 03 0f b6 d2 c1 e2 08 30 e4 09 d0 3d ff 00 00 00 77 5b 85 c0 74 10 0f b6 03 84 c0 74 09 f3 90 <0f> b6 03 84 c0 75 f7 b8 01 00 00 00 66 89 03 65 48 ff 05 b3 ef 06
Jan 14 02:46:12 ws-1 kernel: RSP: 0018:ffffb9d743f67ca8 EFLAGS: 00000002
Jan 14 02:46:12 ws-1 kernel: RAX: 0000000000000001 RBX: ffff975784a4ec68 RCX: 0000000225c17d03
Jan 14 02:46:12 ws-1 kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff975784a4ec68
Jan 14 02:46:12 ws-1 kernel: RBP: ffff9758205fe000 R08: 0000000000000000 R09: ffffb9d743f67da8
Jan 14 02:46:12 ws-1 kernel: R10: 00000000000390a0 R11: 0000000000000000 R12: ffffb9d743f67d30
Jan 14 02:46:12 ws-1 kernel: R13: 000000000000002b R14: ffff975b6cceac00 R15: 000000000000002b
Jan 14 02:46:12 ws-1 kernel: FS:  00007fac6d4336c0(0000) GS:ffff9766ef8c0000(0000) knlGS:0000000000000000
Jan 14 02:46:12 ws-1 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 14 02:46:12 ws-1 kernel: CR2: 000000c0002cd010 CR3: 00000003e712e000 CR4: 0000000000f50ee0
Jan 14 02:46:12 ws-1 kernel: PKRU: 55555554
Jan 14 02:46:12 ws-1 kernel: Call Trace:
Jan 14 02:46:12 ws-1 kernel:  <NMI>
Jan 14 02:46:12 ws-1 kernel:  ? watchdog_hardlockup_check+0xaa/0x160
Jan 14 02:46:12 ws-1 kernel:  ? __perf_event_overflow+0xe5/0x2a0
Jan 14 02:46:12 ws-1 kernel:  ? handle_pmi_common+0x16f/0x3c0
Jan 14 02:46:12 ws-1 kernel:  ? intel_pmu_handle_irq+0x104/0x480
Jan 14 02:46:12 ws-1 kernel:  ? perf_event_nmi_handler+0x2a/0x50
Jan 14 02:46:12 ws-1 kernel:  ? nmi_handle+0x5e/0x150
Jan 14 02:46:12 ws-1 kernel:  ? default_do_nmi+0x40/0x100
Jan 14 02:46:12 ws-1 kernel:  ? exc_nmi+0x139/0x1c0
Jan 14 02:46:12 ws-1 kernel:  ? end_repeat_nmi+0x16/0x67
Jan 14 02:46:12 ws-1 kernel:  ? native_queued_spin_lock_slowpath+0x6e/0x2e0
Jan 14 02:46:12 ws-1 kernel:  ? native_queued_spin_lock_slowpath+0x6e/0x2e0
Jan 14 02:46:12 ws-1 kernel:  ? native_queued_spin_lock_slowpath+0x6e/0x2e0
Jan 14 02:46:12 ws-1 kernel:  </NMI>
Jan 14 02:46:12 ws-1 kernel:  <TASK>
Jan 14 02:46:12 ws-1 kernel:  _raw_spin_lock_irqsave+0x3d/0x50
Jan 14 02:46:12 ws-1 kernel:  os_acquire_spinlock+0x12/0x30 [nvidia 55ab717de45bfa8eb3cad25b783b4b3e73357350]
Jan 14 02:46:12 ws-1 kernel:  _nv042844rm+0x10/0x20 [nvidia 55ab717de45bfa8eb3cad25b783b4b3e73357350]
Jan 14 02:46:12 ws-1 kernel:  ? rm_ioctl+0x40/0xb0 [nvidia 55ab717de45bfa8eb3cad25b783b4b3e73357350]
Jan 14 02:46:12 ws-1 kernel:  _nv048409rm+0xc3/0x1d0 [nvidia 55ab717de45bfa8eb3cad25b783b4b3e73357350]
Jan 14 02:46:12 ws-1 kernel:  rm_ioctl+0x40/0xb0 [nvidia 55ab717de45bfa8eb3cad25b783b4b3e73357350]
Jan 14 02:46:12 ws-1 kernel:  nvidia_unlocked_ioctl+0x6ee/0x8f0 [nvidia 55ab717de45bfa8eb3cad25b783b4b3e73357350]
Jan 14 02:46:12 ws-1 kernel:  __x64_sys_ioctl+0x94/0xd0
Jan 14 02:46:12 ws-1 kernel:  do_syscall_64+0x5d/0x90
Jan 14 02:46:12 ws-1 kernel:  ? syscall_exit_to_user_mode+0x2b/0x40
Jan 14 02:46:12 ws-1 kernel:  ? do_syscall_64+0x6c/0x90
Jan 14 02:46:12 ws-1 kernel:  ? hrtimer_interrupt+0x121/0x230
Jan 14 02:46:12 ws-1 kernel:  ? sched_clock+0x10/0x30
Jan 14 02:46:12 ws-1 kernel:  ? sched_clock_cpu+0xf/0x190
Jan 14 02:46:12 ws-1 kernel:  ? irqtime_account_irq+0x40/0xc0
Jan 14 02:46:12 ws-1 kernel:  ? __irq_exit_rcu+0x4b/0xc0
Jan 14 02:46:12 ws-1 kernel:  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
Jan 14 02:46:12 ws-1 kernel: RIP: 0033:0x7fb06123d3af
Jan 14 02:46:12 ws-1 kernel: Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00
Jan 14 02:46:12 ws-1 kernel: RSP: 002b:00007fac6d4310d0 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Jan 14 02:46:12 ws-1 kernel: RAX: ffffffffffffffda RBX: 00007fac6d4311e0 RCX: 00007fb06123d3af
Jan 14 02:46:12 ws-1 kernel: RDX: 00007fac6d4311e0 RSI: 00000000c030462b RDI: 0000000000000017
Jan 14 02:46:12 ws-1 kernel: RBP: 00007fac6d431180 R08: 00007fac6d4311e0 R09: 00007fac6d431208
Jan 14 02:46:12 ws-1 kernel: R10: 00007fac609350a0 R11: 0000000000000246 R12: 00000000c030462b
Jan 14 02:46:12 ws-1 kernel: R13: 0000000000000017 R14: 00007fac6d431208 R15: 00007fac6d431140
Jan 14 02:46:12 ws-1 kernel:  </TASK>
Jan 14 02:46:12 ws-1 kernel: INFO: NMI handler (perf_event_nmi_handler) took too long to run: 1.379 msecs

This happened when trying to run the default LLaVA quantisation from ollama.ai, but the same behavior can be seen with other models as well. Additionally, here is a coredump from an earlier run, when I attempted to run ollama as a service with a modified PKGBUILD for a recent git commit:

           PID: 239507 (ollama)
           UID: 953 (ollama)
           GID: 953 (ollama)
        Signal: 6 (ABRT)
     Timestamp: Sat 2024-01-13 09:21:08 CET (6min ago)
  Command Line: /usr/bin/ollama serve
    Executable: /usr/bin/ollama
 Control Group: /system.slice/ollama.service
          Unit: ollama.service
         Slice: system.slice
       Boot ID: e9f9584145144c4bbf970ccfa36ffb08
    Machine ID: 6dc88c6be7ed4d33814fee1d2de3f871
      Hostname: ws-1
       Storage: /var/lib/systemd/coredump/core.ollama.953.e9f9584145144c4bbf970ccfa36ffb08.239507.1705134068000000.zst (present)
  Size on Disk: 756.9M
       Message: Process 239507 (ollama) of user 953 dumped core.

                Module libnvidia-ml.so without build-id.
                Stack trace of thread 239675:
                #0  0x0000561f175540c1 runtime.raise.abi0 (ollama + 0x1d50c1)
                #1  0x0000561f1753643b runtime.raisebadsignal (ollama + 0x1b743b)
                #2  0x0000561f17536889 runtime.badsignal (ollama + 0x1b7889)
                #3  0x0000561f1753518b runtime.sigtrampgo (ollama + 0x1b618b)
                #4  0x0000561f175543a9 runtime.sigtramp.abi0 (ollama + 0x1d53a9)
                #5  0x00007efcc796f710 n/a (libc.so.6 + 0x3e710)
                #6  0x00007efcc79bf83c n/a (libc.so.6 + 0x8e83c)
                #7  0x00007efcc796f668 raise (libc.so.6 + 0x3e668)
                #8  0x00007efcc79574b8 abort (libc.so.6 + 0x264b8)
                #9  0x00007efcc7cdd3b2 _ZSt21__glibcxx_assert_failPKciS0_S0_ (libstdc++.so.6 + 0xdd3b2)
                #10 0x00007efbe5096050 n/a (/tmp/ollama2184276840/cuda/libext_server.so + 0x1b5e050)
                #11 0x00007efbe506a8a9 n/a (/tmp/ollama2184276840/cuda/libext_server.so + 0x1b328a9)
                #12 0x00007efbe4fff0a0 n/a (/tmp/ollama2184276840/cuda/libext_server.so + 0x1ac70a0)
                #13 0x00007efbe504eda1 n/a (/tmp/ollama2184276840/cuda/libext_server.so + 0x1b16da1)
                #14 0x00007efcc7ce1943 execute_native_thread_routine (libstdc++.so.6 + 0xe1943)
                #15 0x00007efcc79bd9eb n/a (libc.so.6 + 0x8c9eb)
                #16 0x00007efcc7a417cc n/a (libc.so.6 + 0x1107cc)

                Stack trace of thread 239507:
                #0  0x0000561f17554643 runtime.futex.abi0 (ollama + 0x1d5643)
                #1  0x0000561f1751c190 runtime.futexsleep (ollama + 0x19d190)
                #2  0x0000561f174f5347 runtime.notesleep (ollama + 0x176347)
                #3  0x0000561f17527153 runtime.stoplockedm (ollama + 0x1a8153)
                #4  0x0000561f17528f9a runtime.schedule (ollama + 0x1a9f9a)
                #5  0x0000561f1752951f runtime.park_m (ollama + 0x1aa51f)
                #6  0x0000561f17550850 runtime.mcall (ollama + 0x1d1850)
                #7  0x00007ffc95fe4e68 n/a (n/a + 0x0)
                ELF object binary architecture: AMD x86-64

Edit 1: added log output from Jan 14 02:45:43
Edit 2: corrected commit hash from build (didn't have direct access to the device until now after the crash)
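
For anyone reproducing this, a dump like the one above can normally be pulled back out of systemd-coredump for inspection; a minimal sketch (it assumes the dump is still present and gdb is installed):

```bash
# List recent ollama crashes recorded by systemd-coredump.
coredumpctl list ollama
# Print the metadata block (like the one quoted above) for the newest dump.
coredumpctl info ollama
# Load the newest dump into gdb to get stack traces, symbolized where possible.
coredumpctl gdb ollama
```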


@dcasota commented on GitHub (Jan 14, 2024):

@fpreiss According to https://github.com/NVIDIA/open-gpu-kernel-modules/issues/256, for kernel 5.18 the ibt=off boot parameter fixed an Arch-specific kernel configuration issue for NVIDIA. Your kernel is 6.6.10-arch1-1, so you could give that kernel boot parameter a try. NVIDIA's list of kernel versions supported by CUDA (https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#system-requirements) currently has 6.2.0-26 as the latest.
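
For completeness, on an Arch system booting through GRUB, adding that parameter would look roughly like this (a sketch; with systemd-boot or another loader the file to edit differs):

```bash
# /etc/default/grub: append ibt=off to the existing kernel command line, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 quiet ibt=off"
sudo grub-mkconfig -o /boot/grub/grub.cfg
sudo reboot
# After the reboot, confirm the parameter is active:
grep -o 'ibt=off' /proc/cmdline
```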


@fpreiss commented on GitHub (Jan 14, 2024):

@dcasota The issue above occurred with ibt=off set (probably because I ran into the mentioned issue before), so it's not a fix here, unfortunately.


@fpreiss commented on GitHub (Jan 25, 2024):

I made another attempt at compiling and running ollama on the above-mentioned multi-GPU system, and as of commit 5f81a33f43edea71edfb3d045e140595caeaa226 I am no longer observing the crashes. Text generation is now working as intended.
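
For reference, checking out and building a specific commit followed roughly this flow at the time (a sketch based on the repository's development docs of that era; Go, cmake, gcc and the CUDA toolkit are assumed to be installed):

```bash
git clone https://github.com/ollama/ollama.git
cd ollama
git checkout 5f81a33f43edea71edfb3d045e140595caeaa226
go generate ./...   # builds the bundled llama.cpp runners, including the CUDA one
go build .
./ollama serve
```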


@pdevine commented on GitHub (Jan 27, 2024):

Going to close this as a dupe of #1881. Please try 0.1.22 and make sure you have the latest version of the model you're trying to run (you can re-pull it, and it will be a no-op if it's already up to date).
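
In practice that amounts to something like the following (a sketch; mixtral stands in for whichever model is affected):

```bash
ollama --version        # confirm the server is on 0.1.22 or newer
ollama pull mixtral     # re-pull the model; a no-op if the local copy is already current
ollama run mixtral "hi" # quick generation check
```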


@wizd commented on GitHub (Jan 29, 2024):

I got this error with ollama/ollama:0.1.22-rocm and dolphin-mixtral:8x7b-v2.6.1-q3_K_M


@pdevine commented on GitHub (May 17, 2024):

OK, I've tested this out on 2x RTX 3060s and I believe everything is working. This is with the llama3:8b-instruct-fp16 model, which splits the model across both cards:

$ ollama run llama3:8b-instruct-fp16
>>> hi there
Hi there! It's nice to meet you. Is there something I can help you with or would you like to chat?
>>>
$ ollama ps
NAME                   	ID          	SIZE 	PROCESSOR	UNTIL
llama3:8b-instruct-fp16	ca471fe48cbc	16 GB	100% GPU 	2 minutes from now
$ nvidia-smi
Fri May 17 00:53:07 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3060        Off |   00000000:01:00.0  On |                  N/A |
|  0%   54C    P2             36W /  170W |    8066MiB /  12288MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA GeForce RTX 3060        Off |   00000000:05:00.0 Off |                  N/A |
|  0%   55C    P2             35W /  170W |    7814MiB /  12288MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A   1641313      C   ...unners/cuda_v11/ollama_llama_server       8060MiB |
|    1   N/A  N/A   1641313      C   ...unners/cuda_v11/ollama_llama_server       7808MiB |
+-----------------------------------------------------------------------------------------+

I've also tried it with mistral and everything is working fine. Going to go ahead and close this again.
