[GH-ISSUE #13775] [Bug] Persistent Digest Mismatch on Large GGUF Import & Pull (Linux/ROCm/AMD) despite valid file integrity #71086

Closed
opened 2026-05-04 23:58:19 -05:00 by GiteaMirror · 14 comments

Originally created by @DevLenn on GitHub (Jan 19, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13775

What is the issue?

System Information

  • OS: Linux Debian Bookworm (Kernel 6.1.0-29-amd64)
  • Hardware: AMD GPU (ROCm detected)
  • Ollama Version: Updated to 0.14.2
  • Disk Space: >300 GB free on root partition (/)
  • RAM/Swap: ~12GB RAM, 1GB Swap

Description

I am unable to load large GGUF models (specifically nvidia_Nemotron-Cascade-8B-Thinking-GGUF:Q6_K, approx 6.7GB). The operation fails with a digest mismatch error during the final verification stage. This happens both when importing a local file (verified via sha256sum) and when pulling directly from HuggingFace via the new ollama run syntax.

Steps to Reproduce & Logs

1. Attempt: Local Import via Modelfile

I downloaded the GGUF manually and verified its integrity on disk.

  • File: nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
  • Verification: sha256sum matches the HuggingFace reference.
  • Modelfile: FROM /home/<USER>/ext_models/nemtron/nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf

Command:

ollama create nemtron -f Modelfile

Error Output:

gathering model components 
copying file sha256:393399a1... 100% 
Error: digest mismatch, expected "sha256:393399a1...", got "sha256:6714d725..."

(Note: The "got" hash changes on subsequent attempts, e.g., sha256:09e193..., suggesting data corruption during the copy process).
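A digest that changes on every attempt can be checked independently of Ollama: hashing the same file several times must always yield the same digest, so a simple repeated-hash loop will expose nondeterministic reads. A diagnostic sketch (not part of Ollama; the chunked-read helper is illustrative):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file through SHA-256 in chunks, as a client would."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def check_stable_reads(path, rounds=5):
    """Hash the same file repeatedly. More than one distinct digest
    means the read path (disk, controller, or page cache) is
    corrupting data, regardless of what Ollama does."""
    digests = {sha256_of(path) for _ in range(rounds)}
    return len(digests) == 1, digests
```

If this loop ever returns more than one digest for an unmodified file, the problem is below Ollama in the stack.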


2. Attempt: Update & Direct Pull

I upgraded Ollama using the official script (curl -fsSL https://ollama.com/install.sh | sh) to ensure I am not running legacy code. The update was successful (AMD GPU ready).

Command:

ollama run hf.co/bartowski/nvidia_Nemotron-Cascade-8B-Thinking-GGUF:Q6_K

Error Output:

pulling manifest 
pulling 8a8fd4bd6993: 100% ▕██████████████████▏ 6.7 GB                          
pulling 2d54db2b9bb2: 100% ▕██████████████████▏ 1.5 KB                          
...
verifying sha256 digest 
Error: digest mismatch, file must be downloaded again: want sha256:8a8fd4bd6993..., got sha256:5edfd045...

Troubleshooting Steps Taken

  1. Disk Space: Checked df -h. Root partition has 312GB free. No storage bottleneck.
  2. File Integrity: Ran sha256sum on the manually downloaded GGUF file. The file on disk is correct. The corruption seems to happen when Ollama reads/copies/processes the blob.
  3. Clean Install: Stopped service (systemctl stop ollama), removed old binaries, and re-installed via official script.
  4. Permissions: Verified user is in ollama and render groups.

Context

It seems like Ollama is corrupting the data stream when writing the blob to its internal storage (/usr/share/ollama/...) or when verifying it immediately after download. Since the "got" checksum varies, this might be related to a race condition, RAM instability, or an issue with the ROCm/AMD backend handling memory during the import.
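Conceptually, the failing step is a copy-and-verify: stream the source blob into the store while hashing the bytes that pass through, then compare against the expected digest. A minimal sketch of that pattern (illustrative only, not Ollama's actual implementation; paths and function names are hypothetical):

```python
import hashlib

def copy_with_digest(src, dst, chunk_size=1 << 20):
    """Copy src to dst, hashing the bytes as they pass through.
    Returns the SHA-256 hex digest of what was written."""
    h = hashlib.sha256()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while chunk := fin.read(chunk_size):
            h.update(chunk)
            fout.write(chunk)
    return h.hexdigest()

def import_blob(src, dst, expected_digest):
    """Fail the import if the copied bytes do not hash to the
    expected digest, mirroring the error seen in this issue."""
    got = copy_with_digest(src, dst)
    if got != expected_digest:
        raise ValueError(
            f'digest mismatch, expected "sha256:{expected_digest}", '
            f'got "sha256:{got}"'
        )
```

If even this simple pattern produces varying digests on a machine, the corruption is happening between disk and RAM, not in any particular application.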

Relevant log output


OS

Linux

GPU

AMD

CPU

Intel

Ollama version

0.14.2

GiteaMirror added the bug label 2026-05-04 23:58:19 -05:00

@rick-github commented on GitHub (Jan 19, 2026):

As far as I can tell, sha256:393399a1 doesn't match any of the files on hf.co/bartowski/nvidia_Nemotron-Cascade-8B-Thinking-GGUF. The Q6_K file is 8a8fd4bd6993, so it seems like the initial sha256 computation is incorrect even before the ollama client copies the file to server storage. Importing the model with ollama run works fine on my systems. Does your system log show any memory or disk read errors? Does the error occur for any other models?


@DevLenn commented on GitHub (Jan 19, 2026):

@rick-github pulling and using qwen-3vl-2b via 'ollama pull' and 'ollama serve' works fine. disk has no issues, s.m.a.r.t. values are all good. but yeah, you're right, seems to be an issue with the download.


@DevLenn commented on GitHub (Jan 19, 2026):

@rick-github I’m not 100% sure, but I think the SHA256 shown on the Hugging Face page (8a8fd4bd6993…) might actually be for the small Xet pointer file in Git rather than the full 6.7 GB GGUF file. The actual file on disk seems to have a different SHA256 when checked locally.


@rick-github commented on GitHub (Jan 19, 2026):

The sha256 for the 6.7GB GGUF file is 8a8fd4bd6993 as shown in your run command. If the file on disk has a different sha256 then it's been corrupted.

$ wget https://huggingface.co/bartowski/nvidia_Nemotron-Cascade-8B-Thinking-GGUF/resolve/main/nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
$ ls -l nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
-rw-rw-r-- 1 rick rick 6725900352 Jan 19 15:20 nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
$ sha256sum nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
8a8fd4bd69937b2b3b04eaacd50f07bb8428679281dd2c27ad82c62313c0901f  nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf

$ f=$(ollama show --modelfile hf.co/bartowski/nvidia_Nemotron-Cascade-8B-Thinking-GGUF:Q6_K | sed -ne 's/^FROM //p')
$ ls -l $f
-rw-r--r-- 1 root root 6725900352 Jan 19 12:50 /root/.ollama/models/blobs/sha256-8a8fd4bd69937b2b3b04eaacd50f07bb8428679281dd2c27ad82c62313c0901f
$ sha256sum $f
8a8fd4bd69937b2b3b04eaacd50f07bb8428679281dd2c27ad82c62313c0901f  /root/.ollama/models/blobs/sha256-8a8fd4bd69937b2b3b04eaacd50f07bb8428679281dd2c27ad82c62313c0901f

@DevLenn commented on GitHub (Jan 19, 2026):

> The sha256 for the 6.7GB GGUF file is 8a8fd4bd6993 as shown in your run command. If the file on disk has a different sha256 then it's been corrupted.
>
> $ wget https://huggingface.co/bartowski/nvidia_Nemotron-Cascade-8B-Thinking-GGUF/resolve/main/nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
> $ ls -l nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
> -rw-rw-r-- 1 rick rick 6725900352 Jan 19 15:20 nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
> $ sha256sum nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
> 8a8fd4bd69937b2b3b04eaacd50f07bb8428679281dd2c27ad82c62313c0901f  nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
>
> $ f=$(ollama show --modelfile hf.co/bartowski/nvidia_Nemotron-Cascade-8B-Thinking-GGUF:Q6_K | sed -ne 's/^FROM //p')
> $ ls -l $f
> -rw-r--r-- 1 root root 6725900352 Jan 19 12:50 /root/.ollama/models/blobs/sha256-8a8fd4bd69937b2b3b04eaacd50f07bb8428679281dd2c27ad82c62313c0901f
> $ sha256sum $f
> 8a8fd4bd69937b2b3b04eaacd50f07bb8428679281dd2c27ad82c62313c0901f  /root/.ollama/models/blobs/sha256-8a8fd4bd69937b2b3b04eaacd50f07bb8428679281dd2c27ad82c62313c0901f

@rick-github
Dude, i don't know wtf is going on, closed the issue because i genuinely believe it's not an issue with ollama. But i also can't figure out wtf it is:


master@retrod:~$ cd ext_models
master@retrod:~/ext_models$ cd nemotron
master@retrod:~/ext_models/nemotron$ cat Modelfile
FROM ./nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
master@retrod:~/ext_models/nemotron$ ls
download_model.sh  nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
Modelfile
master@retrod:~/ext_models/nemotron$ sha256sum nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
8a8fd4bd69937b2b3b04eaacd50f07bb8428679281dd2c27ad82c62313c0901f  nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
master@retrod:~/ext_models/nemotron$ ollama create nemotron --file Modelfile
gathering model components
gathering model components
gathering model components
gathering model components
gathering model components
gathering model components
gathering model components
gathering model components
gathering model components
gathering model components
gathering model components
gathering model components
gathering model components
gathering model components
gathering model components
gathering model components
gathering model components
gathering model components
copying file sha256:7d80e73ee24bca8692259b28bed198f1b9a9dd834ac5d5d6f5bcf8b210d47418 100%
Error: digest mismatch, expected "sha256:7d80e73ee24bca8692259b28bed198f1b9a9dd834ac5d5d6f5bcf8b210d47418", got "sha256:1b5b2463a399f29614b0672a18c1f8f96919479f3d2574e341c19e50846b454d"
master@retrod:~/ext_models/nemotron$

@rick-github commented on GitHub (Jan 19, 2026):

The gathering model components is the ollama client reading the file to compute the sha256. Since it's wrong (7d80e73ee24 instead of 8a8fd4bd699) it indicates a problem somewhere between the bytes on disk and the output of the bytes fed through the sha256 algorithm in the ollama client. So it may be an interaction of ollama and some aspect of the hardware. What file system is being used? What's the underlying hardware (SSD, NVME, HDD, network, etc)? What's the output of lscpu and lsmem? Does memtester flag anything?


@DevLenn commented on GitHub (Jan 19, 2026):

@rick-github

master@retrod:~$ lsblk --fs
NAME   FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sda                                                                           
├─sda1 ext4   1.0         37a4bf3d-e6e1-4921-9754-daf9c7ab0ccd  285,8G    32% /
├─sda2                                                                        
└─sda5 swap   1           25265880-504d-4a22-afbc-3d3389180ce1                [SWAP]
sr0                                                                           



master@retrod:~$ lscpu
Architecture:                x86_64
  CPU op-mode(s):            32-bit, 64-bit
  Address sizes:             36 bits physical, 48 bits virtual
  Byte Order:                Little Endian
CPU(s):                      8
  On-line CPU(s) list:       0-7
Vendor ID:                   GenuineIntel
  Model name:                Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
    CPU family:              6
    Model:                   42
    Thread(s) per core:      2
    Core(s) per socket:      4
    Socket(s):               1
    Stepping:                7
    CPU(s) scaling MHz:      42%
    CPU max MHz:             3800,0000
    CPU min MHz:             1600,0000
    BogoMIPS:                6784,77
    Flags:                   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc 
                             cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi 
                             flexpriority ept vpid xsaveopt dtherm ida arat pln pts md_clear flush_l1d ibpb_exit_to_user
Virtualization features:     
  Virtualization:            VT-x
Caches (sum of all):         
  L1d:                       128 KiB (4 instances)
  L1i:                       128 KiB (4 instances)
  L2:                        1 MiB (4 instances)
  L3:                        8 MiB (1 instance)
NUMA:                        
  NUMA node(s):              1
  NUMA node0 CPU(s):         0-7
Vulnerabilities:             
  Gather data sampling:      Not affected
  Indirect target selection: Not affected
  Itlb multihit:             KVM: Mitigation: VMX disabled
  L1tf:                      Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
  Mds:                       Mitigation; Clear CPU buffers; SMT vulnerable
  Meltdown:                  Mitigation; PTI
  Mmio stale data:           Unknown: No mitigations
  Reg file data sampling:    Not affected
  Retbleed:                  Not affected
  Spec rstack overflow:      Not affected
  Spec store bypass:         Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:                Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:                Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
  Srbds:                     Not affected
  Tsa:                       Not affected
  Tsx async abort:           Not affected
  Vmscape:                   Mitigation; IBPB before exit to userspace





master@retrod:~$ lsmem
RANGE                                  SIZE  STATE REMOVABLE  BLOCK
0x0000000000000000-0x00000000cfffffff  3,3G online       yes   0-25
0x0000000100000000-0x000000032fffffff  8,8G online       yes 32-101

Memory block size:       128M
Total online memory:      12G
Total offline memory:      0B





master@retrod:~$ sudo memtester 128M 1
memtester version 4.6.0 (64-bit)
Copyright (C) 2001-2020 Charles Cazabon.
Licensed under the GNU General Public License version 2 (only).

pagesize is 4096
pagesizemask is 0xfffffffffffff000
want 128MB (134217728 bytes)
got  128MB (134217728 bytes), trying mlock ...locked.
Loop 1/1:
  Stuck Address       : ok         
  Random Value        : ok
  Compare XOR         : ok
  Compare SUB         : ok
  Compare MUL         : ok
  Compare DIV         : ok
  Compare OR          : ok
  Compare AND         : ok
  Sequential Increment: ok
  Solid Bits          : ok         
  Block Sequential    : ok         
  Checkerboard        : ok         
  Bit Spread          : ok         
  Bit Flip            : ok         
  Walking Ones        : ok         
  Walking Zeroes      : ok         
  8-bit Writes        : ok
  16-bit Writes       : ok

Done.


@rick-github commented on GitHub (Jan 19, 2026):

This looks normal. What's the output of:

mkdir ~/ext_models/13775
cd ~/ext_models/13775
touch model.safetensors
ollama create test-13775
truncate -s 6G model.safetensors
ollama create test-13775
yes | head -100000000 > model.safetensors
ollama create test-13775

@DevLenn commented on GitHub (Jan 19, 2026):

master@retrod:~$ mkdir ~/ext_models/13775
cd ~/ext_models/13775
touch model.safetensors
ollama create test-13775
truncate -s 6G model.safetensors
ollama create test-13775
yes | head -100000000 > model.safetensors
ollama create test-13775
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
copying file sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 100% 
converting model 
Error: open config.json: no such file or directory
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
copying file sha256:5c32c2b28999325bc5ad39d6530bcb46fbdf1f86375a991b7269764c50b0d109 100% 
converting model 
Error: open config.json: no such file or directory
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
gathering model components 
copying file sha256:294dc044302beef2e1797f194f18c661eaa2cb51ea864efaeb955f5b1700c40e 100% 
converting model 
Error: open config.json: no such file or directory


@rick-github commented on GitHub (Jan 19, 2026):

Well, it's creating the right sha256 for these dummy files. What happens if you do the following

cd ~/ext_models/nemotron
mv nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf.hold
cp nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf.hold nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
sha256sum nvidia*
ollama create nemotron

@DevLenn commented on GitHub (Jan 19, 2026):

okay... wth. shouldn't they be the same?:

0755327b1cae7414f9074104cdbf3eb6effec191ef78c805f96e1c5091b6b728  nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf
f302479b778f7981f3cb5ccdf0fed4a1ec640c5d31c02695eaf157bd0ca67a45  nvidia_Nemotron-Cascade-8B-Thinking-Q6_K.gguf.hold


@rick-github commented on GitHub (Jan 19, 2026):

Yes they should. Since ollama isn't involved we can rule it out as a problem. It seems like a hardware issue but not sure which bit of hardware, since there are no errors in the logs and memtester didn't show anything. Perhaps run memtester over a larger range of memory, it might just be a bad bit in one of the DRAM modules that is used for the page cache. Or install memtest86+ and reboot the machine and do a full memory test.
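A single bad bit is enough to explain the wildly different digests seen above: SHA-256 has the avalanche property, so flipping any one bit of the input changes roughly half of the output bits, producing a completely unrelated checksum each time. A quick illustration:

```python
import hashlib

data = bytes(1024)            # 1 KiB of zeros
flipped = bytearray(data)
flipped[512] ^= 0x01          # flip a single bit, as faulty DRAM might

d1 = hashlib.sha256(data).hexdigest()
d2 = hashlib.sha256(bytes(flipped)).hexdigest()

# Count how many of the 64 hex characters differ between the digests.
diff = sum(a != b for a, b in zip(d1, d2))
print(d1 != d2, diff)  # digests are unrelated; most positions differ
```

This is why a one-bit page-cache error yields a different "got" hash on every attempt rather than a consistently wrong one.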


@DevLenn commented on GitHub (Jan 19, 2026):

alr, thx for your help.


@focalfury commented on GitHub (Apr 20, 2026):

Just wanted to comment that I had this exact issue on larger models. Every run on 3 machines on 2 different WAN networks had the issue. Once I switched to the mverrilli branch it immediately worked. I did extensive hardware testing and could find no issue on any machine. I even wrote a small utility that pulled it in single thread mode as opposed to using the ollama run which I think chunks them. Still had the problem.

Hoping to see mverrilli's commit merged.

Thanks all
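For reference, the single-threaded pull-and-verify utility focalfury describes can be sketched in a few lines: stream the file sequentially (no parallel chunking) and hash it as it arrives, failing before the file is ever used. This is a hedged sketch, not the actual utility; the URL and expected digest are placeholders:

```python
import hashlib
import urllib.request

def pull_and_verify(url, dest, expected_digest, chunk_size=1 << 20):
    """Download url to dest in one sequential stream, hashing as we
    go; raise if the final digest does not match the expected one."""
    h = hashlib.sha256()
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        while chunk := resp.read(chunk_size):
            h.update(chunk)
            out.write(chunk)
    got = h.hexdigest()
    if got != expected_digest:
        raise ValueError(f"want sha256:{expected_digest}, got sha256:{got}")
    return got
```

Because the download is strictly sequential, any mismatch here rules out chunk-reassembly ordering as the cause.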

Reference: github-starred/ollama#71086