[GH-ISSUE #6263] Pull Command Parsing Not Working #3921

Closed
opened 2026-04-12 14:47:32 -05:00 by GiteaMirror · 9 comments

Originally created by @chadwickhar08 on GitHub (Aug 8, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6263

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

When running Ollama on Windows, attempting to run 'ollama pull llama3.1' results in:
'ollama pull llama3.1
pulling manifest
Error: Incorrect function.'

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

ollama --version
ollama version is 0.3.4

GiteaMirror added the bug and windows labels 2026-04-12 14:47:32 -05:00

@igorschlum commented on GitHub (Aug 8, 2024):

@chadwickhar08 is Ollama pulling other models like phi3:mini on your computer?


@chadwickhar08 commented on GitHub (Aug 8, 2024):

No, here is the output: 'ollama pull phi3:mini
pulling manifest
Error: Incorrect function.'

But a previously pulled mistral and several others work: 'ollama pull mistral
pulling manifest
pulling ff82381e2bea... 100% ▕████████████████▏ 4.1 GB
pulling 43070e2d4e53... 100% ▕████████████████▏ 11 KB
pulling 491dfa501e59... 100% ▕████████████████▏ 801 B
pulling ed11eda7790d... 100% ▕████████████████▏ 30 B
pulling 42347cd80dc8... 100% ▕████████████████▏ 485 B
verifying sha256 digest
writing manifest
removing any unused layers
success
'
'ollama pull gemma2
pulling manifest
pulling ff1d1fc78170... 100% ▕████████████████▏ 5.4 GB
pulling 109037bec39c... 100% ▕████████████████▏ 136 B
pulling 097a36493f71... 100% ▕████████████████▏ 8.4 KB
pulling 2490e7468436... 100% ▕████████████████▏ 65 B
pulling 10aa81da732e... 100% ▕████████████████▏ 487 B
verifying sha256 digest
writing manifest
removing any unused layers
success
'
'ollama pull deepseek-coder-v2
pulling manifest
pulling 5ff0abeeac1d... 100% ▕████████████████▏ 8.9 GB
pulling b321cd7de6c7... 100% ▕████████████████▏ 111 B
pulling 4bb71764481f... 100% ▕████████████████▏ 13 KB
pulling 1c8f573e830c... 100% ▕████████████████▏ 1.1 KB
pulling 19f2fb9e8bc6... 100% ▕████████████████▏ 32 B
pulling 34488e453cfe... 100% ▕████████████████▏ 568 B
verifying sha256 digest
writing manifest
removing any unused layers
success'

Additionally, attempting to pull the llama3.1:405b model crashes Ollama and my external filesystem on Windows, but that could be a separate bug.


@dhiltgen commented on GitHub (Aug 9, 2024):

What version of Windows are you running? Is there anything special about the filesystem where you're storing models? Did 0.3.3 (or earlier) pull correctly?

We recently fixed a bug relating to how we handle sparse files, so perhaps that has an unintended side effect.
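
For context on the error text: "Incorrect function." is the standard Windows system message for error code 1, ERROR_INVALID_FUNCTION, which is what a filesystem typically returns for an operation it does not support. A minimal Go snippet (illustrative only, not part of ollama) showing the mapping:

```go
//go:build windows

// Illustrative only: print the Windows system message for error code 1
// (ERROR_INVALID_FUNCTION), the error a filesystem returns for an
// operation it does not support.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// syscall.Errno(1) is ERROR_INVALID_FUNCTION.
	fmt.Println(syscall.Errno(1)) // prints: Incorrect function.
}
```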


@chadwickhar08 commented on GitHub (Aug 9, 2024):

Hello,

I am running Windows 11. Nothing special about the external file system other than I believe it is exFAT. Previous versions of Ollama worked flawlessly, though I'm not entirely sure whether llama3.1:405b would have pulled correctly, since it is a newer model that coincides with the newer Ollama release. I have pulled phi3 in the past, but now, trying to pull a model I don't have, phi3:mini, the error noted above appears, which I had never encountered before.

Thanks
-Chad H.
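
One quick way to confirm which filesystem the external drive actually uses is to query the volume directly. A small illustrative Go program (not part of ollama; the drive letter E: is a placeholder for the external drive):

```go
//go:build windows

// Illustrative check: print the filesystem name (e.g. "exFAT" or "NTFS")
// of the volume where the models are stored.
package main

import (
	"fmt"

	"golang.org/x/sys/windows"
)

func main() {
	// Placeholder drive letter; point this at the external drive.
	root, err := windows.UTF16PtrFromString(`E:\`)
	if err != nil {
		panic(err)
	}

	fsName := make([]uint16, windows.MAX_PATH+1)
	var serial, maxComponentLen, fsFlags uint32
	if err := windows.GetVolumeInformation(root, nil, 0,
		&serial, &maxComponentLen, &fsFlags,
		&fsName[0], uint32(len(fsName))); err != nil {
		panic(err)
	}

	// Prints "exFAT" for an exFAT-formatted external drive.
	fmt.Println("filesystem:", windows.UTF16ToString(fsName))
}
```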


@dhiltgen commented on GitHub (Aug 9, 2024):

@chadwickhar08 thanks for that info. Yes, exFAT does not support sparse files, so that explains the regression. We'll get this fixed in the next patch release. Sorry about that.

https://learn.microsoft.com/en-us/windows/win32/fileio/filesystem-functionality-comparison#functionality
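
One way to handle this is to treat "sparse not supported" as non-fatal: attempt to mark the download target sparse, and if the filesystem rejects the request, fall back to a plain (dense) file instead of aborting the pull. A minimal sketch of that fallback (illustrative only, not the actual ollama change; it assumes the sparse flag is set with FSCTL_SET_SPARSE via DeviceIoControl):

```go
//go:build windows

// Sketch of a sparse-file fallback: mark the download target sparse when
// the filesystem supports it, and silently continue with a dense file
// when it does not (e.g. exFAT fails with ERROR_INVALID_FUNCTION, shown
// to the user as "Incorrect function.").
package main

import (
	"errors"
	"fmt"
	"os"

	"golang.org/x/sys/windows"
)

// trySetSparse asks the filesystem to mark f as sparse. Filesystems that
// do not support sparse files fail the ioctl with ERROR_INVALID_FUNCTION,
// which is treated here as "skip sparse allocation", not as an error.
func trySetSparse(f *os.File) error {
	var bytesReturned uint32
	err := windows.DeviceIoControl(
		windows.Handle(f.Fd()), windows.FSCTL_SET_SPARSE,
		nil, 0, nil, 0, &bytesReturned, nil)
	if errors.Is(err, windows.ERROR_INVALID_FUNCTION) {
		return nil // filesystem has no sparse support; use a dense file
	}
	return err
}

func main() {
	// Hypothetical partial-download target used only for illustration.
	f, err := os.Create("blob.partial")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	if err := trySetSparse(f); err != nil {
		panic(err)
	}
	fmt.Println("destination ready; pull can proceed")
}
```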


@chadwickhar08 commented on GitHub (Aug 9, 2024):

No worries, thanks!

-C


@igorschlum commented on GitHub (Aug 9, 2024):

@chadwickhar08 how much RAM or VRAM do you have on your PC?
A new version of llama3.1 is being uploaded today.


@chadwickhar08 commented on GitHub (Aug 9, 2024):

It is an RTX 3060 (12 GB) and 16 GB of RAM. I have absolutely no intention of running the 405b model, as I do not possess the hardware for it; rather, I am attempting to pull it as a means of storage.


@chadwickhar08 commented on GitHub (Aug 10, 2024):

That was quick! Very cool! Thanks Igor and everyone involved, super impressive.

Take care,
-C

Reference: github-starred/ollama#3921