[GH-ISSUE #10885] Am I creating a quantized model correctly - resulting model outputs random characters / gibberish? #53666

Closed
opened 2026-04-29 04:26:14 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @lazydog2 on GitHub (May 28, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10885

What is the issue?

If I import the Llama 3.1 8B Instruct safetensors (https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct excluding the original folder) using the following command, the model works as expected:
ollama create mymodel -f Modelfile
If I instead create either a q8_0 or q4_k_m quantized model using the following command, the model outputs a sequence of random characters / gibberish:
ollama create mymodel -f Modelfile --quantize <quant>
Where Modelfile in both cases is simply:
FROM .

Relevant log output


OS

Docker

GPU

Nvidia

CPU

AMD

Ollama version

0.7.1

GiteaMirror added the bug label 2026-04-29 04:26:14 -05:00

@rick-github commented on GitHub (May 28, 2025):

$ ollama -v
ollama version is 0.7.1
$ cd meta-llama/Llama-3.1-8B-Instruct
$ echo FROM . > Modelfile
$ ollama create Llama-3.1-8B-Instruct:fp16 -f Modelfile
gathering model components 
copying file sha256:2b1879f356aed350030bb40eb45ad362c89d9891096f79a3ab323d3ba5607668 100% 
copying file sha256:09d433f650646834a83c580877bd60c6d1f88f7755305c12576b5c7058f9af15 100% 
copying file sha256:fc1cdddd6bfa91128d6e94ee73d0ce62bfcdb7af29e978ddcab30c66ae9ea7fa 100% 
copying file sha256:189fb0c0d7fd8a527db217c0a60a0e013f0394cd8800f9697a666a9e75e5f7fd 100% 
copying file sha256:29e4c210b0d6ac178b16b2a255a568bdb23b581e50ca1ef6a6d071dd85704e6e 100% 
copying file sha256:177c7b61e616fecb84c17ce0591acb92c6c4d60e9ac5ababfb940ff23bbcd424 100% 
copying file sha256:6f38c73729248f6c127296386e3cdde96e254636cc58b4169d3fd32328d9a8ec 100% 
copying file sha256:79e3e522635f3171300913bb421464a87de6222182a0570b9b2ccba2a964b2b4 100% 
copying file sha256:146776fce3f6db1103aa6f249e65ee5544c5923ce6f971b092eee79aa6e5d37b 100% 
copying file sha256:92ecfe1a2414458b4821ac8c13cf8cb70aed66b5eea8dc5ad9eeb4ff309d6d7b 100% 
converting model 
using existing layer sha256:e87bc8b1fcdbd0ea33aab51ca6685af80afccaed148a709c922c7ea386909a3e 
using autodetected template llama3-instruct 
using existing layer sha256:56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb 
writing manifest 
success 
$ ollama create -q q4_k_m Llama-3.1-8B-Instruct:q4_k_m
gathering model components 
copying file sha256:2b1879f356aed350030bb40eb45ad362c89d9891096f79a3ab323d3ba5607668 100% 
copying file sha256:29e4c210b0d6ac178b16b2a255a568bdb23b581e50ca1ef6a6d071dd85704e6e 100% 
copying file sha256:09d433f650646834a83c580877bd60c6d1f88f7755305c12576b5c7058f9af15 100% 
copying file sha256:fc1cdddd6bfa91128d6e94ee73d0ce62bfcdb7af29e978ddcab30c66ae9ea7fa 100% 
copying file sha256:92ecfe1a2414458b4821ac8c13cf8cb70aed66b5eea8dc5ad9eeb4ff309d6d7b 100% 
copying file sha256:6f38c73729248f6c127296386e3cdde96e254636cc58b4169d3fd32328d9a8ec 100% 
copying file sha256:177c7b61e616fecb84c17ce0591acb92c6c4d60e9ac5ababfb940ff23bbcd424 100% 
copying file sha256:146776fce3f6db1103aa6f249e65ee5544c5923ce6f971b092eee79aa6e5d37b 100% 
copying file sha256:189fb0c0d7fd8a527db217c0a60a0e013f0394cd8800f9697a666a9e75e5f7fd 100% 
copying file sha256:79e3e522635f3171300913bb421464a87de6222182a0570b9b2ccba2a964b2b4 100% 
converting model 
quantizing F16 model to Q4_K_M 100% ▕█████████████████████████████████████████████████████████▏  16 GB                         
verifying conversion 
creating new layer sha256:07eb3e10d4fecca98683fbfcbd09f8ab724d3b78ea7f54f029c5efacd0cc0ffa 
using autodetected template llama3-instruct 
using existing layer sha256:56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb 
writing manifest 
success 
$ ollama create -q q8_0 Llama-3.1-8B-Instruct:q8_0 -f Modelfile 
gathering model components 
copying file sha256:92ecfe1a2414458b4821ac8c13cf8cb70aed66b5eea8dc5ad9eeb4ff309d6d7b 100% 
copying file sha256:79e3e522635f3171300913bb421464a87de6222182a0570b9b2ccba2a964b2b4 100% 
copying file sha256:2b1879f356aed350030bb40eb45ad362c89d9891096f79a3ab323d3ba5607668 100% 
copying file sha256:fc1cdddd6bfa91128d6e94ee73d0ce62bfcdb7af29e978ddcab30c66ae9ea7fa 100% 
copying file sha256:09d433f650646834a83c580877bd60c6d1f88f7755305c12576b5c7058f9af15 100% 
copying file sha256:189fb0c0d7fd8a527db217c0a60a0e013f0394cd8800f9697a666a9e75e5f7fd 100% 
copying file sha256:29e4c210b0d6ac178b16b2a255a568bdb23b581e50ca1ef6a6d071dd85704e6e 100% 
copying file sha256:177c7b61e616fecb84c17ce0591acb92c6c4d60e9ac5ababfb940ff23bbcd424 100% 
copying file sha256:6f38c73729248f6c127296386e3cdde96e254636cc58b4169d3fd32328d9a8ec 100% 
copying file sha256:146776fce3f6db1103aa6f249e65ee5544c5923ce6f971b092eee79aa6e5d37b 100% 
converting model 
quantizing F16 model to Q8_0 100% ▕████████████████████████████████████████████████████████████▏  16 GB                         
verifying conversion 
creating new layer sha256:34d97ddf055a356e9f54636e004fb6b67cf65f451ba878efcdc0e9c16293ca29 
using autodetected template llama3-instruct 
using existing layer sha256:56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb 
writing manifest 
success 
$ ollama run Llama-3.1-8B-Instruct:fp16 hello
Hello! How can I assist you today?

$ ollama run Llama-3.1-8B-Instruct:q4_k_m hello
Hello! How can I assist you today?

$ ollama run Llama-3.1-8B-Instruct:q8_0 hello
Hello! Is there something I can help you with?

@lazydog2 commented on GitHub (May 28, 2025):

Thanks @rick-github, that looks like it should work. Are you able to show the contents of your meta-llama/Llama-3.1-8B-Instruct directory? Also, is it possible something about the environment I'm using is causing the problem e.g. could the quantized model creation be affected by the combination of CPU and GPU I'm using? Do you have any thoughts as to why creating a non-quantized model works but creating a quantized one doesn't?


@rick-github commented on GitHub (May 28, 2025):

I downloaded the model with huggingface-cli and did chmod 000 original to prevent ollama from trying to read it.

$ ls -l
total 15693196
-rw-rw-r-- 1 rick rick        855 May 28 14:39 config.json
-rw-rw-r-- 1 rick rick        184 May 28 14:39 generation_config.json
-rw-rw-r-- 1 rick rick       7627 May 28 14:39 LICENSE
-rw-rw-r-- 1 rick rick 4976698672 May 28 14:43 model-00001-of-00004.safetensors
-rw-rw-r-- 1 rick rick 4999802720 May 28 14:45 model-00002-of-00004.safetensors
-rw-rw-r-- 1 rick rick 4915916176 May 28 14:44 model-00003-of-00004.safetensors
-rw-rw-r-- 1 rick rick 1168138808 May 28 14:41 model-00004-of-00004.safetensors
-rw-rw-r-- 1 rick rick          7 May 28 15:24 Modelfile
-rw-rw-r-- 1 rick rick      23950 May 28 14:39 model.safetensors.index.json
d--------- 2 rick rick       4096 May 28 14:48 original
-rw-rw-r-- 1 rick rick      44044 May 28 14:39 README.md
-rw-rw-r-- 1 rick rick        296 May 28 14:39 special_tokens_map.json
-rw-rw-r-- 1 rick rick      55351 May 28 14:39 tokenizer_config.json
-rw-rw-r-- 1 rick rick    9085657 May 28 14:39 tokenizer.json
-rw-rw-r-- 1 rick rick       4691 May 28 14:39 USE_POLICY.md

You can check to see if the quantization on your system matches mine by looking at the sha256 of the files created:

  • e87bc8b1fcdbd0ea33aab51ca6685af80afccaed148a709c922c7ea386909a3e : fp16
  • 07eb3e10d4fecca98683fbfcbd09f8ab724d3b78ea7f54f029c5efacd0cc0ffa : q4_k_m
  • 34d97ddf055a356e9f54636e004fb6b67cf65f451ba878efcdc0e9c16293ca29 : q8_0

Change /root to whatever the base directory for ollama is:

$ ls -l /root/{.ollama/models/blobs/sha256-e87bc8b1fcdbd0ea33aab51ca6685af80afccaed148a709c922c7ea386909a3e,.ollama/models/blobs/sha256-07eb3e10d4fecca98683fbfcbd09f8ab724d3b78ea7f54f029c5efacd0cc0ffa,.ollama/models/blobs/sha256-34d97ddf055a356e9f54636e004fb6b67cf65f451ba878efcdc0e9c16293ca29}
-rw-r--r-- 1 root root  4921251680 May 28 15:21 /root/.ollama/models/blobs/sha256-07eb3e10d4fecca98683fbfcbd09f8ab724d3b78ea7f54f029c5efacd0cc0ffa
-rw-r--r-- 1 root root  8541288288 May 28 15:45 /root/.ollama/models/blobs/sha256-34d97ddf055a356e9f54636e004fb6b67cf65f451ba878efcdc0e9c16293ca29
-rw-r--r-- 1 root root 16069408544 May 28 15:03 /root/.ollama/models/blobs/sha256-e87bc8b1fcdbd0ea33aab51ca6685af80afccaed148a709c922c7ea386909a3e
$ sha256sum /root/{.ollama/models/blobs/sha256-e87bc8b1fcdbd0ea33aab51ca6685af80afccaed148a709c922c7ea386909a3e,.ollama/models/blobs/sha256-07eb3e10d4fecca98683fbfcbd09f8ab724d3b78ea7f54f029c5efacd0cc0ffa,.ollama/models/blobs/sha256-34d97ddf055a356e9f54636e004fb6b67cf65f451ba878efcdc0e9c16293ca29}
e87bc8b1fcdbd0ea33aab51ca6685af80afccaed148a709c922c7ea386909a3e  /root/.ollama/models/blobs/sha256-e87bc8b1fcdbd0ea33aab51ca6685af80afccaed148a709c922c7ea386909a3e
07eb3e10d4fecca98683fbfcbd09f8ab724d3b78ea7f54f029c5efacd0cc0ffa  /root/.ollama/models/blobs/sha256-07eb3e10d4fecca98683fbfcbd09f8ab724d3b78ea7f54f029c5efacd0cc0ffa
34d97ddf055a356e9f54636e004fb6b67cf65f451ba878efcdc0e9c16293ca29  /root/.ollama/models/blobs/sha256-34d97ddf055a356e9f54636e004fb6b67cf65f451ba878efcdc0e9c16293ca29

Also verify that the Modelfile (template, parameters) is the same for all models:

$ for q in fp16 q8_0 q4_k_m ; do ollama show --modelfile Llama-3.1-8B-Instruct:$q | grep -v FROM | sha256sum ; done
5128e51a45a4b5e6488f08a21dd185bb2a1b443148dea67d9f19bc04665a59d5  -
5128e51a45a4b5e6488f08a21dd185bb2a1b443148dea67d9f19bc04665a59d5  -
5128e51a45a4b5e6488f08a21dd185bb2a1b443148dea67d9f19bc04665a59d5  -

If these check out then it's not a model problem. Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.


@lazydog2 commented on GitHub (May 30, 2025):

Thanks for your help @rick-github, I resolved my issue. When I compared the file hashes in the output of your "ollama create" command against mine, I noticed one was missing on my side; it corresponded to "tokenizer_config.json". Adding that file and recreating the model fixed the gibberish output.
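For anyone hitting the same symptom, a quick pre-flight check can catch a missing tokenizer or config file before converting. This is a hedged sketch: the file list below is an assumption drawn from the directory listing earlier in this thread, not an official requirement of ollama's importer, and it does not verify the safetensors shards named in model.safetensors.index.json.

```shell
# Sketch: warn if files the safetensors importer appears to need are absent.
# The list is assumed from this thread's `ls -l` output, not an official manifest.
check_model_dir() {
  dir="${1:-.}"
  missing=""
  for f in config.json tokenizer.json tokenizer_config.json \
           special_tokens_map.json model.safetensors.index.json; do
    [ -f "$dir/$f" ] || missing="$missing $f"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
  else
    echo "ok"
  fi
}

# Usage: run from the model directory before `ollama create`.
check_model_dir .
```

If anything is reported missing, re-download it before running ollama create; as this thread shows, a quantized conversion can succeed yet produce gibberish when a tokenizer file is absent.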

Reference: github-starred/ollama#53666