[GH-ISSUE #11870] Intelligent-Internet Search 4B models #69938

Closed
opened 2026-05-04 19:49:26 -05:00 by GiteaMirror · 2 comments
Originally created by @mihairaduonofrei on GitHub (Aug 12, 2025). Original GitHub issue: https://github.com/ollama/ollama/issues/11870

Please add support for the Intelligent-Internet Search 4B models: https://huggingface.co/Intelligent-Internet/II-Search-4B and https://huggingface.co/Intelligent-Internet/II-Search-CIR-4B
GiteaMirror added the model label 2026-05-04 19:49:26 -05:00

@rick-github commented on GitHub (Aug 12, 2025):

These are fine-tuned qwen3 models, so you can just [import](https://github.com/ollama/ollama/blob/main/docs/import.md#Importing-a-model-from-Safetensors-weights) them.
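For reference, the Safetensors import path described in the linked docs looks roughly like this (the model name `ii-search-4b` and local paths are illustrative, not from the issue):

```shell
# Download the Safetensors weights locally (path is illustrative)
huggingface-cli download Intelligent-Internet/II-Search-4B --local-dir ./II-Search-4B

# Point a Modelfile at the weights directory, then create the model
printf 'FROM ./II-Search-4B\n' > Modelfile
ollama create ii-search-4b -f Modelfile
```

Since these are qwen3 fine-tunes, no custom template or conversion step should be needed for this route; `ollama create` handles the Safetensors-to-GGUF conversion itself.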


@mihairaduonofrei commented on GitHub (Aug 12, 2025):

Thank you! After building llama.cpp:

```shell
huggingface-cli download Intelligent-Internet/II-Search-4B --local-dir ./II-Search-4B
python convert_hf_to_gguf.py ./II-Search-4B --outfile ./II-Search-4B.gguf --outtype f16
ollama create ii-search-4b -f <(echo 'FROM ./II-Search-4B.gguf')
ollama create ii-search-4b_q8 --quantize q8_0 -f <(echo 'FROM ./II-Search-4B.gguf')
```
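A side note on the commands above: `<(echo ...)` is bash/zsh process substitution and will fail in plain `sh`. An equivalent portable variant writes an actual Modelfile (the file name `Modelfile` here is arbitrary):

```shell
# Write a minimal Modelfile pointing at the converted GGUF
printf 'FROM ./II-Search-4B.gguf\n' > Modelfile

# Create the f16 model and a q8_0 quantized variant from the same Modelfile
ollama create ii-search-4b -f Modelfile
ollama create ii-search-4b_q8 --quantize q8_0 -f Modelfile
```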

Reference: github-starred/ollama#69938