[GH-ISSUE #8525] Ollama Linux Service vs. Ollama Serve (Changing Ports) #5496

Closed
opened 2026-04-12 16:43:52 -05:00 by GiteaMirror · 3 comments

Originally created by @ghost on GitHub (Jan 22, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8525

Default Setup:
Following the installation guide, Ollama works without issues when hosted on the default port.

Changing Address to 0.0.0.0:
I was able to successfully change the address to 0.0.0.0, which works fine. However, when trying to change the port, I encountered issues.

Modifying the Service File:
When I modify the systemd service file by adding commands above the ############################## section, I can successfully set the address to 0.0.0.0. However, after trying to add a custom port, I end up in "debug mode." Even after canceling that debug mode and re-running the default port setup, ollama -v works for the default port but not for the custom port, with no apparent difference between the two other than the address listed. In a new terminal, ollama -v is not recognized and it asks whether Ollama is running on port 3001, even though the other terminal clearly shows it is. Yet when I attempt to run OLLAMA_HOST=0:0:0:0:3001 from that terminal, it says the port is already in use. In neither case can I get to a "run" command: in one case I am not allowed further input, and in the other the command is not recognized.
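For reference, this is roughly the drop-in procedure I have been attempting, following the Linux install notes (0.0.0.0:3001 is just the custom bind I'm testing, not anything from the docs):

```sh
# Open an override drop-in for the service; systemd keeps whatever is written
# ABOVE the "lines below this comment will be discarded" marker.
sudo systemctl edit ollama.service

# Contents I put above that marker:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:3001"

# Apply the change and restart the service.
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Show the unit together with any drop-ins, to confirm what is actually in effect.
systemctl cat ollama.service
```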

Environment Variable Overrides:
When I try to override the port with OLLAMA_HOST=0.0.0.0:3001 ollama serve, the command gives identical debug output but reports a different listen address, [::]:3001. Opening another terminal and running ollama seems to indicate that the service is not running. However, when I attempt to start Ollama again on the same port, it says the port is already in use.
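What I have not been able to rule out is that the systemd service is still holding the port when I run the foreground command. A minimal sketch of the check I have been trying (the port numbers are my own test values):

```sh
# Stop the background service so a foreground serve is not fighting it.
sudo systemctl stop ollama

# Check whether anything is still listening on the default or the custom port.
ss -ltnp | grep -E '11434|3001'

# Then a foreground run on the custom port should bind cleanly.
OLLAMA_HOST=0.0.0.0:3001 ollama serve
```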

Temporary File Confusion:
After making edits with systemctl, I tried to remove my additions, since the explanation implies that anything above the hashtag symbols will be added to the .service file. When I do so, the edit fails and says it is an empty temporary file.
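In case it helps anyone reading later: my understanding is that systemctl edit aborts with that message if the saved override is empty, and that an earlier drop-in has to be removed explicitly. A sketch of what I tried, assuming the stock unit name:

```sh
# List the unit plus any drop-in overrides currently applied.
systemctl cat ollama.service

# Discard all local overrides and go back to the packaged unit file.
sudo systemctl revert ollama.service

sudo systemctl daemon-reload
sudo systemctl restart ollama
```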

Lack of Clear Documentation:
The documentation around configuring Ollama to run on a custom port is unclear. It seems like some people suggest not including the port in the OLLAMA_HOST variable, while others recommend adding it. I don't see a clear explanation of how to configure the service properly for a custom setup.

Confusion Over Service and Manual Commands:
I'm also confused about the relationship between the system service and the manual command ollama serve. It seems like the service is already running the process on the port I specified in the .service file, but when I try to call ollama manually, it either doesn't respond or conflicts with the service. There's advice suggesting it is unnecessary to modify the service directly, but this has led to further confusion, as I cannot get the ollama command to behave as expected.
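My working assumption (unconfirmed) is that OLLAMA_HOST matters on both sides: the server uses it to choose its bind address, and the CLI uses it to choose which server to contact, so a fresh terminal without the variable still talks to 127.0.0.1:11434. A sketch of how I have been testing that, with 3001 as my example port:

```sh
# In a new terminal, point the CLI at the custom port explicitly;
# without OLLAMA_HOST the client defaults to 127.0.0.1:11434.
OLLAMA_HOST=127.0.0.1:3001 ollama -v
OLLAMA_HOST=127.0.0.1:3001 ollama list
```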

Unclear Chronological Setup Process:
I would like a clearer understanding of the intended setup process. Specifically, I would like to know how the service is supposed to be configured on Linux when using a custom port, and how it interacts with ollama serve and other commands.

Request for Help:

I would appreciate any clarification or guidance on the following:

- How to properly configure the Ollama service to run on a custom port.
- What the expected interaction is between system services (e.g., systemctl) and manual commands like ollama serve.
- Whether modifying the .service file directly is appropriate, and if so, how to ensure my changes persist.
- Any additional resources or documentation that can provide a clearer explanation of these steps.

I would be happy to contribute to the documentation once I better understand the process, as I believe clearer guidance is needed, especially for production-focused setups.


@ghost commented on GitHub (Jan 22, 2025):

After further investigation, it appears that when I change the address to listen on 0.0.0.0 using systemctl and then run ollama serve with no custom port specified, I get the output below in debug mode. The output suggests it is forcing a model load of some sort and setting up an API for it, even though I don't specify a model; typically I run ollama serve and then ollama run <model>. It also takes much longer, to the point where I thought it was hanging, and there is no way to tell progress.

```
2025/01/21 16:42:13 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/khammitt/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-01-21T16:42:14.004-06:00 level=INFO source=images.go:432 msg="total blobs: 0"
time=2025-01-21T16:42:14.005-06:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.

  • using env: export GIN_MODE=release
  • using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-01-21T16:42:14.005-06:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-01-21T16:42:14.005-06:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]"
time=2025-01-21T16:42:14.005-06:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-21T16:42:14.242-06:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-473729a7-a78c-bd5a-eea8-9888394b121a library=cuda variant=v12 compute=8.9 driver=12.7 name="NVIDIA GeForce RTX 4090" total="23.5 GiB" available="23.1 GiB"
[GIN] 2025/01/21 - 16:43:58 | 200 | 437.492µs | 127.0.0.1 | GET "/api/version"
[GIN] 2025/01/21 - 16:45:29 | 200 | 13.144µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/01/21 - 16:45:29 | 404 | 445.063µs | 127.0.0.1 | POST "/api/show"
time=2025-01-21T16:45:30.408-06:00 level=INFO source=download.go:175 msg="downloading 5c56bb0256a2 in 16 100 MB part(s)"
time=2025-01-21T16:45:45.608-06:00 level=INFO source=download.go:175 msg="downloading f76a906816c4 in 1 1.4 KB part(s)"
time=2025-01-21T16:45:46.791-06:00 level=INFO source=download.go:175 msg="downloading f7b956e70ca3 in 1 69 B part(s)"
time=2025-01-21T16:45:48.015-06:00 level=INFO source=download.go:175 msg="downloading 492069a62c25 in 1 11 KB part(s)"
time=2025-01-21T16:45:49.209-06:00 level=INFO source=download.go:175 msg="downloading cc40ff1e8045 in 1 491 B part(s)"
[GIN] 2025/01/21 - 16:45:51 | 200 | 21.446445295s | 127.0.0.1 | POST "/api/pull"
[GIN] 2025/01/21 - 16:45:51 | 200 | 3.244961ms | 127.0.0.1 | POST "/api/show"
time=2025-01-21T16:45:51.239-06:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/home/khammitt/.ollama/models/blobs/sha256-5c56bb0256a2c402e95282a29bb5cb747bb805eda0e14a84b1f6c594a297ec1a gpu=GPU-473729a7-a78c-bd5a-eea8-9888394b121a parallel=4 available=24856231936 required="3.0 GiB"
time=2025-01-21T16:45:51.422-06:00 level=INFO source=server.go:104 msg="system memory" total="28.8 GiB" free="24.8 GiB" free_swap="0 B"
time=2025-01-21T16:45:51.422-06:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[23.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.0 GiB" memory.required.partial="3.0 GiB" memory.required.kv="640.0 MiB" memory.required.allocations="[3.0 GiB]" memory.weights.total="2.0 GiB" memory.weights.repeating="1.9 GiB" memory.weights.nonrepeating="102.0 MiB" memory.graph.full="426.7 MiB" memory.graph.partial="426.7 MiB"
time=2025-01-21T16:45:51.423-06:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /home/khammitt/.ollama/models/blobs/sha256-5c56bb0256a2c402e95282a29bb5cb747bb805eda0e14a84b1f6c594a297ec1a --ctx-size 8192 --batch-size 512 --n-gpu-layers 41 --threads 8 --parallel 4 --port 41813"
time=2025-01-21T16:45:51.424-06:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-01-21T16:45:51.424-06:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-01-21T16:45:51.424-06:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-01-21T16:45:51.445-06:00 level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
time=2025-01-21T16:45:51.457-06:00 level=INFO source=runner.go:937 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=8
time=2025-01-21T16:45:51.457-06:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:41813"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 4090) - 23704 MiB free
llama_model_loader: loaded meta data with 40 key-value pairs and 362 tensors from /home/khammitt/.ollama/models/blobs/sha256-5c56bb0256a2c402e95282a29bb5cb747bb805eda0e14a84b1f6c594a297ec1a (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = granite
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Granite 3.1 2b Instruct
llama_model_loader: - kv 3: general.finetune str = instruct
llama_model_loader: - kv 4: general.basename str = granite-3.1
llama_model_loader: - kv 5: general.size_label str = 2B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.base_model.count u32 = 1
llama_model_loader: - kv 8: general.base_model.0.name str = Granite 3.1 2b Base
llama_model_loader: - kv 9: general.base_model.0.organization str = Ibm Granite
llama_model_loader: - kv 10: general.base_model.0.repo_url str = https://huggingface.co/ibm-granite/gr...
llama_model_loader: - kv 11: general.tags arr[str,3] = ["language", "granite-3.1", "text-gen...
llama_model_loader: - kv 12: granite.block_count u32 = 40
llama_model_loader: - kv 13: granite.context_length u32 = 131072
llama_model_loader: - kv 14: granite.embedding_length u32 = 2048
llama_model_loader: - kv 15: granite.feed_forward_length u32 = 8192
llama_model_loader: - kv 16: granite.attention.head_count u32 = 32
llama_model_loader: - kv 17: granite.attention.head_count_kv u32 = 8
llama_model_loader: - kv 18: granite.rope.freq_base f32 = 5000000.000000
llama_model_loader: - kv 19: granite.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 20: general.file_type u32 = 15
llama_model_loader: - kv 21: granite.vocab_size u32 = 49155
llama_model_loader: - kv 22: granite.rope.dimension_count u32 = 64
llama_model_loader: - kv 23: granite.attention.scale f32 = 0.015625
llama_model_loader: - kv 24: granite.embedding_scale f32 = 12.000000
llama_model_loader: - kv 25: granite.residual_scale f32 = 0.220000
llama_model_loader: - kv 26: granite.logit_scale f32 = 8.000000
llama_model_loader: - kv 27: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 28: tokenizer.ggml.pre str = refact
llama_model_loader: - kv 29: tokenizer.ggml.tokens arr[str,49155] = ["<|end_of_text|>", "<fim_prefix>", "...
llama_model_loader: - kv 30: tokenizer.ggml.token_type arr[i32,49155] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv 31: tokenizer.ggml.merges arr[str,48891] = ["Ġ Ġ", "ĠĠ ĠĠ", "ĠĠĠĠ ĠĠ...
llama_model_loader: - kv 32: tokenizer.ggml.bos_token_id u32 = 0
llama_model_loader: - kv 33: tokenizer.ggml.eos_token_id u32 = 0
llama_model_loader: - kv 34: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 35: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 36: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 37: tokenizer.chat_template str = {%- if messages[0]['role'] == 'system...
llama_model_loader: - kv 38: tokenizer.ggml.add_space_prefix bool = false
llama_model_loader: - kv 39: general.quantization_version u32 = 2
llama_model_loader: - type f32: 81 tensors
llama_model_loader: - type q8_0: 1 tensors
llama_model_loader: - type q4_K: 240 tensors
llama_model_loader: - type q6_K: 40 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.2826 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = granite
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 49155
llm_load_print_meta: n_merges = 48891
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 2048
llm_load_print_meta: n_layer = 40
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 512
llm_load_print_meta: n_embd_v_gqa = 512
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 8.0e+00
llm_load_print_meta: n_ff = 8192
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 5000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 3B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 2.53 B
llm_load_print_meta: model size = 1.46 GiB (4.95 BPW)
llm_load_print_meta: general.name = Granite 3.1 2b Instruct
llm_load_print_meta: BOS token = 0 '<|end_of_text|>'
llm_load_print_meta: EOS token = 0 '<|end_of_text|>'
llm_load_print_meta: UNK token = 0 '<|end_of_text|>'
llm_load_print_meta: PAD token = 0 '<|end_of_text|>'
llm_load_print_meta: LF token = 145 'Ä'
llm_load_print_meta: EOG token = 0 '<|end_of_text|>'
llm_load_print_meta: max token length = 512
llm_load_print_meta: f_embedding_scale = 12.000000
llm_load_print_meta: f_residual_scale = 0.220000
llm_load_print_meta: f_attention_scale = 0.015625
time=2025-01-21T16:45:51.696-06:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors: offloading 40 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 41/41 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 102.01 MiB
llm_load_tensors: CUDA0 model buffer size = 1495.30 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 5000000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 40, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 640.00 MiB
llama_new_context_with_model: KV self size = 640.00 MiB, K (f16): 320.00 MiB, V (f16): 320.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.78 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 544.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 20.01 MiB
llama_new_context_with_model: graph nodes = 1368
llama_new_context_with_model: graph splits = 2
time=2025-01-21T16:45:51.947-06:00 level=INFO source=server.go:594 msg="llama runner started in 0.52 seconds"
[GIN] 2025/01/21 - 16:45:51 | 200 | 940.694287ms | 127.0.0.1 | POST "/api/generate"
```

And the long hang with no terminal response happens on this line of output:

time=2025-01-21T16:42:14.242-06:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-473729a7-a78c-bd5a-eea8-9888394b121a library=cuda variant=v12 compute=8.9 driver=12.7 name="NVIDIA GeForce RTX 4090" total="23.5 GiB" available="23.1 GiB"

It doesn't do this with the default quick install, only when I edit the address, regardless of the port. I'm not sure why ollama serve is so much slower in one case than the other, since I haven't even run a model yet. I also see a 404 for /api/show. I don't think it depends on the port either, because 0.0.0.0:11434 has worked just fine; it seems I'm simply not setting the port correctly, because when I leave it alone everything works.
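When the foreground output is this confusing, what helped me tell the two servers apart was watching the service's own logs and asking each port for its version (3001 is just my test port):

```sh
# Follow the systemd-managed server's logs.
journalctl -u ollama -f

# Ask each listener which server is actually answering.
curl http://127.0.0.1:11434/api/version
curl http://127.0.0.1:3001/api/version
```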


@ghost commented on GitHub (Jan 22, 2025):

SOLUTION: It was downloading a larger model than I was using before, because on the new Ollama install I had not yet pulled the granite models. The debug output doesn't show real-time download progress for me, so it appears to hang while silently downloading sixteen 100 MB parts. I only found out when I accidentally left it running for about 30 minutes.
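Pulling the model explicitly in a separate terminal shows a progress bar, which would have made the silent download obvious; the tag below is only the granite model I happened to be testing, so substitute your own:

```sh
# Pull with a visible progress bar before calling run.
ollama pull granite3.1-dense:2b

# The model is then already local, so run starts without a silent download.
ollama run granite3.1-dense:2b
```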


@ghost commented on GitHub (Jan 22, 2025):

FINAL RECOMMENDATION THAT MAY HELP NEW USERS UNDERSTAND THE USE CASE:

We are on Red Hat Linux and subject to ZPA. We decided it's just not worth exposing Ollama that way, especially since we are going to use it with Python later. Instead, we loaded a local Jupyter notebook, left all Ollama settings at the default port and localhost, and after a pip install ollama you can access models through the localhost address in a few ways and expose them via a custom UI later, instead of putting it all on Ollama. Now that I think about it, I don't know why I assumed I needed to do it the other way.
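For the localhost-only setup we settled on, everything goes through the default address; a minimal sanity check from the notebook host looks roughly like this (the model name is only illustrative):

```sh
# Ollama left at its defaults: 127.0.0.1:11434.
curl http://127.0.0.1:11434/api/generate \
  -d '{"model": "granite3.1-dense:2b", "prompt": "Hello", "stream": false}'
```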

You can also access the Jupyter notebook on non-GUI Linux machines over SSH from a GUI-based machine on the same local network, which I hadn't considered either. I'll leave this here in the hope that my insanity helps someone else.

In the end, many of these packages fall back to downloading from the network, and for a casual hobbyist that is not a big deal. In environments with less freedom, the download may violate policy and gets chewed up, so we have to turn these fallbacks off or set obscure flags so they aren't triggered. I wonder whether outsourcing model loading is really the optimal path, though I assume that is difficult to do closer to the metal; it seems everyone wants to provide mirrors as a convenience, but they end up being more of a chore to turn off.

Reference: github-starred/ollama#5496