[GH-ISSUE #1166] Since Modelfiles don't work, how do we set default PARAMETER settings? #47102

Closed
opened 2026-04-28 03:09:51 -05:00 by GiteaMirror · 6 comments

Originally created by @oliverbob on GitHub (Nov 17, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1166

How can I set global settings for the current model without making a Modelfile? For example, how do I set parameters for the number of threads, GPUs, etc. for a user-chosen model?

Thanks.
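For readers with the same question: parameters can also be set without a Modelfile, either interactively (`/set parameter num_ctx 4096` inside `ollama run`) or per request through the REST API's `options` field, whose keys mirror the Modelfile PARAMETER names. A minimal sketch, assuming an Ollama server on localhost:11434 with codellama:7b already pulled:

```shell
#!/bin/sh
# Per-request parameters via the REST API's "options" field; the keys mirror
# Modelfile PARAMETER names (temperature, num_ctx, num_thread, num_gpu, ...).
payload='{
  "model": "codellama:7b",
  "prompt": "Why is the sky blue?",
  "options": { "temperature": 1, "num_ctx": 4096, "num_thread": 8 }
}'

# Validate the payload locally before sending it.
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload ok"

# Send it to a running server (uncomment once the server is up):
# curl http://localhost:11434/api/generate -d "$payload"
```

Options set this way apply only to that request; a Modelfile (or `/set parameter` followed by `/save`) is still needed to make them the model's defaults.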


@technovangelist commented on GitHub (Nov 24, 2023):

You said Modelfiles aren't working for you. Can you explain more about what you are seeing?


@oliverbob commented on GitHub (Nov 25, 2023):

FROM codellama:7b
# sets the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1
# sets the context window size to 4096, this controls how many tokens the LLM c>
PARAMETER num_ctx 4096

# sets a custom system prompt to specify the behavior of the chat assistant
SYSTEM You are Bob, acting as an assistant and you are a developer.

Location: /tmp

But when I do:
ollama create example -f ./Modelfile

couldn't open modelfile '/tmp/Modelfile'  Error: failed to open file: open /tmp/Modelfile: no such file or directory
root@ubuntu-g-2vcpu-8gb-sgp1-01:/tmp# 
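Since the error is "no such file or directory" rather than a parse failure, a first check is whether the file really exists under that exact, case-sensitive name in the current directory. A minimal reproduction sketch (using a temporary directory instead of /tmp for illustration):

```shell
#!/bin/sh
# Write the Modelfile, then confirm it exists under the exact (case-sensitive)
# name before invoking `ollama create`. A "no such file or directory" error
# usually means a name/case mismatch or a different working directory.
dir=$(mktemp -d)
cat > "$dir/Modelfile" <<'EOF'
FROM codellama:7b
PARAMETER temperature 1
PARAMETER num_ctx 4096
SYSTEM You are Bob, acting as an assistant and you are a developer.
EOF

[ -f "$dir/Modelfile" ] && echo "Modelfile found"
# ollama create example -f "$dir/Modelfile"   # run once the check passes
```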

lsb_release -a:

No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 22.04.3 LTS
Release:	22.04
Codename:	jammy

lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         40 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  8
  On-line CPU(s) list:   0-7
Vendor ID:               AuthenticAMD
  Model name:            DO-Premium-AMD
    CPU family:          23
    Model:               49
    Thread(s) per core:  1
    Core(s) per socket:  8
    Socket(s):           1
    Stepping:            0
    BogoMIPS:            3992.49
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mc
                         a cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall n
                         x mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid
                          extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx1
                         6 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c 
                         rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm
                          sse4a misalignsse 3dnowprefetch osvw topoext perfctr_c
                         ore ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rds
                         eed adx smap clflushopt clwb sha_ni xsaveopt xsavec xge
                         tbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip
                          rdpid
Virtualization features: 
  Virtualization:        AMD-V
  Hypervisor vendor:     KVM
  Virtualization type:   full
Caches (sum of all):     
  L1d:                   256 KiB (8 instances)
  L1i:                   256 KiB (8 instances)
  L2:                    4 MiB (8 instances)
  L3:                    16 MiB (1 instance)
NUMA:                    
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-7
Vulnerabilities:         
  Gather data sampling:  Not affected
  Itlb multihit:         Not affected
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Not affected
  Retbleed:              Mitigation; untrained return thunk; SMT disabled
  Spec rstack overflow:  Mitigation; SMT disabled
  Spec store bypass:     Vulnerable
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer
                          sanitization
  Spectre v2:            Mitigation; Retpolines, IBPB conditional, STIBP disable
                         d, RSB filling, PBRSB-eIBRS Not affected
  Srbds:                 Not affected
  Tsx async abort:       Not affected


I have read through all related topics in this repo about it (including docs/modelfile.md), but none of them were helpful in my case.


@jmorganca commented on GitHub (Nov 26, 2023):

Hi @oliverbob, which version of Ollama are you running? Recent versions have a fix for this permission-denied issue on Linux. Sorry you hit this issue.


@oliverbob commented on GitHub (Nov 27, 2023):

I'm using ollama version 0.1.10.


@oliverbob commented on GitHub (Dec 1, 2023):

When I download the latest version using the .sh file, it runs OK. One last question related to this: when I build with Go from the latest pull of this repo, will it reflect the changes/patches in the latest release (the same as the standard curl https://ollama.ai/install.sh | sh)?
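For reference, a hedged sketch of the from-source build roughly as the repo's development docs described it around v0.1.x (the `go generate` step may have changed since). The resulting binary reflects whichever commit is checked out, which is typically at or ahead of the tagged release the install script ships:

```shell
#!/bin/sh
# Sketch of building Ollama from source; assumes git and a recent Go toolchain
# are installed and that the machine has network access.
build_ollama() {
    git clone https://github.com/ollama/ollama.git &&
    cd ollama &&
    go generate ./... &&   # builds the bundled llama.cpp bindings (per repo docs at the time)
    go build . &&
    ./ollama --version
}
# build_ollama   # uncomment to actually run the build
echo "build steps defined"
```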


@technovangelist commented on GitHub (Dec 4, 2023):

Yes, we build the install from the repo. I will go ahead and close this issue now. If you think there is anything we left out, reopen and we can address. Thanks for being part of this great community.


Reference: github-starred/ollama#47102