[GH-ISSUE #648] Model Parameters Not Getting Set #288

Closed
opened 2026-04-12 09:49:46 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @fmackenzie on GitHub (Sep 29, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/648

Originally assigned to: @BruceMacD on GitHub.

From what I can tell, the parameters set in the Modelfile are not being applied properly. Taking the mario Modelfile as an example and adding an EMBED line and a few PARAMETER entries, the server output suggests that the PARAMETER values cannot be converted to the appropriate type, and thus are not actually being set as configured.

Here's the sample Modelfile:

```
FROM llama2

EMBED /data/ollama/data/sample-content/*.txt

PARAMETER temperature 0.8
# PARAMETER num_thread 2
PARAMETER num_ctx 4096
PARAMETER num_gpu 1

SYSTEM """
You are Mario from super mario bros, acting as an assistant.
"""
```

When the new model is created against the running server, the following log output indicates that the values are not being set as configured:

```
2023/09/29 12:23:06 types.go:234: could not convert model parameter num_ctx to int, skipped
2023/09/29 12:23:06 types.go:234: could not convert model parameter num_gpu to int, skipped
2023/09/29 12:23:06 types.go:247: could not convert model parameter temperature to float32, skipped
```
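A minimal sketch of why these messages can fire, assuming the parameter values arrive as native Go `int` (e.g. parsed directly from the Modelfile rather than decoded from JSON) while the type switch only handles `int64` and `float64`. `describeParam` is a hypothetical helper for illustration, not the actual ollama code:

```go
package main

import "fmt"

// describeParam mirrors the shape of the type switch in types.go: a value
// that is a native Go int matches neither int64 nor float64, so it falls
// through to the default branch and is reported as skipped.
func describeParam(key string, val interface{}) string {
	switch t := val.(type) {
	case int64:
		return fmt.Sprintf("set %s from int64 %d", key, t)
	case float64:
		return fmt.Sprintf("set %s from float64 %g", key, t)
	default:
		return fmt.Sprintf("could not convert model parameter %s to int, skipped", key)
	}
}

func main() {
	// An untyped integer literal stored in interface{} becomes a native int.
	fmt.Println(describeParam("num_ctx", 4096))        // falls through to default
	fmt.Println(describeParam("num_ctx", int64(4096))) // handled by the int64 case
}
```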
GiteaMirror added the bug label 2026-04-12 09:49:46 -05:00

@mchiang0610 commented on GitHub (Sep 30, 2023):

May I ask what version of ollama you're running (`ollama -v`), and whether you are running it with langchain? I believe the langchain integration passes null values for parameters that are not set, and those are ignored.


@fmackenzie commented on GitHub (Oct 1, 2023):

Sure. I'm using the latest build (main branch, compiled as of Sept 30) on Ubuntu 23.04. As a side note, I believe it is the log message itself that is incorrect.

The current code in types.go (around lines 227-237):

```go
case reflect.Int:
	switch t := val.(type) {
	case int64:
		field.SetInt(t)
	case float64:
		// when JSON unmarshals numbers, it uses float64, not int
		field.SetInt(int64(t))
	default:
		log.Printf("could not convert model parameter %v to int, skipped", key)
	}
```
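For context on the `float64` branch: when a parameter map is decoded from JSON into `interface{}` values, `encoding/json` represents every number as `float64`, including integer-looking ones, so a native `int` would only appear when a value is set directly in Go. A minimal demonstration (`decodeParams` is an illustrative helper, not ollama code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeParams unmarshals a JSON object into a generic map, the same shape
// a server would receive model parameters in.
func decodeParams(raw string) map[string]interface{} {
	var params map[string]interface{}
	if err := json.Unmarshal([]byte(raw), &params); err != nil {
		panic(err)
	}
	return params
}

func main() {
	params := decodeParams(`{"num_ctx": 4096, "temperature": 0.8}`)
	// Every JSON number decodes to float64, even 4096.
	fmt.Printf("num_ctx: %T, temperature: %T\n", params["num_ctx"], params["temperature"])
}
```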

However, it shouldn't be reporting that the parameter was skipped in this case, because the value is already a native Go int. I've updated my code to detect when we already have the correct type (so no conversion is needed), and it looks like this:

```go
case reflect.Int:
	switch t := val.(type) {
	case int:
		log.Printf("Using int parameter %v with value %v as provided", key, t)
	case int64:
		field.SetInt(t)
	case float64:
		// when JSON unmarshals numbers, it uses float64, not int
		field.SetInt(int64(t))
	default:
		log.Printf("could not convert model parameter %v to int, skipped", key)
		log.Printf("unknown type %s for %s", field.Kind(), key)
	}
```

This output now shows the following:

```
2023/10/01 07:43:35 images.go:317: [temperature] - 0.8
2023/10/01 07:43:35 images.go:317: [num_thread] - 4
2023/10/01 07:43:35 images.go:317: [num_ctx] - 8192
2023/10/01 07:43:35 images.go:317: [num_gpu] - 1
2023/10/01 07:43:35 images.go:317: [system] - 
2023/10/01 07:43:35 types.go:230: Using int parameter num_gpu with value 1 as provided
2023/10/01 07:43:35 types.go:251: Using float32 parameter temperature with value 0.8 as provided
2023/10/01 07:43:35 types.go:230: Using int parameter num_thread with value 4 as provided
2023/10/01 07:43:35 types.go:230: Using int parameter num_ctx with value 8192 as provided
2023/10/01 07:43:35 llama.go:313: starting llama runner
```

What I don't know is whether we are using the correct values further down the line, but I will investigate further.


@JoseConseco commented on GitHub (Oct 2, 2023):

I have a similar issue, e.g. num_gpu seems to be completely ignored. I did not test many of the other params.


@BruceMacD commented on GitHub (Oct 2, 2023):

@JoseConseco is this on macOS or Linux? macOS only supports 0/1 GPU (corresponding to CPU/Metal).


@JoseConseco commented on GitHub (Oct 2, 2023):

Linux. OK, it seems to work after all, my bad. I just had to kill the ollama server to make sure the new version with GPU layers was loaded.


@jtoy commented on GitHub (Oct 2, 2023):

this can be closed


Reference: github-starred/ollama#288