[GH-ISSUE #6887] temperature for reader-lm should be 0 #30116

Open
opened 2026-04-22 09:35:21 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @rick-github on GitHub (Sep 20, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6887

reader-lm converts HTML to Markdown but with the default temperature, it hallucinates content: https://github.com/ollama/ollama/issues/6875. Setting temperature to zero appears to resolve this. This would be nice to have in the model config in the ollama library.
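As a sketch of what that library change might look like, here is a minimal Modelfile that pins the sampling temperature (a hedged example, not the actual library config — the base model name assumes the published `reader-lm` tag):

```
FROM reader-lm
PARAMETER temperature 0
```

Users can apply this locally today with `ollama create reader-lm-t0 -f Modelfile` while waiting for the upstream default to change.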

GiteaMirror added the model label 2026-04-22 09:35:21 -05:00

@jmorganca commented on GitHub (Sep 20, 2024):

Thanks @rick-github !


@whogben commented on GitHub (Oct 5, 2024):

@rick-github were you able to get reader-lm to respond when setting temperature?

I can get reader-lm to respond when I don't set temperature, but as soon as I add the temperature parameter to the request my client freezes and ollama seemingly never responds or never completes its response, which is super strange. I've tested my setup with many other ollama models with no problem, so I don't think the issue is on the client side.

Weirdly, the 0.5b version works, with no other changes besides switching the model parameter between :latest and :0.5b.

Failing / never-ending payload:

```
{
	"messages": [
		{
			"content": "<meta property=\"og:description\" content=\"Introducing the FeatherS3 -&amp;nbsp;The pro ESP32-S3 Development Board in the Feather Format now with a u.FL connector&amp;nbsp;instead of an onboard antenna, for the times when you want to connect ...\">",
			"role": "user"
		}
	],
	"model": "reader-lm:latest",
	"options": {
		"temperature": 0
	},
	"stream": false
}
```

Working payload #1 (only model change):

```
{
	"messages": [
		{
			"content": "<meta property=\"og:description\" content=\"Introducing the FeatherS3 -&amp;nbsp;The pro ESP32-S3 Development Board in the Feather Format now with a u.FL connector&amp;nbsp;instead of an onboard antenna, for the times when you want to connect ...\">",
			"role": "user"
		}
	],
	"model": "reader-lm:0.5b",
	"options": {
		"temperature": 0
	},
	"stream": false
}
```

Working payload #2 (only remove temperature):

```
{
	"messages": [
		{
			"content": "<meta property=\"og:description\" content=\"Introducing the FeatherS3 -&amp;nbsp;The pro ESP32-S3 Development Board in the Feather Format now with a u.FL connector&amp;nbsp;instead of an onboard antenna, for the times when you want to connect ...\">",
			"role": "user"
		}
	],
	"model": "reader-lm:latest",
	"options": {
	},
	"stream": false
}
```
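For anyone reproducing this comparison programmatically, a small sketch that builds the three payload variants for ollama's `/api/chat` endpoint (the helper name is made up for illustration; only the payload shape comes from the report above):

```python
import json

def build_chat_payload(model, content, temperature=None):
    """Build an ollama /api/chat payload; options stays empty unless a temperature is given."""
    options = {} if temperature is None else {"temperature": temperature}
    return {
        "messages": [{"content": content, "role": "user"}],
        "model": model,
        "options": options,
        "stream": False,
    }

html = '<meta property="og:description" content="Introducing the FeatherS3 ...">'

failing  = build_chat_payload("reader-lm:latest", html, temperature=0)  # hangs per the report
working1 = build_chat_payload("reader-lm:0.5b", html, temperature=0)    # works
working2 = build_chat_payload("reader-lm:latest", html)                 # works

print(json.dumps(failing, indent=2))
```

Posting each dict to `http://localhost:11434/api/chat` would reproduce the three cases above.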

@rick-github commented on GitHub (Oct 5, 2024):

Please add to #6875

Reference: github-starred/ollama#30116