[GH-ISSUE #4709] Code models like codestral should have a lower temperature #65005

Open
opened 2026-05-03 19:29:45 -05:00 by GiteaMirror · 6 comments

Originally created by @DuckyBlender on GitHub (May 29, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4709

Lowering the default temperature makes the generated code more correct.

GiteaMirror added the feature request label 2026-05-03 19:29:46 -05:00

@ipfans commented on GitHub (May 30, 2024):

You can set it manually: `/set parameter temperature 0.0`


@DuckyBlender commented on GitHub (May 30, 2024):

Yeah but it should be the default


@matbeedotcom commented on GitHub (May 31, 2024):

> Yeah but it should be the default

Agreed.


@brnrc commented on GitHub (Jun 26, 2024):

Hi @DuckyBlender 👋, I'm curious how you arrived at this conclusion. Was it just through normal usage, or did you run a benchmark?


@DuckyBlender commented on GitHub (Jun 27, 2024):

I didn't do any testing, but I know how temperature works: on coding tasks it only harms the output quality of the LLM. The model can get distracted and go off track because it selected a less-optimal token.


@brnrc commented on GitHub (Jul 3, 2024):

Makes complete sense to me, @DuckyBlender, thanks for the reply 😊.

While here, could you please share your thoughts about this issue I opened about codestral's default template?
https://github.com/ollama/ollama/issues/5403
