[PR #23285] [CLOSED] build(deps): bump sentence-transformers from 5.2.3 to 5.3.0 in /backend #50173

Closed
opened 2026-04-30 02:46:12 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/open-webui/open-webui/pull/23285
Author: @dependabot[bot]
Created: 4/1/2026
Status: Closed

Base: dev ← Head: dependabot/pip/backend/dev/sentence-transformers-5.3.0


📝 Commits (1)

  • 7e58c26 build(deps): bump sentence-transformers from 5.2.3 to 5.3.0 in /backend

📊 Changes

1 file changed (+1 addition, -1 deletion)

View changed files

📝 backend/requirements.txt (+1 -1)

📄 Description

Bumps sentence-transformers from 5.2.3 to 5.3.0.

Release notes

Sourced from sentence-transformers's releases.

v5.3.0 - Improved Contrastive Learning, New Losses, and Transformers v5 Compatibility

This minor version brings several improvements to contrastive learning: MultipleNegativesRankingLoss now supports alternative InfoNCE formulations (symmetric, GTE-style) and optional hardness weighting for harder negatives. Two new losses are introduced, GlobalOrthogonalRegularizationLoss for embedding space regularization and CachedSpladeLoss for memory-efficient SPLADE training. The release also adds a faster hashed batch sampler, fixes GroupByLabelBatchSampler for triplet losses, and ensures full compatibility with the latest Transformers v5 versions.

Install this version with

# Training + Inference
pip install sentence-transformers[train]==5.3.0

# Inference only, use one of:
pip install sentence-transformers==5.3.0
pip install sentence-transformers[onnx-gpu]==5.3.0
pip install sentence-transformers[onnx]==5.3.0
pip install sentence-transformers[openvino]==5.3.0

Updated MultipleNegativesRankingLoss (a.k.a. InfoNCE)

MultipleNegativesRankingLoss received two major upgrades: support for alternative InfoNCE formulations from the literature, and optional hardness weighting to up-weight harder negatives.
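The release notes do not spell out the hardness-weighting formula before truncating. As a conceptual illustration only — one common formulation from the contrastive-learning literature, a sketch rather than the library's actual implementation — each in-batch negative's contribution to the InfoNCE denominator can be scaled in proportion to its own similarity, so harder (higher-scoring) negatives dominate the loss:

```python
import numpy as np

def hardness_weighted_info_nce(sims, beta=1.0):
    """InfoNCE over in-batch negatives with hardness weighting (illustrative).

    sims: (n, n) scaled query-doc similarity matrix, positives on the diagonal.
    beta: hardness temperature; beta=0 recovers plain InfoNCE.
    """
    losses = []
    for i in range(len(sims)):
        pos = np.exp(sims[i, i])
        neg_sims = np.delete(sims[i], i)      # in-batch negatives for query i
        w = np.exp(beta * neg_sims)
        w = w / w.mean()                      # normalize so weights average to 1
        neg = (w * np.exp(neg_sims)).sum()    # harder negatives count more
        losses.append(-np.log(pos / (pos + neg)))
    return float(np.mean(losses))
```

Because harder negatives are up-weighted while the weights average to one, this loss is always at least as large as plain InfoNCE on the same batch; the function name and exact weighting scheme here are assumptions, not the v5.3.0 API.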

Support other InfoNCE variants (#3607)

MultipleNegativesRankingLoss now supports several well-known contrastive loss variants from the literature through new directions and partition_mode parameters. Previously, this loss only supported the standard forward direction (query → doc). You can now configure which similarity interactions are included in the loss:

  • "query_to_doc" (default): For each query, its matched document should score higher than all other documents.
  • "doc_to_query": The symmetric reverse — for each document, its matched query should score higher than all other queries.
  • "query_to_query": For each query, all other queries should score lower than its matched document.
  • "doc_to_doc": For each document, all other documents should score lower than its matched query.

The partition_mode controls how scores are normalized: "joint" computes a single softmax over all directions, while "per_direction" computes a separate softmax per direction and averages the losses.
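Under one plausible reading of these two parameters — a NumPy sketch of the math, not the library's actual implementation, with made-up function names — the partition modes differ only in how the softmax denominator is assembled for each matched pair:

```python
import numpy as np

def softmax_ce(logits, target):
    # cross-entropy of a softmax over `logits`, positive at index `target`
    logits = logits - logits.max()
    return -(logits[target] - np.log(np.exp(logits).sum()))

def info_nce(queries, docs, partition_mode="joint", scale=20.0):
    """Symmetric InfoNCE over in-batch negatives (illustrative sketch).

    queries, docs: (n, d) L2-normalized embeddings; row i of each is a matched pair.
    partition_mode: "joint" -> one softmax over both directions' candidates;
                    "per_direction" -> separate softmax per direction, averaged.
    """
    sims = scale * queries @ docs.T          # sims[i, j] = sim(query_i, doc_j)
    n = len(sims)
    losses = []
    for i in range(n):
        q2d = sims[i, :]                     # query_i vs. all docs (positive at i)
        d2q = sims[:, i]                     # doc_i vs. all queries (positive at i)
        if partition_mode == "joint":
            # single softmax over candidates from both directions;
            # drop the duplicate positive from the second direction
            logits = np.concatenate([q2d, np.delete(d2q, i)])
            losses.append(softmax_ce(logits, i))
        else:                                # "per_direction"
            losses.append(0.5 * (softmax_ce(q2d, i) + softmax_ce(d2q, i)))
    return float(np.mean(losses))
```

The joint mode forces the positive to beat every candidate from every direction at once, while per-direction normalization keeps each direction's competition separate and averages the results.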

These combine to reproduce several loss formulations from the literature:

Standard InfoNCE (default, unchanged behavior):

loss = MultipleNegativesRankingLoss(model)
# equivalent to directions=("query_to_doc",), partition_mode="joint"

Symmetric InfoNCE (Günther et al. 2024) — adds the reverse direction so both queries and documents are trained to find their match:

loss = MultipleNegativesRankingLoss(
    model,
    directions=("query_to_doc", "doc_to_query"),
    partition_mode="per_direction",
)

GTE improved contrastive loss (Li et al. 2023) — adds same-type negatives (query <-> query, doc <-> doc) for a stronger training signal, especially useful with pairs-only data:

loss = MultipleNegativesRankingLoss(

... (truncated)

Commits
  • ce48ecc Merge branch 'main' into v5.3-release
  • cec08f8 Fix citation for EmbeddingGemma paper (#3687)
  • c29b3a6 Release v5.3.0
  • 55c13de Prep docs main page for v5.3.0 (#3686)
  • 72e75f7 [tests] Add slow reproduction tests for most common models (#3681)
  • 237e441 [fix] Fix model card generation with set_transform with new column names (#...
  • 7f180b4 [feat] Add hardness-weighted contrastive learning to losses (#3667)
  • 5890086 Disallow query_to_query/doc_to_doc with partition_mode="per_direction" due to...
  • 6518c36 CE trainer: Removed IterableDataset from train and eval dataset type hints (#...
  • 1e0e84c Add tips for adjusting batch size to improve processing speed (#3672)
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-30 02:46:12 -05:00
Reference: github-starred/open-webui#50173