[PR #12785] [CLOSED] reshape the Conv2D #45196

Closed
opened 2026-04-25 00:53:42 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/12785
Author: @iosub
Created: 10/27/2025
Status: Closed

Base: main ← Head: qwen3vl_base


📝 Commits (7)

  • 8a59f7f Add Z_Iosu folder from 12-07-b5 branch
  • a9ca764 logs: add logs/ollama.log (startup log) and update .gitignore to ignore logs and caches
  • f9422dc original works
  • 8a3856f vulkan: Add memory detection for Intel GPU using Level Zero Sysman (PR #12654)
  • 15eab62 Fix: GPU selection respects Vulkan order, Intel overflow only when necessary. Smart build always verbose. Tests and logs updated.
  • e1a3d85 llama.cpp: Fix Qwen2.5 VL cache causal masking (PR #16745)
  • e9dd7f0 Fix Conv2D bias broadcast for GGML

📊 Changes

94 files changed (+11265 additions, -111 deletions)

View changed files

📝 .gitignore (+10 -0)
📝 Dockerfile (+12 -1)
Z_Iosu/.build_state.json (+48 -0)
Z_Iosu/BUILD_COMMAND.txt (+74 -0)
Z_Iosu/BUILD_WINDOWS.md (+185 -0)
Z_Iosu/COMPILACION_COMPLETA.md (+452 -0)
Z_Iosu/Dockerfile.txt (+143 -0)
Z_Iosu/INSTALL_WINDOWS.md (+97 -0)
Z_Iosu/OJO.md (+338 -0)
Z_Iosu/OJONOTOCAR.MD (+5 -0)
Z_Iosu/PR_MEMORY_d4bd0265.md (+34 -0)
Z_Iosu/PR_VULKAN_ORDER_1f279e40.md (+35 -0)
Z_Iosu/QUICK_BUILD.md (+187 -0)
Z_Iosu/VULKAN_ANALYSIS.md (+148 -0)
Z_Iosu/VULKAN_BUILD_INTEGRATION.md (+113 -0)
Z_Iosu/VULKAN_IMPLEMENTATION_PLAN.md (+106 -0)
Z_Iosu/VULKAN_VERIFICATION_RESULTS.md (+57 -0)
Z_Iosu/app/ollama.iss (+215 -0)
Z_Iosu/docker/Dockerfile.dev (+16 -0)
Z_Iosu/docker/Dockerfile.devfull (+20 -0)

...and 74 more files

📄 Description

Summary

  • Reshape the Conv2D bias tensor to [1, 1, channels, 1] before adding it, so GGML can repeat it across the output without assertion failures
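
The shape fix above can be illustrated with NumPy broadcasting (a hedged sketch, not the PR's actual code, which operates on GGML tensors). GGML lists dimensions innermost-first, so its conv2d output [W, H, C, N] corresponds to a NumPy array of shape (N, C, H, W), and the reshaped bias [1, 1, channels, 1] corresponds to (1, C, 1, 1); the dimension names below are illustrative:

```python
import numpy as np

# Hypothetical sizes: batch N, channels C, output height H, width W.
N, C, H, W = 2, 8, 4, 4
conv_out = np.random.rand(N, C, H, W)  # GGML dims: [W, H, C, N]
bias = np.random.rand(C)               # flat per-channel bias

# A flat (C,) bias does not line up with the 4-D output, which is why
# GGML's add asserts. Reshaped to (1, C, 1, 1) -- GGML's [1, 1, C, 1],
# i.e. ggml_reshape_4d(ctx, bias, 1, 1, C, 1) -- the size-1 axes let
# the bias repeat over width, height, and batch.
bias4d = bias.reshape(1, C, 1, 1)
out = conv_out + bias4d

assert out.shape == (N, C, H, W)
# Each channel c received its own scalar bias everywhere:
assert np.allclose(out[0, 0], conv_out[0, 0] + bias[0])
```

The same principle applies in GGML: `ggml_add` can only repeat the second operand along dimensions where its size is 1, so the reshape makes the per-channel bias broadcastable.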

Testing

  • powershell -ExecutionPolicy Bypass -File Z_Iosu\scripts\build_windows.ps1 buildOllama
  • .\dist\windows-amd64\ollama.exe run oscar_while/gemma-3-4b-tools:Q4_K_M "Z:\IMG_20250125_194419.jpg"

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-25 00:53:42 -05:00

Reference: github-starred/ollama#45196