USER INSIGHT: Table format is actually very simple and consistent:
': Representative hardware platforms across... {#tbl-representative-systems hover striped}'
BEFORE: Complex build_table_search_patterns() with 4 different cases:
❌ Old format with line breaks
❌ Old format with content stuck to same line
❌ New format with line breaks
❌ New format with content stuck to same line
🔧 40+ lines of complex pattern matching logic
AFTER: Simple, single-pattern approach:
✅ One regex pattern: '^:?\s*{caption}(\s*\{{#tbl-id[^}]*\}})(.*)$'
✅ Always output: ': [new_caption]. {#tbl-id [attributes]}'
✅ Handle period correctly (avoid double periods)
✅ 15 lines total - much cleaner
TECHNICAL CHANGES:
- Simplified build_table_search_patterns() from 40+ lines to 15 lines
- Single regex pattern handles both ': caption' and 'caption' formats
- Always produces consistent format: ': [caption]. {#tbl-id [attributes]}'
- Fixed period handling to avoid double periods in output
VERIFICATION:
✅ Input: ': Representative hardware platforms... {#tbl-representative-systems hover striped}'
✅ Output: ': Hardware comparison across ML deployment... {#tbl-representative-systems hover striped}'
✅ Maintains simple format: ': [caption]. {#tbl-id [attributes]}'
USER WAS RIGHT: Keep it simple! No need for complex edge case handling.
PROBLEM: User getting ': : **Hardware Spectrum**:' instead of ': **Hardware Spectrum**:'
ROOT CAUSE: Wrong regex pattern order in detect_table()
TECHNICAL CHANGE: Reordered regex patterns to try old format first
- Old format: ^:\s* properly strips ': ' prefix
- New format: ^[^{]+ only for captions without leading colon
RESULT: No more ': :' double colon prefixes in table captions
PREVENTION > FIXING: Instead of just post-processing weak verbs, now explicitly instruct the LLM to avoid them
MULTI-LAYER PROTECTION:
1. 🚫 Critical rule section with 14 banned weak verbs listed explicitly
2. ❌✅ Clear before/after examples showing bad vs good patterns
3. 🎯 Final reminder at end of prompt to reinforce the rule
4. 🛡️ Post-processing cleanup as backup safety net
INSTRUCTIONAL APPROACH:
- LLM now sees explicit 'NEVER start with Shows, Demonstrates, Illustrates...'
- Direct examples: 'Shows how X' → 'X processes Y through Z'
- Multiple reinforcement points throughout the prompt
RESULT: LLM should generate strong captions from the start, with hardcoded fixes as fallback
PROBLEM: LLM generating weak textbook captions like 'Shows how', 'Demonstrates how', 'Visualizes how'
ROOT CAUSE: Contradictory LLM prompt examples were teaching the exact weak language we wanted to avoid
SOLUTION:
1. Fixed LLM prompt examples to use strong, direct language
2. Added 6 new banned weak verbs: Visualizes, Exemplifies, Traces, Explains, Displays, Presents
3. Enhanced post-processing to catch and fix these patterns
RESULT: LLM now generates strong, direct textbook captions without weak descriptive language
📖 COMPREHENSIVE DOCUMENTATION UPDATE:
✅ Script Internal Documentation:
- Updated main script header docstring with new modes
- Updated class FigureCaptionImprover docstring
- Fixed function docstrings and comments throughout
- Removed references to old --workflow, --update, --validate options
- Updated print messages to reflect new terminology
📚 New External Documentation:
- Created scripts/FIGURE_CAPTIONS.md with complete usage guide
- Added model selection guide with speed/quality ratings
- Included troubleshooting section and best practices
- Updated scripts/README.md with script overview
🔧 Updated References:
- Main modes: --improve/-i, --build-map/-b, --analyze/-a, --repair/-r
- Removed outdated workflow terminology
- Clear examples for all usage patterns
- Performance optimization guidelines
📋 Documentation Features:
- Command-line option tables with short/long forms
- Model comparison with star ratings
- Before/after caption examples
- Integration with Quarto build process
- Success metrics and quality standards
✅ All documentation now reflects the streamlined v2.0 interface
✅ CONSISTENCY FIX:
- Added -b short form for --build-map option
- All main modes now have both short and long forms:
* --build-map/-b (build content map)
* --analyze/-a (quality analysis)
* --repair/-r (fix formatting)
* --improve/-i (LLM improvement)
📝 UPDATED EXAMPLES:
- Added python script.py -b -d contents/core/ example
- Maintains consistency across all command options
🧪 TESTED:
- -b option works correctly with content map building
- Help text displays properly formatted options
🐛 CRITICAL FIX: Table extraction was broken for most tables
- Before: 23/92 tables found (69 failures, 80.9% success)
- After: 92/92 tables found (0 failures, 100% success)
🔧 Root cause: Regex pattern excluded ':' characters
- Tables like '**Special Function Units**: Details...' were rejected
- Pattern stopped at first ':' because it was in exclusion list [^{{\n:]+?
- Fix: Allow colons in caption text by changing to [^{{\n]+?
📊 Results across all core files:
- hw_acceleration: 0→21 tables (was completely broken)
- optimizations: 0→10 tables
- privacy_security: 0→8 tables
- frameworks: 0→6 tables
- All other files: Similar dramatic improvements
✅ Perfect extraction now working:
- 270 figures extracted successfully
- 92 tables extracted successfully
- 0 extraction failures
- Ready for LLM caption improvement processing
🎯 Improved mid-sentence weak language detection:
- Handle 'X illustrates how Y' patterns in middle of sentences
- Replace with stronger constructions: 'Y through X', 'Y via X'
- Avoid circular replacements (no longer use 'shows' as replacement)
💪 Stronger language replacements:
- 'illustrates how' → direct restructure with stronger verbs
- 'demonstrates that' → 'establishes that' / 'confirms that'
- 'depicts' → 'presents' / 'exposes'
- 'reveals' → 'establishes' / 'exposes'
🧪 Comprehensive testing verified:
- All weak words removed from captions
- No circular replacement issues
- Maintains meaning while using stronger language
- Proper table format and spacing preserved
✅ Real-world test case from screenshot now produces clean output:
'Each of these scenarios illustrates how...'
→ 'Machine learning models can serve as amplifiers through each of these scenarios'
✨ Enhanced LLM prompt:
- Added explicit instructions to avoid weak sentence starters
- Discourage 'Illustrates', 'Shows', 'Demonstrates' etc.
- Encourage direct, strong language with examples
🔧 Post-processing improvements:
- Fix capitalization after periods (handle abbreviations)
- Replace weak sentence starters with direct language
- Ensure proper table format with ':' prefix
- Comprehensive caption validation pipeline
📝 Quality enforcement:
- Automatic detection and correction of weak language
- Proper sentence case throughout explanations
- Standardized table caption format: ': **Bold**: explanation'
- Word-by-word improvements while preserving meaning
✅ Fully tested with edge cases and validation
- Preserve existing ':' prefix in old format table captions
- Add ':' prefix to new format table captions for consistency
- Standardize all table captions to ': Caption {#tbl-id}' format
- Tested with both old and new caption formats
- Add line break preservation logic to table caption replacement
- Handle problematic case where content is stuck to caption line
- Force line break insertion between caption and following content
- Update TikZ figure caption replacement to preserve line breaks
- Tested with problematic cases to ensure proper formatting
- Implement 3-retry logic with exponential backoff (2s, 4s, 8s)
- Smart retry only for recoverable errors (API/network, not content issues)
- Enhanced sentence case formatting with comprehensive technical term preservation
- Preserve spaces and punctuation correctly during caption formatting
- Support for both fast models (qwen2.5:7b) and large models (gemma3:27b)
- Robust error handling for production caption improvement workflow
Enhances figure and table caption formatting to ensure consistency and readability.
- Implements a comprehensive `apply_sentence_case` function that handles technical terms, acronyms, and proper nouns correctly.
- Refines the `format_bold_explanation_caption` function to use the improved sentence casing.
- Updates the caption update logic to support both figures and tables.
- Widens key phrase length for figures.
Enhances the figure caption improvement process by directly
updating the QMD files immediately after generating improved
captions with the LLM. This approach streamlines the workflow
and reduces the need for a separate update step. It also improves
the prompt given to the LLM to better guide caption generation.
Refines the guidelines for generating figure and table
captions to enhance their clarity and pedagogical value.
Also, enhances the reporting of extraction failures by tracking
specific IDs and files with issues for easier debugging.
Improves the prompt used for generating figure captions with the Ollama model to focus on educational value and clarity.
The updated prompt provides clearer instructions and formatting guidelines for generating captions that teach students about the concepts illustrated in figures and tables. It emphasizes the use of key phrases and concise explanations tailored to the context of an AI/ML systems textbook.
Additionally, refactors the figure caption extraction logic to handle nested brackets in captions and escaped characters in paths. This fixes issues with figures containing links and special characters. Stores original captions as-is for better comparison.
Improves the caption improvement process by implementing a complete in-memory workflow, enhancing context extraction with pypandoc, and refining the LLM prompt for better caption generation. It also adds checks for Ollama and the model before proceeding and simplifies command-line arguments.
- Introduces a complete in-memory workflow: Build content map → Improve captions → Update QMD files, streamlining the process.
- Uses targeted search and replace for updates.
- Enhances context extraction using pypandoc AST parsing for richer paragraph context.
- Refines the LLM prompt with more focused instructions and examples, improving caption quality.
- Adds checks for Ollama and specified model, ensuring smooth execution.
- Improves CLI with a more straightforward syntax and helper functions.
FORMATTING IMPROVEMENTS:
- Add format_bold_explanation_caption() function for proper capitalization
- Bold part: Title Case (Every Important Word Capitalized)
- Explanation part: sentence case (only first word capitalized, plus proper nouns)
- Updated LLM prompt with explicit formatting rules
- Added post-processing after LLM generation for consistent formatting
TECHNICAL DETAILS:
- Uses titlecase library for proper title case in bold section
- Preserves proper nouns/acronyms: AI, ML, TikZ, LaTeX, GitHub, etc.
- Applied automatically after LLM response validation
- Robust regex parsing of **bold**: explanation format
TESTED RESULTS:
BEFORE: **DATA SYNCHRONIZATION**: The Adaptive Resource Pattern Addresses...
AFTER: **Data Synchronization in Distributed Systems**: The adaptive resource pattern addresses...
Perfect academic formatting for educational content
NEW FEATURES:
- Add --improve workflow to enhance captions with Ollama LLM
- Implement context extraction around figures/tables from QMD files
- Add multimodal image support for markdown figures
- Support TikZ compilation to PNG for vision models
- Enforce **bold**: explanation format through prompt engineering
METHODS ADDED:
- generate_caption_with_ollama(): Core LLM API integration with format validation
- extract_section_context(): Smart context extraction around figure/table references
- encode_image(): Base64 encoding for multimodal models
- compile_tikz_to_image(): LaTeX to PNG pipeline for TikZ figures
- improve_captions_with_llm(): Main orchestration for LLM improvement workflow
- parse_sections(): QMD section parsing for context
WORKFLOW:
1. --build-qmd-map: Extract figures/tables from QMD files
2. --improve: Use LLM to generate **bold**: explanation captions with context
3. --update: Apply improved captions back to QMD files
TECHNICAL:
- Default model: llava:7b (multimodal support)
- Smart image path resolution for markdown figures
- Temperature 0.3 for consistent formatting
- Robust error handling and validation
- Complete documentation and examples
TESTED: Successfully improved 7 figures + 3 tables with proper format
- Install and import titlecase library for proper English title case
- Remove complex 50+ line custom apply_title_case implementation
- Remove **bold**: explanation formatting logic (no longer used)
- Simplify normalize_caption_case to use titlecase library directly
- Maintain proper capitalization without custom word lists
- Significantly cleaner and more reliable implementation
- Add process_qmd_files function for efficient batch updates
- Group figures and tables by source file to minimize I/O
- Update each file once with all caption changes
- Use regex-based replacement with proper error handling
- Track source_file in JSON structure for organized processing
- Support both figure and table caption updates in single pass
- Support both old format (: Caption {#tbl-id}) and new format (Caption {#tbl-id})
- Improve regex patterns to capture only caption line, not table content
- Add line boundary anchors to prevent capturing table structure
- Update functions will convert old format to new format automatically
- Ensure clean caption extraction without leading colons
- Remove --tex-file argument and build_content_map_from_tex method
- Eliminate all tex-file related processing code
- Update help text and examples to focus on QMD-only approach
- Remove phase-based terminology in favor of descriptive language
- Simplify workflow to QMD-focused content mapping only
- Maintain backward compatibility for existing save/load functions
- Implement build_content_map_from_qmd for direct QMD file processing
- Add specialized detection functions for markdown, tikz, and code figures
- Support flexible fig-id and tbl-id placement in attributes
- Add --save-json option to output content map for review
- Achieve 100% extraction success rate across all core chapters
- Process 270 figures and 92 tables with zero failures
🔧 ENHANCED DETECTION PATTERNS:
- Allow fig-id/tbl-id anywhere in attribute blocks
- Support: {#fig-id}, {width=80% #fig-id}, {#fig-id .class}
- More robust handling of complex attribute combinations
📝 PATTERN IMPROVEMENTS:
- Markdown figures: (?:\s|[^}}])* allows ID placement anywhere
- TikZ figures: Same flexible ID matching
- Tables: Simplified to find any line with #tbl-id
- Code figures: Already flexible, no changes needed
✅ VALIDATION CONFIRMED:
- All existing detections maintained: 265/296 figures, 91/91 tables
- No regressions in functionality
- Patterns handle edge cases like multiple attributes
🎯 BENEFITS:
- Handles real-world QMD variations where IDs aren't first
- More resilient to attribute order changes
- Simpler table detection logic
- Future-proof for new attribute patterns
Ready for QMD-focused development with robust pattern matching
- Add --tex-file argument to bypass automatic builds
- Default remains Machine-Learning-Systems.tex for backward compatibility
- Enables using existing .tex files without rebuilding
- Update help documentation with usage examples
Benefits:
- Faster workflow when .tex file already exists
- Support for custom/alternative .tex file paths
- No need to rebuild entire book for caption processing
Usage:
python improve_figure_captions.py --build-map --tex-file custom.tex
python improve_figure_captions.py --build-map # Uses default
- Parse both active and commented chapters from _quarto.yml
- Detect figures/tables from commented-out chapters in content map
- Warn when .tex content doesn't match active book structure
- Add detailed reporting of consistency issues
- Provide actionable guidance for resolving mismatches
Prevents silent failures where:
- .tex file contains figures from all chapters (including commented)
- QMD processing only scans active chapters
- Caption updates would fail for commented chapter content
Now shows: 'Found 9 active chapters, 50 commented chapters'
and warns if content map contains items from inactive sources.
- Add support for TikZ figures using ::: {#fig-id} div format
- Detect figures in conditional visibility blocks
- Fix regex pattern to handle nested square brackets in captions
- Add type detection ('markdown' vs 'tikz') for proper updating
- Update caption replacement logic for both formats
Results: Perfect figure detection coverage
- Before: 23/37 figures found (62%)
- After: 37/37 figures found (100%)
Fixes issue with fig-ai-timeline, fig-cloudml-example, and fig-TinyML-example
that were previously showing as missing despite being present in QMD files.
- Parse _quarto.yml to extract active chapters in order
- Process QMD files following book structure instead of filesystem order
- Skip commented-out chapters (47 total, only 8 active)
- Add get_book_chapters_from_quarto() method
- Add find_qmd_files_in_order() method
- Update validation and quality check to use ordered processing
Results in much more accurate analysis:
- 23/37 figures found in active chapters (not 23/61 random files)
- Missing 14 figures are in commented-out chapters
- Follows intended book structure and order
- Add CaptionQualityChecker class with quality rules
- Implement --check/-c flag for caption quality analysis
- Implement --repair/-r flag for selective caption fixing
- Add short-form flags for all options (-b, -c, -r, -v, -u)
- Support multiple directories with -d flag
- Professional quality reports with issue categorization
- Smart repair of punctuation and capitalization issues
Quality rules detect missing punctuation, poor capitalization,
generic captions, broken formatting, and LaTeX artifacts.
Enables targeted caption improvements while maintaining quality.
- Created scripts/improve_figure_captions.py - comprehensive tool for improving figure captions
- Uses llava:7b model to analyze images and generate educational captions
- Features JSON-structured responses for consistent formatting
- Supports both single file (-f) and directory (-d) processing modes
- Handles large images using requests library instead of curl
- Robust regex patterns to find figure definitions with any attribute ordering
- Comprehensive error handling and progress reporting with statistics
- Enhanced prompting for textbook-specific educational content
- Successfully tested on socratiq.qmd with 100% improvement rate
Enhances the cross-reference generation script to leverage local LLMs
(via Ollama) for generating natural language explanations, offering readers
contextual insights into the connections between document sections.
Refines prompts and adds retry logic for improved explanation quality.
Also adds command-line option to specify a ollama model.
Updates the cross-reference injection to display a better formated explanation.
Fixes the reference to the cross-reference data file in the config.