[GH-ISSUE #1011] Learning Objectives for Self-Check Questions in Part I / Introduction #5627

Closed
opened 2026-04-21 21:35:34 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @foundingnimo on GitHub (Nov 1, 2025).
Original GitHub issue: https://github.com/harvard-edge/cs249r_book/issues/1011

Originally assigned to: @profvjreddi on GitHub.

The answers for the self-check questions in Part I / Introduction list learning objectives.
These learning objectives do not exactly match the learning objectives listed at the beginning of the section.

Should these match exactly, or is looser wording fine in this context?

Learning Objectives defined here: https://github.com/harvard-edge/cs249r_book/blob/263430f28a1ea09898961b0e9a3039eb00918fa0/quarto/contents/core/introduction/introduction.qmd#L26

  • Define machine learning systems as integrated computing systems comprising data, algorithms, and infrastructure
  • Distinguish ML systems engineering from traditional software engineering through failure pattern analysis
  • Explain the AI Triangle framework and analyze interdependencies between data, algorithms, and computing infrastructure
  • Trace the historical evolution of AI paradigms from symbolic systems through statistical learning to deep learning
  • Evaluate the implications of Sutton's "Bitter Lesson" for modern ML systems engineering priorities
  • Compare silent performance degradation in ML systems with traditional software failure modes
  • Analyze the ML system lifecycle phases and contrast them with traditional software development
  • Classify real-world challenges in ML systems across data, model, system, and ethical categories
  • Apply the five-pillar engineering framework to analyze ML system architectures and their interdependencies

While the learning objectives mapped in the questions are here: https://github.com/harvard-edge/cs249r_book/blob/dev/quarto/contents/core/introduction/introduction_quizzes.json

And the learning objectives listed in these answers are, for example:

  • Understand the fundamental difference between traditional and ML systems.
  • Analyze the impact of historical lessons on current AI system design
  • Identify key challenges unique to ML systems engineering.
  • Explain the role of the AI Triangle in ML system analysis.
GiteaMirror added the area: booktype: improvement labels 2026-04-21 21:35:34 -05:00
Author
Owner

@profvjreddi commented on GitHub (Nov 1, 2025):

Hi @foundingnimo,

Thanks for raising this! You're right that the quiz objectives don't match the chapter learning objectives exactly, and I appreciate you noticing this detail.

This is actually semi-intentional 😅. Let me explain my thinking:

Chapter learning objectives are the formal goals we define at the chapter level. These are the key outcomes students should achieve upon completing the entire chapter. They're carefully carved out through multiple revisions.

Quiz question objectives are more granular and focused on specific sections. Since we have lots of quiz questions per chapter, each one tests a specific aspect or depth of understanding within those broader goals. When generating questions, I focus on: "What concept from this section do I want our students to reflect on?" rather than "Which exact chapter objective does this map to?"

Here's why I've kept this loose coupling:

The quiz questions are tied to sections, not chapters. A single chapter learning objective might span multiple sections, so the section-level questions naturally test components of that objective rather than the whole thing. For example, one chapter objective about "distinguishing ML from traditional software" might generate:

  • A section quiz about failure modes specifically
  • Another about monitoring differences
  • Another comparing lifecycle approaches

If we had a comprehensive quiz or exercise at the end of each chapter, I'd absolutely agree we should tightly align those to the chapter learning objectives. But since these are formative section quizzes meant to reinforce specific concepts as students progress, the loose connection feels more natural.

That said, if a tighter alignment would make the book more useful for how you're using it (such as curriculum mapping, course design, or clearer student expectations), I'm happy to refine it. We could add explicit mappings showing how each question relates to the chapter objectives.

What matters most to you?

  • Ensuring students understand expectations clearly?
  • Being able to map questions to objectives for teaching?
  • Something else?

Just let me know your use case, and we can adjust accordingly! Until then, I'll assume we're on the same page and it's fine to close this issue; if not, we can reopen.

(PS: I spent this morning going through and standardizing all chapter learning objectives across the book for clarity and consistency, so the foundation is solid either way!)

Best,
Vijay

Author
Owner

@foundingnimo commented on GitHub (Nov 1, 2025):

Hi Vijay, thank you for the well-thought-out comment and the rationalisation behind the mappings.
From a "what matters most" perspective, I'd focus on being able to map questions to objectives.
As you reflected, the "sub-objectives" do not need to match the exact wording of the chapter-level ones. I am happy with this philosophy.

Author
Owner

@profvjreddi commented on GitHub (Nov 1, 2025):

I appreciate you taking the time to drop me a note and help me think through this clearly. Cheerio! Have a great weekend 👋


Reference: github-starred/cs249r_book#5627