feat(staffml): add optional question field to schema; surface as "Your task" prompt

Root cause of edge-0546 and similar confusion: 71% of the 9,657-question
corpus (6,864 questions) has no explicit interrogative in the scenario.
The YAML has title + scenario + details.realistic_solution, but no
`question`/`prompt` field — the reader must infer the ask from the
solution. Only 28.9% (2,793) have a `?` at the end of their scenario.

This commit adds the plumbing so a backfill pass can populate a uniform
`question` field, and ships a render-time fallback that improves UX for
every question immediately.

Schema (backend):
- Pydantic Question: add `question: Optional[str] = None`
- LinkML question_schema.yaml: add `question` slot, `required: false`,
  with docstring explaining the derivation contract.
- legacy_export._adapt: emit `question` to both corpus.json and the
  summary bundle (the field is ≤200 chars so it ships in the summary).
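
For concreteness, a sketch of the record shape post-backfill
(TypeScript; `Question` is the frontend interface from the diff
below; every value here is illustrative except the question text,
which reuses the LinkML docstring's example):

    // Hypothetical record: Partial<Question> lets the sketch elide
    // required fields this commit does not touch.
    const entry: Partial<Question> = {
      zone: "design", // illustrative
      question:
        "Which parallelism strategy (tensor vs. pipeline) " +
        "would you choose and why?",
    };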

Frontend:
- Question TS interface: `question?: string`.
- useFullQuestion hook: merge-not-replace during hydration, so
  summary-only fields like `question` survive the worker roundtrip.
  Previously `setHydrated(full)` would drop any summary field the
  worker did not return.
- practice/page.tsx: new render block after the scenario prose.
  Three states:
    1. `current.question` present → prominent "Your task" callout
       (left-border accent blue).
    2. Scenario ends with `?` → render nothing; ask is already in prose.
    3. Neither → zone-/Bloom-aware fallback labeled "Your task
       (inferred)" with a dashed border so it reads differently from
       an authored prompt.
- inferTaskPrompt helper: 11-zone + 6-Bloom mapping with safe default.
  Keeps the fallback short — the goal is to orient, not to replace an
  authored question.
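
Precedence, illustrated (return strings quoted from the mapping in
the diff below; the zone/Bloom inputs are hypothetical):

    inferTaskPrompt("design", "analyze");
    // zone match wins → "Propose a design that satisfies the
    // scenario's constraints and explain the key trade-offs."
    inferTaskPrompt(undefined, "apply");
    // no zone → Bloom fallback → "Apply the relevant principle to
    // the scenario and compute or decide the outcome."
    inferTaskPrompt("not-a-zone", undefined);
    // neither matches → generic safe-default prompt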

Backfill of the `question` field itself follows in a separate pass.
Author: Vijay Janapa Reddi
Date:   2026-04-24 15:09:23 -04:00
parent 53231b2c81
commit 7cce759daf
6 changed files with 126 additions and 1 deletion

View File

@@ -39,6 +39,52 @@ import { buildReportUrl } from "@/lib/issue-url";
 import QuestionFeedback from "@/components/QuestionFeedback";
 import { track } from "@/lib/analytics";
 
+/**
+ * Zone- and Bloom-aware fallback prompt for questions that have no
+ * explicit `question` field yet AND no `?` in their scenario. Used by
+ * the practice page to render a minimally-useful "Your task (inferred)"
+ * callout during the 2026-04 backfill transition so readers aren't
+ * left guessing the shape of the expected answer. Keep this short —
+ * the goal is to orient, not to substitute for a properly authored
+ * question.
+ */
+function inferTaskPrompt(zone: string | undefined, bloom: string | undefined): string {
+  const z = (zone || "").toLowerCase();
+  const b = (bloom || "").toLowerCase();
+  // Zone-first mapping. Exact match against the 11 ikigai zones plus
+  // their common morphological neighbours so we don't miss a variant.
+  switch (z) {
+    case "diagnosis":
+      return "Identify the root cause suggested by the scenario and justify it with a specific mechanism.";
+    case "specification":
+      return "State the requirements or constraints the scenario imposes, then specify the design these dictate.";
+    case "design":
+      return "Propose a design that satisfies the scenario's constraints and explain the key trade-offs.";
+    case "implement":
+    case "realization":
+      return "Sketch the implementation or the concrete steps needed to realize the scenario's goal.";
+    case "evaluation":
+      return "Evaluate the scenario's proposed approach — what works, what breaks, and at what cost?";
+    case "optimization":
+      return "Identify the dominant bottleneck and propose an optimization that addresses it.";
+    case "fluency":
+      return "Explain the core mechanism at play and why it behaves as the scenario describes.";
+    case "analyze":
+      return "Analyze the trade-offs the scenario presents and recommend an approach with justification.";
+    case "recall":
+      return "Identify the concept the scenario illustrates and name the principle it demonstrates.";
+    case "mastery":
+      return "Integrate the scenario's constraints, propose an approach, and justify it against the dominant trade-off.";
+  }
+  // Bloom fallback when zone is unclear.
+  if (b === "remember" || b === "understand") return "Identify the concept the scenario illustrates and explain the underlying principle.";
+  if (b === "apply") return "Apply the relevant principle to the scenario and compute or decide the outcome.";
+  if (b === "analyze") return "Analyze the trade-offs the scenario presents and recommend an approach with justification.";
+  if (b === "evaluate") return "Evaluate the scenario's setup — what succeeds, what fails, and why?";
+  if (b === "create") return "Propose a design or plan that addresses the scenario and defend your choice.";
+  return "Based on the scenario above, reason about the trade-offs and decide what approach you would take.";
+}
+
 export default function PracticePageWrapper() {
   return (
     <Suspense fallback={
@@ -836,6 +882,45 @@ function PracticePage() {
             <ScenarioSkeleton />
           )}
         </div>
+
+        {/*
+          Your-task callout. Three states, in priority order:
+            1. `current.question` present → render it as the
+               primary ask (post-backfill state).
+            2. Scenario already ends with `?` → render nothing
+               extra; the interrogative is already in the prose.
+            3. Neither → render a zone-aware inferred fallback
+               so the reader at least knows the shape of the
+               expected answer. Marked "inferred" so it's
+               clearly distinct from an authored question.
+          Placed AFTER the scenario because the scenario sets
+          context that the question refers to ("given the
+          above…"). A prompt above an unread scenario would
+          be meaningless.
+        */}
+        {current.question ? (
+          <div className="mt-5 p-4 rounded-lg border-l-4 border-accentBlue bg-accentBlue/5">
+            <div className="flex items-center gap-2 mb-1.5">
+              <Target className="w-3.5 h-3.5 text-accentBlue" />
+              <span className="text-[10px] font-mono text-accentBlue uppercase tracking-widest">Your task</span>
+            </div>
+            <p className="text-textPrimary leading-relaxed text-base font-medium">
+              {current.question}
+            </p>
+          </div>
+        ) : current.scenario && !current.scenario.trim().endsWith("?") ? (
+          <div className="mt-5 p-4 rounded-lg border border-dashed border-border bg-surface/40">
+            <div className="flex items-center gap-2 mb-1.5">
+              <Target className="w-3.5 h-3.5 text-textTertiary" />
+              <span className="text-[10px] font-mono text-textTertiary uppercase tracking-widest">
+                Your task <span className="text-textMuted normal-case">(inferred)</span>
+              </span>
+            </div>
+            <p className="text-textSecondary leading-relaxed text-sm">
+              {inferTaskPrompt(current.zone, current.bloom_level)}
+            </p>
+          </div>
+        ) : null}
         {chainInfo && !showAnswer && chainPreviewOpen && (
           <div className="mt-6" data-testid="chain-preview-prereveal">
             <ChainStrip chain={chainInfo} onNavigate={handleChainNavigate} />

View File

@@ -19,6 +19,14 @@ export interface Question {
   track: string;
   level: string;
   title: string;
+  /**
+   * Explicit one-sentence interrogative derived from (scenario,
+   * realistic_solution). Ships in the summary bundle so the practice
+   * page can render it synchronously as a "Your task" callout. Optional
+   * while the backfill is in progress — if absent, the render falls
+   * back to a zone-based inferred-task label.
+   */
+  question?: string;
   topic: string; // one of 87 curated topic IDs
   zone: string; // one of 11 ikigai zones
   competency_area: string; // one of 13 canonical areas

View File

@@ -36,7 +36,15 @@ export function useFullQuestion(summary: Question | undefined | null): Question
     setHydrated(summary);
     let cancelled = false;
     getQuestionFullDetail(summary.id).then(full => {
-      if (!cancelled && full) setHydrated(full);
+      if (cancelled || !full) return;
+      // Merge rather than replace: the worker returns the heavy fields
+      // (scenario, details) but does not necessarily carry every
+      // summary-bundle field. Summary fields like `question` (the
+      // explicit-ask prompt) live in the bundle and would otherwise be
+      // dropped by a straight replace. Spread summary first so worker
+      // values win where they overlap (they carry the real content),
+      // but summary-only fields survive.
+      setHydrated({ ...summary, ...full });
     });
     return () => {
       cancelled = true;

View File

@@ -37,6 +37,11 @@ def _adapt(lq: LoadedQuestion) -> dict[str, Any]:
     }
     if q.phase:
         legacy["phase"] = q.phase
+    # Explicit prompt (optional during backfill). Preserved in the
+    # summary bundle too — it's ≤200 chars and the practice page
+    # renders it synchronously, so lazy-hydration would be a regression.
+    if q.question:
+        legacy["question"] = q.question
 
     # Chain — legacy shape: chain_ids (list) + chain_positions (dict).
     # v1.0 schema already carries multi-chain membership natively.

View File

@@ -133,6 +133,13 @@ class Question(BaseModel):
     # Content
     title: str
     scenario: str
+    # Explicit interrogative — the one-sentence ask derived from scenario
+    # + details.realistic_solution. Optional during backfill; once the
+    # 2026-04-24 pass completes, the practice page relies on this field
+    # to render a "Your task" callout so readers see the question
+    # uniformly (without it, 71% of questions had no explicit ask — the
+    # scenario just set context and the reader had to guess).
+    question: Optional[str] = None
     details: QuestionDetails
 
     # Workflow

View File

@@ -226,6 +226,18 @@ classes:
         range: string
         required: true
         description: Plaintext only. No HTML. ≥30 chars.
+      question:
+        range: string
+        required: false
+        description: >-
+          Explicit one-sentence interrogative derived from (scenario,
+          realistic_solution). Optional during backfill; UI renders this
+          as the "Your task" callout when present so readers see the ask
+          uniformly instead of inferring it from the scenario. Max 200
+          chars. Example — scenario sets context about a drone with a
+          50 GB/s chip interconnect and a 33 ms frame deadline; the
+          question field reads "Which parallelism strategy (tensor vs.
+          pipeline) would you choose and why?"
       details:
         range: Details
         required: true