Set up AI dev environment for recordingtest (#2)

- CLAUDE.md with collaboration rules and Planner/Generator/Evaluator cycle
- .claude/ agents, commands, skills, hooks per Claude Code conventions (layout sketched after this list)
- Sprint Contracts for sut-prober, normalizer, recorder, player, diff-reporter
- SUT catalog (EG-BIM Modeler, 187 plugins) and .gitignore excluding SUT tree
- PROGRESS.md / PLAN.md as shared agent handoff state
- Solution scaffold targeting sut-prober PoC
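
A sketch of the layout this commit establishes; directory roles follow Claude Code conventions, and file names other than those named in this message are illustrative:

    .claude/
      agents/          # subagent definitions, e.g. the evaluator
      commands/        # slash commands, e.g. evaluate.md (diff below)
      skills/          # skill definitions
      settings.json    # hooks, per Claude Code convention (assumed file)
    docs/
      contracts/       # one Sprint Contract per module (sut-prober, normalizer, ...)
    CLAUDE.md          # collaboration rules and agent cycle
    PLAN.md            # shared agent handoff state
    PROGRESS.md        # shared agent handoff state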

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Author: minsung
Date: 2026-04-07 13:57:20 +09:00
parent a48a8a2d1d
commit 7ffbb1f757
47 changed files with 1886 additions and 11 deletions

.claude/commands/evaluate.md

@@ -0,0 +1,20 @@
---
name: evaluate
description: Grade a completed module against its Sprint Contract via the evaluator agent. Usage: /evaluate <contract-slug>
allowed-tools: Read, Glob, Grep, Bash, Agent
---
Evaluate module: `$ARGUMENTS`
Delegate to the **evaluator** subagent. It must:
1. Read `docs/contracts/$ARGUMENTS.md`. Refuse if missing.
2. For each Definition-of-Done item, run the verification named in the contract's Evaluation plan.
3. Collect evidence (command output, diffs, file paths).
4. Write `docs/contracts/$ARGUMENTS.evaluation.md` with the verdict table (format sketched after this list).
5. Return the verdict to the caller.
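A minimal sketch of the verdict table, assuming one row per Definition-of-Done item; the column names are illustrative, not mandated by the contract format:

| # | Definition-of-Done item | Verification run | Evidence | Verdict |
|---|-------------------------|------------------|----------|---------|
| 1 | (item text from the contract) | (command named in the Evaluation plan) | (output, diff, or file path) | pass / fail |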
If the verdict is **fail**, do NOT mark the module done in PROGRESS.md; report back so the generator can iterate.
If the verdict is **pass**, the caller (not the evaluator) may update PROGRESS.md.
Never let the generator and evaluator be the same agent in a single session.
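Example invocation, using one of the contracts listed in this commit: `/evaluate sut-prober`. Under the steps above, the evaluator reads `docs/contracts/sut-prober.md`, runs each verification, and writes the verdict table to `docs/contracts/sut-prober.evaluation.md`.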