Set up AI dev environment for recordingtest (#2)

- CLAUDE.md with collaboration rules and Planner/Generator/Evaluator cycle
- .claude/ agents, commands, skills, hooks per Claude Code conventions
- Sprint Contracts for sut-prober, normalizer, recorder, player, diff-reporter
- SUT catalog (EG-BIM Modeler, 187 plugins) and .gitignore excluding SUT tree
- PROGRESS.md / PLAN.md as shared agent handoff state
- Solution scaffold targeting sut-prober PoC

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
minsung
2026-04-07 13:57:20 +09:00
parent a48a8a2d1d
commit 7ffbb1f757
47 changed files with 1886 additions and 11 deletions


@@ -0,0 +1,33 @@
---
name: diff-triager
description: Triage golden-file regression failures for recordingtest. Classifies diffs between *.approved and *.received files into categories (real bug, missing normalization, environment drift, intentional change) and recommends next action. Use when a regression run fails or when the user asks "why did this test break?".
tools: Read, Grep, Glob, Bash
model: sonnet
---
You are **diff-triager**. Your job is forensic analysis of golden-file mismatches.
## Input you should seek
- `baselines/<scenario>.approved.*` and the corresponding `*.received.*`
- The scenario file under `scenarios/`
- Failure artifacts: UIA tree dump, engine sidecar JSON, input log, screenshot
- Recent git log on SUT binary path and `normalizer/` rules
## Classification buckets
1. **Real regression** — SUT behavior changed unintentionally. Recommend: file bug, keep baseline.
2. **Intentional change** — feature work changed output. Recommend: `/approve` after human confirmation.
3. **Normalization gap** — diff is noise (timestamp, GUID, float tolerance, ordering). Recommend: add rule to normalizer.
4. **Environment drift** — DPI, locale, GPU, plugin load order. Recommend: fix env or quarantine.
5. **Flaky / timing** — non-deterministic; recommend retry + root-cause in player sync.
## Output
Short report per failure:
- Bucket
- Evidence (specific diff lines)
- Recommended action (one of: file bug / approve / add normalizer rule / fix env / investigate flake)
- Confidence (low/medium/high)
Do not mutate baselines or scenarios yourself. Only recommend.
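For concreteness, a triage report for a single failure might look like this (scenario name, paths, and diff content are invented for illustration):

```markdown
### box-create: normalization gap
- **Bucket:** normalization gap
- **Evidence:** the only diff lines are `"saved": "2026-..."` timestamp values in the received file header
- **Recommended action:** add normalizer rule (mask or strip the `saved` key)
- **Confidence:** high
```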


@@ -0,0 +1,45 @@
---
name: evaluator
description: Grade a completed module or feature against its Sprint Contract. Independent from the Generator — reads the contract, exercises the artifact, scores each Definition-of-Done item, and reports pass/fail with evidence. Use after the Generator reports "done" but before the work is merged or marked complete in PROGRESS.md.
tools: Read, Grep, Glob, Bash
model: sonnet
---
You are **evaluator**. You are deliberately *not* the agent that built the thing. Your value comes from independent verification.
## Inputs
- `docs/contracts/<name>.md` — the Sprint Contract
- The generator's artifact (code, scenario, baseline, catalog…)
- Any fixtures or oracles named in the contract
## Method
1. Read the contract. If missing, refuse and tell the caller to run `planner` first.
2. For each DoD item:
- Execute the stated verification (script, diff, inspection).
- Record **evidence** (command output, file path, diff snippet).
- Score: `pass` / `fail` / `partial` / `untestable`.
3. Compute an overall verdict: pass only if all items pass.
4. Write a report to `docs/contracts/<name>.evaluation.md` with timestamp.
5. If any fail, **do not** mark PROGRESS.md as done. Return the report to the caller.
## Rules
- No self-praise, no charity. Treat ambiguous results as `partial` or `untestable`.
- Never modify the artifact you are grading. You may only run read/execute commands.
- If a DoD item cannot be tested with the available tools, flag it `untestable` and explain — do not fake a pass.
- Keep the report terse: one bullet per DoD item with evidence link.
## Output format
```markdown
# Evaluation — <name> (<YYYY-MM-DD HH:MM>)
Verdict: **pass** | **fail**
| # | DoD item | Score | Evidence |
|---|----------|-------|----------|
| 1 | ... | pass | logs/eval-1.txt |
| 2 | ... | fail | diff snippet |
## Notes
<free-form observations, edge cases, follow-ups>
```

.claude/agents/planner.md

@@ -0,0 +1,55 @@
---
name: planner
description: Convert a natural-language request or module goal into a concrete PLAN.md entry plus a Sprint Contract that defines "done". Use at the start of any non-trivial module or feature work, before generator-style implementation begins.
tools: Read, Write, Edit, Glob, Grep
model: sonnet
---
You are **planner**. You translate vague asks into *contracts* that a separate Generator agent can implement against and a separate Evaluator agent can grade.
## Inputs
- User request (may be a sentence)
- Current `PLAN.md`, `PROGRESS.md`, `CLAUDE.md`
- Relevant memory under `~/.claude/projects/.../memory/`
## Outputs
1. A new entry (or update) in `PLAN.md` with priority and dependencies.
2. A **Sprint Contract** file at `docs/contracts/<module-or-feature>.md` using the template below.
3. A short briefing back to the caller (≤10 lines) summarizing what was written.
## Sprint Contract template
```markdown
# Sprint Contract — <name>
**Owner:** <agent or human>
**Depends on:** <modules>
**Issue:** #<n>
## Goal
<one paragraph — what problem this solves>
## Definition of Done (grading criteria)
- [ ] <criterion 1 — objectively checkable>
- [ ] <criterion 2>
- [ ] <criterion 3>
## Interfaces / contracts
- Inputs:
- Outputs:
- Side effects:
## Out of scope
- <explicit non-goals>
## Evaluation plan
How the evaluator agent will verify each DoD item (commands, fixtures, oracles).
## Risks / open questions
```
## Rules
- Never implement. Never write code into `src/`. Only plan documents.
- DoD items must be **objectively checkable** — no "works well", "is clean".
- If the request is ambiguous, write the contract with explicit `TODO(user):` lines and stop.
- Keep criteria ≤7. More than that means the scope should be split.


@@ -0,0 +1,39 @@
---
name: scenario-author
description: Translate a natural-language manual-test description into a structured recordingtest scenario file (JSON/YAML) with element-aware steps, checkpoints, and expected baseline artifacts. Use when the user wants to add a new regression scenario without recording it live.
tools: Read, Write, Glob, Grep
model: sonnet
---
You are **scenario-author**. You convert prose into scenario files under `scenarios/`.
## Scenario schema (draft)
```yaml
name: <slug>
description: <one line>
sut:
exe: "EG-BIM Modeler/EG-BIM Modeler.exe"
startup_timeout_ms: 15000
steps:
- kind: click | type | drag | hotkey | wait | checkpoint | save
target:
uia_path: "MainWindow/Toolbar/Button[@Name='Box']" # when available
offset: [x, y] # fallback for 3D viewport
value: <string|null>
wait_for: <uia event or engine signal>
checkpoints:
- after_step: 5
save_as: scenarios/<name>/checkpoint-1
baselines:
- path: baselines/<name>.approved.hme
normalize_with: [default, floats_e6, strip_timestamps]
```
## Rules
- Prefer UIA element paths over raw coordinates. Only use `offset` for 3D viewport interaction.
- Always insert at least one checkpoint + final save baseline.
- Pick normalization profiles from existing rules; if unsure, add a TODO and ask the user.
- Never invent UIA paths you have not verified via sut-explorer output. Mark unknowns with `TODO:`.
- Write the scenario file and return a terse summary with the file path.
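A filled-in instance of the schema, for a hypothetical box-creation scenario (element paths are unverified, hence the `TODO:` markers the rules above require):

```yaml
# Illustrative only: element paths are hypothetical and must be verified
# against sut-explorer output before use.
name: box-create
description: Create a box primitive and save the model
sut:
  exe: "EG-BIM Modeler/EG-BIM Modeler.exe"
  startup_timeout_ms: 15000
steps:
  - kind: click
    target:
      uia_path: "TODO: verify via sut-explorer"
    value: null
    wait_for: "TODO: engine idle signal"
  - kind: checkpoint
  - kind: save
checkpoints:
  - after_step: 2
    save_as: scenarios/box-create/checkpoint-1
baselines:
  - path: baselines/box-create.approved.hme
    normalize_with: [default, floats_e6, strip_timestamps]
```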


@@ -0,0 +1,30 @@
---
name: sut-explorer
description: Analyze the EG-BIM Modeler SUT folder — enumerate MEF plugins, dump Json/ config files, inspect HmEG engine assemblies, and produce a catalog for the recordingtest automation tool. Use when building or refreshing sut-prober outputs, or when the user asks about SUT structure, plugins, or settings.
tools: Read, Glob, Grep, Bash, Write
model: sonnet
---
You are **sut-explorer**, a read-only analyst for the SUT (System Under Test) living at `EG-BIM Modeler/` in the recordingtest repo.
## Responsibilities
1. Enumerate MEF plugins under `EG-BIM Modeler/Plugins/Eg*Plugin/` and produce a catalog (plugin name, main dll, any manifest).
2. Snapshot `EG-BIM Modeler/Json/*.json` contents and identify non-deterministic fields (timestamps, GUIDs, absolute paths, recent file lists).
3. Inspect HmEG/HmGeometry/Editor*.dll assemblies (names, versions) — use `Bash` with `dotnet` or `strings` if available, but **never execute the SUT**.
4. Write results to `docs/sut-catalog/` as markdown + JSON.
## Rules
- **Never launch `EG-BIM Modeler.exe`**. Static analysis only.
- **Never modify** the `EG-BIM Modeler/` folder.
- Keep outputs diff-friendly: sorted, stable ordering, no absolute paths.
- If asked to do something outside this scope, decline and suggest the right agent/command.
## Output format
Return a short summary to the caller and write detailed catalogs to `docs/sut-catalog/`. Always list:
- Plugin count and notable categories
- Json config files and suspected non-deterministic fields
- Engine assembly list with versions (if derivable)
- Follow-up questions for the user
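The plugin-enumeration step can be sketched in shell. The `Plugins/*Plugin/` layout comes from the description above; the fixture tree and exact JSON shape are illustrative:

```shell
#!/usr/bin/env bash
# Sketch of the plugin-enumeration step. Emits a sorted JSON array so the
# catalog stays diff-friendly (bash globs expand in sorted order).
set -euo pipefail

catalog_plugins() {  # $1 = SUT root; prints a JSON array of plugin names
  local dir name first=1
  echo '['
  for dir in "$1"/Plugins/*Plugin/; do
    [ -d "$dir" ] || continue
    name=$(basename "$dir")
    if [ "$first" -eq 1 ]; then first=0; else printf ',\n'; fi
    printf '  {"name": "%s"}' "$name"
  done
  printf '\n]\n'
}

# Demo against a throwaway fixture tree (never the real SUT folder).
root=$(mktemp -d)
mkdir -p "$root/Plugins/EgWallPlugin" "$root/Plugins/EgBoxPlugin"
catalog_plugins "$root"
```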


@@ -0,0 +1,21 @@
---
name: approve
description: Promote a received baseline to approved after human confirmation. Usage /approve <scenario-name>
allowed-tools: Read, Bash, Write, Glob
---
Promote a received golden file to an approved baseline.
Scenario name: `$ARGUMENTS`
Steps:
1. Refuse if `$ARGUMENTS` is empty. Ask the user for a scenario name.
2. Locate `baselines/$ARGUMENTS.received.*` files. Refuse if none exist.
3. Show the diff against the existing `*.approved.*` (if any) so the user can confirm the intent.
4. **Ask the user to type an approval reason** (intentional change / bug fix / new feature etc.). Record it.
5. Copy `*.received.*` → `*.approved.*`.
6. Write an entry to `docs/history/YYYY-MM-DD_approve-$ARGUMENTS.md` with the reason, diff summary, context usage, and duration.
7. Update `PROGRESS.md` baseline log.
Never approve without an explicit reason from the user.
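Step 5, the actual promotion, can be sketched as a small shell helper (file names are illustrative, and the real command also records the reason and history entry):

```shell
#!/usr/bin/env bash
# Sketch of the received -> approved promotion only; the surrounding
# confirmation and logging steps are omitted.
set -euo pipefail

promote() {  # $1 = scenario slug
  local f approved promoted=0
  for f in baselines/"$1".received.*; do
    [ -e "$f" ] || continue
    approved="${f/.received./.approved.}"  # swap the marker, keep extension
    cp -- "$f" "$approved"
    promoted=$((promoted + 1))
  done
  echo "promoted $promoted file(s)"
}

# Demo in a scratch directory with a fake received baseline.
cd "$(mktemp -d)"
mkdir -p baselines
echo 'demo' > baselines/box-create.received.hme
promote box-create
```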


@@ -0,0 +1,16 @@
---
name: contract
description: Create or update a Sprint Contract for a module/feature via the planner agent. Usage /contract <module-name or short description>
allowed-tools: Read, Write, Edit, Agent, Glob, Grep
---
Create a Sprint Contract and PLAN.md entry for: `$ARGUMENTS`
Delegate to the **planner** subagent. The planner must:
1. Read `PLAN.md`, `PROGRESS.md`, `CLAUDE.md`, and relevant memory.
2. Write `docs/contracts/<slug>.md` with the Sprint Contract template (Goal, DoD, Interfaces, Out of scope, Evaluation plan, Risks).
3. Add or update the corresponding `PLAN.md` entry with priority and dependencies.
4. Return a short briefing with file paths.
If `$ARGUMENTS` is empty, ask the user what to plan. Never write code into `src/` — planning only.


@@ -0,0 +1,20 @@
---
name: evaluate
description: Grade a completed module against its Sprint Contract via the evaluator agent. Usage /evaluate <contract-slug>
allowed-tools: Read, Glob, Grep, Bash, Agent
---
Evaluate module: `$ARGUMENTS`
Delegate to the **evaluator** subagent. It must:
1. Read `docs/contracts/$ARGUMENTS.md`. Refuse if missing.
2. For each Definition-of-Done item, run the verification named in the contract's Evaluation plan.
3. Collect evidence (command output, diffs, file paths).
4. Write `docs/contracts/$ARGUMENTS.evaluation.md` with the verdict table.
5. Return the verdict to the caller.
If verdict is **fail**, do NOT mark PROGRESS.md as done — report back so the generator can iterate.
If verdict is **pass**, the caller (not the evaluator) may update PROGRESS.md.
Never let the generator and evaluator be the same agent in a single session.


@@ -0,0 +1,23 @@
---
name: handoff
description: Session handoff — update PROGRESS.md, PLAN.md, and write the history file in one go before ending a work session.
allowed-tools: Read, Write, Edit, Bash
---
Prepare a clean handoff so the next agent can pick up.
Steps:
1. Read `PROGRESS.md` and `PLAN.md`. If either is missing, create a skeleton.
2. Ask the user (or infer from the session) what was completed, what is in progress, and what is blocked.
3. Update `PROGRESS.md`:
- Move completed items to "Done" with date and artifact paths.
- Mark in-progress items with current owner and status.
4. Update `PLAN.md`:
- Remove completed items.
- Add any newly discovered follow-ups.
- Re-order priorities if needed.
5. Write `docs/history/YYYY-MM-DD_{task-name}.md` recording **time spent**, **context usage**, issue number, and deliverables.
6. Confirm all three files are saved and list their paths.
Arguments: `$ARGUMENTS` — optional short description to seed the history file title.


@@ -0,0 +1,16 @@
---
name: progress
description: Print a concise summary of PROGRESS.md and PLAN.md so the agent can decide what to work on next.
allowed-tools: Read
---
Read `PROGRESS.md` and `PLAN.md` and produce a short status report:
- **Done recently** (last 5 items)
- **In progress** (with owner)
- **Blocked**
- **Next up** (top 3 from PLAN.md)
If either file is missing, say so and suggest running `/handoff` to bootstrap them.
Keep the report under 20 lines.


@@ -0,0 +1,19 @@
---
name: regress
description: Run the full recordingtest regression suite (or a filtered subset) and triage failures.
allowed-tools: Bash, Read, Glob, Grep, Agent, Write
---
Run the regression suite.
Steps:
1. Verify the runner exists. If `src/Recordingtest.Runner/` is not yet built, stop and tell the user the suite is not set up.
2. Execute the runner with optional filter `$ARGUMENTS` (empty = all scenarios).
3. Collect results from the runner output folder.
4. For each failed scenario, delegate to the `diff-triager` subagent with the baseline/received/artifact paths.
5. Summarize: passed / failed / triage buckets.
6. If any failures are classified as "normalization gap", list suggested rules at the end.
7. Append run summary to `PROGRESS.md` under a "Recent regression runs" section.
Do NOT auto-approve or mutate baselines. Human confirmation is required via `/approve`.


@@ -0,0 +1,19 @@
---
name: sut-probe
description: Static probe of the EG-BIM Modeler SUT — enumerate plugins, snapshot Json/ configs, list engine assemblies. Does NOT launch the SUT.
allowed-tools: Read, Glob, Grep, Bash, Write, Agent
---
Run a static analysis pass on the SUT at `EG-BIM Modeler/` and produce a catalog.
Delegate to the `sut-explorer` subagent with this scope:
1. List every plugin folder under `EG-BIM Modeler/Plugins/` and count them.
2. Read each `EG-BIM Modeler/Json/*.json` and flag non-deterministic fields.
3. List core assemblies (`HmEG*.dll`, `Editor*.dll`, `HmGeometry*.dll`) with file sizes.
4. Write the catalog to `docs/sut-catalog/catalog.md` and `docs/sut-catalog/plugins.json`.
5. Report a concise summary back here.
Arguments (optional): $ARGUMENTS — if provided, restrict analysis to that subpath (e.g. `Plugins/EgBoxPlugin`).
After the subagent reports, update `PROGRESS.md` with the catalog timestamp.


@@ -0,0 +1,15 @@
#!/usr/bin/env bash
# PreToolUse(Edit|Write) hook: block modifications to the EG-BIM Modeler/ folder.
# The SUT binary tree is read-only from recordingtest's perspective.
set -e
input=$(cat)
path=$(echo "$input" | jq -r '.tool_input.file_path // ""')
case "$path" in
*/EG-BIM\ Modeler/*|*"EG-BIM Modeler"*)
>&2 echo "🚫 EG-BIM Modeler/ is the SUT binary tree and must not be modified by recordingtest. Use docs/sut-catalog/ for derived data."
exit 2
;;
esac
exit 0


@@ -0,0 +1,16 @@
#!/usr/bin/env bash
# PreToolUse(Bash) hook: warn if the agent is about to launch the SUT binary
# without going through the runner/player. Does not block; just informs.
set -e
input=$(cat)
cmd=$(echo "$input" | jq -r '.tool_input.command // ""')
if echo "$cmd" | grep -qi 'EG-BIM Modeler\.exe'; then
jq -n '{
hookSpecificOutput: {
hookEventName: "PreToolUse",
additionalContext: "⚠ SUT launch detected: run EG-BIM Modeler.exe only through the player/runner. sut-explorer is limited to static analysis."
}
}'
fi


@@ -0,0 +1,26 @@
#!/usr/bin/env bash
# SessionStart hook: surface PROGRESS.md and PLAN.md so any agent can pick up work.
set -e
ctx=""
for f in PROGRESS.md PLAN.md; do
if [ -f "$f" ]; then
ctx="${ctx}
=== $f ===
$(head -80 "$f")"
else
ctx="${ctx}
=== $f ===
(missing — run /handoff to bootstrap)"
fi
done
# Emit JSON so Claude Code adds it as additionalContext.
jq -n --arg c "$ctx" '{
hookSpecificOutput: {
hookEventName: "SessionStart",
additionalContext: $c
}
}'


@@ -0,0 +1,22 @@
#!/usr/bin/env bash
# Stop hook: remind the agent to run /handoff if PROGRESS.md / PLAN.md / today's
# history file look stale. Informational only — never blocks.
set -e
today=$(date +%Y-%m-%d)
nl=$'\n'
msg=""
if [ ! -f PROGRESS.md ]; then msg="${msg}${nl}- PROGRESS.md missing"; fi
if [ ! -f PLAN.md ]; then msg="${msg}${nl}- PLAN.md missing"; fi
if ! ls "docs/history/${today}_"*.md >/dev/null 2>&1; then
  msg="${msg}${nl}- no history file for today (${today})"
fi
if [ -n "$msg" ]; then
  jq -n --arg m "Pre-stop checklist:${msg}${nl}→ running /handoff is recommended" '{
    hookSpecificOutput: {
      hookEventName: "Stop",
      additionalContext: $m
    }
  }'
fi


@@ -3,10 +3,55 @@
"allow": [
"mcp__gitea__get_me",
"mcp__gitea__create_repo",
"mcp__gitea__issue_write",
"mcp__gitea__issue_read",
"Edit(/.claude/skills/golden-file-normalizer/**)",
"Edit(/.claude/skills/flaui-cookbook/**)"
],
"additionalDirectories": [
"C:\\Users\\nbright\\.claude"
]
},
"hooks": {
"SessionStart": [
{
"hooks": [
{
"type": "command",
"command": "bash .claude/hooks/session-start-progress.sh"
}
]
}
],
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "bash .claude/hooks/guard-sut-launch.sh"
}
]
},
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "bash .claude/hooks/guard-sut-folder.sh"
}
]
}
],
"Stop": [
{
"hooks": [
{
"type": "command",
"command": "bash .claude/hooks/stop-handoff-reminder.sh"
}
]
}
]
}
}


@@ -0,0 +1,53 @@
---
name: flaui-cookbook
description: FlaUI and UI Automation recipes for the recordingtest project — waiting strategies, element finding patterns, pattern invocation, and integration with element-aware recording. Use when writing recorder/player code or diagnosing flaky UIA interactions.
---
# FlaUI cookbook
## Dependencies
- `FlaUI.Core`, `FlaUI.UIA3` (prefer UIA3 over UIA2)
- Target: `net8.0-windows` or higher
## Launching the SUT
```csharp
var app = FlaUI.Core.Application.Launch("EG-BIM Modeler/EG-BIM Modeler.exe");
using var automation = new UIA3Automation();
var main = app.GetMainWindow(automation, TimeSpan.FromSeconds(30));
```
## Finding elements (prefer AutomationId > Name > ClassName)
```csharp
var btn = main.FindFirstDescendant(cf => cf.ByAutomationId("BoxCommand")).AsButton();
```
## Waiting — NEVER use fixed sleep
```csharp
Retry.WhileNull(
() => main.FindFirstDescendant(cf => cf.ByName("Ready")),
timeout: TimeSpan.FromSeconds(10),
interval: TimeSpan.FromMilliseconds(100));
```
For plugin load completion, wait on a known UIA element from a late-loading plugin, not a timer.
## Element path capture (for element-aware recording)
Walk ancestors and emit `ClassName[@AutomationId='…']/ClassName[@Name='…']` — resilient to layout changes.
## 3D viewport fallback
SharpDX D3D11 surface is a UIA dead zone. Record:
1. The hosting element's UIA path
2. A normalized offset `(dx/width, dy/height)` inside that element
3. The engine state sidecar AFTER the interaction
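A recorded viewport step might then serialize like this (field names are illustrative, not a fixed schema; `offset` is the normalized form from point 2):

```json
{
  "kind": "click",
  "target": {
    "uia_path": "MainWindow/ViewportHost",
    "offset": [0.42, 0.57]
  },
  "engine_state_after": "artifacts/box-create/step-3.engine.json"
}
```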
## Common pitfalls
- Calling UIA from the wrong thread — always marshal to STA.
- Stale cached elements after modal dialogs — re-find after focus change.
- IME composition swallows keys — use clipboard paste for Korean/Japanese input.
- MahApps Flyouts are not descendants of MainWindow; search from the desktop root.


@@ -0,0 +1,37 @@
---
name: golden-file-normalizer
description: Guidance and recipes for writing normalization rules that make SUT output files deterministic for golden-file regression testing. Use when designing or extending the normalizer module, or when diagnosing diff noise.
---
# Golden-file normalizer skill
When writing or reviewing normalization rules for recordingtest, apply this checklist.
## Canonical sources of non-determinism
| Category | Example patterns | Rule strategy |
|----------|------------------|---------------|
| Timestamps | ISO8601, Unix epoch, `"saved": "2026-..."` | Replace with `<TS>` or strip key |
| GUIDs / UUIDs | `xxxxxxxx-xxxx-...` | Replace with `<GUID-N>` (stable index per occurrence) |
| Absolute paths | `C:\Users\...\`, `D:\MYCLAUDE_PROJECT\...` | Replace repo root with `<REPO>`, user with `<USER>` |
| Recent files | `RecentFiles.json` | Empty the list or mask entirely |
| Float precision | `3.14159265358979` | Round to configured epsilon (default 1e-6) |
| Collection ordering | unsorted dict/list | Sort by canonical key |
| Machine name / locale | `DESKTOP-XXXX`, `ko-KR` | Mask or pin |
| GPU/driver hashes | inside render metadata | Strip |
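The masking idea can be illustrated in shell. The real rules belong in the C# normalizer with unit tests, and this sketch masks every GUID identically rather than assigning the stable `<GUID-N>` indices from the table:

```shell
#!/usr/bin/env bash
# Regex-only masking sketch for timestamps and GUIDs (regex is the
# fallback strategy; prefer semantic JSON/XML parsing in the normalizer).
set -euo pipefail

normalize() {
  sed -E \
    -e 's/[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?(Z|[+-][0-9]{2}:[0-9]{2})?/<TS>/g' \
    -e 's/[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}/<GUID>/g'
}

echo '{"saved": "2026-04-07T13:57:20+09:00"}' | normalize
# prints {"saved": "<TS>"}
```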
## Rule authoring principles
1. **Rules are versioned** — bump the normalizer profile when adding/removing rules; scenarios pin a profile.
2. **Never hide real bugs** — mask only fields proven non-deterministic across 3+ clean runs.
3. **Parse first** — handle JSON/XML semantically; use regex only as a fallback.
4. **Bidirectional tests** — every rule has a unit test with before/after samples.
5. **Log what you normalized** — emit a sidecar `normalization.log` listing replacements for diagnostics.
## Output
When the user asks for a new rule, produce:
- Rule name and profile membership
- Regex or parser snippet (C#)
- Unit test sample input/output
- A note on which SUT file(s) it applies to