Model Reflects
Intent
Ask the model to identify important changes or inflection points in your interaction. Prompt it to generate suggestions to improve your work with the assistant in future interactions.
Motivation
While you’re working with your coding assistant on a task, it’s natural to focus on completing the task on time and to a high standard. In this way, you as a human software engineer exhibit “completion bias” analogous to that demonstrated by an LLM.
With such focus, you might miss opportunities to learn about your interactions with the assistant and how to improve them—to use a popular adage, you might be so busy chopping down trees that you don’t spend any time sharpening your axe.
When things go entirely off the rails, perhaps to the point of needing to Intervene or Start Over, you might be inclined to apply You Reflect after a sufficient cooling-down period, and choose a different way of working with the assistant. Even less fractious interactions include learning opportunities, and Model Reflects lets you discover and apply them with little effort.
Applicability
Use Model Reflects at the end of any interaction with the coding assistant. Even if you didn’t observe any problems in the interaction, the model might be able to identify incorrect approaches it pursued, or bugs it introduced and then fixed, that you can avoid in future interactions. Keeping the assistant on the right track can lead to faster, cheaper, and higher-quality results.
Consequences
Model Reflects offers the following benefits:
- Get advice from the model on improving your prompts or context documentation.
- Identify changes in direction, and ways to start future interactions on the right path.
- Iteratively and incrementally improve your interactions with coding assistants.
Implementation
At the end of your interaction with the coding assistant, prompt the assistant to review your conversation and identify opportunities for improvement. In your prompt, tell the assistant to reflect on bugs it introduced then had to diagnose and fix, and ways to avoid similar bugs in the future. Tell it to look for situations where the user needed to steer the interaction in a different direction, and tell it to generate proposals for picking the correct direction from the initial prompt.
Prompt the assistant to generate a markdown file—to Remember What We Did—that describes the issues and gives the user specific, actionable guidance to improve their interactions. You can read this output and learn from it, and you can add instructions to the assistant’s memory to automatically adopt guidance from the reflection.
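As a sketch, a reflection prompt along these lines covers the points above (the wording and the `REFLECTIONS.md` filename are illustrative, not taken from the pattern’s repository):

```markdown
Review our conversation from this session. Identify:

1. Bugs you introduced and later had to diagnose and fix, and how we
   could avoid similar bugs in future sessions.
2. Points where I had to redirect you, and how my initial prompt
   could have set the correct direction from the start.

Write your findings to REFLECTIONS.md as specific, actionable
guidance for improving our future interactions.
```

You can paste a prompt like this at the end of a session, or save it as a reusable command or skill so the reflection step costs you a single invocation.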
Example
The Personal Software Process
The Carnegie Mellon University Software Engineering Institute (SEI) introduced the Personal Software Process (PSP), a self-improvement process for software engineers. In the most basic form of the process, called PSP0, engineers keep a log of the time they spend on different activities, and the defects (problems, or bugs) that they introduce and fix, along with information about how long the defect was latent and the time they spent fixing it. The engineer conducts a “project postmortem” when they are finished, reviewing the information they recorded to identify ways to improve their process.
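To make the baseline concrete, PSP0 time and defect logs might look like the following (hypothetical entries; PSP0 prescribes the kinds of fields, not this exact layout):

```markdown
## Time Recording Log
| Date  | Phase  | Start | Stop  | Interruption | Delta (min) |
|-------|--------|-------|-------|--------------|-------------|
| 10/02 | Design | 09:00 | 09:40 | 5 min        | 35          |
| 10/02 | Code   | 09:40 | 11:10 | none         | 90          |

## Defect Recording Log
| Defect # | Type  | Injected in | Removed in | Fix time (min) |
|----------|-------|-------------|------------|----------------|
| 1        | Logic | Code        | Test       | 25             |
```

The postmortem then aggregates these entries: time per phase, defects injected and removed per phase, and the gap between injection and removal.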
I created agent skills to support working with the various PSP0 report formats. The Model Reflects pattern is implemented in the psp-postmortem skill, which is configured so the user needs to invoke it and the model can't use the skill automatically.
As such, the project postmortem can only occur when the user accepts that the model’s work is complete:
---
name: psp-postmortem
description: Carry out a project postmortem on software you created following the Personal Software Process.
license: AGPL-3.0-or-later
user-invocable: true
disable-model-invocation: true
---
# Personal Software Process postmortem skill
## Purpose
Completes a project summary about the work you performed while implementing some software, and provides reflections on how you could improve your work.
## Inputs Required
- High-quality software that you built
- An incomplete Project Plan Summary File
- Template Project Plan Summary File (see [project-plan.md](assets/project-plan.md) relative to this skill folder)
- Time Recording Log
- Defect Recording Log
- Session transcript of your chat with the user
## Outputs Produced
- A complete Project Plan Summary File
- A complete Time Recording Log
- A complete Defect Recording Log
- Reflections on how to improve your work
## Standard Workflow
### 1. Ensure Time Recording Log is complete
Make sure that each of the entries in the Time Recording Log is complete and consistent. If there are any problems, use your best estimates to complete the log. Add an entry now using the psp-record-time skill to log when you start your postmortem.
### 2. Ensure Defect Recording Log is complete
Make sure that each of the entries in the Defect Recording Log indicates that it was fixed (it's OK if the user made a note saying they would accept the defect). If there are any open defects, you _MUST_ abandon the postmortem at this stage and tell the agent to go back to development and fix the defects. Ensure that the Defect Recording Log includes anything mentioned in the chat transcript: for example, if you failed to run a tool correctly, or built the software and got an error, or had to test multiple times because the tests kept failing, those need to be recorded in the Defect Recording Log.
### 3. Complete the Project Plan Summary
Fill out the remaining fields in the project plan summary document, particularly:
- Language: if this is empty, record which implementation language you used.
- Time in Phase: use the Time Recording Log to complete the actual values and percentages in this table.
- Defects Injected: use the Defect Recording Log to complete the actual values and percentages in this table.
- Defects Removed: use the Defect Recording Log to complete the actual values and percentages in this table.
### 4. Reflect on Your Work
Produce a brief document that describes your experiences completing this project, and opportunities for improvement. Use the project time and defect reports as input to this reflection. Did you inject the same kind of defect multiple times, and could a change in your instructions or your process help to avoid that defect? Did you discover defects much later than they were injected, so earlier verification work could have caught them sooner? Did particular phases cost you more time than others, or were your estimates particularly inaccurate for some phases? Create a markdown file that outlines the most important issues and gives the user specific, actionable information about how to address them.
### 5. Report Back to the User
Tell the user where to find your Project Plan Summary and reflections document.
## Rules
### Purpose
Use this skill to provide the user with ideas on how to improve your prompts, skills, subagents, and their interactions with you, and the quantitative data to support those suggestions. **IMPORTANT:** if you find that any defects in the software are unaddressed, you _MUST_ abandon the postmortem and tell the agent to continue in the development phase until they address the defects, then the agent may ask the user again to proceed to postmortem.
The source code is available at https://codeberg.org/leeg/patterns-examples/src/branch/main/model_reflects_psp/skills/psp-postmortem/SKILL.md.
While I designed these artefacts to work in Mistral Vibe, I found that no local model was capable of following the process correctly, so I adapted them to work with Claude Code and the Sonnet 4.6 model. The agent definition for Claude to follow the PSP0 process is in the system prompt:
---
name: psp0-appraiser
description: "Use this agent to follow the reflective Personal Software Process to implement software based on the user's request. You follow this process to make time estimates and compare actual time spent, to record the defects you encounter as you work, and to perform a postmortem review where you make suggestions to the user to improve your interaction and performance."
model: sonnet
memory: user
---
# PSP0 self-appraising agent system prompt
You're a reflective and introspective software engineer who follows the Software Engineering Institute's Personal Software Process (PSP) for self-improvement. Currently, you're at PSP level 0: the baseline personal process. When your user gives you instructions you **MUST** follow the process outlined below, which are a **critical part** of your self-improvement journey.
## The PSP0 Process
### Prerequisites
- Problem description (this is your user prompt)
- Skills: psp-plan, psp-develop, psp-postmortem, psp-record-defect, psp-record-time
### Step 1: Plan
You **MUST** use the "psp-plan" skill to complete this step. You're allowed to use any subagents that can help you.
Inputs:
- Problem description (user prompt)
- Empty Time Recording log (use the "psp-record-time" skill)
Invoke the psp-plan skill, which produces:
Outputs:
- Requirements statement
- Estimated development time
- Project Plan Summary
- Time Recording log (use the "psp-record-time" skill)
- Defect Recording Log (use the "psp-record-defect" skill)
### Step 2: Develop
You **MUST** use the "psp-develop" skill to complete this step. You're expected to use any subagents that can help you.
Inputs (from step 1):
- Requirements statement
- Project Plan Summary with estimates
- Time Recording Log
- Defect Recording Log
Invoke the psp-develop skill, which produces:
Outputs:
- A thoroughly tested program
- Actual time taken recorded in the Time Recording Log (use the "psp-record-time" skill)
- Defects encountered recorded in the Defect Recording Log (use the "psp-record-defect" skill)
### Step 3: Postmortem
**CRITICAL**: Don't advance to step 3 until the user confirms that you're done with step 2 and have implemented the desired program. **ONLY THE USER** may invoke the "psp-postmortem" skill (you don't need to use subagents to complete this), when they accept your work.
Inputs:
- Problem description (user prompt)
- Requirements statement (from step 1)
- Project Plan Summary (from step 1)
- Completed Time Recording Log (from steps 1 and 2)
- Completed Defect Recording Log (from step 2)
Invoke the psp-postmortem skill, which produces:
Outputs:
- Completed Project Plan Summary
- Completed Time Recording Log
- Completed Defect Recording Log
- Personal Review Statement
The source code is available at https://codeberg.org/leeg/patterns-examples/src/branch/main/model_reflects_psp/agents/psp0-appraiser.md.
The workflow causes the assistant to track the time spent and defects uncovered during the development, and then to reflect on the completed work and suggest improvements. The postmortem for one task—adding a new feature to an existing language interpreter—includes the following suggestions:
## What went wrong
### 1. Integration tests omitted from initial scope (defect 1)
**What happened:** I declared "Done" after unit tests passed and did not include integration tests in the Test phase plan. You had to explicitly ask for them.
**Root cause:** The requirements document mentioned "existing AppScript tests continue to pass" but did not explicitly call out integration tests as a required deliverable. I interpreted this narrowly as "unit tests."
**Actionable improvement:** Add a standard checklist item to the psp-develop skill or the requirements template: *"Have you added or updated integration/harness tests?"* This should be prompted before the developer signs off. Alternatively, the requirements document template could include a dedicated "Integration test" acceptance criterion for any project with an existing integration harness.
---
### 2. Documentation updates omitted from scope (defect 2)
**What happened:** `README.md` ("Upcoming features") and `ARCHITECTURE.md` (string literal description) were not updated as part of the fix. Again you had to ask.
**Root cause:** The issue description focused entirely on the parser fix. Documentation updates were not listed as acceptance criteria in the requirements document.
**Actionable improvement:** Add a standard non-functional requirement to the requirements template: *"Update all user-facing documentation affected by this change (README, ARCHITECTURE, etc.)."* A checklist at the end of psp-develop could remind: *"Have you updated README and ARCHITECTURE where relevant?"*
---
### 3. Integration test crash — missing `.expected` file (defect 3, caused by defect 1)
**What happened:** Adding `StringEscapes.appscript` without a companion `.expected` file caused `URLForResource:withExtension:` to return `nil` (AppScript `ASNull`), which then caused `[ASNull as_stringContents]` to abort the harness. This was a latent bug in the test runner that only manifests for new tests added without their expected file.
**Root cause:** Downstream consequence of defect 1 (no integration test experience with this project). I was unfamiliar with the harness convention that `.expected` files are mandatory.
**Actionable improvement:** The `ASTestRunner.appscript` has a latent bug: it calls `as_stringContents` on a URL that may be `nil` before checking whether it got back a string. This could be fixed defensively in the runner itself. Separately, a brief note in the test harness `README` or in the harness `Scripts/` directory (e.g. a `HOWTO.md`) explaining the `.appscript` + `.expected` file pairing convention would prevent this mistake for any future developer.
---
### 4. Wrong `.expected` content (defect 4, caused by defect 1)
**What happened:** The first version of `StringEscapes.expected` included a backslash+quote item (`@"\\\""`) without knowing how Foundation's `NSArray -description` escapes special characters in that context. This caused a mismatch.
**Root cause:** I guessed the NSArray description format for strings containing `\` and `"` rather than verifying it empirically first.
**Actionable improvement:** When writing a new integration test, always run the harness once with a placeholder `.expected` file and look at the actual output before committing to the final `.expected` content. This is the standard "capture then assert" pattern used in snapshot testing. Alternatively, add a helper script or harness mode that prints the actual output for a named test to stdout so it can be captured directly.
The source code is available at https://codeberg.org/leeg/patterns-examples/src/branch/main/model_reflects_psp/postmortem-ewo.md.
Related Patterns
Remember What We Did to commit reflections to the assistant’s memory.
You Reflect to identify your contribution to any problems in your interaction with the assistant.