
You Reflect

Intent

Review the transcript of your interaction with an agent, recall the tense moments, and plan to change your approach and mindset.

Motivation

Working with an AI coding agent can be fun, and it can be frustrating. At times, you can be surprised that the agent generates code that appears to be correct from a straightforward prompt. At other times, the surprise is that no matter how specifically you state a requirement, the model ignores it—or worse, complies in what appears to be a deliberately malicious manner.

While it’s natural to experience emotional responses to successes or setbacks in your activities, the “chat bot” style of user interface that many agents employ amplifies the likelihood and strength of the emotional response. In emulating written conversation—and even providing outward signs of enthusiasm, agreement, and other collegial emotions—developers of AI coding agents put their products into an “uncanny valley” where the experience mimics having a conversation with an intelligent peer, but without any of the benefits that would provide.

When you work with a coding agent, you’re the only actor who has an emotional response to the interaction—the model simply continues to stream tokens, unaffected. It doesn’t adapt to your emotional state in the way that an intelligent, empathetic collaborator would, which can amplify negative sentiment as you (correctly) don’t feel that the agent listens to your issues. Where a human collaborator might sense your emotional state through non-verbal signs or changes to your language, and make a joke to lighten the mood or suggest a change of task or a break, the agent doesn’t pick up on these cues and continues in the tone its pre-training gave it.

Getting angry or frustrated with your experiences using the agent can lead you to make bad decisions, incorrectly specify prompts, or avoid using AI-augmented tools for future interactions where they would prove helpful. It can also have negative effects on your interactions with (human) colleagues, if your emotions about your work with the agent spill over into other contexts.

Take a break, understand why your interaction led to your emotional response, and create a strategy for future interactions to be more productive.

Applicability

Use You Reflect after a long interaction with an agent, in which the agent’s behaviour deviates from your expectations. The opportunity to reflect might come after the agent fails to complete the task for which you prompt it; when you have to give a lot of additional direction to the agent to get it to do the work the way you expect; or when the agent fails to pay attention to some important information no matter how much you repeat it.

You can also reflect after a successful interaction, to identify what went well. You Reflect is most helpful, though, at the end of a problematic session, as it leads you to understand the emotions you bring to the interaction, and how to improve your work with the agent.

Consequences

You Reflect offers the following benefits:

  • Recognise and take a step back from a poor experience.
  • Identify unhelpful emotional responses and take a different approach.
  • Reset your expectations of the “discussion” with an agent.

Implementation

Apply You Reflect by following these steps:

  1. Recognise a situation in which you are adversely emotionally affected by your interaction with a coding assistant; for example, you might be upset that it doesn’t seem to understand you, or frustrated that it doesn’t get something right.
  2. Save the transcript of your interaction with the agent to a file.
  3. If you need to take a break before you can proceed with the rest of the pattern, do so.
  4. Read and annotate the transcript, paying attention to signs of your changed emotional state, for example: chastising the agent; using aggressive or insulting language; changing the way in which you identify the agent; altering the emotional register of your speech; or repeating yourself.
  5. Reflect on the interaction to identify the causes of your reactions.
  6. Design different approaches that would allow you to continue the task with a more positive attitude.
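Step 4 can be kick-started mechanically. As a minimal sketch, the hypothetical helper below scans a saved transcript for one tell-tale sign of frustration: repeated, near-identical prompts. It assumes user turns are prefixed with "User:" and uses a similarity threshold of 0.8; both are assumptions to adjust to your agent's transcript format.

```python
# Hypothetical aid for annotating a transcript (step 4): flag pairs of
# near-duplicate prompts, one sign that you were repeating yourself.
from difflib import SequenceMatcher


def repeated_prompts(transcript: str, threshold: float = 0.8) -> list[tuple[str, str]]:
    """Return pairs of user prompts whose similarity ratio meets the threshold."""
    prompts = [line[len("User:"):].strip()
               for line in transcript.splitlines()
               if line.startswith("User:")]
    pairs = []
    for i, earlier in enumerate(prompts):
        for later in prompts[i + 1:]:
            if SequenceMatcher(None, earlier, later).ratio() >= threshold:
                pairs.append((earlier, later))
    return pairs


# Toy transcript in the assumed "User:"/"Agent:" format.
transcript = """\
User: Fix the two issues in Entry.m. Don't touch anything else.
Agent: I'll update Entry.m now.
User: Please fix the two issues in Entry.m. Don't touch anything else.
"""

for earlier, later in repeated_prompts(transcript):
    print(f"Repeated prompt: {earlier!r} -> {later!r}")
```

The output is only a starting point for reflection: a flagged pair tells you where to look in the transcript, not why you were repeating yourself.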

Identify specific steps in the interaction that lead to emotional responses. For example, if the model consistently generates incorrect commands to control your build system, note that “I found it frustrating that I prompted the model three times to use Maven to build the product, and each time it created an Ant build.xml file,” not “I’m angry that the stupid clanker can’t build my stupid project.”

Recording specific problems and their emotional effects helps you to analyse your emotional state and improve your ability to interact productively with the agent (and possibly with others). It also helps you determine specific actions that can prevent the situation you identified from recurring. For example, when the model generates the wrong build command, consider using Extract Prompt or Replace Vibes with Tools to take more control over how the agent builds your software.

Example

Repeating Myself

I tried to use Mistral Vibe with the Devstral 2 model to find and fix memory management errors in an existing project on my computer, written in an old dialect of the Objective-C programming language that predates the language’s Automatic Reference Counting (ARC) facility:

Can you check for Objective-C memory management errors in this project?

The interaction went poorly over several sessions, with the model repeatedly failing to use tools correctly, not writing files, or making mistakes when editing files. I loaded transcripts of the sessions (6 in total) into the Taguette analysis tool, and reviewed my interactions with the agent. This showed that after applying Save Output to File to try to break the interaction into multiple steps, I got frustrated with the lack of progress, and expressed this frustration by repeating the same prompt multiple times with no change in result:

Read MEMORY_ANALYSIS_REPORT.md and fix the two issues in @Entry.m. Don't address any of the other issues.

And:

Take a look at MEMORY_ANALYSIS_REPORT.md and fix the two issues in Entry.m using the header at Entry.h for info.

And:

The file MEMORY_ANALYSIS_REPORT.md describes memory-management issues in this project. Please fix the issues in Entry.m only.

Discovering that I was both failing to address the problem and being stubborn in a way that would not lead to new insight, I decided to change my approach. I Intervened and fixed these issues myself, before searching for information about the tool-use problems I encountered—which I fixed by using a newer version of Mistral Vibe.

Related Patterns

Intervene if taking over helps you to make progress, and to feel better about the situation.

Disclose Ambiguity to understand whether there are multiple ways to interpret your prompt.

Ask for Alternatives to generate suggestions for alternative approaches.

Start Over if the situation looks unrecoverable and you want to try a different approach.