Generate Documentation

Intent

Prompt the model to describe the code it creates. Generate reading guides that help you understand and modify the code. Generate instruction manuals that describe how to use the code, and that form the basis for acceptance tests.

Motivation

Relying on generated program code can introduce an understanding gap, where you’re responsible for deploying the code but don’t know how to use it, how to make changes, or how completely it implements the requirements you described in your prompt. This understanding gap can cause problems when you—or your customers—discover that the generated code doesn’t work the way you expect, or proves difficult to modify when you attempt to address changing requirements.

Code generated by an LLM might not follow your expectations for modularity, organisation, or coding standards, making a read-through difficult and fatiguing—particularly if the model generates a lot of code. Prompt the model to generate a reading guide that explains the structure of its code, that you can follow to perform a code review, to split a review into smaller parts, or to plan refactorings that transform the generated code into your usual style.

Prompt the model to generate instructions for using the generated code. Review the instructions to ensure that the stated behaviour of the code matches your expectations. Follow the instructions manually to verify that the described process works correctly, with no stub implementations or bugs. Use the instructions as the design for automated tests that detect regressions in the documented process.
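For example, a documented process of "configure, sync, verify" can become an automated acceptance test that follows the same steps. The sketch below assumes a hypothetical sync_collection function; the real entry point and signature depend on the generated code:

```python
# Hypothetical acceptance test derived from a generated user guide.
# The function name sync_collection is illustrative, not taken from a
# real codebase; replace it with the entry point your code documents.

def sync_collection(collection, notebook, notes_by_item):
    """Stand-in for the documented sync step: copy each item's notes
    into the target notebook, one combined note per item."""
    return {item: "\n\n".join(notes) for item, notes in notes_by_item.items()}

def test_sync_follows_documented_steps():
    # Step 1 of the guide: the user selects a collection and a notebook.
    notes = {"item-1": ["first note", "second note"], "item-2": ["only note"]}
    # Step 2: the user initiates a sync.
    result = sync_collection("Reading List", "Research", notes)
    # Step 3: the guide promises one combined note per source item.
    assert set(result) == {"item-1", "item-2"}
    assert result["item-1"] == "first note\n\nsecond note"
```

A test like this detects regressions in the documented process: if a later session changes how notes are combined, the test fails before a user notices.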

Applicability

Use the pattern when you need to decide whether to accept code that you generate by prompting an LLM. Generate Documentation offers most support in cases where the model generates a large volume of code in a single session, or implements a complex workflow or a process with complicated setup requirements. For example, when the model generates code to communicate with an external API, Generate Documentation to learn how to supply API keys or client secrets, configure the API endpoint, and retrieve data from the API.

Consequences

Generate Documentation offers the following benefits:

  • Create the basis of a review of code that the model generates.

  • Compare the behaviour you request with the process implemented in the code the model generates.

  • Identify user interface interactions and code paths for subsequent testing.

Implementation

Generate Documentation by prompting the model to create a file that contains instructions for your intended use. For example, to generate a code reading plan, use a prompt like “Create a markdown file that helps a programmer who understands TypeScript learn how to read this code. Describe the high-level structure, key entry points, and indicate which files implement the important logic.”

To generate instructions that can act as a walkthrough, user guide, or acceptance test, use a prompt like “Create a markdown file that contains user instructions for someone to use the feature we just built. Describe any configuration steps that someone needs to follow to set up the software, then the actions they take in the user interface to exercise the feature.”

If you have existing exemplars of documents that follow the structure you expect, you can provide these as few-shot examples in your prompt.

When you Generate Documentation at the end of a session in which you generate the code, you might find that the documentation highlights particular details from your prompts during the session, for example algorithm choices or naming conventions. Generate Documentation from a Clean Slate to remove these details from the model’s context.

In subsequent coding sessions, you can refer to the generated documentation to direct the agent to load relevant files into the model’s context, instead of performing broad searches of the whole codebase. In this way you can treat the document as a “checkpoint” between sessions that summarises work done for both you and the model.

Examples

User Guide

After prompting Mistral Vibe using Devstral 2 to generate Python code to synchronise notes between a Zotero collection and an Evernote notebook, I used the following prompt with Devstral Small 2 to generate instructions for configuring and using the code from the command line:

Generate documentation in README.md that explains how someone uses this code to synchronize Zotero notes to Evernote.
Describe any setup tasks the person needs to perform to use the program, and what they do to initiate a sync and specify which collection and notebook get synchronized.

The model generated comprehensive documentation:

# Zoternote - Zotero to Evernote Sync Tool

Zoternote is a command-line tool that synchronizes notes from Zotero items to Evernote. It helps researchers and academics keep their literature notes organized by automatically transferring Zotero item notes to Evernote notebooks.

## Features

- Sync notes from Zotero collections to Evernote notebooks
- Create stable identifiers for each Zotero item to avoid duplicate notes
- Combine multiple notes from a single Zotero item into one Evernote note
- Update existing Evernote notes when Zotero notes change
- Preserve metadata like titles, authors, and publication years in note titles

## Installation

### Prerequisites

- Python 3.14+
- Zotero account with API access enabled
- Evernote developer account and API credentials
- `poetry` for dependency management (recommended)

### Install the package

[...]

Source code available at user-guide.md.
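One feature the guide lists, stable identifiers that prevent duplicate notes across repeated syncs, can be sketched with a content hash of the Zotero item key. The hashing scheme below is an assumption for illustration; the generated code may derive identifiers differently:

```python
import hashlib

def stable_note_id(zotero_item_key: str) -> str:
    """Derive a deterministic identifier from a Zotero item key, so that
    re-running a sync finds the existing target note instead of creating
    a duplicate. Truncating the digest keeps identifiers readable; the
    exact scheme is illustrative."""
    return hashlib.sha256(zotero_item_key.encode("utf-8")).hexdigest()[:16]
```

Because the identifier depends only on the item key, every sync run computes the same value for the same item, which is what makes duplicate detection possible.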

The setup instructions include guidance for creating API credentials, through which I discovered that, at the time of writing, Evernote did not generate API keys for new client applications. This finding led me to change the script requirements to use a different storage service for notes.

Code Walkthrough

To understand how to modify the script to work with an alternative storage service, I prompted Devstral Small 2 to create a code review guide:

Create a reading guide for this code in a markdown file. Explain where the entry points are, the important parts of the implementation, and which files and classes contain the interfaces to external services.

The model generated a structured document that explains how the source code and tests are organised:

# Zoternote Code Reading Guide

This guide provides an overview of the zoternote codebase, explaining its structure, key components, and how to navigate the implementation.

## Table of Contents
- [Project Overview](#project-overview)
- [Entry Points](#entry-points)
- [Core Architecture](#core-architecture)
- [Key Modules and Files](#key-modules-and-files)
- [Interfaces to External Services](#interfaces-to-external-services)
- [Data Flow](#data-flow)
- [Testing Strategy](#testing-strategy)
- [Configuration and Setup](#configuration-and-setup)

## Project Overview

Zoternote is a command-line tool that synchronizes notes from Zotero items to Evernote. It helps researchers and academics keep their literature notes organized by automatically transferring Zotero item notes to Evernote notebooks.

**Key Features:**
- Sync notes from Zotero collections to Evernote notebooks
- Create stable identifiers for each Zotero item to avoid duplicate notes
- Combine multiple notes from a single Zotero item into one Evernote note
- Update existing Evernote notes when Zotero notes change
- Preserve metadata like titles, authors, and publication years in note titles

## Entry Points

### Main CLI Entry Point
The primary entry point for the application is in `src/zoternote/__main__.py`:

[...]

Source code available at CODE_READING_GUIDE.md.

The reading guide shows that a class called EvernoteClientInterface, defined in evernote_client.py, provides the API for syncing with Evernote; that is the file I need to replace to use a different note store.
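Swapping the backend then means implementing the same seam behind a new class. The sketch below assumes an interface shaped like the guide's description; the actual method names and signatures in EvernoteClientInterface may differ:

```python
# Hypothetical replacement for the Evernote client, preserving the seam
# the reading guide identified. The interface and method names here are
# assumptions for illustration, not the real signatures.

from abc import ABC, abstractmethod

class NoteStoreInterface(ABC):
    """The role EvernoteClientInterface plays: the single boundary
    between the sync logic and the external note service."""

    @abstractmethod
    def upsert_note(self, note_id: str, title: str, body: str) -> None: ...

class LocalFileNoteStore(NoteStoreInterface):
    """Illustrative alternative backend that writes notes to local
    markdown files instead of calling a remote API."""

    def __init__(self, directory: str):
        self.directory = directory

    def upsert_note(self, note_id: str, title: str, body: str) -> None:
        # Overwriting the file on each sync gives upsert semantics.
        path = f"{self.directory}/{note_id}.md"
        with open(path, "w", encoding="utf-8") as f:
            f.write(f"# {title}\n\n{body}\n")
```

Because the sync logic depends only on the interface, substituting a new store requires no changes elsewhere in the codebase; the reading guide is what makes that boundary visible before you start editing.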

You Review the generated documentation, or You Review the code using the documentation as a guide.

Call Out Error when the generated documentation describes steps that the model didn’t implement, or for which it provided a stub implementation.
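For example, if the generated user guide documents a deletion step but the code contains only a stub like this hypothetical one, that mismatch is the error to call out:

```python
def delete_synced_note(note_id: str) -> None:
    """Described in the user guide as removing the matching note from the
    target service, but the body is a stub: the documented behaviour is
    never performed. (Hypothetical example of a stub to call out.)"""
    # TODO: call the note-store API to delete the note
    raise NotImplementedError("deletion is documented but not implemented")
```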

Use Adversarial Dialectics to have an agent compare the generated software with the generated documentation and describe divergences to another agent.