Automating design system documentation with AI

Problem: Design system documentation at ConnexAI took 30-45 minutes per component and went stale every time components changed. Designers manually wrote anatomy tables, spacing annotations, and typography specs for 15+ components with multiple variants each.
Solution: I built a Figma plugin using Claude API that automatically generates visual documentation in 3-5 seconds. The plugin extracts component data, uses Claude to write contextual explanations, and embeds live component instances so docs stay in sync with designs.
Impact: Reduced documentation time from 30-45 minutes to 3-5 seconds per component. Tested on ConnexAI's component library (Button, Alert, Input), saving roughly two hours immediately across three components.
This was a personal project I built to solve a documentation pain point I experienced firsthand while working on ConnexAI's design system.
Goal: Eliminate the manual documentation bottleneck by making documentation a by-product of the design process, not a separate task done afterward.
ConnexAI's design system included 15-25 components with multiple variants. A single Button component had 3 sizes × 4 colours × 2 states, creating 24 permutations. Documenting each component meant manually writing anatomy breakdowns, typography details, spacing annotations, colour usage, and usage guidance. That's 30-45 minutes per component, and for a full system, somewhere between 10 and 15 hours of documentation work.
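The variant math above can be sketched as a small helper. The property names and option lists here are illustrative, not the actual ConnexAI component structure:

```typescript
// Count variant permutations from a component's property options.
// Property names and options below are illustrative only.
type VariantProperties = Record<string, string[]>;

function countPermutations(props: VariantProperties): number {
  // Multiply the option count of every property together.
  return Object.values(props).reduce((total, options) => total * options.length, 1);
}

const buttonProps: VariantProperties = {
  size: ["small", "medium", "large"],
  colour: ["primary", "secondary", "danger", "ghost"],
  state: ["default", "disabled"],
};

console.log(countPermutations(buttonProps)); // 3 × 4 × 2 = 24
```

Each permutation is a row someone would otherwise document by hand, which is why the per-component cost compounds so quickly.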
But the bigger problem wasn't the time. It was what happened next. When a component changed, which happened regularly, someone had to update the docs. Most of the time, they didn't. Documentation drifted. New designers onboarded from stale specs. Consistency eroded quietly.
The documentation wasn't a tool designers trusted. It was a liability they maintained.
The manual workflow had four compounding issues. First, it was slow: at 30-45 minutes per component, documentation was always behind, always deferred. Second, it was error-prone: raw property strings like "variant=Body 2 • 14 sp, Weight=Normal" were hard to parse and easy to copy incorrectly. Third, it was cognitively dense, presenting data dumps with no explanation of why variants existed or when to use them.
Most critically, it was non-visual. Documentation lived in separate files while the components lived in Figma. Designers had to switch between two contexts to answer a simple question. That friction meant most designers stopped consulting the docs at all.
The outcome: documentation fell out of sync, onboarding took longer, design consistency eroded, and maintenance became expensive. The system was designed to help designers, but it was creating work.
A Figma plugin that generates complete visual documentation in 3-5 seconds. Select a component, click Generate Documentation, and the plugin creates a fully formatted documentation frame adjacent to the component on the canvas.

The plugin extracts component properties, variants, nested elements, typography specs, and spacing values automatically. It sends that data to Claude API with a structured prompt. Claude generates contextual explanations, not just "size: 24x24" but something like "Use this variant when icon clarity takes priority over label length." The plugin then builds documentation frames on the canvas with embedded live component instances, anatomy tables, and spacing annotations.
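The extract-then-prompt step can be sketched as a pure data transformation. The `ExtractedComponent` fields and the prompt wording are assumptions for illustration, not the plugin's actual schema:

```typescript
// Shape extracted component data into a structured prompt for the model.
// Field names and prompt text are assumptions, not the plugin's real schema.
interface ExtractedComponent {
  name: string;
  variants: string[];
  typography: { style: string; size: number; weight: string }[];
  spacing: { paddingX: number; paddingY: number };
}

function buildPrompt(c: ExtractedComponent): string {
  return [
    `Component: ${c.name}`,
    `Variants: ${c.variants.join(", ")}`,
    ...c.typography.map((t) => `Text: ${t.style}, ${t.size}px, ${t.weight}`),
    `Padding: ${c.spacing.paddingX}px horizontal, ${c.spacing.paddingY}px vertical`,
    "Explain when each variant should be used, in 1-2 sentences per variant.",
  ].join("\n");
}

// Example call with mock extracted data:
const prompt = buildPrompt({
  name: "Button",
  variants: ["primary", "secondary"],
  typography: [{ style: "Body 2", size: 14, weight: "Normal" }],
  spacing: { paddingX: 16, paddingY: 8 },
});
```

The point of the structure is that the model receives the same facts a designer would read off the canvas, so its explanations stay grounded in the component's actual properties.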
Because the docs embed live instances rather than screenshots, they stay in sync. When a component changes, you regenerate. The docs reflect the current component, not a snapshot from six months ago.

I considered using simple templates or storing descriptions in component metadata. But component variant matrices are information-dense in ways templates can't handle. Templates can describe structure, but they can't explain intent. Claude reads the entire component and generates contextual prose. Designers get explanations instead of property dumps.
Trade-off: Requires an API key, adds 2-3 seconds of latency, and costs a small amount per component. Worth it for documentation that reads like it was written by someone who understands the component.
I could have used screenshots. Live instances are better. Designers can click through to the actual component. Spacing is visible as real dimensions. Most importantly: when the component updates, the documentation updates automatically on next generation. Screenshots go stale. Instances don't.
Trade-off: More complex implementation. The component must be in the same file. Instance positioning took several iterations to get right. But documentation stops being a separate artifact and becomes tied directly to the source.
Design system specs involve many elements across many properties. Text blocks are hard to scan. Visual cards get cluttered fast. Tables let designers go directly to what they need, like "What's the icon size in Button Small?" without reading through a paragraph.
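The table-building step amounts to flattening element specs into one scannable row per property. A minimal sketch, with illustrative element names rather than the plugin's real data model:

```typescript
// Turn nested element specs into scannable anatomy-table rows.
// Element names and values below are illustrative.
interface ElementSpec {
  element: string;
  property: string;
  value: string;
}

function toAnatomyRows(specs: ElementSpec[]): string[] {
  // Pad the element column so values line up when rendered monospaced.
  const width = Math.max(...specs.map((s) => s.element.length));
  return specs.map((s) => `${s.element.padEnd(width)} | ${s.property}: ${s.value}`);
}

const rows = toAnatomyRows([
  { element: "Icon", property: "size", value: "24x24" },
  { element: "Label", property: "font", value: "Body 2 / 14px / Normal" },
]);
```

A designer asking "what's the icon size?" scans one row instead of parsing a property string.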
I initially annotated both padding and internal gaps. Testing with real components showed gap annotations created visual noise without adding clarity. Padding is what designers reach for. Internal gaps can be inferred from the component's structure if needed. Removing gap annotations made the spacing section immediately more readable.
The hardest part wasn't the AI integration. It was the Figma API behaviour. After removing fixed heights on text elements, they collapsed to 1px — text overlapped, layouts broke. The fix was applying auto-layout with "hug content" to all text containers. A setting that isn't obvious until you've spent time debugging invisible text.
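The fix boils down to two property writes per text node. This sketch uses a minimal stand-in type rather than the real `TextNode`, though `textAutoResize` and `layoutSizingVertical` are genuine Figma Plugin API properties:

```typescript
// The "hug content" fix, sketched against a stand-in for Figma's TextNode.
// In a real plugin, `node` would be a TextNode from the Plugin API.
interface TextLike {
  textAutoResize: "NONE" | "HEIGHT" | "WIDTH_AND_HEIGHT";
  layoutSizingVertical: "FIXED" | "HUG" | "FILL";
}

function applyHugContent(node: TextLike): void {
  // Without these, removing a fixed height collapses the text to 1px.
  node.textAutoResize = "HEIGHT"; // grow vertically to fit the text
  node.layoutSizingVertical = "HUG"; // hug content inside an auto-layout parent
}
```

Note that `layoutSizingVertical` only takes effect on children of auto-layout frames, which is why every text container needed auto-layout enabled first.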
The plugin also crashed consistently with a getNodeById error. Figma's permission model requires different API calls depending on manifest settings. With dynamic page access enabled, you must use async-only APIs. Replacing synchronous calls with getNodeByIdAsync throughout fixed it, but that kind of thing isn't in the main documentation.
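The migration looks like this in sketch form. A mock stands in for the real `figma` global here, but `figma.getNodeByIdAsync(id)` is the actual Plugin API call required when the manifest enables dynamic page access:

```typescript
// Sketch of the sync-to-async migration. `figmaMock` stands in for the
// real `figma` global, which is only available inside the plugin sandbox.
interface NodeStub {
  id: string;
  name: string;
}

const figmaMock = {
  nodes: new Map<string, NodeStub>([["1:2", { id: "1:2", name: "Button" }]]),
  async getNodeByIdAsync(id: string): Promise<NodeStub | null> {
    return this.nodes.get(id) ?? null;
  },
};

// Before: const node = figma.getNodeById(id) — fails under dynamic page access.
// After: await the async variant and handle the null case explicitly.
async function resolveNode(id: string): Promise<NodeStub> {
  const node = await figmaMock.getNodeByIdAsync(id);
  if (!node) throw new Error(`Node ${id} not found`);
  return node;
}
```

The practical consequence is that every function touching node lookups becomes async, which ripples through the call chain.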
Component instances failing to embed was the trickiest issue. The problem wasn't instance creation, but that parent frames needed auto-layout enabled before instances were added, properties had to be applied in the right sequence, and positioning logic had to account for frame padding. Each of these was discovered through testing, not documentation.
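The ordering constraint can be made concrete with a minimal frame stand-in. The property names mirror the Plugin API's `FrameNode` (`layoutMode`, padding, `appendChild`), but this is a mock, not the real API:

```typescript
// The embedding order that worked, against a stand-in for FrameNode.
interface FrameLike {
  layoutMode: "NONE" | "VERTICAL" | "HORIZONTAL";
  paddingLeft: number;
  paddingTop: number;
  children: unknown[];
  appendChild(child: unknown): void;
}

function embedInstance(parent: FrameLike, instance: unknown): void {
  // 1. Enable auto-layout and set padding BEFORE adding the instance;
  //    applying them afterward left instances mispositioned in testing.
  parent.layoutMode = "VERTICAL";
  parent.paddingLeft = 16;
  parent.paddingTop = 16;
  // 2. Only then append the instance, so the frame lays it out
  //    with padding already accounted for.
  parent.appendChild(instance);
}

const parent: FrameLike = {
  layoutMode: "NONE",
  paddingLeft: 0,
  paddingTop: 0,
  children: [],
  appendChild(child) {
    this.children.push(child);
  },
};
embedInstance(parent, { name: "Button/Small" });
```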
Every polish fix was found through testing against real components, not by anticipating edge cases in theory. The gap between "works in a test file" and "works reliably across a production design system" was bigger than I expected.
Tested against ConnexAI's live component library with Button, Alert, and Input. Each generated complete documentation with anatomy tables, typography specs, spacing annotations, and contextual descriptions in 3-5 seconds. Manual documentation for all three would have taken 90-135 minutes. The plugin did it in under 15 seconds total.
Beyond the time saving, the quality difference mattered. Generated documentation explained why variants exist, not just what properties they have. Anatomy tables made component specs scannable in seconds instead of buried in text. And because docs embed live instances rather than static exports, they stopped being a maintenance burden and became faster to regenerate than to edit.
Built and tested as a private plugin for ConnexAI's design system. Functional and ready for deployment in any Figma-based design system.
Building this taught me that documentation problems are usually process problems in disguise. The real issue at ConnexAI wasn't that designers were lazy about docs, it was that the workflow made documentation a separate, manual task that competed with design work. When you remove that friction, documentation happens naturally.
I also learned where AI actually adds value in tooling. Claude is genuinely useful for contextual, semantic work like writing explanations that account for intent, not just structure. It's not the right tool for simple data extraction or layout logic. The most impactful use of AI here wasn't replacing work, it was generating the kind of explanation that no template could produce, one that understands why a component was designed the way it was.
If I were to extend the plugin, I'd focus on token documentation next. The same problem exists for colour, spacing, and typography tokens. I'd also explore conditional section visibility, letting teams toggle which documentation sections appear based on their team's needs.