the gap between AI and your components
here's something that bugged me for a while. you spend years building a design system — hundreds of components, carefully considered props, accessibility baked in, usage guidelines, the whole thing. then you ask an AI agent to build a page and it spits out raw <button class="bg-blue-500 text-white px-4 py-2"> like it's never heard of your design system. because it hasn't.
this was the core problem we hit at Upwork with Fusion Studio. we had AI agents generating frontend surfaces and they were producing technically correct code that looked absolutely nothing like our product. they'd reach for generic HTML or whatever component library was in their training data. meanwhile our Fluid Design System was sitting right there with the exact components they should've been using.
the AI doesn't know what it doesn't know. if you don't tell it about FluidButton, it's going to build its own button from scratch every single time.
making components queryable
the fix was surprisingly straightforward. we took our entire design system — over 6,000 component docs — and published them as MCP resources. instead of cramming component documentation into system prompts (which burns tokens and doesn't scale), agents can query for exactly the component they need, when they need it.
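the shape of this is simple enough to sketch. here's a minimal, self-contained illustration of the idea — an in-memory registry keyed by a made-up `fluid://components/<Name>` URI scheme. the real setup registers these resources through an MCP server SDK; this just shows why per-component lookup beats prompt-stuffing:

```typescript
// Sketch only: component docs published as addressable resources.
// The fluid:// URI scheme and doc shape are invented for illustration;
// a real implementation would register these via an MCP server SDK.

interface ComponentDoc {
  component: string;
  props: Record<string, { type: string; enum?: string[] }>;
}

const registry = new Map<string, ComponentDoc>();

// Publish one doc under a stable, queryable URI.
function publishDoc(doc: ComponentDoc): string {
  const uri = `fluid://components/${doc.component}`;
  registry.set(uri, doc);
  return uri;
}

// An agent fetches only the doc it needs, when it needs it,
// instead of carrying all 6,000+ docs in its system prompt.
function readResource(uri: string): ComponentDoc | undefined {
  return registry.get(uri);
}

const buttonUri = publishDoc({
  component: "FluidButton",
  props: {
    variant: { type: "string", enum: ["primary", "secondary", "ghost"] },
  },
});
const fetched = readResource(buttonUri);
```

the point isn't the registry — it's that each doc gets a stable address, so retrieval cost is per-component, not per-prompt.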
each component doc follows a structured format:
{
  "component": "FluidButton",
  "props": {
    "variant": { "type": "string", "enum": ["primary", "secondary", "ghost"] },
    "size": { "type": "string", "enum": ["sm", "md", "lg"] },
    "loading": { "type": "boolean" }
  },
  "slots": ["default", "icon"],
  "a11y": "Must have aria-label when icon-only"
}

that's not just a type definition. it's everything the agent needs to use the component correctly: what variants exist, what sizes are valid, what slots it supports, and the accessibility requirements it has to follow. the agent doesn't have to guess or hallucinate prop names — they're right there in the schema.
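one payoff of keeping props as typed schemas: generated code can be checked mechanically, not just eyeballed. here's a small sketch of such a check, assuming the doc shape from the FluidButton example above — the validator itself is illustrative, not part of any published Fluid API:

```typescript
// Sketch: validate agent-generated props against a component doc's schema.
// Doc shape mirrors the FluidButton example; the validator is hypothetical.

type PropSchema = { type: string; enum?: string[] };
type Doc = { component: string; props: Record<string, PropSchema> };

function validateProps(doc: Doc, props: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [name, value] of Object.entries(props)) {
    const schema = doc.props[name];
    if (!schema) {
      // a hallucinated prop name is caught immediately
      errors.push(`unknown prop "${name}" on ${doc.component}`);
      continue;
    }
    if (typeof value !== schema.type) {
      errors.push(`prop "${name}" should be ${schema.type}`);
    } else if (schema.enum && !schema.enum.includes(value as string)) {
      errors.push(`prop "${name}" must be one of: ${schema.enum.join(", ")}`);
    }
  }
  return errors;
}

const buttonDoc: Doc = {
  component: "FluidButton",
  props: {
    variant: { type: "string", enum: ["primary", "secondary", "ghost"] },
    size: { type: "string", enum: ["sm", "md", "lg"] },
    loading: { type: "boolean" },
  },
};

// A made-up bad call: "danger" isn't a variant, "color" isn't a prop.
const errors = validateProps(buttonDoc, { variant: "danger", color: "red" });
```

run this as a lint pass over generated surfaces and invalid prop usage gets flagged before review instead of during it.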
how agents actually use this
when an agent is building a surface and needs a button, it calls the design system MCP server:
const docs = await mcp.call("get-component-docs", {
  component: "FluidButton",
})

it gets back the full doc — props, slots, examples, do's and don'ts. then it generates code using the actual component instead of inventing its own. the difference in output quality is night and day.
but it goes deeper than individual lookups. agents can also search by category or capability. need a layout component? query for all components tagged layout. need something that handles file uploads? search for upload and get back FluidFileUploader with its complete API surface.
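the capability search can be pictured as a simple tag index. here's a sketch — the tags and component list are invented examples, and the real index lives behind the MCP server rather than in agent code:

```typescript
// Sketch: searching component docs by tag or capability.
// Tags and entries are hypothetical examples, not the actual Fluid index.

interface IndexedDoc {
  component: string;
  tags: string[];
}

const index: IndexedDoc[] = [
  { component: "FluidStack", tags: ["layout"] },
  { component: "FluidGrid", tags: ["layout"] },
  { component: "FluidFileUploader", tags: ["form", "upload"] },
];

// Return component names matching a tag, e.g. "layout" or "upload".
function searchByTag(tag: string): string[] {
  return index
    .filter((doc) => doc.tags.includes(tag.toLowerCase()))
    .map((doc) => doc.component);
}
```

the agent asks for a capability, gets back candidate components, then pulls full docs for the one it picks.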
the docs also include usage examples — what a typical implementation looks like, common patterns, things to avoid. stuff like "don't nest FluidCard inside FluidCard" or "always pair FluidModal with FluidModalTrigger." these constraints are the kind of thing a human developer learns over time by reading docs and getting PR feedback. the agent gets them upfront.
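those constraints work best when they're structured data rather than prose, so a machine can apply them. a sketch of what that might look like — the rule shape is invented for illustration, echoing the two constraints above:

```typescript
// Sketch: usage constraints as structured rules instead of prose.
// The rule shape is hypothetical; the examples echo the constraints above.

interface UsageRule {
  kind: "forbid-nesting" | "require-sibling";
  component: string;
  other: string;
  message: string;
}

const rules: UsageRule[] = [
  {
    kind: "forbid-nesting",
    component: "FluidCard",
    other: "FluidCard",
    message: "don't nest FluidCard inside FluidCard",
  },
  {
    kind: "require-sibling",
    component: "FluidModal",
    other: "FluidModalTrigger",
    message: "always pair FluidModal with FluidModalTrigger",
  },
];

// An agent (or a lint pass over its output) looks up the rules
// for whatever component it's about to emit.
function rulesFor(component: string): UsageRule[] {
  return rules.filter((r) => r.component === component);
}
```

the human developer learns these rules from PR feedback; the agent gets them as data alongside the component doc.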
what changed
before we did this, maybe 30% of AI-generated surfaces used our design system correctly. the rest were a mix of raw HTML, incorrect prop usage, and components that technically rendered but violated our design guidelines. after publishing the docs as MCP resources, that number flipped. the vast majority of generated code now uses the right components with the right props.
the accessibility story improved too. when the component doc says "must have aria-label when icon-only," the agent just does it. it's not trying to remember accessibility rules from its training data — it's reading the specific requirements for the specific component it's using right now.
the boring insight
the thing that surprised me is how boring the solution is. we didn't build some fancy AI-aware design system format. we took existing documentation, structured it as JSON, and served it through MCP. the hard part wasn't the technology — it was deciding what information an agent actually needs to use a component correctly and making sure every one of those 6,000+ docs had it.
turns out the same documentation that helps human developers helps AI agents. you just have to make it machine-queryable instead of human-browsable. props as typed schemas instead of markdown tables. constraints as structured rules instead of prose paragraphs. examples as code blocks with annotations instead of screenshots.
if you have a design system and you're building with AI agents, this is probably the highest-leverage thing you can do. don't let the agent guess. tell it exactly what components exist and how to use them. it's the difference between an AI that generates generic frontend code and one that generates code that actually belongs in your product.