2026-02-27

json-render: The Missing Layer Between AI and UI

Vercel Labs ships a generative UI framework that solves the hardest problem in AI-driven interfaces: letting AI decide the layout while keeping developers in control of the components. A deep look at what makes it architecturally interesting.

By Neo
AI · UI · Developer Tools · React · Generative UI · Vercel · Open Source

AI is good at generating content. It turns out it's also pretty good at generating interfaces — if you give it the right constraints.

That's the bet behind json-render, a new open-source framework from Vercel Labs. The pitch is compact: AI generates a JSON structure describing a UI; your components render it. But the architectural decisions underneath that pitch are worth unpacking.

The problem with AI + UI today

Current AI-UI integrations fall into two camps, and both have real limitations.

Camp 1: AI fills templates. The developer builds the layout and AI populates the data. The UI never changes structure. Safe, predictable, boring. You're using AI as a smarter database query.

Camp 2: AI generates code. The AI produces JSX or HTML directly. Creative, flexible, and completely untrustworthy in production. Nothing stops it from hallucinating a component that doesn't exist, generating a <script> tag, or producing structurally invalid output that crashes the renderer. The "AI code generation" approach collapses the distinction between instructions and execution — which is exactly the property that makes it dangerous.

json-render takes a third path. The AI generates a spec — a constrained JSON description of a UI — and your components do the actual rendering. The spec is the boundary. The AI never touches your component implementations. Your components never receive arbitrary AI output.

The catalog: defining the contract

The central concept is the catalog: a typed declaration of exactly which components, actions, and data bindings AI is allowed to use.

import { defineCatalog } from "@json-render/core";
import { schema } from "@json-render/react/schema";
import { z } from "zod";

const catalog = defineCatalog(schema, {
  components: {
    MetricCard: {
      props: z.object({
        label: z.string(),
        value: z.string(),
        trend: z.enum(["up", "down", "flat"]).nullable(),
      }),
      description: "Displays a single KPI with optional trend indicator",
    },
    DataTable: {
      props: z.object({
        columns: z.array(z.string()),
        caption: z.string().nullable(),
      }),
      description: "A tabular data display",
    },
  },
  actions: {
    export_csv: { description: "Export the current table as CSV" },
    refresh: { description: "Reload data from source" },
  },
});

Two things happen here simultaneously. First, you constrain the solution space: the AI cannot hallucinate a SuperChart3D component that doesn't exist in your catalog. Second, the catalog generates the system prompt automatically via catalog.prompt(), so the AI always knows exactly what's available. The contract is bidirectional: it constrains the AI and informs it at the same time.

This is structurally similar to how type systems work. A type system doesn't prevent you from writing programs — it constrains the space of valid programs to those that can be reasoned about. json-render applies the same principle to generative interfaces.
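The constraint half of that contract can be sketched without the library. The following is a minimal, illustrative check — Catalog and validateElement are hypothetical names written for this sketch, not json-render exports — showing why an out-of-catalog component is rejected rather than rendered:

```typescript
// Illustrative sketch only: Catalog and validateElement are
// hypothetical names, not part of the json-render API.
type Catalog = {
  components: Record<string, { description: string }>;
};

const catalog: Catalog = {
  components: {
    MetricCard: { description: "Displays a single KPI" },
    DataTable: { description: "A tabular data display" },
  },
};

// An element type is valid only if it exists in the catalog;
// anything the AI hallucinates is rejected before rendering.
function validateElement(cat: Catalog, type: string): boolean {
  return Object.prototype.hasOwnProperty.call(cat.components, type);
}
```

Here validateElement(catalog, "MetricCard") passes while validateElement(catalog, "SuperChart3D") fails — which is the entire point of the catalog boundary.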

The spec: what AI actually produces

Given a catalog and a user prompt like "show me a revenue dashboard for Q4", the AI outputs a spec — a flat dictionary of typed elements:

{
  "root": "dashboard-1",
  "elements": {
    "dashboard-1": {
      "type": "Stack",
      "props": { "direction": "column", "gap": 4 },
      "children": ["metric-revenue", "metric-orders", "table-breakdown"]
    },
    "metric-revenue": {
      "type": "MetricCard",
      "props": { "label": "Total Revenue", "value": "$2.4M", "trend": "up" },
      "children": []
    },
    "table-breakdown": {
      "type": "DataTable",
      "props": { "columns": ["Region", "Revenue", "Growth"], "caption": "Q4 by region" },
      "children": []
    }
  }
}

Notice the flat structure: elements are stored in a dictionary keyed by ID, with children expressed as arrays of IDs rather than nested objects. This is a deliberate streaming decision. When JSON arrives incrementally over a network, a flat dictionary lets you insert any element by ID without knowing its position in the tree. Nested structures force sequential parsing; flat dictionaries allow order-independent updates. The renderer resolves the tree at render time.
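That render-time resolution can be sketched in a few lines. The types and resolveTree below are illustrative, not the actual json-render implementation; the point is that a flat dictionary resolves into a tree by following child IDs, and IDs that haven't arrived yet can simply be skipped:

```typescript
// Illustrative types; not the json-render API.
type SpecElement = { type: string; props: Record<string, unknown>; children: string[] };
type Spec = { root: string; elements: Record<string, SpecElement> };
type TreeNode = { type: string; props: Record<string, unknown>; children: TreeNode[] };

// Resolve the flat dictionary into a nested tree at render time.
function resolveTree(spec: Spec, id: string = spec.root): TreeNode {
  const el = spec.elements[id];
  return {
    type: el.type,
    props: el.props,
    // Children whose IDs haven't streamed in yet are skipped,
    // so a partial spec still renders.
    children: el.children
      .filter((childId) => childId in spec.elements)
      .map((childId) => resolveTree(spec, childId)),
  };
}

const partial: Spec = {
  root: "dashboard-1",
  elements: {
    "dashboard-1": { type: "Stack", props: {}, children: ["metric-revenue", "metric-orders"] },
    "metric-revenue": { type: "MetricCard", props: { label: "Total Revenue" }, children: [] },
  },
};
const tree = resolveTree(partial); // one child; "metric-orders" hasn't arrived yet
```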

Streaming: structure that emerges

As the AI responds chunk by chunk, the SpecStreamCompiler converts JSONL patches into spec updates:

const compiler = createSpecStreamCompiler();

for await (const chunk of stream) {
  const { result } = compiler.push(chunk);
  setSpec(result); // UI updates with each partial result
}

Elements appear in the UI as soon as they're described in the stream. The dashboard header renders before the table is fully specified. This isn't just a loading optimization — it changes how the interaction feels. Instead of a blank screen followed by a complete UI, structure emerges progressively. The experience is closer to watching someone design a layout in real time than waiting for a page to load.
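The order-independence the flat format buys can be shown with a toy patch applier. The patch shape below — one { id, element } object per JSONL line — is an assumption made for illustration, not json-render's actual wire protocol:

```typescript
// Illustrative types; not the json-render API.
type SpecElement = { type: string; props: Record<string, unknown>; children: string[] };
type Spec = { root: string; elements: Record<string, SpecElement> };

// Each JSONL line upserts one element by ID, so patches can be
// applied in whatever order they arrive over the network.
function applyPatchLine(spec: Spec, line: string): Spec {
  const patch = JSON.parse(line) as { id: string; element: SpecElement };
  return { ...spec, elements: { ...spec.elements, [patch.id]: patch.element } };
}

let spec: Spec = { root: "dashboard-1", elements: {} };
const chunks = [
  // Note: the child arrives before its parent. A nested format
  // would have to buffer; a flat dictionary does not.
  '{"id":"metric-revenue","element":{"type":"MetricCard","props":{"label":"Total Revenue"},"children":[]}}',
  '{"id":"dashboard-1","element":{"type":"Stack","props":{},"children":["metric-revenue"]}}',
];
for (const line of chunks) {
  spec = applyPatchLine(spec, line); // the UI could re-render after each patch
}
```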

Data binding: AI wires the structure, you own the logic

The spec format includes a declarative expression language for data binding:

{
  "type": "Alert",
  "props": { "message": "Validation failed" },
  "visible": [
    { "$state": "/form/hasError" },
    { "$state": "/form/errorDismissed", "not": true }
  ]
}

Four expression forms cover most real-world cases: $state (read from state), $cond (conditional branch), $template (string interpolation with state values), and $computed (call a registered function). The AI can generate conditional visibility, dynamic styling, and reactive text without touching application logic.
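A minimal evaluator for the $state form makes the semantics concrete. getByPointer and evalVisible are hypothetical helpers written for this sketch; only the JSON-Pointer-style paths and the not flag mirror the spec format above:

```typescript
type StateExpr = { $state: string; not?: boolean };

// Resolve a JSON-Pointer-style path like "/form/hasError"
// against a plain state object.
function getByPointer(state: Record<string, unknown>, pointer: string): unknown {
  return pointer
    .split("/")
    .filter(Boolean)
    .reduce<unknown>((acc, key) => (acc as Record<string, unknown> | undefined)?.[key], state);
}

// All expressions in the array must hold for the element to show.
function evalVisible(exprs: StateExpr[], state: Record<string, unknown>): boolean {
  return exprs.every((expr) => {
    const value = Boolean(getByPointer(state, expr.$state));
    return expr.not ? !value : value;
  });
}

const state = { form: { hasError: true, errorDismissed: false } };
const visible = evalVisible(
  [{ $state: "/form/hasError" }, { $state: "/form/errorDismissed", not: true }],
  state,
);
// The alert shows: there is an error and it has not been dismissed.
```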

The watch field extends this to side effects. When a Select changes its value, a registered action fires automatically:

{
  "type": "Select",
  "props": { "value": { "$bindState": "/form/country" } },
  "watch": {
    "/form/country": { "action": "loadCities", "params": { "country": { "$state": "/form/country" } } }
  }
}

The AI defined the reactive dependency. Your action handler implements the actual business logic. The boundary stays clean.
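That division of labor can be sketched as follows — ActionHandler, onStateChange, and the loadCities body are all illustrative names, not the library's API. The spec declares which path triggers which action; the handler body is ordinary application code:

```typescript
type ActionHandler = (params: Record<string, unknown>) => void;
type WatchMap = Record<string, { action: string; params: Record<string, unknown> }>;

const loadedCountries: string[] = [];

// Your action registry: the business logic lives here, written by
// you, never generated by the AI.
const actions: Record<string, ActionHandler> = {
  loadCities: ({ country }) => {
    loadedCountries.push(String(country)); // stand-in for a real fetch
  },
};

// When a watched state path changes, fire the registered action.
function onStateChange(path: string, watch: WatchMap): void {
  const entry = watch[path];
  if (entry) actions[entry.action]?.(entry.params);
}

onStateChange("/form/country", {
  "/form/country": { action: "loadCities", params: { country: "CH" } },
});
```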

Cross-platform: one catalog, many renderers

The same catalog definition drives multiple platform-specific renderers:

Platform            Package
React (web)         @json-render/react
Vue 3               @json-render/vue
React Native        @json-render/react-native
Remotion (video)    @json-render/remotion
React PDF           @json-render/react-pdf

The Remotion and React PDF renderers are the most unexpected. The same generative pattern that produces a web dashboard can produce a video timeline or a PDF document. A spec that describes a video becomes a sequence of timed clips; a spec that describes a document becomes pages with headings and tables. The catalog abstraction holds across output formats because "here are the components and their typed props" is a universal contract.

State management is similarly unopinionated. Adapters exist for Redux, Zustand, Jotai, and XState. The $state expressions resolve against whichever store adapter you configure — the spec doesn't care which library you use.
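One way to picture that boundary — the StoreAdapter interface below is a guess at the shape for illustration, not the published adapter API — is that $state resolution only needs get and set over pointer paths, which any store can provide:

```typescript
// Hypothetical adapter shape; the real adapter packages may differ.
interface StoreAdapter {
  get(pointer: string): unknown;
  set(pointer: string, value: unknown): void;
}

// Backing the adapter with a plain object; a Redux or Zustand
// adapter would implement the same two methods over its store.
function plainObjectAdapter(state: Record<string, any>): StoreAdapter {
  const keys = (pointer: string) => pointer.split("/").filter(Boolean);
  return {
    get: (pointer) => keys(pointer).reduce((acc, k) => acc?.[k], state),
    set: (pointer, value) => {
      const ks = keys(pointer);
      const last = ks.pop()!;
      // Walk to the parent object, creating intermediate objects
      // along the way if they don't exist yet.
      const parent = ks.reduce((acc, k) => acc[k] ?? (acc[k] = {}), state);
      parent[last] = value;
    },
  };
}

const store = plainObjectAdapter({ form: { country: "CH" } });
store.set("/form/city", "Zürich");
```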

36 components ready to use

For teams that don't want to build a catalog from scratch, @json-render/shadcn provides 36 pre-built component definitions covering the standard UI vocabulary: cards, tables, buttons, inputs, selects, dialogs, tabs, badges, and more. These are backed by shadcn/ui (Radix UI + Tailwind CSS), so the visual quality is production-ready without additional styling work.

Code export: no runtime lock-in

One design decision that stands out: json-render can export a generated spec as a standalone React project — complete with component files, styles, and package.json. The exported code has no json-render runtime dependency.

This matters for adoption. If you build interfaces with json-render and later decide to stop using it, you're not locked in. For builder tools, this enables a clean workflow: AI generates a first draft via the generative UI, the developer takes over a static codebase.

What it's good for

The pattern has clear advantages in specific scenarios:

Analytics dashboards where the layout depends on the question. "Show me sales by region" and "show me refund rates by product" should produce different structures — different chart types, different KPI arrangements, different groupings. Templating can't cover this space adequately; AI with guardrails can.

Builder and low-code tools that need to generate UI previews from user descriptions. The AI produces a spec; the builder shows it; the user adjusts; export to code when done.

Personalized views where different users need genuinely different structure and emphasis around the same underlying data — not just filtered results, but different interfaces.

Automated content pipelines — the Remotion and React PDF renderers make the same generation pattern applicable to video and documents, not just web UIs.

The honest limitations

json-render comes from Vercel Labs, which means it is experimental: not a supported product, and the API is subject to change. It is Apache-2.0 licensed, though, so commercial use is unencumbered.

The framework doesn't write your prompt engineering for you. Writing a catalog with description fields that actually guide the AI toward sensible structural choices is real design work. A poorly described catalog produces incoherent interfaces. The quality of the constraint shapes the quality of the output.

It's also not a full application framework. Authentication, data fetching, backend integration, and business logic are still yours to build. The catalog's action definitions tell AI what actions exist; implementing those actions is left to you.

The structural insight worth keeping

What json-render gets right is treating the AI as a layout engine rather than a code generator. A layout engine operates within a defined vocabulary and produces structured output that humans can inspect and reason about. A code generator produces executable instructions — useful for developer tooling, dangerous when the output runs inside a live application without a human in the loop.

The flat spec format, the catalog-as-contract pattern, the declarative expression language, the streaming-first architecture — these are the result of thinking clearly about what AI is actually good at (choosing structure and composition from a defined vocabulary) versus what breaks when you give it free rein (producing executable code that integrates with a running system without verification).

For applications where the interface itself needs to respond to user intent — not just the data inside it — this is the cleanest approach to the problem I've seen so far.


json-render is open source under Apache-2.0: github.com/vercel-labs/json-render · Documentation and live playground: json-render.dev

intelliBrain

AI-augmented software development. Based in Zürich, working globally.

© 2026 intelliBrain GmbH. All rights reserved.