How I Structure Sanity Schemas to Avoid Query Waterfalls in Next.js
Apr 27, 2026 · 5 min read
Most engineers new to Sanity treat it like a relational database. They create a category document, then reference it from post. They create an author document, reference it everywhere. Then they write a GROQ query that projects references, and wonder why their Next.js server component takes 380 ms to render a list of twelve blog posts.
I've shipped fifteen Sanity projects in the last two years. The pattern that consistently cuts server render time by 40–60% is strategic denormalisation. Instead of always reaching for references, I embed commonly-queried fields directly in the parent document. I still use references for the Studio UI and for canonical data, but I mirror critical fields so GROQ can return everything in one round-trip.
The problem: reference projection is a hidden N+1
When you write *[_type == "post"] { title, author-> }, Sanity returns every post and resolves each author reference for you. It's still a single HTTP request, but every dereference is an extra document lookup inside Sanity's query engine, and that work is not free. On a page that lists fifty posts with fifty distinct authors, the engine performs fifty additional lookups before it can respond. Even with CDN edge caching, that's 120–180 ms of latency on a cold render.
Next.js App Router server components don't block the browser, but they do block time-to-first-byte. A slow data fetch delays streaming. If your LCP element depends on that data, you've just added 150 ms to LCP.
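To make the cost concrete, here's a minimal sketch of the slow shape, fetching projected posts directly through Sanity's HTTP query API. The project ID, dataset, and function name are placeholders, not from a real project:

```typescript
// lib/posts.ts — the slow shape: `author->` dereferences an author per post.
// PROJECT_ID and DATASET are placeholders.
const PROJECT_ID = 'your-project-id'
const DATASET = 'production'
const API_VERSION = '2024-01-01'

// Every `author->` is an extra document lookup inside Sanity's query engine.
export const SLOW_POSTS_QUERY = `*[_type == "post"] | order(publishedAt desc) [0..49] {
  title,
  "slug": slug.current,
  author->{ name, "slug": slug.current }
}`

export async function getPostsSlow(): Promise<unknown[]> {
  // Still one HTTP request from the server component — but the join work
  // inside Sanity scales with the post count, and that latency lands on TTFB.
  const url =
    `https://${PROJECT_ID}.apicdn.sanity.io/v${API_VERSION}` +
    `/data/query/${DATASET}?query=${encodeURIComponent(SLOW_POSTS_QUERY)}`
  const res = await fetch(url)
  const { result } = await res.json()
  return result
}
```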
The pattern: mirror critical fields at write time
Here's a schema I use for blog posts. The author field is a reference, but I also store authorName and authorSlug as plain strings.
```typescript
// schemas/post.ts
import { defineField, defineType } from 'sanity'

export default defineType({
  name: 'post',
  type: 'document',
  fields: [
    defineField({ name: 'title', type: 'string' }),
    defineField({ name: 'slug', type: 'slug', options: { source: 'title' } }),
    defineField({
      name: 'author',
      type: 'reference',
      to: [{ type: 'author' }],
    }),
    defineField({
      name: 'authorName',
      type: 'string',
      description: 'Cached from author reference. Updated via document action.',
      readOnly: true,
    }),
    defineField({
      name: 'authorSlug',
      type: 'string',
      readOnly: true,
    }),
    defineField({ name: 'publishedAt', type: 'datetime' }),
  ],
})
```

I mark the cached fields readOnly: true so editors don't manually edit them. Then I write a custom document action that runs whenever an editor publishes a post. The action fetches the referenced author, copies name and slug.current into authorName and authorSlug, and patches the post document.
```typescript
// structure/actions/syncAuthorFields.ts
import { useEffect } from 'react'
import { useDocumentOperation, type DocumentActionProps } from 'sanity'

export function useSyncAuthorFields(props: DocumentActionProps) {
  const { patch, publish } = useDocumentOperation(props.id, props.type)
  const { draft } = props
  const authorRef = (draft?.author as { _ref?: string } | undefined)?._ref

  useEffect(() => {
    if (!authorRef) return
    // Fetch the referenced author and mirror name + slug onto the post.
    fetch(`/api/sanity/author/${authorRef}`)
      .then((res) => res.json())
      .then((author) => {
        patch.execute([
          { set: { authorName: author.name, authorSlug: author.slug.current } },
        ])
      })
  }, [authorRef])

  return { label: 'Publish', onHandle: () => publish.execute() }
}
```

This is a simplified example. In production I use a webhook or a Sanity listener plugin to sync fields in the background, so the Studio doesn't block on the patch. The key insight is: write-time cost is cheap, read-time cost is expensive. I'd rather spend 50 ms on publish than 150 ms on every page render.
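As a sketch of that background-sync variant: a Next.js route handler that receives a Sanity webhook when an author document changes and patches every post mirroring it. The route path, payload shape, and hard-coded credentials are assumptions for illustration; the query and mutate endpoints and the patch format are Sanity's standard HTTP API.

```typescript
// app/api/webhooks/author-sync/route.ts — hypothetical route; a sketch, not
// the exact production setup. Assumes a Sanity webhook that fires with the
// changed author document as its payload.
const API_BASE = 'https://your-project-id.api.sanity.io/v2024-01-01/data' // placeholder
const DATASET = 'production'
const WRITE_TOKEN = 'your-write-token' // placeholder — read from an env var in production

type Author = { _id: string; name: string; slug: { current: string } }

// Pure helper: one patch mutation per post that mirrors this author.
export function buildAuthorSyncMutations(author: Author, postIds: string[]) {
  return postIds.map((id) => ({
    patch: {
      id,
      set: { authorName: author.name, authorSlug: author.slug.current },
    },
  }))
}

export async function POST(req: Request) {
  const author = (await req.json()) as Author

  // Find every post that references this author.
  const queryUrl =
    `${API_BASE}/query/${DATASET}` +
    `?query=${encodeURIComponent('*[_type == "post" && author._ref == $id]._id')}` +
    `&%24id=${encodeURIComponent(JSON.stringify(author._id))}`
  const postIds: string[] = (await (await fetch(queryUrl)).json()).result

  // Patch them all in one transaction via the mutate endpoint.
  await fetch(`${API_BASE}/mutate/${DATASET}`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${WRITE_TOKEN}`, // token needs write access
    },
    body: JSON.stringify({ mutations: buildAuthorSyncMutations(author, postIds) }),
  })

  return Response.json({ synced: postIds.length })
}
```

Because the webhook runs out of band, editors never wait on the patch, and a retry policy on the webhook covers transient failures.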
The GROQ query is now trivial
Instead of projecting the reference, I select the cached fields.
```groq
*[_type == "post"] | order(publishedAt desc) [0..11] {
  _id,
  title,
  "slug": slug.current,
  authorName,
  authorSlug,
  publishedAt
}
```

No author->. No projection. Sanity returns twelve documents in one query, typically 40–60 ms from a Next.js server component in eu-west-1 hitting a Sanity dataset in the same region. My server component renders in 80–100 ms total, including React tree reconciliation.
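Consuming the cached fields server-side is then a plain typed fetch. A minimal sketch against Sanity's HTTP query API — the PostCard type, function name, and project ID are my placeholders:

```typescript
// lib/post-cards.ts — fetching the lean, denormalised shape. Names are illustrative.
export type PostCard = {
  _id: string
  title: string
  slug: string
  authorName: string // mirrored at write time — no dereference needed
  authorSlug: string
  publishedAt: string
}

// Note: no `->` anywhere in the projection; one flat read per document.
export const LEAN_POSTS_QUERY = `*[_type == "post"] | order(publishedAt desc) [0..11] {
  _id, title, "slug": slug.current, authorName, authorSlug, publishedAt
}`

export async function getPostCards(): Promise<PostCard[]> {
  const url =
    'https://your-project-id.apicdn.sanity.io/v2024-01-01/data/query/production' +
    `?query=${encodeURIComponent(LEAN_POSTS_QUERY)}`
  const res = await fetch(url)
  const { result } = await res.json()
  return result as PostCard[]
}
```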
When to denormalise and when to project
I denormalise when:
- The field appears in list queries (post grids, search results, navigation menus).
- The source document changes infrequently (author name, category slug).
- The cost of syncing is low (a webhook or a Studio action that runs once per publish).
I still use projection when:
- I need the full author object on a single post page. One projection on a detail route is fine.
- The source changes often and must stay in sync (like a live event countdown or a stock ticker).
- The field is only queried in Studio context, never on the frontend.
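For the first case — the single post page — the detail query is a sketch like this. The bio and image fields on author are assumptions for illustration; they aren't part of the schema shown earlier:

```typescript
// One dereference per page render, not per list item, so projecting the
// full author object here is fine.
export const POST_DETAIL_QUERY = `*[_type == "post" && slug.current == $slug][0] {
  _id,
  title,
  "slug": slug.current,
  publishedAt,
  author->{ name, "slug": slug.current, bio, image }
}`
```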
Real numbers from a production marketing site
Before denormalisation, the homepage hero query (fetching six featured posts with authors and categories) took 280–320 ms in Next.js server components. After mirroring authorName, categorySlug, and categoryColor, the same query takes 110–140 ms. The page TTFB dropped from 420 ms to 260 ms. LCP improved by 160 ms because the hero image started loading sooner.
The trade-off is complexity. I now maintain sync logic. But I've never had a sync failure in production, and the performance win is measurable on every page load. For content sites where query speed directly affects Core Web Vitals, this pattern is non-negotiable.