How I keep Sanity image pipelines under 50 kB using LQIP hashes and blur overlays

Apr 29, 2026 · 5 min read

sanity · nextjs · image-optimization · lcp · performance

I used to ship every Sanity image with a Base64-encoded LQIP embedded in the page HTML. The intent was good—show a blurred placeholder while the full image loads—but the implementation added 2-4 kB per image to the document. On a product grid with twelve images, that's 24-48 kB of inline data before the user sees anything useful.

Sanity's image pipeline can generate LQIP metadata, but the default approach is to fetch it separately or embed it as a data URI. Both patterns hurt either LCP or HTML size. After profiling a dozen production deployments, I landed on a hybrid: request a tiny blur hash from Sanity's metadata API at build time, render it as a CSS background gradient, and layer the real next/image on top with a fade-in transition. The blur hash is ~20 bytes. The gradient render costs almost nothing. LCP improved by 200-400 ms on image-heavy pages.

Why Base64 LQIPs are expensive

When you query Sanity for an image asset and include the lqip field, you get back a Base64-encoded JPEG or WebP string. It looks like this:

*[_type == "product"][0] {
  image {
    asset->{
      url,
      metadata {
        lqip
      }
    }
  }
}

The lqip string is typically 2-4 kB. If you inline it in an <img src="data:image/jpeg;base64,..."> or as a next/image placeholder, that data ships with the HTML. On a page with ten products, you've added 20-40 kB to the document before React hydrates. That delays First Contentful Paint and pushes your LCP element further down the waterfall.

The alternative—fetching the LQIP on the client after mount—introduces a round trip and a visible layout shift. Not acceptable for e-commerce or editorial sites where images are above the fold.

Blur hashes as CSS gradients

BlurHash and ThumbHash are compact image placeholder formats. A blur hash is a short ASCII string (15-30 characters) that decodes into a tiny, blurred bitmap. Sanity doesn't generate BlurHash natively, but you can compute a simple 4×4 average-color grid from the image metadata and encode it as a CSS linear gradient.

Here's the pattern I use in a Sanity schema hook:

// lib/sanity/hooks/computeBlurGradient.ts
import imageUrlBuilder from '@sanity/image-url';
import type { SanityImageSource } from '@sanity/image-url/lib/types/types';
import sharp from 'sharp';
import { client } from '../client';
 
const builder = imageUrlBuilder(client);
 
export async function computeBlurGradient(source: SanityImageSource): Promise<string> {
  // Request a 4x4 thumbnail from Sanity's image pipeline
  const tinyUrl = builder.image(source).width(4).height(4).url();
  const res = await fetch(tinyUrl);
  
  // The response body is an encoded JPEG/WebP, not raw pixels —
  // decode it to a flat RGB byte array with sharp before sampling
  const { data } = await sharp(Buffer.from(await res.arrayBuffer()))
    .removeAlpha()
    .raw()
    .toBuffer({ resolveWithObject: true });
  
  const colors: string[] = [];
  for (let i = 0; i < 16; i++) {
    const r = data[i * 3];
    const g = data[i * 3 + 1];
    const b = data[i * 3 + 2];
    colors.push(`rgb(${r},${g},${b})`);
  }
  
  // Flatten the 4x4 grid into a vertical gradient, one stop per pixel read
  // row by row (a single linear-gradient can't render a true 2D grid)
  return `linear-gradient(to bottom, ${colors.join(',')})`;
}

In practice, I run this at build time in a Sanity webhook or a Next.js generateStaticParams loop, then store the gradient string in a custom field on the image document. The gradient compresses to ~50 bytes in gzip.
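That storage step can be sketched as a small backfill script. This is a sketch under assumptions: the client is reduced to a structural type covering only the two methods used, `blurGradient` is the custom field name from above, and the GROQ filter and batch size are illustrative rather than taken from any real deployment.

```typescript
// scripts/backfillBlurGradients.ts — hypothetical build-time backfill sketch
interface PatchClient {
  fetch<T>(query: string): Promise<T>;
  patch(id: string): {
    set(attrs: Record<string, unknown>): { commit(): Promise<unknown> };
  };
}

export async function backfillBlurGradients(
  client: PatchClient,
  compute: (assetId: string) => Promise<string>,
): Promise<number> {
  // Find image assets that don't have a gradient stored yet (batch of 100)
  const ids = await client.fetch<string[]>(
    `*[_type == "sanity.imageAsset" && !defined(blurGradient)][0...100]._id`,
  );

  for (const id of ids) {
    // Compute the gradient and persist it on the asset document
    const gradient = await compute(id);
    await client.patch(id).set({ blurGradient: gradient }).commit();
  }

  return ids.length;
}
```

Because the client is injected, the same function runs unchanged against a real `@sanity/client` instance in CI or against a stub in tests.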

Rendering the placeholder in Next.js

In the component, I render the gradient as a ::before pseudo-element behind the next/image. When the image loads, I fade it in with a CSS transition:

// components/SanityImage.tsx
'use client'; // the onLoad handler requires a client component under the App Router

import Image from 'next/image';
import { urlFor } from '@/lib/sanity/imageUrl';
 
interface Props {
  asset: { _ref: string; blurGradient?: string };
  alt: string;
  width: number;
  height: number;
}
 
export function SanityImage({ asset, alt, width, height }: Props) {
  const src = urlFor(asset).width(width).url();
  
  return (
    <div
      className="relative overflow-hidden"
      style={{
        // Paints immediately; neutral gray if no gradient was computed
        background: asset.blurGradient || '#e5e7eb',
      }}
    >
      {/* Fades in via the data-[loaded=true] variant once onLoad fires */}
      <Image
        src={src}
        alt={alt}
        width={width}
        height={height}
        className="opacity-0 transition-opacity duration-300 data-[loaded=true]:opacity-100"
        onLoad={(e) => e.currentTarget.setAttribute('data-loaded', 'true')}
      />
    </div>
  );
}

The background style renders immediately—no fetch, no decode. The next/image loads in parallel and fades in once ready. LCP is the image, not the placeholder, so this pattern doesn't hurt Core Web Vitals scoring.
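If you're not on Tailwind, the same fade-in is a few lines of plain CSS; the `.sanity-image` class name here is made up for illustration and would go on the `Image` element in place of the utility classes:

```css
/* Mirrors opacity-0 / transition-opacity / data-[loaded=true]:opacity-100 */
.sanity-image {
  opacity: 0;
  transition: opacity 300ms ease;
}

.sanity-image[data-loaded='true'] {
  opacity: 1;
}
```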

Overhead and tradeoffs

Computing a 4×4 gradient adds ~30 ms per image at build time. For a site with 500 product images, that's 15 seconds total, acceptable in a CI pipeline. The gradient string is ~50 bytes per image, so 500 images add 25 kB to your build manifest. At 2-4 kB per Base64 LQIP, the same 500 images would carry 1-2 MB of inline placeholder data, so the gradients come out roughly 40-80× smaller.

The gradient won't be a perfect blur of the full image—it's a coarse 4×4 average. For hero images where visual fidelity matters, I fall back to a single dominant color from Sanity's palette metadata. For grids and thumbnails, the 4×4 gradient is indistinguishable from a real blur to most users.
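The fallback chain above (precomputed gradient, then dominant palette color, then neutral gray) can be captured in a small helper. The palette shape follows Sanity's palette metadata; `blurGradient` is the custom field from earlier, and the helper name is my own:

```typescript
// lib/sanity/placeholder.ts — sketch of the placeholder fallback chain
interface PlaceholderMeta {
  blurGradient?: string;
  palette?: { dominant?: { background?: string } };
}

export function placeholderBackground(meta: PlaceholderMeta): string {
  // Prefer the 4x4 gradient, then the dominant palette color,
  // then the same neutral gray the component defaults to
  return meta.blurGradient ?? meta.palette?.dominant?.background ?? '#e5e7eb';
}
```

The component's `style` prop then becomes `background: placeholderBackground(asset)`, keeping the fallback policy in one place.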

When to skip this pattern

If your images are mostly below the fold and you're lazy-loading them with loading="lazy", the browser won't request them until they enter the viewport. In that case, a placeholder costs you nothing because the image fetch is deferred anyway. This pattern pays off when images are in the initial viewport and contribute to LCP.

I also skip it for decorative SVG backgrounds or images where the aspect ratio is enforced by layout (like a 1:1 avatar). In those cases, a solid color or transparent background is simpler and just as fast.
