
18 min read
How this blog is built
Architecture notes for devopsoutloud.com — Next.js 14 App Router, MDX content, Shiki at build time, and the design decisions behind it.
This used to be a pre-rendered Astro export hosted on GitHub Pages. It worked — it was fast, free, and easy to forget. The trouble was every new post involved either fighting the Astro build locally or, when I was being honest with myself, hand-editing the built HTML the morning of. By the time I had fourteen posts the friction was loud enough that I rewrote the thing.
This post is the architecture writeup of what replaced it: what the stack is, how a .mdx file becomes a rendered page, how tags and search work, and — more interesting to me — why each piece is the way it is.
What I wanted
The new setup had to do five things:
- No moving parts at runtime. This is a personal blog. It should be statically generated and served as plain HTML+CSS, full stop. No databases, no per-request rendering, nothing waking up at 2 a.m. to throw a 500.
- git push to publish. Drop a Markdown file, push to main, the post is live. No CMS, no auth, no admin UI.
- Code that highlights itself properly. Most of what I write has shell, YAML, and Kubernetes manifests in it. The old Astro setup shipped client-side highlighting JS that flashed unstyled on slow connections.
- SEO and OG metadata as first-class. Sitemaps, RSS, OpenGraph — all derived from the same source of truth as the posts themselves, not hand-maintained alongside them.
- A theme toggle that actually persists. The old site was light-mode only. I read at night.
That list ruled out a few things. It ruled in Next.js + MDX on Vercel, which is what this is now.
The stack, one line each
- Next.js 14 (App Router) — file-based routing, React Server Components, build-time SSG. The whole site is generated at next build and served as static HTML; there is no server runtime in the request path.
- TypeScript strict — non-negotiable.
- Tailwind CSS + @tailwindcss/typography — utility-first for the chrome, the prose plugin for the article body.
- MDX via next-mdx-remote/rsc — Markdown + JSX, rendered server-side as a React Server Component. Zero MDX runtime ships to the browser.
- Shiki via rehype-pretty-code — syntax highlighting at build time, no client JS, dual light/dark theme support.
- gray-matter — frontmatter parser.
- next-themes — dark/light toggle, class-based, system-aware.
- remark-gfm, rehype-slug, rehype-autolink-headings — GFM tables, heading IDs, anchor links.
Why not Astro again? Astro is excellent — for a blog of this size it's probably objectively the better choice. I went with Next because the App Router's RSC model lets me write the post-rendering pipeline as plain server-side functions with no client/server split to think about, and because Vercel's git push → preview URL → production loop is the cleanest deployment story I know of for a one-person project. The trade-off is shipping a small React runtime. For a content blog that's fine.
How a post becomes a page
The data flow, end to end:
content/posts/foo.mdx
│
│ fs.readFile + gray-matter
▼
{ slug, frontmatter, content, readingTime }
│
├──→ getAllPostsMeta() ──→ /, /blog, /tags/*, sitemap.xml, rss.xml
│
└──→ getPostBySlug() ──→ /blog/[slug]/page.tsx
│
│ <Mdx source={content} />
▼
next-mdx-remote/rsc compile
│
├─→ remark-gfm (tables, strikethrough, autolinks)
├─→ rehype-slug (heading IDs)
├─→ rehype-pretty-code (Shiki, dual theme)
└─→ rehype-autolink-headings (clickable headers)
│
▼
React Server Component tree
│
▼
static HTML

There is no runtime "render this MDX." It happens once, at build, per post, and the result is bytes on Vercel's edge.
File layout
src/
├── app/
│ ├── layout.tsx ThemeProvider + Header + Footer + global metadata
│ ├── page.tsx Home (subhead + tag chips + featured + recent)
│ ├── blog/
│ │ ├── page.tsx All posts (search + tag filter via BlogList)
│ │ └── [slug]/
│ │ ├── page.tsx Article view
│ │ └── not-found.tsx
│ ├── tags/[tag]/page.tsx Per-tag static page (one URL per tag)
│ ├── about/page.tsx
│ ├── sitemap.ts Dynamic sitemap (posts + tags + static)
│ ├── robots.ts
│ ├── rss.xml/route.ts Hand-rolled RSS 2.0
│ └── globals.css Tailwind + Shiki dual-theme + heading anchors
│
├── components/
│ ├── Header.tsx, Footer.tsx
│ ├── PostCard.tsx, FeaturedPost.tsx
│ ├── BlogList.tsx Client component: search + tag filter + URL state
│ ├── TagChips.tsx
│ ├── ThemeProvider.tsx, ThemeToggle.tsx
│
└── lib/
├── posts.ts Server-only loader + tag/featured helpers
├── post-types.ts Pure types + tagSlug() (safe to import client-side)
├── mdx.tsx MDXRemote wrapper + the rehype/remark pipeline
├── site.ts Site metadata (used by sitemap, OG, RSS)
└── format.ts
content/posts/*.mdx Source of truth for posts
public/ DOL.png, favicon.svg, fonts/atkinson-*.woff

The split between posts.ts (server-only) and post-types.ts (pure) is load-bearing — more on that below.
The post loader
src/lib/posts.ts reads the filesystem and is therefore server-only. The first line of the file is:
import "server-only";

That's a Next.js helper that throws a build-time error if anything reachable from a "use client" component imports this file. Without it, the first time you accidentally import getAllPosts() from a client component, webpack tries to bundle node:fs for the browser and you get a cryptic "Webpack supports data: and file: URIs by default" error in your build log. With it, the failure is loud and obvious.
The relevant bits of the loader:
const POSTS_DIR = path.join(process.cwd(), "content", "posts");
const isProd = process.env.NODE_ENV === "production";
let cache: Post[] | null = null;
export async function getAllPosts(): Promise<Post[]> {
if (cache && isProd) return cache;
const files = await fs.readdir(POSTS_DIR);
const mdFiles = files.filter((f) => /\.mdx?$/.test(f));
const posts = (await Promise.all(mdFiles.map(readPostFile)))
.filter((p): p is Post => p !== null)
.sort((a, b) => (a.frontmatter.date < b.frontmatter.date ? 1 : -1));
if (isProd) cache = posts;
return posts;
}

A few choices that look small but mattered to me:
Cache only in production. During next dev I want every save to a .mdx file to show up immediately. The cache check if (cache && isProd) makes hot reload work correctly without ever serving stale content from a build.
Drafts are filtered in readPostFile. Frontmatter draft: true posts are dropped during the prod build but visible in dev, so I can write something, see it on localhost:3000, and only when I take draft: true off does it appear in the build.
Reading time is content-derived, not frontmatter. It's a one-liner — Math.max(1, Math.round(words / 220)) — and computing it from the source file means I never get to forget to update it.
Sort is reverse-chronological by string comparison. Dates are stored as "YYYY-MM-DD" strings, which lexicographically sort the same as chronologically, which means I don't pay for new Date() parsing in the hot path. Tiny thing, but it removes a class of timezone bugs.
The loader exposes a few derived helpers — getAllPostsMeta() (drops content), getAllSlugs() (for generateStaticParams), getPostBySlug(), getAllTags(), getPostsByTag(), getFeaturedPost() — all of which call getAllPosts() underneath. One read, many views.
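The three choices above are small enough to sketch as pure helpers. This is an illustrative sketch, not the repo's actual code — the names readingTime and isPublishable are assumptions:

```typescript
// Reading time: derived from the body at load time, never stored in
// frontmatter, so it can't drift out of date.
function readingTime(content: string): number {
  const words = content.split(/\s+/).filter(Boolean).length;
  return Math.max(1, Math.round(words / 220));
}

// Draft filtering: visible in dev, dropped from production builds.
function isPublishable(frontmatter: { draft?: boolean }, isProd: boolean): boolean {
  return !(isProd && frontmatter.draft === true);
}

// Reverse-chronological sort: "YYYY-MM-DD" strings compare the same
// lexicographically as chronologically, so no Date parsing is needed.
const dates = ["2023-11-02", "2024-01-15", "2023-07-30"];
dates.sort((a, b) => (a < b ? 1 : -1));
console.log(dates); // ["2024-01-15", "2023-11-02", "2023-07-30"]
```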
Why a separate post-types.ts
The split looks pedantic but it solves a real problem.
posts.ts imports node:fs. The moment a "use client" component (the search bar, the tag filter) tries to import the tagSlug() helper from posts.ts, the bundler tries to ship node:fs to the browser and the build dies.
Two ways out:
- Inline
tagSlugin the client component — duplicated logic, easy to drift. - Move pure helpers and types into a file with no Node imports.
I went with the latter. src/lib/post-types.ts contains the Post / PostMeta / TagSummary types and tagSlug() — that's it, no I/O. Server code imports from posts.ts, client code imports from post-types.ts, and the import "server-only" directive on posts.ts makes a wrong import an immediate, obvious error.
If you're wondering whether this matters at the scale of fifteen posts: no, it doesn't. But the cost of doing it right at the start is so small that it's worth it for the next time I forget which file I'm in at 11pm.
MDX rendering
MDX rendering happens in src/lib/mdx.tsx:
import { MDXRemote, type MDXRemoteProps } from "next-mdx-remote/rsc";
import rehypeAutolinkHeadings from "rehype-autolink-headings";
import rehypePrettyCode from "rehype-pretty-code";
import rehypeSlug from "rehype-slug";
import remarkGfm from "remark-gfm";
const mdxOptions: MDXRemoteProps["options"] = {
parseFrontmatter: false,
mdxOptions: {
remarkPlugins: [remarkGfm],
rehypePlugins: [
rehypeSlug,
[rehypePrettyCode, {
theme: { dark: "github-dark-dimmed", light: "github-light" },
keepBackground: true,
defaultLang: "plaintext",
}],
[rehypeAutolinkHeadings, {
behavior: "wrap",
properties: { className: ["heading-anchor"] },
}],
],
},
};
export function Mdx({ source }: { source: string }) {
return <MDXRemote source={source} options={mdxOptions} components={...} />;
}

Two design points worth calling out:
parseFrontmatter: false. The frontmatter is parsed once by gray-matter in the loader, then stripped from the body that's handed to MDXRemote. If MDXRemote also tried to parse it I'd be doing the work twice and would have to keep the two parsers in sync.
Server-rendered, not client. next-mdx-remote/rsc (the /rsc subexport) is the React Server Components flavour. It compiles MDX during SSG and returns plain HTML. The browser receives finished markup — no MDX runtime, no syntax-highlighter JS, no hydration cost for the article body. The interactive bits of the site (theme toggle, search, tag filter) are isolated to small client components.
Syntax highlighting (and the dual-theme gotcha I almost missed)
Shiki via rehype-pretty-code runs at build time and emits HTML. Worth understanding what it actually emits.
With a dual-theme config (theme: { dark: "github-dark-dimmed", light: "github-light" }), rehype-pretty-code v0.14 produces a single set of <span>s with inline CSS custom properties for both colours:
<span style="--shiki-dark:#F47067;--shiki-light:#D73A49">import</span>
<span style="--shiki-dark:#ADBAC7;--shiki-light:#24292E"> matter </span>

There is exactly one DOM tree per code block — the theme switch is purely a CSS choice between two custom properties. To make that work I have CSS that maps --shiki-light and --shiki-dark to actual color based on whether <html> has the dark class:
[data-rehype-pretty-code-figure] code,
[data-rehype-pretty-code-figure] pre,
[data-rehype-pretty-code-figure] span {
color: var(--shiki-light);
}
[data-rehype-pretty-code-figure] pre,
[data-rehype-pretty-code-figure] code {
background-color: var(--shiki-light-bg);
}
html.dark [data-rehype-pretty-code-figure] code,
html.dark [data-rehype-pretty-code-figure] pre,
html.dark [data-rehype-pretty-code-figure] span {
color: var(--shiki-dark);
}
html.dark [data-rehype-pretty-code-figure] pre,
html.dark [data-rehype-pretty-code-figure] code {
background-color: var(--shiki-dark-bg);
}I shipped this initially with the wrong CSS — I'd written rules targeting [data-theme="light"] and [data-theme="dark"] because that's the markup older versions of rehype-pretty-code emit. The new version doesn't emit those attributes at all. The result was that in light mode, the --shiki-light custom property was set on every span, but no rule consumed it, so the spans inherited the body's text colour through some other route — and ended up nearly white on white. The fix above mapped the custom properties correctly. Five lines of CSS and an evening of confusion.
The wider lesson: when you upgrade a syntax-highlighting plugin, view-source on a code block before you trust your old CSS.
Tags
Tags are plain frontmatter:
---
title: "..."
date: "..."
tags: ["kubernetes", "post-mortem"]
---

Everything tag-related is derived. There's no tag table, no admin UI, just an aggregation pass over the posts:
post.frontmatter.tags ─┐
│
▼
getAllTags() ──→ TagSummary[] = [{slug, tag, count}]
│
├─→ TagChips on home + per-tag pages
├─→ generateStaticParams() for /tags/[tag]
└─→ sitemap.ts entries

The interesting part is tagSlug():
export function tagSlug(tag: string): string {
return tag
.toLowerCase()
.trim()
.replace(/[^a-z0-9]+/g, "-")
.replace(/^-+|-+$/g, "");
}Frontmatter tags are human-readable ("post-mortem", "ci-cd"); URLs need to be lowercase and slug-safe. tagSlug is the canonical conversion. It runs in two places — getAllTags() when building the index, and the BlogList client component when matching ?tag= URL params against frontmatter — which is why it lives in post-types.ts: both server and client need it, and neither place wants node:fs along for the ride.
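For a concrete feel of the conversion, here is the helper from above run on a few representative inputs (the function body is reproduced verbatim so the example is self-contained; the sample tags are illustrative):

```typescript
// tagSlug as shown above: lowercase, trim, collapse non-alphanumeric
// runs to "-", strip leading/trailing dashes.
function tagSlug(tag: string): string {
  return tag
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

console.log(tagSlug("Post-Mortem")); // "post-mortem"
console.log(tagSlug("CI/CD"));       // "ci-cd"
console.log(tagSlug("  K8s  "));     // "k8s"
```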
The tag pages themselves are statically generated:
export async function generateStaticParams() {
const tags = await getAllTags();
return tags.map(({ slug }) => ({ tag: slug }));
}

At the time of writing, that produces 20 static /tags/<slug>/index.html files, one per tag, each pre-rendered with the matching post list. They land in the sitemap automatically.
Search
Search has fifteen-ish posts to deal with. Whatever you do for search at this scale is going to be fast.
I considered three approaches:
- Server-rendered ?q= page with the filter happening on the server. Works fine, but every keystroke is an HTTP round trip, and either you debounce (delay) or you don't (waste).
- A build-time search index (Pagefind, FlexSearch, etc.). Overkill for fifteen posts — the index would be bigger than the entire post list.
- Pass all post metadata to a client component and filter in-memory on input. What I did.
The component:
"use client";
const filtered = useMemo(() => {
const tokens = q.toLowerCase().split(/\s+/).filter(Boolean);
return posts.filter((p) => {
if (activeTag) {
const tagSlugs = (p.frontmatter.tags ?? []).map(tagSlug);
if (!tagSlugs.includes(activeTag)) return false;
}
if (tokens.length === 0) return true;
const haystack = [
p.frontmatter.title,
p.frontmatter.description ?? "",
(p.frontmatter.tags ?? []).join(" "),
].join(" ").toLowerCase();
return tokens.every((tok) => haystack.includes(tok));
});
}, [posts, q, activeTag]);

A few intentional choices:
Title + description + tags is the search corpus, not post bodies. Body search would mean shipping every post's prose to the browser at /blog. Titles and descriptions catch what people are realistically searching for ("docker", "jenkins dns") and the bundle stays under 100kB.
Token-AND, not phrase-match. "jenkins dns" matches a post if both jenkins and dns appear anywhere in title/description/tags. Better recall than phrase matching for a small corpus and zero ceremony.
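Pulled out of the component, the token-AND predicate is just this (a standalone sketch — the component above inlines it against the title/description/tags haystack):

```typescript
// Token-AND matching: split the query on whitespace and require every
// token to appear somewhere in the haystack. No phrase semantics.
function matchesQuery(haystack: string, query: string): boolean {
  const tokens = query.toLowerCase().split(/\s+/).filter(Boolean);
  return tokens.every((tok) => haystack.toLowerCase().includes(tok));
}

console.log(matchesQuery("Jenkins DNS outage post-mortem", "jenkins dns")); // true
console.log(matchesQuery("Jenkins DNS outage post-mortem", "jenkins aws")); // false
console.log(matchesQuery("anything", "")); // true — empty query matches everything
```

Note the empty-query behaviour falls out of Array.prototype.every on an empty array, which mirrors the `if (tokens.length === 0) return true` short-circuit in the component.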
URL-synced state. ?q=jenkins and ?tag=kubernetes are real query params, written via router.replace(..., { scroll: false }) so the URL is shareable but typing doesn't yank the page back to the top:
const updateParams = useCallback((next: Record<string, string | null>) => {
const params = new URLSearchParams(searchParams.toString());
for (const [key, value] of Object.entries(next)) {
if (value === null || value === "") params.delete(key);
else params.set(key, value);
}
const qs = params.toString();
startTransition(() => router.replace(qs ? `/blog?${qs}` : "/blog", { scroll: false }));
}, [router, searchParams]);

startTransition marks the URL update as non-urgent, which means React doesn't re-suspend the boundary on every keystroke.
[server] /blog/page.tsx
│
├─→ getAllPostsMeta() → posts: PostMeta[] ──┐
├─→ getAllTags() → tags: TagSummary[] ──┤ serialised into the page
│ │
└─→ <Suspense> │
└─→ <BlogList posts={posts} tags={tags} />─┘ ← becomes the client bundle
│
├─ useSearchParams() ← reads ?q=…&tag=…
├─ useRouter() ← writes URL on change (replace, no scroll)
├─ useMemo() ← filters posts by tokens + active tag
└─ <input> + chip toggles + grid of PostCards

The <Suspense> boundary is a Next.js requirement: any client component that calls useSearchParams() has to sit inside a Suspense boundary in App Router builds, or next build fails.
Theme toggle
next-themes does the heavy lifting:
<NextThemesProvider
attribute="class"
defaultTheme="system"
enableSystem
disableTransitionOnChange
>
{children}
</NextThemesProvider>

attribute="class" is the bit that makes the Shiki CSS above work — the theme is communicated via a dark class on <html>, which my custom-property selectors key off.
The other catch is hydration. The theme is a client-side concept (it reads localStorage/system pref), but the page is rendered statically. Without care, the server-rendered HTML doesn't match the client's first paint, and React screams about it. Two pieces handle that:
- <html suppressHydrationWarning> in the root layout — tells React that the class and style of <html> are deliberately client-controlled.
- A mounted guard in the toggle button itself:
const { resolvedTheme, setTheme } = useTheme();
const [mounted, setMounted] = useState(false);
useEffect(() => setMounted(true), []);
const isDark = mounted && resolvedTheme === "dark";
return (
<button onClick={() => setTheme(isDark ? "light" : "dark")} ...>
{mounted ? (isDark ? "☀" : "☾") : ""}
</button>
);

Until mounted flips, the button renders empty. After hydration, it shows the correct icon. Without the guard, the server renders ☾ (or ☀) based on a default that the client may immediately override, and you get a hydration mismatch on the first paint of every page.
Metadata, sitemap, RSS
Metadata is set per page using the App Router's Metadata API:
export async function generateMetadata({ params }: PageProps): Promise<Metadata> {
const post = await getPostBySlug(params.slug);
if (!post) return {};
return {
title: post.frontmatter.title,
description: post.frontmatter.description,
alternates: { canonical: `/blog/${post.slug}` },
openGraph: {
type: "article",
title: post.frontmatter.title,
description: post.frontmatter.description,
publishedTime: post.frontmatter.date,
images: [{ url: post.frontmatter.cover ?? site.ogImage }],
},
};
}

The sitemap and RSS feed are similarly derived from getAllPostsMeta(). There's no static sitemap.xml checked into the repo — the sitemap is a TypeScript file that runs at build time:
export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
const [posts, tags] = await Promise.all([getAllPostsMeta(), getAllTags()]);
return [
{ url: `${site.url}/`, ... },
{ url: `${site.url}/blog`, ... },
{ url: `${site.url}/about`, ... },
...posts.map((p) => ({ url: `${site.url}/blog/${p.slug}`, ... })),
...tags.map((t) => ({ url: `${site.url}/tags/${t.slug}`, ... })),
];
}

Same for robots.ts (MetadataRoute.Robots) and rss.xml/route.ts (a Route Handler that returns an XML string). One source of truth for "what posts exist" feeds five different consumers — pages, sitemap, RSS, OG metadata, the home/blog/tag UIs. Add a .mdx file, all five update.
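The hand-rolled RSS builder is the kind of thing that fits in one function. A sketch of the shape, under the assumption that the route handler serialises post metadata into RSS 2.0 items — the FeedItem type, escapeXml helper, and buildRss name are all illustrative, not the repo's actual code:

```typescript
// Minimal RSS 2.0 serialisation from post metadata. A real feed would
// also carry channel description, language, guid, etc.
type FeedItem = { title: string; slug: string; date: string; description: string };

const escapeXml = (s: string) =>
  s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");

function buildRss(siteUrl: string, title: string, items: FeedItem[]): string {
  const entries = items
    .map(
      (p) => `  <item>
    <title>${escapeXml(p.title)}</title>
    <link>${siteUrl}/blog/${p.slug}</link>
    <pubDate>${new Date(p.date).toUTCString()}</pubDate>
    <description>${escapeXml(p.description)}</description>
  </item>`
    )
    .join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
  <title>${escapeXml(title)}</title>
  <link>${siteUrl}</link>
${entries}
</channel>
</rss>`;
}
```

In an App Router route handler, the returned string would be wrapped as `new Response(xml, { headers: { "Content-Type": "application/rss+xml" } })`.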
Deployment
Vercel auto-detects Next.js. The whole deployment setup amounts to this:
- package.json declares next as a dependency.
- One env var: NEXT_PUBLIC_SITE_URL=https://devopsoutloud.com, falling back to https://${VERCEL_URL} for previews. It's the only thing src/lib/site.ts reads.
- Pushing any branch builds a preview deployment with its own URL.
- Pushing the production branch deploys to the apex domain.
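The env-var fallback can be sketched as a small resolver. This is an assumption about how src/lib/site.ts is organised, not its actual code — in particular, the resolveSiteUrl name and the localhost default for local dev are my additions:

```typescript
// Resolve the canonical site URL from the environment:
// explicit NEXT_PUBLIC_SITE_URL in production, the Vercel-provided
// hostname on preview deployments, localhost otherwise (assumed).
function resolveSiteUrl(env: Record<string, string | undefined>): string {
  if (env.NEXT_PUBLIC_SITE_URL) return env.NEXT_PUBLIC_SITE_URL;
  if (env.VERCEL_URL) return `https://${env.VERCEL_URL}`; // preview deployments
  return "http://localhost:3000"; // local dev fallback (assumption)
}

console.log(resolveSiteUrl({ NEXT_PUBLIC_SITE_URL: "https://devopsoutloud.com" }));
// "https://devopsoutloud.com"
console.log(resolveSiteUrl({ VERCEL_URL: "my-branch-abc123.vercel.app" }));
// "https://my-branch-abc123.vercel.app"
```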
next build does the actual work: it walks app/, calls generateStaticParams for the dynamic segments ([slug], [tag]), runs the MDX pipeline once per post, writes the static HTML to .next/, and Vercel serves it from the edge. There is no Node runtime in the request path. There is also nothing for me to operate.
Build numbers from a recent run for reference:
Route (app) Size First Load JS
┌ ○ / 182 B 96.1 kB
├ ○ /about 138 B 87.4 kB
├ ○ /blog 1.84 kB 97.8 kB
├ ● /blog/[slug] 182 B 96.1 kB
├ ├ /blog/tls-handshake-explained
├ ├ /blog/kubernetes-physics-airgap
├ └ [+12 more paths]
├ ○ /robots.txt 0 B 0 B
├ ○ /rss.xml 0 B 0 B
├ ○ /sitemap.xml 0 B 0 B
└ ● /tags/[tag] 180 B 96.1 kB
├ /tags/kubernetes
├ /tags/jenkins
└ [+18 more paths]
+ First Load JS shared by all 87.3 kB

96 kB First Load JS, almost all of it shared React + Next runtime. The article pages ship 182 bytes of route-specific JS each.
The two things that almost broke the cutover
The migration above sounds clean. It wasn't. Two specific Vercel-side gotchas turned the deploy from "Vercel detected Next.js, built it, served it" into a couple of hours of debugging. Worth writing down for the next person.
The framework preset stays pinned per deployment
When I first connected the GitHub repo to Vercel, main still had the old Astro static export at the root. Vercel auto-detected that as Other (a plain static project), saved that as the project's framework preset, and built the existing files.
After I pushed the Next.js code to main, Vercel ran next build (because the build command was already wired up) — the build itself succeeded, all 45 routes generated. But the deployment kept the original Other adapter, which means it ignored the .next/ output entirely and served the project root directory as static files. The root no longer had an index.html — it had moved to _legacy/ — so every route 404'd.
The fix wasn't visible from the build log. It was in Project Settings → Build and Deployment → Framework Settings, where two panels disagreed:
Production Overrides Framework: Other ← what the live deployment uses
Project Settings Framework: Next.js ← what new deployments will use

Vercel only applies the Project Setting to new deployments. Existing ones stay locked to whatever their "Production Overrides" panel says. The fix: trigger a fresh deploy with "Use existing Build Cache" unchecked. The new build picks up the Next.js preset, the Next.js adapter wraps the .next/ output correctly, and the routes start serving.
The lesson is broader than Vercel: when a build pipeline auto-detects a framework on first run and saves that detection somewhere, that saved value is sticky. If your repo's framework changes (Astro → Next.js, in this case), the saved detection won't update on its own.
next-mdx-remote@5 is blocked at the security gate
The first Next.js push built fine. Vercel then refused to promote it with:
Vulnerable version of next-mdx-remote detected (5.0.0). Please update to version 6.0.0 or later.
Vercel runs a security scan after the build and rejects deployments that include known-vulnerable packages. The build status flipped from Ready to Error, the previous (old Astro) deployment stayed live as production, and from outside it looked like nothing had happened.
Fix: npm install next-mdx-remote@6 (the /rsc API surface is unchanged) and push.
The lesson here is mostly about how you read deployment status. A green build log is not a green deploy — Vercel can run several gates after the build finishes (security scans, integrations, custom checks) and any one of them can flip the deployment to Error while your terminal still shows "✓ Compiled successfully". When something looks wrong, look at the deployment's status field, not the bottom of the build log.
What's not built (yet)
A handful of things I intentionally left out of this round:
- Mermaid / diagrams as code. The diagrams in this post are ASCII inside plain code fences. It works, it scrolls horizontally on mobile, and it doesn't pull in another rendering pipeline. If I find I'm drawing complex sequence diagrams often, rehype-mermaid is a one-line plugin add.
- Full-text search. Currently the corpus is title + description + tags. If the post count gets into the hundreds I'd add Pagefind to build an index at deploy time.
- View counts / reactions / comments. Would need a backend. I don't want a backend.
- Cleanup of _legacy/. The old static export is still in the repo as a content reference. It'll be deleted once the Vercel deploy has been live for a couple of weeks.
The TL;DR
- Posts are MDX files in content/posts/. gray-matter reads the frontmatter; next-mdx-remote/rsc renders the body server-side at build time. No client MDX runtime.
- Shiki highlights code at build via rehype-pretty-code. Dual themes are handled with CSS custom properties (--shiki-light / --shiki-dark) toggled by the html.dark class.
- Tags are derived from frontmatter. getAllTags() aggregates them; generateStaticParams produces a static /tags/[tag] page per tag and a sitemap entry per tag.
- Search is a client-side filter on title + description + tags, with ?q= and ?tag= URL-synced via router.replace and startTransition.
- The dark-mode toggle uses next-themes with attribute="class", plus suppressHydrationWarning on <html> and a mounted guard in the button.
- Sitemap, RSS, and OG metadata are all generated from the same loader as the pages — one source of truth, five consumers.
- Deployed on Vercel as static files. No request-path runtime, one env var, git push to publish.
The repo itself is private, but if you're building something similar the most interesting files, in order, are: src/lib/posts.ts (the loader), src/lib/mdx.tsx (the rehype/remark pipeline), src/components/BlogList.tsx (the URL-synced search/filter client component), and src/app/globals.css (the Shiki dual-theme CSS).