I spent a day building this site from scratch. Not because there aren’t easier options — there are plenty — but because I wanted to own the infrastructure and understand every layer. Here’s what I built and why.
The constraint that drove every decision
I didn’t want a managed platform. Ghost, Squarespace, Substack — they’re all fine until they’re not. Pricing changes. Features get enshittified. Export formats break. The moment you depend on someone else’s persistence layer for your writing, you’re renting, not owning.
The goal: my content in plain markdown files, my infrastructure, my control. Total cost under $5/month.
The stack
Static site generator: Astro
Astro compiles everything to static HTML at build time. No server, no runtime, no database. The site is just files. I used the Astro Paper theme as a base — dark mode default, clean typography, built-in search.
The build script does three things: generates llms-full.txt (more on that below), runs the Astro build, then generates the search index with Pagefind. One command.
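Roughly, the wiring is one npm script chaining three commands (the script name and paths here are illustrative, not the exact setup):

node scripts/generate-llms-full.mjs && astro build && npx pagefind --site dist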
Hosting: S3 + CloudFront
The built files go into an S3 bucket with public access completely disabled. CloudFront sits in front of it using Origin Access Control — only CloudFront can read from S3, nothing else. ACM handles the SSL cert. Route 53 points the apex domain and www subdomain to CloudFront.
Result: HTTPS enforced, HTTP redirects automatically, global CDN edge caching, and the bucket itself is locked down.
One gotcha I hit immediately: every post URL returned a 404. The files were in S3, the deploy worked fine — but /posts/my-post/ returned nothing.
The problem is subtle. Astro (like most static site generators) builds each post as /posts/my-post/index.html — an index.html file inside a subdirectory. S3 doesn’t resolve directories to index files, and CloudFront’s DefaultRootObject setting only applies to the apex / — it doesn’t cascade to subdirectories. So when CloudFront asked S3 for /posts/my-post/, S3 found no object at that exact key and returned a 403 (S3 reports a missing key as AccessDenied rather than NotFound when the caller lacks s3:ListBucket, which the OAC policy doesn’t grant), and CloudFront served the 404 page.
The fix is a CloudFront Function — a small JavaScript function that runs at the CDN edge on every incoming request before it reaches S3. It checks the URI: if it ends with /, append index.html. If it has no file extension, append /index.html. That’s it — about 15 lines of code.
function handler(event) {
    var request = event.request;
    var uri = request.uri;

    if (uri.endsWith('/')) {
        request.uri += 'index.html';
    } else if (!uri.includes('.')) {
        request.uri += '/index.html';
    }

    return request;
}
Attach it to the distribution as a viewer-request handler and every URL on the site resolves correctly. This is a one-time infrastructure fix — not something you repeat per deploy.
If you’re building a static site on S3 + CloudFront with OAC, add this function before you go live. It’s the kind of thing that works fine locally (dev servers handle it automatically) and breaks silently in production.
Email: SES + Lambda
I wanted amit@artificialcuriositylabs.dev to work as a real email address without running a mail server. The setup:
- SES receives inbound email for the domain
- A receipt rule stores incoming messages in S3
- A Lambda function rewrites the headers and forwards to Gmail, preserving the Reply-To so replies look native
It took about an hour to wire up. DKIM records in Route 53, MX record pointing to SES. The Lambda function is about 80 lines of Node.js. Works exactly like having an inbox without actually having one.
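The shape of that Lambda, trimmed down (a sketch rather than the exact code; the bucket, key prefix, and forwarding address are placeholders, and it assumes the AWS SDK v3 clients available in the Node.js Lambda runtime):

import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { SESClient, SendRawEmailCommand } from '@aws-sdk/client-ses';

const s3 = new S3Client({});
const ses = new SESClient({});

const BUCKET = 'inbound-mail-bucket';             // placeholder
const KEY_PREFIX = 'inbound/';                    // prefix used by the SES receipt rule (assumed)
const FROM = 'amit@artificialcuriositylabs.dev';  // verified SES identity
const FORWARD_TO = 'someone@gmail.com';           // placeholder

export const handler = async (event) => {
    // SES invokes the Lambda with the message ID; the receipt rule has already
    // written the raw MIME message to S3 under that ID.
    const messageId = event.Records[0].ses.mail.messageId;
    const obj = await s3.send(new GetObjectCommand({ Bucket: BUCKET, Key: KEY_PREFIX + messageId }));
    const raw = await obj.Body.transformToString();

    // Split headers from body. Rewrite From to a verified identity, point
    // Reply-To at the original sender, and drop headers (Return-Path,
    // DKIM-Signature) that would fail validation on the forwarded copy.
    const [headerBlock, ...bodyParts] = raw.split('\r\n\r\n');
    const originalFrom = /^from:\s*(.*)$/im.exec(headerBlock)?.[1] ?? FROM;
    const headers = headerBlock
        .split(/\r\n(?!\s)/) // keep folded header lines together
        .filter((h) => !/^(from|to|reply-to|return-path|sender|dkim-signature):/i.test(h));
    headers.push(`From: ${FROM}`, `Reply-To: ${originalFrom}`, `To: ${FORWARD_TO}`);

    await ses.send(new SendRawEmailCommand({
        Source: FROM,
        Destinations: [FORWARD_TO],
        RawMessage: { Data: Buffer.from(headers.join('\r\n') + '\r\n\r\n' + bodyParts.join('\r\n\r\n')) },
    }));
};

The one constraint SES imposes is that the From address on the forwarded copy has to be a verified identity in your account, which is why the original sender ends up in Reply-To instead.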
The decision I’m most glad I made: llms.txt
There’s an emerging standard for AI-readable content — llms.txt as an index file, similar to robots.txt, that tells AI crawlers what’s on the site and where. I added two files:
- /llms.txt — a curated index of pages, topics, and permissions
- /llms-full.txt — every blog post concatenated into a single file, auto-generated at build time
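The index file itself is just markdown in the shape the llms.txt proposal suggests; the entries below are illustrative, not the live file:

# Artificial Curiosity Labs

> Personal site and blog. Content is plain markdown; the full text of every post is at /llms-full.txt.

## Posts
- [How I built this site](https://artificialcuriositylabs.dev/posts/how-i-built-this-site/): static Astro site on S3 + CloudFront
- [Another post](https://artificialcuriositylabs.dev/posts/another-post/): one-line description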
The second one is the interesting one. Any AI system that fetches llms-full.txt gets the complete text of everything I’ve published, in one request, structured and clean. It’s a better interface for AI consumption than crawling individual HTML pages.
The generator script reads from the blog content directory, strips frontmatter, and concatenates everything with headers separating posts. Runs in under a second as part of the normal build.
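A minimal version of the generator looks like this (paths, the title handling, and the separator are illustrative; it assumes posts live in Astro’s content collection directory and that public/ is copied verbatim into the build output):

import { readdir, readFile, writeFile } from 'node:fs/promises';
import path from 'node:path';

const CONTENT_DIR = 'src/content/blog';   // Astro content collection (assumed path)
const OUT_FILE = 'public/llms-full.txt';   // public/ ships as-is with the built site

const files = (await readdir(CONTENT_DIR)).filter((f) => /\.mdx?$/.test(f));

const sections = [];
for (const file of files) {
    const raw = await readFile(path.join(CONTENT_DIR, file), 'utf8');

    // Strip the YAML frontmatter block between the leading --- fences.
    const match = /^---\r?\n([\s\S]*?)\r?\n---\r?\n?/.exec(raw);
    const frontmatter = match ? match[1] : '';
    const body = match ? raw.slice(match[0].length) : raw;

    // Use the frontmatter title as a section header, falling back to the filename.
    const title = /^title:\s*["']?(.+?)["']?\s*$/m.exec(frontmatter)?.[1] ?? file;
    sections.push(`# ${title}\n\n${body.trim()}`);
}

await writeFile(OUT_FILE, sections.join('\n\n---\n\n') + '\n');
console.log(`llms-full.txt: ${sections.length} posts`);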
I don’t know exactly how this will get used — but making the content machine-readable is a zero-cost decision with asymmetric upside.
What I’d do differently
Deploy script: right now deploying is three commands (build, two S3 syncs with different cache headers, CloudFront invalidation). I should wrap this in a single shell script. It’s the obvious next step and I haven’t done it yet.
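Something like this would do it (bucket name, distribution ID, and cache lifetimes are placeholders):

#!/usr/bin/env bash
set -euo pipefail

npm run build

# Hashed assets: cache as long as possible at the edge and in browsers.
aws s3 sync dist/ s3://my-site-bucket --delete \
  --exclude "*.html" \
  --cache-control "public, max-age=31536000, immutable"

# HTML: always revalidate so new deploys show up immediately.
aws s3 sync dist/ s3://my-site-bucket \
  --exclude "*" --include "*.html" \
  --cache-control "public, max-age=0, must-revalidate"

aws cloudfront create-invalidation --distribution-id ABCDEFGHIJKLM --paths "/*"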
Demo content: the Astro Paper theme ships with demo blog posts. They’re still there. Replacing them with real writing is the actual work — the infrastructure was the easy part.
The question this raises
Static sites feel like going backward until you realize what you’re trading away: runtime complexity, database dependencies, server costs, someone else’s uptime SLA. The question isn’t “why would you use a static site in 2026” — it’s “why would you add a server if you don’t need one?”