Why I prefer serverless
After running backends on EC2, Elastic Beanstalk, and GKE in production, almost every new side project I start now goes serverless on AWS Lambda. As a backend engineer who's also had to wear the infra hat, the tradeoffs come out overwhelmingly in serverless's favor. Here's what I've learned along the way.
What running production actually costs
At one company, we had a strong infra team, so backend engineers rarely had to touch infrastructure directly. The exception was the blockchain node servers I owned, which ran on EC2 (not EKS), so I spent enough time wrestling with vim and Ubuntu to handle config myself. The next company was a different story. There was no dedicated infra person, so backend engineers had to absorb that work too. I inherited services my predecessor had built on GCE (GCP's equivalent of EC2) with Docker, plus services running on Elastic Beanstalk.
1) Forget logrotate and the server dies. Back then, we went through a big migration that moved our entire IP range. Relocating 15+ EC2 instances took a full month. The invisible config underneath was a mountain: nginx, pm2, firewall rules, peer whitelists for each blockchain node, env files we had to ferry by hand because of the closed-network setup. One server's logrotate config got missed in the move; its disk quietly filled up and eventually brought the server down. That was the first time I really felt how a single missed line can kill a server.
2) Docker images pile up and break the deploy. One day a deploy failed. Digging in, I found the deploy script was using fuzzy matching to delete old images, missing some of them, and over time those leftover images had quietly piled up and eaten the disk. I patched the GitHub Actions script as a stop-gap, but that was when it really hit me how non-core work like this nibbles away at product time.
3) TLS renewals and 3 AM alerts. Let's Encrypt certificates expire every 90 days. If the certbot renewal cron silently fails, the site dies with them. Recovery means going through DNS verification, reissue, and reload, easily burning 30 minutes. And to even know it died, you end up wiring up your own healthchecks and Slack alerts.
4) Managed doesn't mean hands-off. Managed services also mean more of the stack you can't touch. Beanstalk's bundled Node version was stuck on an old release. I needed Node 18+ for a new dependency, and got tangled up in compatibility issues trying to upgrade the platform. When something breaks one abstraction layer above you, you can't even SSH in to fix it.
5) K8s (GKE) is its own kind of weight. I've run GKE too. The abstraction is appealing, but it carries plenty of hidden cost. For example, to connect a GKE workload to Cloud SQL Postgres, you can't just whitelist an outbound IP, because nodes get fresh ephemeral IPs whenever they're recreated. You either pin outbound traffic through Cloud NAT or run the Cloud SQL Auth Proxy as a sidecar. K8s already has a steep learning curve on its own, and GCP's abstractions stack one more layer on top.
This kind of grunt work kept eating product time. Each item alone took 30 minutes to an hour, but added together, half a day a week was disappearing into infra.
So side projects start on Lambda
After enough of these, the conclusion was obvious. As a solo operator on a side project, I'm not signing up to carry that operational weight again. At a company, you at least share the load with infra teammates. On a personal project, the person waking up at 3 AM to clear the disk is also me.
So I started every side project on Lambda from day one. The moment I moved over, almost all of the chores I described above disappeared.
- No persistent disk to fill, so no log explosions (logs stream to CloudWatch automatically)
- No Docker cleanup (no image or container concept)
- No manual TLS management (API Gateway and ACM take care of it)
- No worrying whether the instance is alive or dead (there's no server in the picture)
Databases stay near zero too if you lean on free tiers; see the connection sketch after this list.
- MongoDB Atlas M0 (512MB, free)
- Neon Postgres (free tier, plenty of compute hours + 0.5GB storage)
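If you go the Neon route, its serverless driver fits Lambda especially well: queries go over HTTP, so there's no TCP connection pool to manage across invocations. A minimal sketch, assuming a DATABASE_URL env var (my naming, not a requirement):

```typescript
// A minimal Lambda handler querying Neon over HTTP via the serverless driver.
// Assumes a DATABASE_URL env var holding the Neon connection string.
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL!); // module scope: reused across warm invocations

export const handler = async () => {
  // Each query is a single HTTP round trip; nothing to pool or clean up.
  const [row] = await sql`SELECT now() AS ts`;
  return { statusCode: 200, body: JSON.stringify(row) };
};
```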
The honest tradeoffs
Serverless isn't a silver bullet. Here are the downsides I actually ran into.
1) Vendor lock-in. Code written for Lambda doesn't move cleanly to other environments. The AWS dependency runs deep. But for an indie project, the bigger variable is whether you ship at all, so I don't see lock-in as a dealbreaker.
2) Node version deprecation still matters. I don't have to chase OS patches like on EC2, but when AWS retires a Lambda Node runtime, I'm the one who has to upgrade. Far cleaner than the Beanstalk days, but not zero work.
3) The Serverless Framework ecosystem was rough before the LLM era. This was the hardest part when I first started using serverless, before LLM coding assistants were ubiquitous: commands, plugins, env injection at deploy time, a Stack Overflow tab open the whole way while I fumbled around.
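To make "env injection at deploy time" concrete, here's roughly the shape of a Serverless Framework config in its TypeScript form. A minimal sketch; the service name and env var are illustrative, not from any real project of mine:

```typescript
// serverless.ts: a minimal sketch of a Serverless Framework config.
// Service name and env var are illustrative.
import type { AWS } from '@serverless/typescript';

const config: AWS = {
  service: 'side-project-api',
  frameworkVersion: '3',
  provider: {
    name: 'aws',
    runtime: 'nodejs18.x',
    environment: {
      // Resolved on the deploying machine at deploy time; this is the
      // "env injection" that took fumbling to get right.
      DATABASE_URL: '${env:DATABASE_URL}',
    },
  },
  functions: {
    api: {
      handler: 'src/handler.handler',
      // Single catch-all route; API Gateway terminates TLS for us.
      events: [{ httpApi: '*' }],
    },
  },
};

module.exports = config;
```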
4) The NestJS + Lambda combo always felt a bit sketchy. Running NestJS on Lambda requires an adapter. The original aws-serverless-express from AWS Labs was archived around 2022, and the de-facto standard since then has been a community fork, @codegenie/serverless-express. In other words, the "official path" for running NestJS on Lambda disappeared. It worked fine, but having a critical production dependency live on one person's fork sat at the back of my mind.
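For reference, the adapter wiring itself is short. A minimal sketch of the usual pattern, assuming a standard Nest AppModule; the handler caching is the part that matters:

```typescript
// lambda.ts: the usual NestJS-on-Lambda wiring via @codegenie/serverless-express.
// Sketch only; assumes a standard Nest AppModule.
import serverlessExpress from '@codegenie/serverless-express';
import { NestFactory } from '@nestjs/core';
import type { Handler } from 'aws-lambda';
import { AppModule } from './app.module';

let cached: Handler | undefined; // module scope survives warm invocations

export const handler: Handler = async (event, context, callback) => {
  if (!cached) {
    // Bootstrapping Nest here is the bulk of the cold start cost.
    const app = await NestFactory.create(AppModule);
    await app.init();
    cached = serverlessExpress({ app: app.getHttpAdapter().getInstance() });
  }
  return cached(event, context, callback);
};
```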
5) Cold starts are real. This is the biggest downside in practice. When traffic is low and execution environments are frequently reclaimed, the first request after a gap arrives 1 to 3 seconds late. On a rarely-called critical path like payments, that compounds and hits UX directly. The good news is that framework choice solves much of this; I'll cover the actual numbers in Part 2.
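If you want to see how often this bites in your own logs, a module-scope flag is enough: it persists exactly as long as the execution environment does, so it's false once per container. A quick sketch:

```typescript
// Crude cold start marker: module scope lives as long as the execution
// environment, so coldStart is true exactly once per container.
let warmedUp = false;

export const handler = async () => {
  const coldStart = !warmedUp;
  warmedUp = true;

  // Grep CloudWatch for "coldStart":true to see how often environments recycle.
  console.log(JSON.stringify({ coldStart }));

  return { statusCode: 200, body: 'ok' };
};
```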
My default stack now
Here's what my current backend default looks like for indie projects.
- Hosting: AWS Lambda + API Gateway (Serverless Framework as IaC)
- Framework: Something light, like Hono (a full handler is sketched after this list). Heavy frameworks like NestJS add a noticeable cold start penalty.
- DB: MongoDB Atlas M0 or Neon Postgres free tier
- CI/CD: GitHub Actions + OIDC (no access keys to carry around)
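To show how little code the lightweight option involves, here's a complete Hono app wired to Lambda. A sketch; the route is illustrative:

```typescript
// handler.ts: a complete Hono app on Lambda. The hono/aws-lambda adapter
// translates API Gateway events into fetch-style requests and back.
import { Hono } from 'hono';
import { handle } from 'hono/aws-lambda';

const app = new Hono();

app.get('/health', (c) => c.json({ ok: true })); // illustrative route

export const handler = handle(app);
```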
Closing
For high-traffic production or long-running workloads, other choices may fit better.
But in the indie context, where you need to ship an unproven idea solo as fast as possible, with minimal operational overhead, at near-zero cost, serverless plus a lightweight framework should be close to a default. That's the conclusion I've drawn from everything I've shipped on the side so far.