
Why We Self-Host 90+ Services Instead of Using AWS

Self-hosted vs AWS cost comparison: 90 Docker containers for $58/month vs $1,700-2,400 on AWS. Real line items, honest hidden costs, and a decision framework.

TechSaaS Team · 11 min read


We run 90 Docker containers on a single server. Postgres, Redis, MongoDB, Gitea, Grafana, Prometheus, n8n, Authelia, 20+ web applications, and a full monitoring stack. The AWS equivalent would cost us $1,700-2,400 per month. We pay under $65.

This isn't an ideological argument against cloud. It's math. We'll show you the real line items on both sides — including the hidden costs of self-hosting that most comparison articles conveniently skip.

What 90 Containers Actually Looks Like

First, let's be transparent: 90 containers does not mean 90 services. Multi-container applications inflate the count. Plane (project management) runs 9 containers. HyperSwitch (payments) runs 5. Penpot (design) runs 3. The monitoring stack is 8 containers. Our actual distinct services number around 40-45.

Here's the real categorized inventory:

| Category | Containers | Services |
| --- | --- | --- |
| Project Management | 9 | Plane (admin, api, worker, beat-worker, live, minio, mq, space, web) |
| Monitoring & Observability | 8 | Prometheus, Grafana, Loki, Promtail, Node-exporter, cAdvisor, Uptime Kuma, Dozzle |
| Databases & Caches | 6 | PostgreSQL, MongoDB, Redis, FalkorDB, Elasticsearch, MinIO |
| Payment Processing | 5 | HyperSwitch (server, consumer, producer, control-center, web) |
| Design Tools | 3 | Penpot (backend, exporter, frontend) |
| Invoicing | 3 | InvoiceNinja (app, db, web) |
| Git & CI/CD | 3 | Gitea, Gitea Runner, ephemeral CI jobs |
| Security | 3 | Authelia, CrowdSec, Vaultwarden |
| Infrastructure | 5 | Traefik, Autoheal, Cloudflared (x2), Dockge |
| Content & Docs | 3 | Directus CMS, BookStack, DocuSeal |
| Workflow Automation | 2 | Temporal, N8N |
| CRM | 2 | Twenty (server, worker) |
| Error Tracking | 2 | GlitchTip (app, worker) |
| Productivity SaaS | 12+ | Nextcloud, Listmonk, Paperless-ngx, Linkwarden, Metabase, PhotoPrism, Jellyfin... |
| Custom Apps & Webapps | 11+ | Company website, Contact API, OpenClaw, Umami, educational apps... |

The host: 7 vCPUs (AMD Ryzen 5 3550H), 13 GiB RAM, 1 TB NVMe SSD, 88 GB swap file. Load average hovers around 8.7 — technically overcommitted on CPU. 19 GiB of swap is in active use.

On paper, this shouldn't work. In practice, most containers are idle most of the time. Swap absorbs the overflow. Autoheal catches failures. It works — but we'll be honest about the trade-offs.

The AWS Bill We'd Be Paying

Let's map our stack to AWS services using current on-demand pricing (us-east-1, March 2026):

Compute and Managed Services

| AWS Service | Our Equivalent | Monthly Cost |
| --- | --- | --- |
| EC2 t3.2xlarge (8 vCPU, 32 GB) | Host compute | $242.94 |
| RDS PostgreSQL db.t3.large (100 GB) | postgres container | $117.35 |
| ElastiCache Redis cache.t3.medium | redis container | $49.64 |
| ECS Fargate (~20 small tasks) | App containers | $720.80 |
| ALB + LCU costs | Traefik | ~$41.43 |
| CloudWatch (50 metrics + 10 GB logs) | Prometheus + Loki | ~$20.00 |
| S3 (50 GB storage) | Local NVMe | ~$2.00 |
| AWS infrastructure subtotal | | ~$1,194 |

SaaS Equivalents

| Managed Service | Our Self-Hosted | Monthly Cost |
| --- | --- | --- |
| Grafana Cloud Pro | Grafana | $29 |
| Datadog Infrastructure | Prometheus + Grafana + Loki | $180+ |
| Auth0 B2B Essential | Authelia | $35+ |
| GitLab Premium (5 users) | Gitea | $145 |
| Confluence Standard (5 users) | BookStack | $30 |
| Sentry Team | GlitchTip | $26 |
| Linear (10 users) | Plane | $80 |
| SaaS subtotal | | ~$525+ |

Realistic AWS total: $1,700-2,400/month depending on how many services you map to Fargate vs. EC2 and which managed SaaS you include.


The Fargate Cost Killer

ECS Fargate is where the math breaks down for small services. Fargate's minimum allocation is 0.25 vCPU + 0.5 GB memory per task — costing ~$18/month. For a container like our Excalidraw whiteboard that uses 15 MiB of memory and effectively zero CPU, you still pay $18/month. Multiply by 20 low-traffic services and you're paying $360/month for containers that use almost nothing.

Self-hosted, those same containers share resources, and the marginal cost of each additional one is effectively zero. That's the fundamental advantage: on a single host, idle containers are free. On AWS, idle containers are $18 each.
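The idle-fleet math above can be sketched in a few lines. The ~$18/month minimum-task figure and the 20-service count come from this article and are illustrative, not output from AWS's pricing calculator:

```python
# Sketch of the Fargate idle-fleet math. Figures are the article's own
# illustrative numbers: ~$18/month for the 0.25 vCPU + 0.5 GB minimum task.
FARGATE_MIN_TASK_MONTHLY = 18.0
IDLE_SERVICES = 20

def fargate_idle_cost(tasks: int, per_task: float = FARGATE_MIN_TASK_MONTHLY) -> float:
    """Monthly cost of running `tasks` mostly-idle containers as Fargate tasks."""
    return tasks * per_task

def self_hosted_marginal_cost(tasks: int) -> float:
    """On a fixed-price host, each additional idle container is ~free."""
    return 0.0

print(fargate_idle_cost(IDLE_SERVICES))          # 360.0
print(self_hosted_marginal_cost(IDLE_SERVICES))  # 0.0
```

The point isn't the arithmetic; it's that Fargate's per-task floor turns "idle" into a recurring line item, while a fixed host amortizes idleness to zero.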

Data Transfer

AWS charges $0.09/GB for egress after the first 100 GB/month. Our Hetzner server includes unlimited bandwidth. For a stack serving 40+ web services, this adds up fast. We don't even think about transfer costs.
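As a rough sketch, the egress bill at the rate quoted above ($0.09/GB beyond a 100 GB free tier) looks like this. The 1 TB figure is a hypothetical workload, not our measured traffic:

```python
def aws_egress_cost(gb_out: float, free_gb: float = 100.0, rate: float = 0.09) -> float:
    """Monthly egress bill at the on-demand rate quoted above (us-east-1 style)."""
    return max(0.0, gb_out - free_gb) * rate

# Hypothetical 1 TB/month of outbound traffic:
print(round(aws_egress_cost(1024), 2))  # 83.16
print(aws_egress_cost(50))              # 0.0 (inside the free tier)
```

On a flat-rate server with unlimited bandwidth, this function is a constant zero, which is exactly why we never think about it.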

What We Actually Pay

The honest breakdown:

| Item | Monthly Cost |
| --- | --- |
| Hetzner dedicated server (8c/16t, 64 GB, 1 TB NVMe) | ~$50 (EUR 46) |
| Cloudflare (free tier: tunnel, CDN, DNS, WAF) | $0 |
| Domain (techsaas.cloud) | ~$1 ($12/year) |
| Off-site backup storage (Backblaze B2) | ~$7 |
| Total hosting cost | ~$58/month |

Savings vs. AWS: ~$1,640-2,340/month (96-97%). But that's not the full picture.

The Hidden Costs We Don't Hide

Self-hosting costs time. Real time:

| Task | Hours/Month |
| --- | --- |
| Docker updates and container rebuilds | ~2 |
| Security patching and CVE response | ~2 |
| Debugging (healthcheck failures, OOM kills, swap pressure) | ~3 |
| Monitoring review and alert tuning | ~1 |
| Backup verification | ~0.5 |
| Total maintenance time | ~8-10 hours/month |

At $50/hour engineering time, that's $400-500/month in labor. At a $150/hour senior SRE rate, it's $1,200-1,500/month.

The honest calculation: self-hosting saves real money only if:

  • (a) You enjoy the work and learn from it (we do)
  • (b) Your effective labor cost is lower than the cloud premium (it is for us)
  • (c) Your workloads are predictable, not bursty (ours are)

For a solo developer or small team, self-hosting is a clear financial win. For a funded startup with a 5-person engineering team billing at $150/hour, the calculation changes: 10 hours per month of maintenance is $1,500 in labor, which approaches or exceeds the cloud premium once opportunity cost is counted.
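One way to frame that judgment call is the break-even labor rate: the hourly rate at which maintenance time consumes the entire savings. A minimal sketch using this article's own figures:

```python
def breakeven_hourly_rate(cloud_monthly: float, self_monthly: float,
                          maint_hours: float) -> float:
    """Hourly labor rate at which maintenance hours eat the full monthly savings."""
    return (cloud_monthly - self_monthly) / maint_hours

# Conservative end of the article's figures: $1,700 AWS vs $58 self-hosted,
# 10 hours/month of maintenance.
print(breakeven_hourly_rate(1700, 58, 10))  # 164.2
```

If your effective engineering rate is below that number, self-hosting pays for itself on labor alone; above it, you're paying for the education.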

The Swap Elephant in the Room

19 GiB of swap on 13 GiB of RAM means we're running about 2.5x overcommitted on memory. This works because most containers are idle most of the time — analytics dashboards, documentation sites, internal tools that see single-digit requests per hour.

The trade-off: when multiple heavy services spike simultaneously (Prometheus retention compaction + Metabase query + CI build), everything hits swap and latency spikes 10-100x. On AWS, you'd resize the instance in 5 minutes. On bare metal, you order more RAM and wait for shipping.

Swap lets us pack 90 containers into 13 GiB. It's a feature, not a bug — but it has real costs in tail latency.
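The 2.5x figure is just committed memory over physical RAM; a one-liner makes the arithmetic explicit:

```python
def overcommit_ratio(ram_gib: float, swap_in_use_gib: float) -> float:
    """Committed memory (RAM + swap actually in use) relative to physical RAM."""
    return (ram_gib + swap_in_use_gib) / ram_gib

# The host above: 13 GiB RAM, 19 GiB of swap in active use.
print(round(overcommit_ratio(13, 19), 2))  # 2.46, i.e. "about 2.5x"
```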

What We'd Never Self-Host

Self-hosting everything would be foolish. Some services have network effects or operational complexity that make self-hosting irrational:

CDN and DDoS protection — Cloudflare operates 300+ data centers with Anycast routing. You cannot replicate this. Their free tier includes CDN, basic WAF, and DDoS mitigation that absorbs multi-terabit attacks. We self-host the origin; Cloudflare is the edge.

DNS — Cloudflare's authoritative DNS includes built-in DDoS protection and sub-10ms resolution worldwide. Self-hosted DNS is a single point of failure with no geographic redundancy.

Transactional email — SPF, DKIM, DMARC reputation management is a full-time job. IP warming alone takes weeks. We use Resend for transactional email and Listmonk (self-hosted) for newsletters with an external SMTP relay. The sending infrastructure is someone else's problem; the mailing list management is ours.

Certificate authority infrastructure — We do self-host TLS certificates via Traefik + Let's Encrypt ACME. But we depend on Let's Encrypt (the CA) and Cloudflare (edge certificates) as external services. The automation is ours; the trust chain is theirs.

The pattern: self-host the application layer, delegate the infrastructure layer where network effects matter.

The Decision Framework

Self-hosting makes sense when these conditions align:

Predictable workloads. If your traffic is spiky and unpredictable, cloud auto-scaling is worth the premium. If your services run at steady state with occasional spikes absorbed by swap, a fixed server works.


Small team, high ownership. The maintenance knowledge lives in one or two heads. That's a bus factor risk but also means zero coordination overhead. On AWS, you'd have IAM policies, billing alerts, cross-account access, and team permissions to manage.

Data sovereignty. For EU-based teams, self-hosting eliminates GDPR data transfer concerns entirely. Your data physically lives on your hardware in your jurisdiction. AWS eu-west-1 data may still be subject to US CLOUD Act access.

You can answer the 2 AM test. Can you handle an incident at 2 AM? Our answer: autoheal restarts unhealthy containers every 30 seconds automatically. Uptime Kuma monitors external availability. Ntfy sends mobile alerts. Most incidents resolve without human intervention. The ones that don't — we get a push notification.
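For illustration, the restart policy the autoheal pattern applies reduces to a few lines of pure logic. This is a hypothetical sketch, not autoheal's actual implementation; the real tool reads Docker healthcheck status through the Docker API on its polling interval:

```python
# Hypothetical sketch of the autoheal pattern described above: each tick,
# poll container health and restart anything reporting "unhealthy".
from typing import Callable

def containers_to_restart(health: dict[str, str]) -> list[str]:
    """Given {container_name: health_status}, pick the containers to restart."""
    return [name for name, status in health.items() if status == "unhealthy"]

def heal_once(poll: Callable[[], dict[str, str]],
              restart: Callable[[str], None]) -> list[str]:
    """One tick of the loop (every ~30s in our setup): poll, restart, report."""
    victims = containers_to_restart(poll())
    for name in victims:
        restart(name)
    return victims

# Example tick with a canned health snapshot:
snapshot = {"postiz": "unhealthy", "grafana": "healthy", "n8n": "healthy"}
restarted = heal_once(lambda: snapshot, lambda name: None)
print(restarted)  # ['postiz']
```

The design point is that the policy is dumb on purpose: no human judgment in the loop means most incidents resolve before anyone is awake to see them.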

Self-hosting doesn't make sense when: you need multi-region redundancy, your team is larger than ~5, your workloads are bursty, or nobody on the team enjoys infrastructure work. Cloud exists for good reasons.

The Real Scenario

When Postiz (our social media manager) hit 89% memory utilization and started thrashing into swap, autoheal caught the container going unhealthy, restarted it, and we found out from the Grafana dashboard the next morning — not from a 2 AM page.

The AWS equivalent — mapping Postgres to RDS, Redis to ElastiCache, 20 app containers to Fargate, plus monitoring — would cost $1,700-2,400/month. We pay under $65. We spend ~10 hours/month on maintenance. Those 10 hours taught us more about Docker networking, resource limits, and observability than any course ever could.

The cloud premium buys you someone else's operational expertise. Self-hosting buys you your own.

What We'd Do Differently

  1. Start with more RAM. 13 GiB was tight from day one. 32-64 GiB eliminates swap dependency and most OOM incidents. The Hetzner price difference is minimal.
  2. Set resource limits on every container from day one. We're at 84/90 with mem_limit set. The 6 without limits are accidents we're fixing.
  3. Separate database and application networks. All 90 containers on one flat Docker network is an operational simplicity choice we'd revisit. Network segmentation adds minimal overhead and significantly reduces blast radius. See our Docker container security best practices for implementation patterns.
  4. Automate backups from the start. We added off-site backups after 6 months. That's 6 months of "the NVMe could die and everything is gone." Don't do this.

Every config in this article is from our live production environment. The costs are real, the trade-offs are real, and the swap usage is definitely real.



Need help designing cost-efficient infrastructure? Explore our cloud infrastructure and DevOps services or product strategy consulting.
