CTO Playbook: First 90 Days at an Early-Stage Startup

Yash Pritwani
11 min read


You just got the title. Maybe you were employee #3 and the founder said "you're CTO now." Maybe you left a staff role at a Series C company to take the leap. Either way, you have 90 days before the decisions you make calcify into the architecture your team will fight against for the next three years.

Most first-time CTOs spend those 90 days writing code. The ones who succeed spend them making decisions.

The 7 Decisions That Define 3 Years

Here is what nobody tells you in the "how to be a CTO" Medium posts: the technical choices you make in your first quarter are not really about technology. They are about organizational constraints. You are choosing the failure modes your company will experience at 10x scale. You are choosing the hiring profile of every engineer who joins after you. You are choosing how fast your team can ship when the board asks why revenue is not growing.

These seven decisions are the ones we have seen matter most across dozens of early-stage engagements. Get them right, and you buy yourself 18-24 months of compounding velocity. Get them wrong, and you will spend those same months rewriting.

Decision 1: Monolith First (Always)

If your startup has fewer than 15 engineers and less than $5M ARR, you do not need microservices. Full stop.

The monolith-vs-microservices debate ended years ago for anyone paying attention to what actually ships product. Shopify ran a monolithic Rails app to well past $1B in GMV. Basecamp still runs a monolith. Linear — the product engineers actually love — is a monolith. Figma scaled a monolith to hundreds of millions in ARR before splitting services.

What you need is a modular monolith: a single deployable unit with clear internal boundaries. Domain modules with explicit interfaces. Shared database, but with schema ownership per module. You get the deployment simplicity of a monolith with the organizational clarity of service boundaries.
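One way to picture "clear internal boundaries" is a small sketch like the following. Module and function names here are illustrative, not from the article: each domain module exposes one small public surface, and other modules import only that surface, never its internals.

```python
# Sketch of a modular-monolith boundary (hypothetical names).
# billing exposes a narrow public API; its helpers and tables stay private.
from dataclasses import dataclass


# --- billing module: the ONLY surface other modules may import ---
@dataclass(frozen=True)
class Invoice:
    customer_id: str
    amount_cents: int


def create_invoice(customer_id: str, amount_cents: int) -> Invoice:
    """Public entry point; hides billing's schema and internals."""
    _validate_amount(amount_cents)  # internal helper, not exported
    return Invoice(customer_id, amount_cents)


def _validate_amount(amount_cents: int) -> None:
    # Underscore-prefixed: other modules must not call this directly.
    if amount_cents <= 0:
        raise ValueError("amount must be positive")


# --- orders module: talks to billing only through its public API ---
def checkout(customer_id: str, total_cents: int) -> Invoice:
    return create_invoice(customer_id, total_cents)
```

The point of the sketch: splitting `billing` into a service later is a transport change, not a redesign, because the interface already exists.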

The microservices trap is seductive because it feels like "real engineering." But what it actually gives a 5-person team is distributed debugging, network latency between every function call, 14 Kubernetes manifests to maintain, and a deployment pipeline that takes 45 minutes instead of 4.

When to split: You split a service out of the monolith when (a) a specific module has fundamentally different scaling requirements, (b) a dedicated team owns it, and (c) the interface between that module and the rest of the system is well-understood. That is usually month 18, not month 1.

Decision 2: Cloud Provider — Pick One, Go Deep

Multi-cloud is an enterprise strategy for companies with dedicated platform teams and seven-figure cloud bills. It is not a startup strategy.

Pick AWS, GCP, or Azure. Then use their native services aggressively. RDS over self-managed Postgres. Lambda or Cloud Functions for event-driven work. Managed Kubernetes if you genuinely need it (you probably do not yet). The vendor lock-in argument dissolves when you realize that the cost of abstracting across clouds is higher than the cost of migrating later — and most startups never need to migrate.

We wrote about this in detail in our analysis of [multi-cloud hidden costs and pitfalls](https://www.techsaas.cloud/blog/multi-cloud-hidden-costs-pitfalls). The summary: teams that go multi-cloud early spend 30-40% more on infrastructure and move 50% slower on feature development.

This connects directly to the build-vs-buy calculus. Every hour your team spends building cloud-agnostic abstractions is an hour not spent on product. Our [build vs buy framework for engineering leaders](https://www.techsaas.cloud/blog/build-vs-buy-framework-engineering-leaders) breaks down exactly when building your own makes sense — and when it is just ego.

Recommendation for most early-stage startups: AWS if your team already knows it. GCP if you are ML-heavy. Azure if your customers are enterprises on Microsoft stacks.

Decision 3: CI/CD From Day 1

Not "we will set up CI/CD when we have more engineers." Day 1. Before your first feature branch.

The reason is not about automation sophistication. It is about establishing the rhythm of shipping. A team that merges to main and deploys automatically from day one develops fundamentally different habits than a team that does manual deployments for six months and then tries to bolt on automation.

Start simple. GitHub Actions with three workflows: lint/test on PR, build on merge to main, deploy to staging automatically and production with a manual gate. Total setup time: half a day. Total time saved in the first month: multiple days.
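As a sketch, a single GitHub Actions file along these lines covers all three workflows. The job names and `make` targets are hypothetical, and the production gate assumes a GitHub "environment" configured with required reviewers:

```yaml
# Hypothetical minimal pipeline: lint/test on PR, auto-deploy to
# staging on merge to main, production behind a manual approval.
name: ci
on:
  pull_request:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make lint test          # illustrative targets
  deploy-staging:
    needs: test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make deploy ENV=staging
  deploy-production:
    needs: deploy-staging
    if: github.ref == 'refs/heads/main'
    environment: production          # approval gate lives on the environment
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make deploy ENV=production
```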

We documented how one team took their pipeline from [20 minutes down to 3 minutes](https://www.techsaas.cloud/blog/cicd-pipeline-optimization-20min-to-3min) — and most of those optimizations are things you should build in from the start rather than retrofit later.

Day 1 CI/CD checklist:

- Linting and formatting enforced in CI (not "please remember to run prettier")
- Tests run on every PR (even if you only have 5 tests)
- Main branch is always deployable
- Staging environment gets every merge automatically
- Production deploys require one click, not one hour

Decision 4: Observability Before You Need It

The worst time to add observability is during an outage at 2 AM when your biggest customer's integration is broken and you cannot tell whether the problem is in your API, your database, or a third-party dependency.

The second worst time is "after launch." After launch, you will never prioritize it over features. And when you do get around to it, instrumenting a running production system costs 10x what it costs to build it in from the start — because now you have to retrofit tracing into existing code paths, add logging to functions that were never designed for it, and figure out what metrics matter when everything is already on fire.

Minimum viable observability stack:

- Structured logging: JSON logs with request IDs, user IDs, and operation names. Not console.log("something happened").
- Application metrics: Request latency (p50, p95, p99), error rates, queue depths. Prometheus or Datadog, depending on budget.
- Distributed tracing: OpenTelemetry from day one. Even if you are a monolith, traces through your HTTP handlers and database calls will save you when debugging performance issues.
- Alerting: PagerDuty or Opsgenie. Error rate > 5% for 5 minutes? Page. p99 latency > 2s? Alert. Database connections > 80% pool? Alert.
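The structured-logging item is the cheapest to adopt. A stdlib-only sketch of JSON lines with a request ID and operation name — field names are illustrative conventions, not a standard:

```python
# Minimal JSON log formatter (stdlib only). Fields attached via the
# `extra=` argument land as attributes on the LogRecord.
import json
import logging
import sys
import uuid


class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "msg": record.getMessage(),
            "request_id": getattr(record, "request_id", None),
            "operation": getattr(record, "operation", None),
        }
        return json.dumps(payload)


logger = logging.getLogger("app")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# One request ID threaded through every log line of a request:
request_id = str(uuid.uuid4())
logger.info(
    "invoice created",
    extra={"request_id": request_id, "operation": "billing.create_invoice"},
)
```

Grep-ability is the payoff: one request ID pulls every log line for a failing request, which is exactly what you need at 2 AM.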

This is not gold-plating. This is the minimum instrumentation that lets a small team operate a production service without burning out.

Decision 5: Auth — Don't Build It

Rolling your own authentication is the single most common build-vs-buy mistake in early-stage startups. It feels simple. "It's just password hashing and JWTs, right?"

Then you need password reset flows. Then MFA. Then OAuth with Google and GitHub. Then SAML because your first enterprise customer requires it. Then session management. Then rate limiting on login attempts. Then account lockout policies. Then SOC 2 auditors asking to see your auth implementation and you realize your hand-rolled system has three CVEs.

Use Auth0, Clerk, Supabase Auth, or Firebase Auth. The cost is negligible at startup scale ($0-200/month for the first few thousand users). The engineering time you save is measured in months, not weeks.

When custom auth makes sense: When authentication IS your product (you are building an identity platform), or when you have regulatory requirements that genuinely prevent using a third-party provider (rare outside fintech and healthcare, and even then, managed solutions usually comply).

For everyone else: do not build it.

Decision 6: Database — PostgreSQL, Period

For 95% of early-stage startups, PostgreSQL is the correct database choice. Not MongoDB because a tutorial used it. Not DynamoDB because "we might need to scale." Not MySQL because your Rails tutorial defaulted to it.

PostgreSQL gives you:

- Relational data modeling (which is what most applications actually need)
- JSONB columns for semi-structured data (so you get document-store flexibility without a separate database)
- Full-text search that handles most search use cases without Elasticsearch
- PostGIS for geospatial data
- Row-level security for multi-tenant applications
- Rock-solid replication and backup tooling
- Every managed database service supports it (RDS, Cloud SQL, Supabase, Neon)
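To make two of those items concrete — JSONB and built-in full-text search — here is an illustrative fragment. Table and column names are hypothetical:

```sql
-- Hypothetical events table: relational columns plus a JSONB payload.
CREATE TABLE events (
    id         bigserial PRIMARY KEY,
    user_id    bigint NOT NULL,
    payload    jsonb NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Query inside the JSON without a separate document store:
SELECT payload->>'plan' AS plan
FROM events
WHERE payload @> '{"type": "subscription_created"}';

-- Full-text search without Elasticsearch:
SELECT id
FROM events
WHERE to_tsvector('english', payload->>'notes')
      @@ plainto_tsquery('english', 'refund');
```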

The MongoDB trap: Teams choose MongoDB because it feels faster to start — no schema, just throw JSON in. But within six months, they are writing application-level joins, fighting with inconsistent data shapes, and wishing they had foreign keys. The "schemaless" advantage becomes the "schema-everywhere-in-application-code" problem.

Start with a single PostgreSQL instance on your cloud provider's managed service. Add read replicas when you need them. Add pgbouncer for connection pooling. You will not outgrow this setup before Series B.

Decision 7: First 3 Hires > First 3 Features

Your first three engineering hires determine your team's DNA more than your first three features determine your product's DNA. Features can be rewritten. Culture calcifies.

Hire for breadth first, depth second. Your first three engineers should cover:

1. Backend / API: Someone who can build a production-grade API, manage the database, and reason about data modeling.
2. Frontend / product: Someone who can ship user-facing features fast and has strong product instincts, not just technical skills.
3. DevOps / infrastructure: Someone who can own the CI/CD pipeline, manage cloud infrastructure, and be on call without drowning.

Notice that none of these are "machine learning engineer" or "blockchain developer." Specialists come later. Your first hires need to be generalists who can cover for each other and context-switch without complaining.

For LATAM and nearshore teams: This is where timezone alignment becomes a genuine competitive advantage. A senior engineer in Bogotá, Mexico City, or Buenos Aires who overlaps 6-8 hours with your San Francisco team is worth 2x their cost compared to a similarly skilled engineer 12 timezones away. The communication overhead of async-only collaboration at early stage is brutal. Nearshore engineers give you real-time collaboration at a cost structure that extends your runway.

The 90-Day Decision Matrix

| Timeframe | Focus | Key Decisions | Deliverables |
|-----------|-------|---------------|--------------|
| Days 1-30 | Audit + Architecture | Monolith structure, cloud provider, database choice | Architecture doc, tech radar, infrastructure-as-code repo |
| Days 31-60 | Build Core + CI/CD | CI/CD pipeline, observability stack, auth integration | Working CI/CD, staging environment, monitoring dashboards |
| Days 61-90 | Ship + Observe | First production deploy, hiring pipeline, on-call rotation | Product in production, first 1-2 hires, runbooks |

Days 1-30: Audit and Architecture

Do not write production code in your first week. Audit what exists. Read every line of the existing codebase if there is one. Talk to every stakeholder about what the product needs to do in the next 6 months. Write an architecture decision record (ADR) for each of the seven decisions above. Get founder buy-in on the technical direction before you start building.

Days 31-60: Build Core and CI/CD

Now you build. But you build infrastructure first, features second. CI/CD pipeline. Staging environment. Observability stack. Auth integration. Database schema. The goal by day 60 is that any engineer can clone the repo, run one command, and have a working development environment — and that merging a PR to main results in an automatic deployment to staging.

Days 61-90: Ship and Observe

Ship the first version of the product to real users. Instrument everything. Watch the dashboards. Your first production deploy will reveal every assumption you got wrong — and that is the point. Better to learn in month 3 than month 12.

Simultaneously, start the hiring pipeline. Your first engineering hires should be in the interview process by day 60 and ideally accepting offers by day 90.

Mistakes We Have Seen

The "Let's Use Kubernetes" CTO. Seed-stage startup, 4 engineers, zero production traffic. The new CTO spent 6 weeks setting up a Kubernetes cluster with Istio service mesh, Helm charts for 8 microservices, and a GitOps workflow with ArgoCD. By the time the infrastructure was "ready," the company had burned through 2 months of runway with no product shipped. They eventually moved everything to a single Docker Compose on a $40/month Hetzner box and shipped in a week.

The "We Need Our Own Auth" CTO. B2B SaaS startup. The CTO decided to build authentication from scratch because "Auth0 is too expensive and we need full control." Four months later, they had a working login flow — but no password reset, no MFA, no SSO, and a session management system that leaked tokens. They migrated to Clerk in two days and never looked back.

The "Multi-Cloud From Day One" CTO. Pre-revenue startup with 3 engineers. The CTO insisted on deploying to both AWS and GCP "to avoid vendor lock-in." The result: two Terraform codebases, two CI/CD pipelines, two sets of IAM policies, and every feature taking twice as long because it had to work on both clouds. They consolidated to AWS after 8 months and recovered 40% of their engineering velocity.

The "Hire Specialists First" CTO. Series A startup, $3M raised. The CTO's first three hires were a machine learning engineer, a mobile developer, and a security engineer. None of them could deploy the web application. The ML engineer built models that nobody could serve. The mobile developer built an app for a product that did not have a stable API. The security engineer wrote policies for infrastructure that did not exist yet. They had to backfill with generalists and lost 6 months.

FAQ

Q: Should I take on technical debt intentionally in the first 90 days?

Yes, but deliberately and with documentation. There is a difference between "we chose to skip caching because our traffic does not justify it yet" (strategic debt with a clear trigger for repayment) and "we copy-pasted this function 14 times because we were rushing" (careless debt that compounds). Write ADRs for every deliberate shortcut. Set calendar reminders to revisit them.
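A deliberate-shortcut ADR does not need to be long. A sketch — the numbering, date, and thresholds are made up for illustration:

```markdown
# ADR-007: Skip caching layer for v1

Status: Accepted (2025-03-01)
Decision: Serve all reads directly from PostgreSQL; no Redis cache.
Context: Expected traffic is under 100 req/s; p95 latency budget is 300 ms.
Repayment trigger: Revisit when p95 read latency exceeds 200 ms or
traffic passes 1,000 req/s. Calendar reminder set for +6 months.
```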

Q: How do I handle a founder who wants to make technical decisions?

If the founder is technical: collaborate, do not compete. Present your recommendations with data and tradeoffs, not authority. If the founder is non-technical: translate every technical decision into business impact. "Monolith vs microservices" means nothing to them. "We can ship features 3x faster for the first 18 months" gets their attention.

Q: What is the biggest difference between a startup CTO and a big-company engineering manager?

Scope of responsibility. At a big company, you optimize within constraints that someone else set. At a startup, you set the constraints. That means your mistakes have no guardrails and no platform team to bail you out. It also means your good decisions compound faster than anywhere else in your career.

Related Reading

If you found this useful, these posts go deeper on specific decisions covered above:

- [Build vs Buy Framework for Engineering Leaders](https://www.techsaas.cloud/blog/build-vs-buy-framework-engineering-leaders) — a structured framework for every build-vs-buy decision you will face as CTO
- [CI/CD Pipeline Optimization: From 20 Minutes to 3 Minutes](https://www.techsaas.cloud/blog/cicd-pipeline-optimization-20min-to-3min) — practical techniques for fast, reliable pipelines from day one
- [Multi-Cloud Hidden Costs and Pitfalls](https://www.techsaas.cloud/blog/multi-cloud-hidden-costs-pitfalls) — why single-cloud is the right default for startups
- [Zero Trust Security with Cloudflare Tunnel for Self-Hosted Infrastructure](https://www.techsaas.cloud/blog/zero-trust-cloudflare-tunnel-self-hosted) — securing your infrastructure without a dedicated security team

---

Build It Right the First Time

The difference between a startup that scales and one that stalls is rarely the product idea. It is the technical foundation. The decisions you make in your first 90 days as CTO determine whether your engineering team spends the next two years building product or rebuilding infrastructure.

If you are stepping into a CTO role and want to make sure your technical foundation is solid from day one, [TechSaaS can help](https://www.techsaas.cloud/services/). We work with early-stage startups to set up infrastructure, CI/CD, observability, and cloud architecture — so you can focus on shipping product instead of fighting your own stack.

[Explore our services](https://www.techsaas.cloud/services/) and see how we help startup engineering teams move faster.

#cto #startup #tech-strategy #engineering-leadership

Need help with your technical foundation?

TechSaaS provides expert consulting and managed services for cloud infrastructure, DevOps, and AI/ML operations.