Cloud Infrastructure

Terraform Modules and Workspace Patterns for Real-World Infrastructure

Master Terraform modules, workspaces, and state management: DRY infrastructure code, remote state, module composition, and multi-environment deployment patterns.

Yash Pritwani
15 min read

Terraform Beyond the Basics

Every Terraform tutorial teaches you to create a main.tf and run apply. Real-world infrastructure needs modules for reusability, workspaces for environments, remote state for team collaboration, and patterns that scale.


Server infrastructure: production and staging environments connected via VLAN with offsite backups.

Module Architecture

A well-structured Terraform project uses modules to encapsulate reusable components:

infrastructure/
├── environments/
│   ├── production/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── terraform.tfvars
│   │   └── backend.tf
│   ├── staging/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── terraform.tfvars
│   │   └── backend.tf
│   └── development/
│       └── ...
├── modules/
│   ├── networking/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── database/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── application/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
└── global/
    ├── dns/
    └── iam/

Building a Reusable Module

# modules/database/variables.tf
variable "name" {
  description = "Database instance name"
  type        = string
}

variable "engine" {
  description = "Database engine (postgres, mysql)"
  type        = string
  default     = "postgres"

  validation {
    condition     = contains(["postgres", "mysql"], var.engine)
    error_message = "Engine must be postgres or mysql."
  }
}

variable "engine_version" {
  description = "Database engine version"
  type        = string
  default     = "16"
}

variable "instance_class" {
  description = "Instance class"
  type        = string
  default     = "db.t3.micro"
}

variable "allocated_storage" {
  description = "Storage in GB"
  type        = number
  default     = 20
}

variable "environment" {
  description = "Environment name"
  type        = string
}

variable "vpc_id" {
  description = "VPC ID for security group"
  type        = string
}

variable "subnet_ids" {
  description = "Subnet IDs for DB subnet group"
  type        = list(string)
}

variable "allowed_cidr_blocks" {
  description = "CIDR blocks allowed to connect"
  type        = list(string)
  default     = []
}


# modules/database/main.tf
resource "aws_db_subnet_group" "this" {
  name       = "db-subnet-group-${var.name}-${var.environment}"
  subnet_ids = var.subnet_ids

  tags = {
    Name        = "db-subnet-group-${var.name}"
    Environment = var.environment
  }
}

resource "aws_security_group" "db" {
  name_prefix = "db-${var.name}-${var.environment}-"
  vpc_id      = var.vpc_id

  ingress {
    from_port   = var.engine == "postgres" ? 5432 : 3306
    to_port     = var.engine == "postgres" ? 5432 : 3306
    protocol    = "tcp"
    cidr_blocks = var.allowed_cidr_blocks
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "db-sg-${var.name}"
    Environment = var.environment
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "random_password" "db_password" {
  length  = 32
  special = false
}

resource "aws_db_instance" "this" {
  identifier     = "${var.name}-${var.environment}"
  engine         = var.engine
  engine_version = var.engine_version
  instance_class = var.instance_class

  allocated_storage     = var.allocated_storage
  max_allocated_storage = var.allocated_storage * 2

  db_name  = replace(var.name, "-", "_")
  username = "dbadmin" # "admin" is a reserved master username on RDS PostgreSQL
  password = random_password.db_password.result

  db_subnet_group_name   = aws_db_subnet_group.this.name
  vpc_security_group_ids = [aws_security_group.db.id]

  backup_retention_period = var.environment == "production" ? 30 : 7
  multi_az                = var.environment == "production"
  deletion_protection     = var.environment == "production"

  skip_final_snapshot       = var.environment != "production"
  final_snapshot_identifier = var.environment == "production" ? "${var.name}-final" : null

  tags = {
    Name        = var.name
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}

# modules/database/outputs.tf
output "endpoint" {
  description = "Database endpoint"
  value       = aws_db_instance.this.endpoint
}

output "port" {
  description = "Database port"
  value       = aws_db_instance.this.port
}

output "database_name" {
  description = "Database name"
  value       = aws_db_instance.this.db_name
}

output "password" {
  description = "Database password"
  value       = random_password.db_password.result
  sensitive   = true
}

output "security_group_id" {
  description = "Security group ID"
  value       = aws_security_group.db.id
}
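Because the password output is marked sensitive, Terraform redacts it in plan and apply output. It can still be read explicitly from the CLI at a root module that re-exports it (the output name `password` here is an assumption):

```shell
# Sensitive outputs are redacted by default; -raw prints the plain value.
terraform output -raw password
```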

Using Modules in Environments

# environments/production/main.tf
module "vpc" {
  source = "../../modules/networking"

  name        = "main"
  environment = "production"
  cidr_block  = "10.0.0.0/16"
}

module "app_database" {
  source = "../../modules/database"

  name                = "app-db"
  environment         = "production"
  engine              = "postgres"
  engine_version      = "16"
  instance_class      = "db.t3.medium"
  allocated_storage   = 100
  vpc_id              = module.vpc.vpc_id
  subnet_ids          = module.vpc.private_subnet_ids
  allowed_cidr_blocks = [module.vpc.private_cidr]
}

module "analytics_database" {
  source = "../../modules/database"

  name                = "analytics-db"
  environment         = "production"
  engine              = "postgres"
  engine_version      = "16"
  instance_class      = "db.r6g.large"
  allocated_storage   = 500
  vpc_id              = module.vpc.vpc_id
  subnet_ids          = module.vpc.private_subnet_ids
  allowed_cidr_blocks = [module.vpc.private_cidr]
}
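Module outputs are typically re-exported from each environment's root so other tooling, or other state files, can consume them. A minimal sketch (the file name and output names are assumptions):

```hcl
# environments/production/outputs.tf (sketch)
output "app_db_endpoint" {
  value = module.app_database.endpoint
}

output "app_db_password" {
  value     = module.app_database.password
  sensitive = true
}
```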

Cloud to self-hosted migration can dramatically reduce infrastructure costs while maintaining full control.

Workspaces vs Directories

Terraform offers two approaches for multi-environment management:

Approach 1: Separate Directories (Recommended)

environments/
├── production/    # Own state file, own variables
├── staging/       # Own state file, own variables
└── development/   # Own state file, own variables

Each environment has its own state, reducing the blast radius of mistakes.
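With this layout, each environment is initialized and applied from its own directory, against its own backend:

```shell
cd environments/staging
terraform init    # configures the staging backend
terraform plan    # terraform.tfvars in this directory is loaded automatically
terraform apply
```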

Approach 2: Terraform Workspaces

terraform workspace new production
terraform workspace new staging
terraform workspace select production

# In your code
locals {
  env_config = {
    production = {
      instance_class = "db.t3.medium"
      min_capacity   = 2
      multi_az       = true
    }
    staging = {
      instance_class = "db.t3.micro"
      min_capacity   = 1
      multi_az       = false
    }
  }
  config = local.env_config[terraform.workspace]
}
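The workspace-keyed map is then referenced wherever values previously differed per environment. A sketch reusing the database module from earlier (the networking references are assumptions):

```hcl
module "app_database" {
  source = "../../modules/database"

  name           = "app-db"
  environment    = terraform.workspace
  instance_class = local.config.instance_class
  vpc_id         = module.vpc.vpc_id
  subnet_ids     = module.vpc.private_subnet_ids
}
```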

We recommend separate directories because:

  • Clearer separation of concerns
  • Independent state files (smaller blast radius)
  • Can have different providers/versions per environment
  • Easier to understand and audit

Remote State


Never store Terraform state locally for team projects:

# backend.tf
terraform {
  backend "s3" {
    bucket         = "techsaas-terraform-state"
    key            = "production/infrastructure.tfstate"
    region         = "ap-south-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}
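Remote state also lets one configuration read another's outputs via the terraform_remote_state data source. A sketch, assuming a separate networking state stored under an assumed key in the same bucket:

```hcl
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "techsaas-terraform-state"
    key    = "production/network.tfstate" # assumed key
    region = "ap-south-1"
  }
}

# Referenced as: data.terraform_remote_state.network.outputs.vpc_id
```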

For self-hosted infrastructure, use MinIO as an S3-compatible backend, or the pg backend with PostgreSQL:

terraform {
  backend "pg" {
    conn_str = "postgres://terraform:password@postgres:5432/terraform_state?sslmode=disable"
  }
}
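Note that the connection string embeds a password, so avoid committing it. Terraform's partial backend configuration lets you leave the block empty and supply conn_str at init time:

```shell
# backend.tf contains only: terraform { backend "pg" {} }
terraform init \
  -backend-config="conn_str=postgres://terraform:${PGPASSWORD}@postgres:5432/terraform_state"
```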

Container orchestration distributes workloads across multiple nodes for resilience and scale.

Best Practices

  1. Pin provider versions: Always specify exact versions to avoid surprises
  2. Use data sources: Reference existing resources instead of hardcoding IDs
  3. Validate variables: Add validation blocks to catch errors early
  4. Tag everything: Environment, team, managed-by, cost-center
  5. Use moved blocks: Refactor without destroying resources
  6. State locking: Always enable for team environments
  7. Module versioning: Tag module releases with semver

Pinning versions (practice 1) looks like this:

terraform {
  required_version = ">= 1.9.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.70"
    }
  }
}
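Practice 5, moved blocks, lets you rename resources or move them between modules without a destroy/recreate cycle. A sketch for renaming a database resource (the old address is an assumption; requires Terraform >= 1.1):

```hcl
# The state address changes; the real infrastructure is untouched.
moved {
  from = aws_db_instance.database
  to   = aws_db_instance.this
}
```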

At TechSaaS, we manage our Proxmox infrastructure with Ansible (better for configuration management) and use Terraform for clients who deploy to cloud providers. Our module library covers common patterns: VPC with public/private subnets, RDS with automated backups, ECS services with auto-scaling, and CloudFront distributions. Each module is versioned and documented, so deploying a new environment takes minutes, not days.

#terraform #iac #modules #workspaces #infrastructure
