Prompt Engineering for DevOps: Automating Infrastructure with LLMs
Master prompt engineering for DevOps automation. Generate Terraform, Dockerfiles, CI/CD pipelines, and incident responses using LLMs with reliable output.
Beyond Chatting: LLMs as Infrastructure Tools
Most prompt engineering guides focus on writing better emails or summarizing documents. For DevOps engineers, the real power is in generating infrastructure code, debugging configurations, and automating incident response. This requires a completely different prompting approach — one that prioritizes precision, safety, and reproducibility over creativity.
At TechSaaS, we use Claude Code CLI as the backbone of our infrastructure automation. Here is what we have learned about prompting for ops.
The DevOps Prompting Framework
Every infrastructure prompt should include four elements:
1. Context: Current state of the system
2. Constraint: What must be preserved or avoided
3. Task: What to generate or change
4. Format: Exact output format expected
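The four elements map naturally onto a small helper that assembles prompts the same way every time. A minimal sketch (the class and field names are illustrative, not a published API):

```python
from dataclasses import dataclass

@dataclass
class OpsPrompt:
    """Four-element infrastructure prompt: context, constraint, task, format."""
    context: str
    constraint: str
    task: str
    format: str

    def render(self) -> str:
        # Labeled sections keep each element distinct for the model.
        return (
            f"CONTEXT: {self.context}\n"
            f"CONSTRAINT: {self.constraint}\n"
            f"TASK: {self.task}\n"
            f"FORMAT: {self.format}"
        )

prompt = OpsPrompt(
    context="Docker on a Proxmox LXC host; Traefik reverse proxy; shared PostgreSQL 16.",
    constraint="Do not modify existing services. Port 80 belongs to Traefik.",
    task="Generate a Compose service for Redis session caching (256MB limit).",
    format="Output only the YAML service block. No explanation.",
)
print(prompt.render())
```

Rendering from a structured object rather than ad-hoc strings means no element can be silently dropped.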
CONTEXT: We run Docker containers on a Proxmox LXC host with Traefik
as reverse proxy. PostgreSQL 16 is shared across services.
CONSTRAINT: Do not modify existing services. Port 80 is used by Traefik.
All containers must join the 'padc-net' network. Use environment variables
for secrets, never hardcode them.
TASK: Generate a Docker Compose service definition for a new Redis
instance for session caching with 256MB memory limit, persistent
storage, and Traefik labels for internal access only.
FORMAT: Output only the YAML service block. No explanation needed.
Generating Terraform with LLMs
Terraform generation is where LLMs shine — and where they are most dangerous. A hallucinated resource can cost you money or expose infrastructure.
Bad prompt:
Create a Terraform config for AWS
Good prompt:
Generate Terraform 1.6+ HCL for an AWS VPC with:
- CIDR: 10.0.0.0/16
- 3 public subnets (10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24) across
us-east-1a, 1b, 1c
- 3 private subnets (10.0.10.0/24, 10.0.20.0/24, 10.0.30.0/24)
- NAT Gateway in the first public subnet
- Internet Gateway
- Route tables for public (IGW) and private (NAT) subnets
- Tags: Environment=staging, Project=myapp, ManagedBy=terraform
Use aws provider >= 5.0. No modules, just resources.
Output only HCL code. No markdown fencing.
The specificity eliminates ambiguity: every CIDR, AZ, and tag is pinned down, leaving the model little room to hallucinate.
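One way to catch hallucinated resources before they cost money is to syntax-check the generated HCL in a throwaway directory before any plan or apply. A hedged sketch, assuming the `terraform` binary is on PATH:

```python
import subprocess
import tempfile
from pathlib import Path

def validate_hcl(hcl: str) -> bool:
    """Write LLM-generated HCL to a scratch dir and run `terraform validate`.

    Returns True only if init and validate both succeed; nothing is applied.
    """
    with tempfile.TemporaryDirectory() as tmp:
        Path(tmp, "main.tf").write_text(hcl)
        try:
            for args in (["init", "-backend=false"], ["validate"]):
                result = subprocess.run(
                    ["terraform", *args], cwd=tmp, capture_output=True, text=True
                )
                if result.returncode != 0:
                    return False  # reject the generation rather than retry silently
        except FileNotFoundError:
            return False  # terraform binary not installed
    return True
```

This only proves the HCL parses and references real resource types; a `terraform plan` review is still mandatory before anything is applied.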
Dockerfile Generation
DOCKERFILE_PROMPT = """Generate a production Dockerfile for a Node.js 22
application with these requirements:
- Multi-stage build (builder + production)
- Builder stage: install deps, build TypeScript
- Production stage: node:22-alpine, non-root user (node:node)
- Copy only dist/ and node_modules from builder
- HEALTHCHECK using curl on port 3000/health
- Labels: maintainer, version, description
- .dockerignore entries listed as comments at the top
- No npm, use pnpm throughout
- Pin Alpine version, don't use 'latest'
- EXPOSE 3000
- CMD ["node", "dist/server.js"]
Output only the Dockerfile content. No explanation."""
Result:
# .dockerignore: node_modules, dist, .git, .env, *.md, .github
FROM node:22.11-alpine3.20 AS builder
WORKDIR /app
RUN corepack enable pnpm
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile
COPY . .
RUN pnpm build
FROM node:22.11-alpine3.20
LABEL maintainer="[email protected]"
LABEL version="1.0.0"
LABEL description="Production Node.js application"
RUN apk add --no-cache curl
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
USER node
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
CMD curl -f http://localhost:3000/health || exit 1
CMD ["node", "dist/server.js"]
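Even a well-specified prompt can drop a requirement, so it pays to assert the generated Dockerfile against your own checklist before building. A minimal sketch (the directive lists below mirror the prompt's requirements and are illustrative):

```python
REQUIRED_DIRECTIVES = ("HEALTHCHECK", "USER node", "EXPOSE 3000")
FORBIDDEN_PATTERNS = (":latest",)  # never accept an unpinned base image

def check_dockerfile(content: str) -> list[str]:
    """Return a list of policy violations found in a generated Dockerfile."""
    problems = []
    for directive in REQUIRED_DIRECTIVES:
        if directive not in content:
            problems.append(f"missing: {directive}")
    for pattern in FORBIDDEN_PATTERNS:
        if pattern in content:
            problems.append(f"forbidden: {pattern}")
    return problems
```

An empty list means the output at least satisfies the checklist; it is cheap insurance before `docker build` runs.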
CI/CD Pipeline Generation
Prompting for CI/CD pipelines requires specifying the exact platform and available secrets:
Generate a Gitea Actions workflow (.gitea/workflows/deploy.yml) that:
1. Triggers on push to main branch
2. Runs TypeScript type checking (pnpm tsc --noEmit)
3. Runs ESLint (pnpm lint)
4. Runs tests (pnpm test)
5. If all pass: builds Docker image, tags with git SHA
6. Pushes to Gitea container registry (git.techsaas.cloud)
7. SSHs into production and runs docker compose pull + up -d
Available secrets: DEPLOY_SSH_KEY, REGISTRY_TOKEN
Runner has: docker, pnpm, node 22
Use ubuntu-latest runner image.
Incident Response Prompts
When something breaks at 3 AM, having pre-built prompt templates saves critical minutes:
INCIDENT_TRIAGE_PROMPT = """You are an SRE triaging an incident.
ALERT: {alert_name}
SERVICE: {service_name}
METRIC: {metric_name} = {metric_value} (threshold: {threshold})
TIME: {timestamp}
RECENT CHANGES: {recent_deployments}
AVAILABLE ACTIONS:
- restart_service(name): Restart a Docker container
- scale_service(name, replicas): Scale a service
- rollback(name, version): Roll back to previous version
- page_human(severity, message): Page the on-call engineer
Analyze this alert. Output a JSON object with:
{{
"severity": "P1|P2|P3|P4",
"likely_cause": "string",
"recommended_actions": ["action1", "action2"],
"needs_human": true|false,
"reasoning": "string"
}}"""
Structured Output for Automation
When LLM output feeds into automation, force structured formats:
import json

def generate_and_parse(prompt: str) -> dict:
    """Generate structured output with validation."""
    full_prompt = prompt + """
CRITICAL: Output ONLY valid JSON. No markdown, no explanation,
no code fences. Just the raw JSON object."""
    response = llm.generate(full_prompt)
    # Strip any markdown fencing the model might add anyway
    fence = chr(96) * 3  # triple backtick
    cleaned = response.strip()
    for prefix in (fence + "json", fence):
        cleaned = cleaned.removeprefix(prefix)
    cleaned = cleaned.removesuffix(fence).strip()
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        # Retry once with an explicit repair prompt
        retry_prompt = f"Fix this invalid JSON:\n{cleaned}\nOutput only valid JSON."
        response = llm.generate(retry_prompt)
        return json.loads(response.strip())
Safety Rules for Infrastructure Prompts
Never let an LLM execute infrastructure commands without these safeguards:
1. Dry run first: Generate the plan, review it, then apply
2. Diff before apply: Show what will change before changing it
3. Blast radius limits: Never modify more than N resources in one operation
4. Rollback plan: Every change must include a rollback command
5. Audit log: Record every LLM-generated command and its output
# Pattern: Generate -> Review -> Apply
claude -p "Generate terraform plan for..." > plan.tf
terraform plan -out=tfplan # Review the plan
terraform apply tfplan # Apply only after review
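Rule 5, the audit log, is easy to implement as an append-only JSONL file. A minimal sketch (file name and field names are illustrative):

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("llm_audit.jsonl")

def audit(prompt: str, command: str, output: str, log: Path = AUDIT_LOG) -> None:
    """Append one JSON record per LLM-generated command to an append-only log."""
    record = {
        "ts": time.time(),      # when the command ran
        "prompt": prompt,       # what we asked the model
        "command": command,     # what it generated
        "output": output,       # what happened when it ran
    }
    with log.open("a") as f:
        f.write(json.dumps(record) + "\n")

audit("Generate terraform plan for...", "terraform apply tfplan", "Apply complete")
```

JSONL keeps each record independently parseable, so a half-written line during a crash never corrupts the history before it.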
The TechSaaS Approach
We maintain a library of battle-tested prompt templates for every infrastructure task. Our Claude Code integration uses these templates with dynamic context injection — current system state, recent changes, and service dependencies are automatically included. The result is infrastructure automation that is fast, reliable, and auditable.
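The template-library pattern reduces to stored format strings plus injected live state. A simplified sketch, not our actual library (template text and keys are illustrative):

```python
# Stored prompt templates, keyed by infrastructure task.
TEMPLATES = {
    "compose_service": (
        "CONTEXT: {state}\n"
        "CONSTRAINT: {constraints}\n"
        "TASK: {task}\n"
        "FORMAT: Output only the YAML service block. No explanation."
    ),
}

def build_prompt(name: str, **context: str) -> str:
    """Inject freshly gathered system state into a stored prompt template."""
    return TEMPLATES[name].format(**context)

prompt = build_prompt(
    "compose_service",
    state="Traefik on :80, shared PostgreSQL 16, network padc-net",
    constraints="Do not modify existing services",
    task="Add a Redis service for session caching",
)
```

In practice the `state` value would come from live queries (running containers, recent deploys) rather than a literal string, so every generation sees the system as it is now.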
Need help with AI & machine learning?
TechSaaS provides expert consulting and managed services for cloud infrastructure, DevOps, and AI/ML operations.