GitOps with ArgoCD: Declarative Kubernetes Deployments Guide

Master GitOps with ArgoCD for Kubernetes. Hands-on setup, multi-environment management, security best practices, and production patterns.

Yash Pritwani
12 min read

GitOps with ArgoCD: The Complete Guide to Declarative Kubernetes Deployments

If you have ever SSHed into a production cluster at 2 AM to run kubectl apply and hoped for the best, you already understand why GitOps exists. The promise is simple: your Git repository becomes the single source of truth for your entire infrastructure, and an automated agent ensures your cluster always matches what is declared in code. No more snowflake clusters. No more "it worked on my machine." No more wondering what actually changed in production last Thursday.

In 2026, GitOps has moved from bleeding edge to mainstream. The CNCF's annual survey reports a 64% adoption rate among organizations running Kubernetes in production, up from 47% just two years ago. The tooling has matured, the patterns are well-understood, and the excuses for not adopting GitOps are running thin.

This guide walks through everything you need to implement GitOps with ArgoCD for Kubernetes deployments — from first principles to production-hardened patterns. Whether you are evaluating GitOps for your team or looking to level up an existing setup, this is the reference I wish I had when we started this journey at TechSaaS.


The Four Principles of GitOps

Before diving into tooling, it is worth grounding ourselves in what GitOps actually means. The OpenGitOps project (a CNCF sandbox project) defines four core principles:

1. Declarative Desired State

Your entire system — applications, configurations, networking policies, RBAC rules — is described declaratively. You define what the system should look like, not how to get there. In Kubernetes, this is natural since everything is already a YAML manifest.

2. Versioned and Immutable

That desired state lives in Git, which gives you versioning, audit trails, and immutability for free. Every change is a commit. Every deployment is traceable to a pull request. Every rollback is a git revert.

3. Pulled Automatically

This is where GitOps diverges from traditional CI/CD. Instead of a pipeline pushing changes to your cluster, an agent inside the cluster continuously pulls the desired state from Git and applies it. This is the pull-based model, and it has profound security implications — your CI system never needs cluster credentials.

4. Continuously Reconciled

The agent does not just apply changes once. It continuously compares the actual state of the cluster against the desired state in Git and corrects any drift. Someone manually scales a deployment? The agent scales it back. A config map gets edited in place? Reverted. The cluster always converges toward what Git declares.
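The reconciliation loop is simple to reason about in miniature. Here is a toy Python sketch (not ArgoCD internals, just an illustration of the principle): drift is whatever differs between the declared and live state, and the correction always goes in Git's favor.

```python
# Toy illustration of the GitOps reconciliation principle (not ArgoCD code).
def detect_drift(desired: dict, live: dict) -> dict:
    """Map each drifted field to a (live, desired) pair."""
    return {k: (live.get(k), v) for k, v in desired.items() if live.get(k) != v}

desired = {"replicas": 3, "image": "app:v2"}   # what Git declares
live = {"replicas": 10, "image": "app:v2"}     # someone scaled manually
print(detect_drift(desired, live))  # {'replicas': (10, 3)} -> selfHeal scales back to 3
```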

These principles are tool-agnostic, but the tool that has come to define the GitOps experience for most teams is ArgoCD.


Why ArgoCD

The two dominant GitOps controllers for Kubernetes are ArgoCD and Flux. Both are CNCF graduated projects, both are actively maintained, and both implement the core GitOps loop. So why does ArgoCD dominate adoption?

How the two compare, feature by feature:

  • Web UI: ArgoCD offers a full-featured dashboard with application visualization; Flux offers none (CLI and API only)
  • Multi-cluster: ArgoCD has native support; Flux works via Kustomization controllers
  • SSO/RBAC: ArgoCD ships built-in SSO with Dex and granular RBAC; Flux relies on Kubernetes RBAC
  • Sync waves and hooks: ArgoCD has native resource ordering; Flux orders via Kustomize
  • ApplicationSets: ArgoCD generates multiple apps from templates; Flux uses Kustomize overlays
  • Diff preview: ArgoCD shows visual diffs in the UI and CLI; Flux is CLI-only
  • Rollback: one click in the ArgoCD UI; a git revert in Flux
  • Learning curve: moderate for ArgoCD (the UI helps onboarding); steeper for Flux (pure GitOps, no UI crutch)

The short version: Flux is more "pure" GitOps and arguably more composable. ArgoCD is more opinionated, more visual, and significantly easier to adopt across a team where not everyone lives in a terminal. For most organizations, ArgoCD's developer experience wins.

That said, this is not a religious debate. If Flux fits your workflow better, use Flux. The principles matter more than the tool.


ArgoCD Architecture

Understanding how ArgoCD works internally will save you hours of debugging later. The system consists of four main components:

API Server — The gRPC/REST server that exposes the ArgoCD API. The web UI and CLI both talk to this. It handles authentication, RBAC enforcement, and serves as the external interface for everything.

Repository Server — A stateless service responsible for cloning Git repositories and generating Kubernetes manifests. It understands Helm charts, Kustomize overlays, Jsonnet, and plain YAML directories. When you point ArgoCD at a repo, this is the component that figures out what manifests to produce.

Application Controller — The brain of the operation. It continuously monitors running applications, compares the live state against the desired state from the repo server, and detects drift. When it finds a difference, it either reports it (manual sync) or corrects it (auto-sync).

Dex — An embedded OpenID Connect provider that handles SSO integration. It supports LDAP, SAML, GitHub, GitLab, Google, and practically any OIDC-compliant identity provider.

The controller uses kubectl diff semantics internally, meaning it understands Kubernetes strategic merge patches and can intelligently determine whether a difference is meaningful or just a defaulted field.


Hands-on: Installing ArgoCD

Let us get ArgoCD running. I will show both the quick way and the production way.

Quick Install (Non-HA, Good for Development)

# Create the namespace
kubectl create namespace argocd

# Install ArgoCD
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Wait for all pods to be ready
kubectl wait --for=condition=ready pod -l app.kubernetes.io/part-of=argocd -n argocd --timeout=300s

# Get the initial admin password
argocd admin initial-password -n argocd

# Port-forward the API server
kubectl port-forward svc/argocd-server -n argocd 8080:443

Now open https://localhost:8080, log in with admin and the password from above, and you are in.


Production Install (HA, with Helm)

For production, use the Helm chart with proper configuration:

# Add the ArgoCD Helm repo
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

# Create a values file
cat > argocd-values.yaml << 'EOF'
global:
  domain: argocd.yourdomain.com

configs:
  params:
    server.insecure: true  # TLS terminated at ingress
  cm:
    url: https://argocd.yourdomain.com
    exec.enabled: "true"
    resource.compareoptions: |
      ignoreAggregatedRoles: true

controller:
  replicas: 2
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      memory: 2Gi
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true

server:
  replicas: 2
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
    tls: true

repoServer:
  replicas: 2
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      memory: 1Gi

redis-ha:
  enabled: true

applicationSet:
  replicas: 2
EOF

# Install with Helm
helm install argocd argo/argo-cd \
  --namespace argocd \
  --create-namespace \
  -f argocd-values.yaml \
  --version 7.7.x

Key differences from the quick install: HA Redis, multiple controller and server replicas, proper resource limits, Prometheus metrics enabled, and ingress configured with TLS.

Install the CLI

# macOS
brew install argocd

# Linux
curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x argocd
sudo mv argocd /usr/local/bin/

# Authenticate
argocd login argocd.yourdomain.com --grpc-web

Deploying Your First Application

With ArgoCD running, let us deploy something. ArgoCD uses a custom resource called Application to define what to deploy and where.

A Simple Application

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-web-app
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/my-web-app.git
    targetRevision: main
    path: k8s/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: my-web-app
  syncPolicy:
    automated:
      prune: true        # Delete resources removed from Git
      selfHeal: true     # Revert manual changes in cluster
      allowEmpty: false   # Prevent syncing empty manifests
    syncOptions:
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground
      - PruneLast=true
    retry:
      limit: 3
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m

Apply it:

kubectl apply -f application.yaml

Let us break down the critical fields:

  • syncPolicy.automated — Enables auto-sync. Without this, ArgoCD detects drift but waits for manual approval.
  • prune: true — If you remove a manifest from Git, ArgoCD deletes the corresponding resource from the cluster. Without this, orphaned resources accumulate.
  • selfHeal: true — If someone manually modifies a resource in the cluster, ArgoCD reverts it to match Git. This is the "continuous reconciliation" principle in action.
  • retry — Transient failures happen. A retry policy with exponential backoff prevents a single API server hiccup from blocking deployments.
  • PruneLast: true — During sync, resources are pruned after all other resources are synced. This prevents downtime from premature deletion.
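To make the retry block concrete, here is a small Python sketch of how those parameters (duration 5s, factor 2, maxDuration 3m, limit 3) translate into waits between attempts. It mirrors the documented semantics rather than ArgoCD's actual implementation.

```python
# Sketch of the retry backoff schedule implied by the syncPolicy above.
def backoff_schedule(duration_s: float, factor: float, max_s: float, limit: int) -> list:
    """Wait (seconds) before each retry, growing by `factor`, capped at max_s."""
    waits, wait = [], duration_s
    for _ in range(limit):
        waits.append(min(wait, max_s))
        wait *= factor
    return waits

print(backoff_schedule(5, 2, 180, 3))  # [5, 10, 20]
```

With limit 3, a transient failure costs at most 35 seconds of waiting before ArgoCD gives up; the maxDuration cap only matters with higher limits.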

Kustomize Overlay Structure

Most real-world projects use Kustomize to manage environment-specific configuration. Here is a battle-tested directory structure:

k8s/
  base/
    deployment.yaml
    service.yaml
    hpa.yaml
    kustomization.yaml
  overlays/
    dev/
      kustomization.yaml
      patches/
        replicas.yaml
    staging/
      kustomization.yaml
      patches/
        replicas.yaml
        resources.yaml
    production/
      kustomization.yaml
      patches/
        replicas.yaml
        resources.yaml
        hpa.yaml

The base kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
  - hpa.yaml
commonLabels:
  app.kubernetes.io/managed-by: argocd

A production overlay:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: my-web-app-prod
patches:
  - path: patches/replicas.yaml
  - path: patches/resources.yaml
  - path: patches/hpa.yaml
images:
  - name: my-web-app
    newTag: v2.4.1  # Pinned production image tag

When ArgoCD's repo server processes this, it runs kustomize build and produces the final manifests for the target environment. No Helm template rendering, no CI pipeline generating YAML — just declarative overlays.
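The patch files referenced by the overlay are ordinary strategic merge patches. A hypothetical patches/replicas.yaml for production might contain nothing but the replica override:

```yaml
# Hypothetical patches/replicas.yaml: a strategic merge patch that only
# overrides the replica count; every other Deployment field comes from base/.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 5
```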


Multi-Environment Management with ApplicationSets

Managing a handful of applications manually is fine. Managing dozens across multiple clusters and environments is not. This is where ApplicationSets come in — they are a template engine for ArgoCD Applications.

Git Directory Generator

The most common pattern: generate one Application per directory in a monorepo.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-apps
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
    - git:
        repoURL: https://github.com/your-org/infrastructure.git
        revision: main
        directories:
          - path: apps/*/overlays/production
  template:
    metadata:
      name: '{{ index .path.segments 1 }}'
      namespace: argocd
      labels:
        environment: production
        team: platform
    spec:
      project: production
      source:
        repoURL: https://github.com/your-org/infrastructure.git
        targetRevision: main
        path: '{{ .path.path }}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{ index .path.segments 1 }}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true

Add a new service? Create a directory under apps/, push to main, and ArgoCD automatically creates and syncs a new Application. Remove the directory and the Application (and its resources) get cleaned up.
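If the Go template indexing looks opaque, it helps to trace it by hand. For a matched directory like apps/my-web-app/overlays/production, the path splits as follows (a plain Python illustration of what {{ index .path.segments 1 }} yields):

```python
# Tracing how the ApplicationSet template fields resolve for one directory.
path = "apps/my-web-app/overlays/production"
segments = path.split("/")   # ['apps', 'my-web-app', 'overlays', 'production']
app_name = segments[1]       # what {{ index .path.segments 1 }} evaluates to
print(app_name)  # my-web-app
```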

Matrix Generator for Multi-Cluster

For deploying the same apps across multiple clusters:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: multi-cluster-apps
  namespace: argocd
spec:
  goTemplate: true
  generators:
    - matrix:
        generators:
          - clusters:
              selector:
                matchLabels:
                  environment: production
          - git:
              repoURL: https://github.com/your-org/infrastructure.git
              revision: main
              directories:
                - path: apps/*
  template:
    metadata:
      name: '{{ .name }}-{{ index .path.segments 1 }}'
      namespace: argocd
    spec:
      project: production
      source:
        repoURL: https://github.com/your-org/infrastructure.git
        targetRevision: main
        path: '{{ .path.path }}/overlays/{{ .metadata.labels.environment }}'
      destination:
        server: '{{ .server }}'
        namespace: '{{ index .path.segments 1 }}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

This generates one Application for every combination of cluster and app directory. Three clusters and ten apps? Thirty Applications, all managed from a single ApplicationSet. This is where GitOps with ArgoCD really starts to show its power at scale.
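The matrix generator is literally a Cartesian product, which you can sanity-check in a few lines of Python (the cluster names here are hypothetical):

```python
# The matrix generator yields one Application per (cluster, app) pair.
from itertools import product

clusters = ["prod-us", "prod-eu", "prod-ap"]    # hypothetical cluster names
apps = [f"app-{i}" for i in range(10)]          # ten app directories
applications = [f"{c}-{a}" for c, a in product(clusters, apps)]
print(len(applications))  # 30
```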


Security Best Practices

Running a GitOps controller in production means it has broad access to your cluster. Lock it down.

RBAC Configuration

ArgoCD has its own RBAC system layered on top of Kubernetes RBAC. Define policies in the argocd-rbac-cm ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    # Developers can sync apps in their project
    p, role:developer, applications, get, */*, allow
    p, role:developer, applications, sync, */*, allow
    p, role:developer, logs, get, */*, allow

    # Platform team gets full access
    p, role:platform-admin, applications, *, */*, allow
    p, role:platform-admin, clusters, *, *, allow
    p, role:platform-admin, repositories, *, *, allow
    p, role:platform-admin, projects, *, *, allow

    # Map SSO groups to roles
    g, platform-team, role:platform-admin
    g, backend-devs, role:developer
    g, frontend-devs, role:developer
  scopes: '[groups, email]'

The key principle: developers should be able to view and sync applications but never modify ArgoCD's own configuration, cluster registrations, or repository credentials.

SSO Integration

Never use the built-in admin account in production. Configure SSO via Dex:

configs:
  cm:
    dex.config: |
      connectors:
        - type: github
          id: github
          name: GitHub
          config:
            clientID: $dex.github.clientID
            clientSecret: $dex.github.clientSecret
            orgs:
              - name: your-org

Secret Management

Secrets are the trickiest part of GitOps. You cannot commit plaintext secrets to Git (obviously), but your desired state needs to include them. The two best solutions:

Sealed Secrets — Encrypt secrets client-side with a public key. Only the controller in-cluster can decrypt them.

# Install kubeseal
brew install kubeseal

# Encrypt a secret
kubectl create secret generic db-creds \
  --from-literal=password=supersecret \
  --dry-run=client -o yaml | \
  kubeseal --format yaml > sealed-db-creds.yaml

The sealed secret YAML is safe to commit to Git. The Sealed Secrets controller in your cluster decrypts it into a standard Kubernetes Secret.

External Secrets Operator — Pull secrets from external stores (AWS Secrets Manager, HashiCorp Vault, GCP Secret Manager) at runtime:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: db-credentials
    creationPolicy: Owner
  data:
    - secretKey: password
      remoteRef:
        key: secret/data/production/db
        property: password

We generally recommend External Secrets Operator for production environments. It scales better, supports rotation, and does not require re-encrypting secrets when you rotate your sealing key.


Monitoring and Alerting

ArgoCD exposes Prometheus metrics out of the box. Here is what to monitor:

Key Metrics

# Application sync status — alert if OutOfSync for more than 10 minutes
argocd_app_info{sync_status="OutOfSync"} > 0

# Sync failures
rate(argocd_app_sync_total{phase="Failed"}[5m]) > 0

# Reconcile performance — alert if fewer than 90% of reconciliations complete within 10s
sum(rate(argocd_app_reconcile_bucket{le="10"}[5m])) / sum(rate(argocd_app_reconcile_count[5m])) < 0.9

# Repo server performance
histogram_quantile(0.99, rate(argocd_git_request_duration_seconds_bucket[5m]))
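To turn these expressions into actual alerts, wrap them in a PrometheusRule (this assumes you run the Prometheus Operator; the rule below is a sketch for the OutOfSync case):

```yaml
# Sketch: fire when an Application stays OutOfSync for 10 minutes.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: argocd-alerts
  namespace: argocd
spec:
  groups:
    - name: argocd
      rules:
        - alert: ArgoAppOutOfSync
          expr: argocd_app_info{sync_status="OutOfSync"} == 1
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: 'Application {{ $labels.name }} has been OutOfSync for 10 minutes'
```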

Grafana Dashboard

Import the official ArgoCD dashboard (ID 14584) from Grafana's dashboard marketplace. It covers application health, sync status, controller performance, and repository operations.

Notifications

ArgoCD has a built-in notification engine. Configure Slack alerts for sync failures:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  service.slack: |
    token: $slack-token
  trigger.on-sync-failed: |
    - when: app.status.sync.status == 'OutOfSync' and app.status.operationState.phase == 'Failed'
      send: [app-sync-failed]
  trigger.on-health-degraded: |
    - when: app.status.health.status == 'Degraded'
      send: [app-health-degraded]
  template.app-sync-failed: |
    slack:
      attachments: |
        [{
          "color": "#E96D76",
          "title": "{{.app.metadata.name}} sync failed",
          "text": "Application {{.app.metadata.name}} sync failed at {{.app.status.operationState.finishedAt}}.\nRevision: {{.app.status.sync.revision}}\nMessage: {{.app.status.operationState.message}}"
        }]
  template.app-health-degraded: |
    slack:
      attachments: |
        [{
          "color": "#F4C030",
          "title": "{{.app.metadata.name}} health degraded",
          "text": "Application {{.app.metadata.name}} is in a degraded health state."
        }]

Annotate your Applications to subscribe to notifications:

metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-failed.slack: platform-alerts
    notifications.argoproj.io/subscribe.on-health-degraded.slack: platform-alerts

Common Pitfalls and How to Avoid Them

After running GitOps with ArgoCD in production across dozens of projects, here are the patterns that trip teams up most often.


1. Drift Detection False Positives

Kubernetes mutating admission webhooks and controllers add fields to resources after creation (think: Istio sidecars, default service account tokens, resource defaulting). ArgoCD sees these as drift and shows resources as "OutOfSync" even when nothing has changed.

Fix: Use ignoreDifferences in your Application spec:

spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/template/metadata/annotations/kubectl.kubernetes.io~1restartedAt
    - group: admissionregistration.k8s.io
      kind: MutatingWebhookConfiguration
      jqPathExpressions:
        - '.webhooks[]?.clientConfig.caBundle'

2. Sync Waves and Resource Ordering

Some resources must be created before others. Namespaces before deployments. CRDs before custom resources. Database migrations before application pods.

Fix: Use sync waves and resource hooks:

metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "-1"  # Created before wave 0 (default)
---
# For one-time jobs like migrations
metadata:
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded

Lower wave numbers sync first. Use negative numbers for prerequisites. PreSync hooks run before the main sync, PostSync hooks after.
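The ordering rule is easy to model: sort resources by wave, lowest first. A quick Python illustration (not ArgoCD's implementation):

```python
# Resources sync in ascending wave order; annotations like
# argocd.argoproj.io/sync-wave: "-1" move a resource to an earlier wave.
resources = [
    {"kind": "Deployment", "wave": 0},   # default wave
    {"kind": "Namespace", "wave": -1},   # prerequisite
    {"kind": "Job", "wave": 1},          # post-deploy task
]
order = [r["kind"] for r in sorted(resources, key=lambda r: r["wave"])]
print(order)  # ['Namespace', 'Deployment', 'Job']
```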

3. Large Repositories Causing Slow Syncs

If your repo server takes minutes to generate manifests, ArgoCD feels sluggish.

Fix:

  • Tune repo server caching: the manifest cache TTL is controlled by the --repo-cache-expiration flag on argocd-repo-server
  • Raise the manifest generation timeout via the ARGOCD_EXEC_TIMEOUT environment variable if large Helm or Kustomize renders are being cut off
  • Split monorepos into smaller repositories if they exceed 500MB
  • Increase repo server replicas for parallelism

4. Secrets in Git History

Someone accidentally committed a plaintext secret six months ago. It was deleted in the next commit, but it is still in Git history.

Fix: Prevention is better than cure. Use pre-commit hooks (like detect-secrets or gitleaks) in your CI pipeline. If a secret does leak, rotate it immediately — do not waste time trying to rewrite Git history.
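One low-effort way to wire this in is the pre-commit framework; gitleaks ships a hook for it (the version pin below is illustrative, check for the current release):

```yaml
# Sketch of a .pre-commit-config.yaml entry running gitleaks on every commit.
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4   # illustrative pin; use the latest release
    hooks:
      - id: gitleaks
```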

5. Auto-Sync During Incidents

Auto-sync is great until you are debugging a production issue and ArgoCD keeps reverting your manual changes.

Fix: Use the ArgoCD CLI to temporarily disable auto-sync during incidents:

argocd app set my-app --sync-policy none
# Debug and fix
# Re-enable when done
argocd app set my-app --sync-policy automated --self-heal --auto-prune

Better yet, build this into your incident runbook.


When to Adopt GitOps

GitOps is not a silver bullet. It adds operational complexity — another controller to run, another abstraction layer to understand, another thing that can break at 3 AM. Here is when the investment pays off:

Adopt GitOps when:

  • You have more than 5 services running on Kubernetes
  • Multiple engineers deploy to the same cluster
  • You need audit trails for compliance (SOC 2, ISO 27001, HIPAA)
  • You manage multiple environments or clusters
  • You want to eliminate "works on my machine" deployment issues
  • You are tired of debugging "what changed in production?"

Hold off when:

  • You are a solo developer with one service on one cluster
  • Your team is still learning Kubernetes fundamentals
  • You do not have a Git-centric workflow yet

Conclusion

GitOps with ArgoCD transforms Kubernetes deployments from an error-prone manual process into a declarative, auditable, self-healing system. The learning curve is real, but the payoff — in reliability, security, and developer velocity — is substantial.

The key takeaways:

  • Git is your deployment API. Every change goes through a pull request, gets reviewed, and is traceable forever.
  • ArgoCD handles the hard parts. Drift detection, multi-cluster management, and automated reconciliation are built in.
  • Start simple. One cluster, one repo, manual sync. Then layer on auto-sync, ApplicationSets, and multi-cluster as your confidence grows.
  • Invest in secrets management early. External Secrets Operator or Sealed Secrets — pick one before you have plaintext credentials scattered across your repos.
  • Monitor everything. ArgoCD's Prometheus metrics and notification engine give you full visibility into your deployment pipeline.

GitOps with ArgoCD is not just a trend — it is how modern platform teams operate. The sooner you adopt it, the sooner you stop firefighting deployments and start shipping features.


At TechSaaS, we help teams design, implement, and operate GitOps pipelines on Kubernetes. Whether you are starting from scratch or migrating from legacy CI/CD, our team has done it across startups and enterprises alike. Get in touch if you want to accelerate your GitOps journey.

Tags: GitOps, ArgoCD, Kubernetes, CI/CD, DevOps, Infrastructure as Code, Continuous Deployment
