// CI/CD Guide · Intermediate

CI/CD Pipeline Explained: GitHub Actions, Jenkins & ArgoCD in Production

📅 Updated April 2026 · ⏱ 12 min read 🏷 CI/CD · GitHub Actions · Jenkins · ArgoCD · DevOps
master.devops
Practising DevOps Engineer with deep hands-on experience in Kubernetes, AWS, CI/CD, and SRE. Every guide is written from real production work.

CI/CD pipelines are the backbone of modern software delivery. At a top enterprise, I design and maintain CI/CD pipelines for multiple teams — from simple GitHub Actions workflows to complex Jenkins multi-branch pipelines with shared Groovy libraries and ArgoCD GitOps deployments to Kubernetes. This guide explains how they work in practice, with real pipeline code and the interview questions you will actually be asked.

What is CI/CD and Why Does It Matter?

Continuous Integration (CI) is the practice of automatically building and testing code every time a developer pushes a change. The goal is to catch integration problems early — in minutes, not days. When a team of 20 engineers all merge code to the same branch daily, automated tests are the only thing standing between you and a broken production release.

Continuous Delivery (CD) extends CI by automatically deploying every successful build to a staging environment, and making it deployable to production with a single click (or automatically, in Continuous Deployment). CD eliminates the long, risky "big bang" release cycles that plague traditional software organisations.

The real value of CI/CD: Short feedback loops. A developer knows within 10 minutes whether their change broke anything. Without CI/CD, they might not find out for days — by which point the root cause is buried under hundreds of other commits.

A Real End-to-End CI/CD Pipeline

Here is the complete pipeline I use in production for a Java Spring Boot microservice:

  1. Developer pushes code to a feature branch on GitHub.
  2. GitHub Actions triggers on the push event — runs unit tests and integration tests.
  3. SonarQube analysis — quality gate must pass (coverage ≥ 80%, no new Critical issues).
  4. OWASP Dependency-Check — fails if any dependency has CVSS score ≥ 7.
  5. Docker build — multi-stage Dockerfile produces a minimal Alpine-based image.
  6. Trivy scan — fails pipeline on HIGH or CRITICAL CVEs in the image.
  7. Push to JFrog Artifactory — tagged with the Git commit SHA for traceability.
  8. Update GitOps repo — CI updates the image tag in the Helm values file and commits.
  9. ArgoCD detects the commit and syncs the new image to the Kubernetes staging cluster.
  10. Smoke tests run against staging — if they pass, the image is promoted to prod-ready.

GitHub Actions — Deep Dive

GitHub Actions is my preferred CI tool for greenfield projects. It is built into GitHub, has a generous free tier (Actions minutes are free for public repos, and the Free plan includes 2,000 minutes/month for private repos), and the OIDC federation feature eliminates the need to store cloud credentials as secrets.

```yaml
# .github/workflows/ci-cd.yml — Production workflow
name: CI/CD Pipeline

on:
  push:
    branches: [main, "release/*"]
  pull_request:
    branches: [main]

# Cancel duplicate runs on the same branch
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  test:
    name: Test & Quality Gate
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          java-version: '21'
          distribution: 'temurin'
      - name: Cache Maven dependencies
        uses: actions/cache@v3
        with:
          path: ~/.m2
          key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
      - name: Run tests
        run: mvn clean verify -q
      - name: SonarQube analysis
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        run: mvn sonar:sonar -Dsonar.projectKey=api
      - name: Quality Gate check
        uses: sonarsource/sonarqube-quality-gate-action@master
        timeout-minutes: 5
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

  build-push:
    name: Build & Push Image
    needs: test
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write   # Required for OIDC to AWS
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: us-east-1
          # No stored AWS keys — uses short-lived OIDC tokens
      - name: Build Docker image
        run: |
          docker build -t api:${{ github.sha }} .
          docker tag api:${{ github.sha }} ${{ secrets.ECR_REGISTRY }}/api:${{ github.sha }}
      - name: Trivy vulnerability scan
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: api:${{ github.sha }}
          severity: HIGH,CRITICAL
          exit-code: 1
      - name: Push to ECR
        run: docker push ${{ secrets.ECR_REGISTRY }}/api:${{ github.sha }}

  deploy:
    name: Deploy to Staging
    needs: build-push
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - name: Update GitOps repo image tag
        run: |
          git clone https://github.com/org/gitops-repo.git
          cd gitops-repo
          sed -i "s|tag:.*|tag: ${{ github.sha }}|" apps/api/values-staging.yaml
          git commit -am "ci: update api to ${{ github.sha }}"
          git push
```

Jenkins — Enterprise Pipelines

Jenkins is the most widely deployed CI/CD server in enterprise environments. Its power comes from flexibility and the plugin ecosystem (1,800+ plugins), but that flexibility also makes it complex. The key to maintainable Jenkins pipelines is Declarative Pipeline syntax and Shared Libraries.

```groovy
// Jenkinsfile — Declarative Pipeline
pipeline {
    agent { label 'docker' }

    options {
        buildDiscarder(logRotator(numToKeepStr: '10'))
        timeout(time: 30, unit: 'MINUTES')
        disableConcurrentBuilds()
    }

    environment {
        IMAGE_NAME = "api"
        IMAGE_TAG  = "${env.GIT_COMMIT[0..7]}"
    }

    stages {
        stage('Test') {
            steps {
                sh 'mvn clean verify -q'
            }
            post {
                always { junit 'target/surefire-reports/*.xml' }
            }
        }
        stage('SonarQube') {
            steps {
                withSonarQubeEnv('sonar-prod') {
                    sh 'mvn sonar:sonar'
                }
            }
        }
        stage('Quality Gate') {
            steps {
                timeout(time: 5, unit: 'MINUTES') {
                    waitForQualityGate abortPipeline: true
                }
            }
        }
        stage('Docker Build & Scan') {
            steps {
                sh "docker build -t ${IMAGE_NAME}:${IMAGE_TAG} ."
                sh "trivy image --exit-code 1 --severity HIGH,CRITICAL ${IMAGE_NAME}:${IMAGE_TAG}"
            }
        }
        stage('Push to Artifactory') {
            when { branch 'main' }
            steps {
                withCredentials([usernamePassword(credentialsId: 'jfrog-creds',
                                                  usernameVariable: 'USER',
                                                  passwordVariable: 'PASS')]) {
                    // Single-quoted shell string: secrets are expanded by the shell,
                    // never interpolated into the Groovy script (avoids leaking them)
                    sh 'echo "$PASS" | docker login registry.jfrog.io -u "$USER" --password-stdin'
                    sh "docker push registry.jfrog.io/api:${IMAGE_TAG}"
                }
            }
        }
    }

    post {
        failure {
            slackSend channel: '#ci-alerts', message: "Pipeline FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
        }
        success {
            slackSend channel: '#ci-success', message: "Deployed: ${IMAGE_TAG}"
        }
    }
}
```

ArgoCD GitOps — The Modern CD Approach

ArgoCD implements GitOps continuous delivery for Kubernetes. The core principle: Git is the single source of truth for all cluster state. The CI pipeline builds and pushes images; ArgoCD handles all Kubernetes deployments. The CI pipeline never needs kubectl or cloud credentials for deployment — only ArgoCD touches the cluster.

GitOps benefit: Rollback = git revert. Audit trail = Git history. Drift detection = ArgoCD alerts when someone manually applies something to the cluster that is not in Git. Disaster recovery = re-sync from Git to rebuild any cluster from scratch.
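The "rollback = git revert" claim is easy to demonstrate. A minimal sketch with a throwaway repo (file names and image tags are hypothetical) that simulates CI bumping an image tag and then rolling it back:

```shell
# Simulate a GitOps repo: one values file holding the deployed image tag
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "ci@example.com"
git config user.name "ci-bot"

echo "tag: v1.0.0" > values-staging.yaml
git add values-staging.yaml
git commit -qm "ci: deploy v1.0.0"

# CI bumps the tag for a new (bad) release
sed -i 's|tag:.*|tag: v1.1.0|' values-staging.yaml
git commit -qam "ci: deploy v1.1.0"

# Rollback is a single revert commit; ArgoCD would sync the cluster back
git revert --no-edit HEAD
cat values-staging.yaml   # tag: v1.0.0
```

The revert itself becomes part of the audit trail: `git log` shows exactly when the bad release went out and when it was rolled back.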
```yaml
# ArgoCD Application definition
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-production
  namespace: argocd
spec:
  project: production
  source:
    repoURL: https://github.com/org/gitops.git
    targetRevision: main
    path: apps/api/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual kubectl changes
    syncOptions:
      - CreateNamespace=true
```

Interview Q&A

Q1: What is the difference between Continuous Delivery and Continuous Deployment?
Continuous Delivery means every successful build is automatically deployed to staging and is ready to deploy to production with a single manual approval. Continuous Deployment goes one step further — every successful build is automatically deployed all the way to production without any manual gate. Most organisations practise Continuous Delivery for production (keeping a manual approval for production releases) and Continuous Deployment for staging environments.
Q2: How does OIDC eliminate stored cloud credentials in GitHub Actions?
Without OIDC, you store AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as GitHub secrets — long-lived credentials that can be stolen. With OIDC, GitHub issues a short-lived JWT token for each workflow run. Your AWS IAM role is configured to trust GitHub's OIDC provider and accepts that JWT. The workflow exchanges the JWT for temporary AWS credentials lasting 1 hour. No long-lived keys to steal. The IAM trust policy can be scoped to a specific repository and branch, preventing other repos from assuming the role.
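The scoping described above is done in the IAM role's trust policy. A sketch of what that policy might look like (the account ID, org, and repo names are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:org/api:ref:refs/heads/main"
        }
      }
    }
  ]
}
```

The `sub` condition is what restricts the role to one repository and branch: a workflow in any other repo presents a different `sub` claim and the assume-role call fails.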
Q3: What is a Jenkins Shared Library?
A Shared Library is a collection of Groovy code stored in a separate Git repository and loaded by Jenkins pipelines. It allows you to centralise common pipeline logic — build steps, notification functions, security scans — and reuse them across all teams' Jenkinsfiles. Without shared libraries, teams copy-paste pipeline code, and updating security policies requires modifying every Jenkinsfile. With a shared library, you update one place and all pipelines get the change on next run.
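As a sketch of the idea (names here are illustrative, not from a real library), a custom step lives in the library repo's vars/ directory and becomes callable from every Jenkinsfile that imports the library:

```groovy
// vars/buildAndScan.groovy in the shared-library repo (illustrative)
def call(String imageName, String imageTag) {
    sh "docker build -t ${imageName}:${imageTag} ."
    // Security policy lives in ONE place: change the severity list here
    // and every consuming pipeline picks it up on its next run
    sh "trivy image --exit-code 1 --severity HIGH,CRITICAL ${imageName}:${imageTag}"
}
```

A consuming Jenkinsfile would load it with `@Library('devops-shared') _` at the top and then call `buildAndScan('api', env.GIT_COMMIT[0..7])` inside a steps block.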
Q4: How do you handle secrets in CI/CD pipelines?
Never hardcode secrets in Jenkinsfiles, YAML workflows, or Dockerfiles. For GitHub Actions: store secrets in repository or organisation secrets, access via ${{ secrets.MY_SECRET }} — they are masked in logs. Use OIDC for cloud credentials rather than storing access keys. For Jenkins: use the Credentials plugin with withCredentials() binding — secrets are injected at runtime and masked. For both: rotate secrets regularly, use scoped permissions, and audit secret access. Consider HashiCorp Vault as a central secrets manager for enterprise environments.

CI/CD pipelines build and push Docker images, then deploy to Kubernetes. Make sure you understand Docker and Kubernetes first.


GitHub Actions — Production-Grade Pipeline

GitHub Actions is now the default CI/CD choice for GitHub-hosted teams. Understanding the difference between jobs and steps, how OIDC works, and how to make pipelines fast with caching separates a competent DevOps engineer from a great one.

```yaml
# .github/workflows/ci-cd.yml
name: CI/CD Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Cache Maven
        uses: actions/cache@v3
        with:
          path: ~/.m2
          key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
      - run: mvn test

  build-push:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    permissions:
      id-token: write   # required for OIDC
      contents: read
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS (OIDC — zero stored secrets)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: us-east-1
      - name: Build and push
        run: |
          aws ecr get-login-password | docker login --username AWS --password-stdin ${{ secrets.ECR_REGISTRY }}
          docker build -t ${{ secrets.ECR_REGISTRY }}/api:${{ github.sha }} .
          docker push ${{ secrets.ECR_REGISTRY }}/api:${{ github.sha }}
```

Jenkins Declarative Pipeline

```groovy
// Jenkinsfile
pipeline {
    agent { label 'docker' }

    environment {
        ECR_REGISTRY = credentials('ecr-registry-url')
    }

    stages {
        stage('Test') {
            parallel {
                stage('Unit') {
                    steps { sh 'mvn test -Dtest=Unit*' }
                }
                stage('Integration') {
                    steps { sh 'mvn test -Dtest=Integration*' }
                }
            }
        }
        stage('Build') {
            when { branch 'main' }
            steps {
                sh "docker build -t $ECR_REGISTRY/api:$BUILD_NUMBER ."
                sh "docker push $ECR_REGISTRY/api:$BUILD_NUMBER"
            }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps {
                sh "helm upgrade --install api ./helm --set image.tag=$BUILD_NUMBER -n staging"
                sh "kubectl rollout status deployment/api -n staging --timeout=5m"
            }
        }
    }

    post {
        failure { slackSend channel: '#alerts', message: "Build failed: ${env.JOB_NAME}" }
        always  { cleanWs() }
    }
}
```
GitOps CI vs CD split: CI (GitHub Actions/Jenkins) builds image → runs tests → pushes to registry → updates image tag in Git. ArgoCD detects the Git change → syncs Kubernetes cluster. Your CI pipeline never needs kubectl access. Rollback = git revert. Audit log = git history.

CI/CD Interview Questions & Answers

Q: How do you avoid storing cloud credentials in GitHub Actions?
Use OIDC (OpenID Connect) federation. Configure your cloud provider to trust GitHub's OIDC provider. In your workflow, set permissions: id-token: write and use the official cloud action with role-to-assume. At runtime, GitHub issues a short-lived signed JWT. The cloud provider verifies the token and exchanges it for temporary credentials scoped to the IAM role. No long-lived secrets are stored anywhere. This is the recommended approach — AWS_ACCESS_KEY_ID in secrets is legacy.
Q: What is the difference between a job and a step in GitHub Actions?
Jobs run in parallel by default on separate fresh runner VMs. No shared state — you cannot pass files between jobs without actions/upload-artifact and actions/download-artifact. Use needs: to make jobs serial. Steps run sequentially within a single job, share the same runner and workspace, and share environment variables. Practical rule: multiple jobs for parallel work (test + lint + security scan simultaneously), multiple steps for sequential work within a phase.
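A minimal workflow sketch showing both ideas at once: two jobs serialised with needs:, passing a file between their separate VMs via artifacts (job and artifact names are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:                                    # steps share one workspace
      - uses: actions/checkout@v4
      - run: mvn package -q
      - uses: actions/upload-artifact@v4
        with:
          name: app-jar
          path: target/*.jar
  test:                                       # separate job = fresh VM
    needs: build                              # serialise; jobs are parallel by default
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4    # only way to get build's files
        with:
          name: app-jar
```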
Q: Declarative vs Scripted Jenkins Pipeline?
Declarative is the modern standard. It enforces a rigid structure (a top-level pipeline block containing agent, stages, and post sections), making it readable, lintable, and consistent across teams. It has built-in support for parallel stages, post conditions, and environment injection. Scripted Pipeline is raw Groovy: more flexible but harder to read and lint. Use Declarative for all new pipelines. Jenkins shared libraries let you put complex Groovy logic in reusable functions while keeping Jenkinsfiles declarative.
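For contrast, a minimal Scripted equivalent of a two-stage build (illustrative): raw Groovy inside a node block, with cleanup hand-rolled in try/finally because there is no built-in post section:

```groovy
// Scripted Pipeline — same logic, no declarative guard-rails
node('docker') {
    try {
        stage('Test')  { sh 'mvn test' }
        stage('Build') { sh 'docker build -t api:latest .' }
    } finally {
        cleanWs()   // no post { always { } } — you write the equivalent yourself
    }
}
```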
Q: What is the App-of-Apps pattern in ArgoCD?
App-of-Apps is a pattern where one ArgoCD Application manages other ArgoCD Applications as its children. A root "apps" Application watches a directory containing Application manifests. When you add a new Application manifest to Git, ArgoCD creates that Application automatically. This enables GitOps-driven management of ArgoCD itself — you never need to run kubectl apply or the ArgoCD CLI to add a new application. It scales well for teams managing dozens of services across multiple clusters.
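A sketch of the root Application (repo URL and paths are illustrative): it points at a directory whose files are themselves Application manifests, so committing a new manifest there registers a new app:

```yaml
# Root "app of apps" — its source path contains other Application manifests
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/org/gitops.git
    targetRevision: main
    path: apps/               # each file here is itself an Application
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd         # child Applications live in the argocd namespace
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```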
