
CI/CD Pipelines

This project has two independent CI/CD pipelines: one for the documentation site and one for deploying infrastructure. Both run on GitHub Actions.


Docs Pipeline: Already Running

The docs pipeline is live at taylorbobaylor.github.io/dotnet-sql-learning.

Trigger: push to main branch (or manual dispatch from the Actions tab).

Flow:

push to main
    ├─▶ build job
    │       ├─ checkout (full history for git revision dates)
    │       ├─ setup Python + pip cache
    │       ├─ pip install -r requirements.txt
    │       ├─ mkdocs build --strict --site-dir _site
    │       └─ upload Pages artifact
    └─▶ deploy job
            └─ actions/deploy-pages@v4  →  GitHub Pages

The workflow lives at .github/workflows/deploy-docs.yml. It uses the official GitHub Pages Actions pattern, so no third-party deployment token is needed.
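A minimal sketch of that pattern is below. It is illustrative only (step details, action versions, and the Python version are assumptions; the repo's actual deploy-docs.yml may differ):

```yaml
# Sketch of the official Pages pattern, not the repo's actual file
name: Deploy Docs
on:
  push:
    branches: [main]
  workflow_dispatch:

permissions:
  contents: read
  pages: write      # required to deploy to Pages
  id-token: write   # required for Pages deployment verification

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0        # full history for git revision dates
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
          cache: pip
      - run: pip install -r requirements.txt
      - run: mkdocs build --strict --site-dir _site
      - uses: actions/upload-pages-artifact@v3
        with:
          path: _site

  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - id: deployment
        uses: actions/deploy-pages@v4
```

The build and deploy jobs are split because the Pages deployment action consumes the artifact uploaded by the build job; this is the structure the tree above describes.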

Strict mode

mkdocs build --strict fails the build on any broken internal link or misconfigured nav entry. This keeps the published site consistent with the repo.
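The same behavior can be enabled in mkdocs.yml itself, so a plain mkdocs build fails locally the same way it does in CI:

```yaml
# mkdocs.yml
strict: true   # treat warnings (broken links, bad nav entries) as errors
```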


Infrastructure Pipeline: Terraform + Helm

This pipeline applies the Terraform configuration (which in turn deploys the Helm chart) to a Kubernetes cluster. It is not yet wired up as a GitHub Actions workflow; for the local Docker Desktop setup, you run Terraform manually. The section below shows what a production pipeline targeting AWS EKS would look like.

Local workflow (Docker Desktop)

cd terraform

# First time only
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars and set sa_password at minimum

terraform init          # downloads kubernetes + helm providers
terraform plan          # preview: namespace + helm_release
terraform apply         # deploy

# When done
terraform destroy
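A filled-in terraform.tfvars might look like the following. Only sa_password is confirmed by this page; any other key names are assumptions, so check terraform.tfvars.example for the real ones:

```hcl
# terraform.tfvars (sketch; key names beyond sa_password are assumptions)
sa_password = "YourStrong!Passw0rd"   # must satisfy SQL Server complexity rules
namespace   = "sql-server"
```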

Terraform uses the hashicorp/helm provider (~> 2.12) to deploy the local helm/sql-server chart. One apply creates the namespace and hands everything else to Helm (Secret, PVC, Deployment, Service).
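The wiring might look roughly like this. Resource and attribute names here are assumptions based on the description above, not the repo's actual configuration:

```hcl
terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.12"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.25"
    }
  }
}

# Terraform creates the namespace...
resource "kubernetes_namespace" "sql_server" {
  metadata {
    name = "sql-server"
  }
}

# ...and hands everything else to Helm via a single release.
resource "helm_release" "sql_server" {
  name      = "sql-server"
  chart     = "${path.module}/../helm/sql-server"   # local chart, no repository needed
  namespace = kubernetes_namespace.sql_server.metadata[0].name
}
```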

GitHub Actions: EKS deployment (template)

A production pipeline targeting AWS EKS would look like this. Store sa_password and AWS credentials as GitHub Actions secrets.

# .github/workflows/deploy-infra.yml  (example, not yet active)
name: Deploy Infrastructure

on:
  push:
    branches: [main]
    paths:
      - "terraform/**"
      - "helm/**"
  workflow_dispatch:

permissions:
  id-token: write   # needed for OIDC auth to AWS
  contents: read

jobs:
  terraform:
    runs-on: ubuntu-latest
    environment: production

    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials (OIDC, no long-lived keys)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/github-actions-deploy
          aws-region: us-east-1

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: "~1.6"

      - name: Terraform Init
        run: terraform init
        working-directory: terraform

      - name: Terraform Plan
        run: terraform plan -out=tfplan
        working-directory: terraform
        env:
          TF_VAR_sa_password: ${{ secrets.SA_PASSWORD }}
          TF_VAR_eks_cluster_name: ${{ secrets.EKS_CLUSTER_NAME }}

      - name: Terraform Apply
        run: terraform apply tfplan
        working-directory: terraform

Key things to note about the EKS pipeline:

OIDC instead of access keys: aws-actions/configure-aws-credentials with role-to-assume uses GitHub's OIDC provider to exchange a short-lived token for temporary AWS credentials. No long-lived AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY is stored in GitHub.

Sensitive variables via TF_VAR_*: Terraform picks up TF_VAR_sa_password automatically; because the variable is declared sensitive in the configuration, its value is redacted from plan output.

paths filter: the workflow only runs when Terraform or Helm files change, avoiding unnecessary deploys on docs-only commits.
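For the sensitive handling to apply, the variable needs to be declared sensitive on the Terraform side, along these lines:

```hcl
variable "sa_password" {
  description = "SA password for the SQL Server instance"
  type        = string
  sensitive   = true   # value is redacted in plan/apply output
}
```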


How the Two Tools Fit Together

terraform apply
    ├─ hashicorp/kubernetes provider  →  creates namespace
    └─ hashicorp/helm provider
            └─ helm_release "sql_server"
                    ├─ reads  helm/sql-server/Chart.yaml
                    ├─ reads  helm/sql-server/values.yaml
                    ├─ merges `set` overrides from terraform.tfvars
                    └─ applies Kubernetes manifests rendered by the chart
                            ├─ Secret   (sa password)
                            ├─ PVC      (5 Gi data volume)
                            ├─ Deployment (SQL Server pod)
                            └─ Service  (NodePort / LoadBalancer)

Terraform owns the what (which cluster, which namespace, what password, what port). Helm owns the how (the Kubernetes resource templates). This separation means you can iterate on the chart templates without touching Terraform state, and you can change Terraform variables without modifying YAML templates.
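In the helm provider (~> 2.12), that "what" typically flows into the chart through set and set_sensitive blocks. The value keys below are assumptions about the chart's values.yaml layout:

```hcl
resource "helm_release" "sql_server" {
  name      = "sql-server"
  chart     = "${path.module}/../helm/sql-server"
  namespace = "sql-server"

  # Plain override: which Service type to expose (key name is an assumption)
  set {
    name  = "service.type"
    value = var.service_type   # e.g. NodePort locally, LoadBalancer on EKS
  }

  # Sensitive override: never shown in plan output
  set_sensitive {
    name  = "sa.password"
    value = var.sa_password
  }
}
```

Changing a chart template only triggers a new Helm release revision on the next apply; changing a variable only changes the values handed to the same templates.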


AWS EKS โ€” End-to-End Flow

For a complete picture of what changes when moving from Docker Desktop to AWS EKS, see Kubernetes (Docker Desktop → AWS).

Developer pushes to main
    ├─▶ deploy-docs.yml    →  MkDocs build  →  GitHub Pages
    └─▶ deploy-infra.yml   →  Terraform
                                  ├─ aws_eks_cluster data source  →  EKS credentials
                                  └─ helm_release
                                          ├─ image: mssql/server:2022-latest (x86_64)
                                          ├─ serviceType: LoadBalancer (NLB, internal)
                                          └─ storageClassName: gp3 (EBS CSI driver)
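On EKS, the kubernetes and helm providers would authenticate against the cluster through data sources rather than a local kubeconfig. A sketch (data source and variable names are assumptions):

```hcl
data "aws_eks_cluster" "this" {
  name = var.eks_cluster_name
}

data "aws_eks_cluster_auth" "this" {
  name = var.eks_cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.this.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}
```

The token from aws_eks_cluster_auth is short-lived, which pairs naturally with the OIDC-based AWS credentials the workflow obtains.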