
Quickstart: OrbStack Kubernetes & ArgoCD Bootstrap

Spec: spec.md | Plan: plan.md | Date: 2026-02-10


Prerequisites

| Tool | Minimum Version | Install |
| --- | --- | --- |
| OrbStack | Latest (macOS ARM) | `brew install orbstack` |
| Helm | 3.16.x | `brew install helm` |
| kubeseal | Latest | `brew install kubeseal` |
| kubeconform | Latest | `brew install kubeconform` |
| shellcheck (optional) | Latest | `brew install shellcheck` |

Note: OrbStack bundles kubectl and kustomize. No separate install needed.
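Before starting, the table above can be checked with a small preflight helper (a sketch; `check_tools` is a hypothetical function, not part of the repo):

```shell
# Hypothetical preflight helper: confirm each required CLI is on PATH.
check_tools() {
  missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool (see install column above)" >&2
      missing=1
    fi
  done
  return "$missing"
}

check_tools kubectl kustomize helm kubeseal kubeconform \
  || echo "install the missing tools before bootstrapping" >&2
```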

Quick Setup (< 10 minutes)

# 1. Clone the repository
git clone https://github.com/hbasria/specops-orbstack-argocd.git && cd specops-orbstack-argocd

# 2. Start OrbStack Kubernetes (if not already running)
# → Enable Kubernetes in OrbStack Settings → Kubernetes, or:
orb start k8s

# 3. Run bootstrap
./scripts/bootstrap.sh

# 4. Access services
open https://argocd.k8s.orb.local     # ArgoCD UI
open https://grafana.k8s.orb.local    # Grafana dashboards

Access Credentials

| Service | URL | Username | Password |
| --- | --- | --- | --- |
| ArgoCD | https://argocd.k8s.orb.local | admin | `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' \| base64 -d` |
| Grafana | https://grafana.k8s.orb.local | admin | admin |
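For CLI access, the initial password can be captured into a variable and fed to `argocd login` (a sketch; assumes the `argocd` CLI is installed, e.g. via `brew install argocd`):

```shell
# Sketch: log in with the argocd CLI using the initial admin password.
# --insecure skips TLS verification, since the cert is issued by the local CA.
if ARGOCD_PW=$(kubectl -n argocd get secret argocd-initial-admin-secret \
      -o jsonpath='{.data.password}' 2>/dev/null | base64 -d) && [ -n "$ARGOCD_PW" ]; then
  argocd login argocd.k8s.orb.local --username admin --password "$ARGOCD_PW" --insecure
else
  echo "could not read initial admin secret (cluster unreachable or secret rotated)" >&2
fi
```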

Validation Scenarios

VS1: Fresh Bootstrap from Clean Cluster

Purpose: Verify end-to-end bootstrap on a fresh OrbStack Kubernetes cluster.

# Reset cluster (WARNING: deletes everything)
orb delete k8s && orb start k8s

# Wait for cluster to be ready
kubectl --context orbstack wait --for=condition=Ready node --all --timeout=120s

# Run bootstrap
./scripts/bootstrap.sh

# Verify all ArgoCD applications are synced and healthy
kubectl -n argocd get applications

Expected result: All 6 Applications (root + 5 children) show Synced and Healthy. Total time < 10 minutes.
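To make this check non-interactive, the status columns can be polled until everything converges (a sketch; `all_apps_ready` is a hypothetical helper, and the jsonpath assumes the standard Application `status.sync.status` / `status.health.status` fields):

```shell
# Succeeds when every "SYNC HEALTH" line on stdin reads "Synced Healthy".
# Note: empty input also passes, so guard for the no-apps case if needed.
all_apps_ready() {
  ! grep -qvE '^Synced Healthy$'
}

if kubectl -n argocd get applications >/dev/null 2>&1; then
  for attempt in $(seq 1 30); do
    if kubectl -n argocd get applications \
        -o jsonpath='{range .items[*]}{.status.sync.status} {.status.health.status}{"\n"}{end}' \
        | all_apps_ready; then
      echo "all applications Synced/Healthy"
      break
    fi
    echo "waiting for applications to converge ($attempt/30)..."
    sleep 10
  done
else
  echo "cluster unreachable; skipping wait" >&2
fi
```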

VS2: Idempotent Re-run

Purpose: Verify bootstrap script is safe to re-run.

# Run bootstrap again on an already-bootstrapped cluster
./scripts/bootstrap.sh

# Verify no errors, no changed resources
kubectl -n argocd get applications

Expected result: Script completes with exit 0. All Applications remain Synced/Healthy. Helm reports "no changes".

VS3: ArgoCD Self-Heal

Purpose: Verify ArgoCD reverts manual changes.

# Manually scale ingress-nginx to 0
kubectl -n ingress-nginx scale deployment ingress-nginx-controller --replicas=0

# Give ArgoCD time to detect and revert the drift (self-heal usually reacts
# within seconds; the default app refresh interval is up to 3 minutes, so
# re-run the check below if replicas are still 0)
sleep 30

# Check if ArgoCD restored replicas
kubectl -n ingress-nginx get deployment ingress-nginx-controller -o jsonpath='{.spec.replicas}'

Expected result: ArgoCD detects drift and restores the deployment to the Git-defined replica count.
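Instead of a fixed sleep, the replica count can be polled until self-heal kicks in (a sketch; `restored` is a hypothetical helper, checking roughly every 10s for up to 3 minutes):

```shell
# Succeeds when the reported replica count is a number greater than zero.
restored() {
  [ "${1:-0}" -gt 0 ] 2>/dev/null
}

for attempt in $(seq 1 18); do
  replicas=$(kubectl -n ingress-nginx get deployment ingress-nginx-controller \
    -o jsonpath='{.spec.replicas}' 2>/dev/null) || break  # cluster unreachable
  if restored "$replicas"; then
    echo "restored to $replicas replica(s) after ~$((attempt * 10))s"
    break
  fi
  sleep 10
done
```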

VS4: Certificate Issuance

Purpose: Verify cert-manager issues certificates via the CA ClusterIssuer.

# Check ClusterIssuer is ready
kubectl get clusterissuer local-ca-issuer -o jsonpath='{.status.conditions[0].status}'
# → True

# Verify ArgoCD TLS certificate was issued
kubectl -n argocd get certificate
kubectl -n argocd get secret argocd-server-tls -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -issuer -subject
# → Issuer: O = Local Development, CN = local-dev-ca

Expected result: ClusterIssuer ready, certificates issued by local-dev-ca.
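Optionally, the CA can be trusted in the macOS login keychain so browsers stop warning on the `*.k8s.orb.local` hosts (a sketch; it assumes the CA key pair lives in a secret named `local-ca` in the cert-manager namespace — adjust to however the bootstrap names it):

```shell
# Sketch: export the local CA cert and trust it in the login keychain.
# The secret name "local-ca" is an assumption; adjust to your setup.
kubectl -n cert-manager get secret local-ca \
  -o jsonpath='{.data.tls\.crt}' 2>/dev/null \
  | base64 -d > /tmp/local-dev-ca.crt || true

if [ -s /tmp/local-dev-ca.crt ]; then
  security add-trusted-cert -r trustRoot \
    -k "$HOME/Library/Keychains/login.keychain-db" /tmp/local-dev-ca.crt
fi
```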

VS5: Ingress Routing

Purpose: Verify ingress-nginx routes traffic via OrbStack DNS.

# Test ArgoCD UI
curl -sk https://argocd.k8s.orb.local | head -5
# → HTML content from ArgoCD UI

# Test Grafana
curl -sk -o /dev/null -w '%{http_code}' https://grafana.k8s.orb.local
# → 200 or 302

Expected result: Both endpoints are reachable without port-forwarding. OrbStack *.k8s.orb.local DNS resolves automatically.

VS6: Namespace Template Onboarding

Purpose: Verify a new project namespace can be created using the template.

# Create a new namespace from template
cd kubernetes/namespace-template
kustomize build overlays/local | kubectl apply -f -

# Verify resources created
kubectl get namespace <template-namespace>
kubectl get resourcequota -n <template-namespace>
kubectl get limitrange -n <template-namespace>
kubectl get networkpolicy -n <template-namespace>

Expected result: Namespace created with ResourceQuota, LimitRange, and NetworkPolicy resources. NetworkPolicy is present (but not enforced by Flannel CNI on OrbStack).

VS7: Monitoring Stack Health

Purpose: Verify Prometheus, Grafana, and exporters are operational.

# Check all monitoring pods
kubectl -n monitoring get pods

# Verify Prometheus is scraping targets
kubectl -n monitoring port-forward svc/prometheus-prometheus 9090:9090 &
curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets | length'
# → > 0 targets

# Verify Grafana dashboards load
curl -sk https://grafana.k8s.orb.local/api/health
# → {"commit":"...","database":"ok","version":"..."}

Expected result: All monitoring pods Running. Prometheus has active scrape targets. Grafana API returns database: ok.
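The scrape state can also be confirmed with a PromQL query over the same port-forward (a sketch; `sum(up)` counts targets currently reporting up):

```shell
# Sketch: query the Prometheus HTTP API through the port-forward above.
curl -s 'http://localhost:9090/api/v1/query?query=sum(up)' \
  | jq -r '.data.result[0].value[1]' \
  || echo "Prometheus unreachable (is the port-forward running?)" >&2
```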

VS8: Sealed Secrets Workflow

Purpose: Verify kubeseal can encrypt secrets for Git storage.

# Fetch the sealed-secrets public key
kubeseal --controller-name sealed-secrets \
  --controller-namespace sealed-secrets \
  --fetch-cert > /tmp/sealed-secrets-cert.pem

# Create and seal a test secret
kubectl create secret generic test-secret \
  --from-literal=password=supersecret \
  --dry-run=client -o yaml | \
kubeseal --cert /tmp/sealed-secrets-cert.pem \
  --format yaml > /tmp/sealed-secret.yaml

# Verify sealed secret can be applied
kubectl apply -f /tmp/sealed-secret.yaml
kubectl get secret test-secret -o jsonpath='{.data.password}' | base64 -d
# → supersecret

# Cleanup
kubectl delete sealedsecret test-secret
kubectl delete secret test-secret
rm /tmp/sealed-secrets-cert.pem /tmp/sealed-secret.yaml

Expected result: Secret encrypted by kubeseal, decrypted by controller, original value recovered.
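As a final guard before committing a sealed manifest to Git, it is worth confirming the plaintext never leaked into it (a sketch; `leaks_plaintext` is a hypothetical helper, run against the file produced by the sealing step above):

```shell
# Succeeds (bad!) if the plaintext value appears anywhere in the manifest.
leaks_plaintext() {  # usage: leaks_plaintext <file> <plaintext>
  grep -q -- "$2" "$1" 2>/dev/null
}

if leaks_plaintext /tmp/sealed-secret.yaml supersecret; then
  echo "DANGER: plaintext found in /tmp/sealed-secret.yaml" >&2
else
  echo "ok: plaintext not present in sealed manifest"
fi
```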

Manifest Validation (Pre-Deploy)

# Validate all Helm templates
for dir in argocd/helm-values/*/; do
  component=$(basename "$dir")
  echo "Validating $component..."
  # Render each component's chart with its pinned values and schema-check the
  # output; the chart reference differs per component, e.g.:
  #   helm template "$component" <chart-ref> -f "$dir/values.yaml" | kubeconform -strict
done

# Validate Kustomize builds
kustomize build kubernetes/namespace-template/overlays/local | kubeconform -strict
kustomize build kubernetes/apps/sample-app/overlays/local | kubeconform -strict

# Validate shell scripts
shellcheck scripts/*.sh
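These checks can be bundled into a pre-commit hook (a sketch; written to a sample path here — copy it to `.git/hooks/pre-commit` in your clone to activate it):

```shell
# Sketch: the validation steps above as a pre-commit hook.
# Install it with: cp /tmp/pre-commit-sample .git/hooks/pre-commit
hook=/tmp/pre-commit-sample
cat > "$hook" <<'EOF'
#!/bin/sh
set -e
kustomize build kubernetes/namespace-template/overlays/local | kubeconform -strict
kustomize build kubernetes/apps/sample-app/overlays/local | kubeconform -strict
shellcheck scripts/*.sh
EOF
chmod +x "$hook"
```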

Troubleshooting

| Symptom | Likely Cause | Fix |
| --- | --- | --- |
| `kubectl: connection refused` | OrbStack K8s not running | `orb start k8s` |
| ArgoCD app stuck OutOfSync | Git repo URL mismatch | Check `source.repoURL` in Application YAML |
| cert-manager pods CrashLoopBackOff | CRDs not installed | Ensure `crds.enabled: true` in cert-manager values |
| Ingress returning 404 | ingress-nginx not ready | Wait for controller pod, check `ingressClassName: nginx` |
| Grafana stuck Pending | PVC issue | Ensure `persistence.enabled: false` in values |
| `*.k8s.orb.local` not resolving | OrbStack DNS not active | Restart OrbStack: `orb restart` |
| Sealed secret not decrypting | Wrong controller namespace | Verify `--controller-namespace sealed-secrets` |
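When the symptom is unclear, a quick sweep of the platform namespaces narrows things down (a sketch, assuming the namespaces created by the bootstrap):

```shell
# Sketch: list any pod that is not Running/Completed in each platform namespace.
# Column 3 of `kubectl get pods` default output is STATUS.
for ns in argocd ingress-nginx cert-manager sealed-secrets monitoring; do
  echo "== $ns =="
  kubectl -n "$ns" get pods --no-headers 2>/dev/null \
    | awk '$3 != "Running" && $3 != "Completed" {print "  " $0}' \
    || true
done
```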

Full Cluster Reset

# Option 1: Delete and recreate cluster (clean slate)
orb delete k8s
orb start k8s
./scripts/bootstrap.sh

# Option 2: Uninstall ArgoCD only (keeps cluster)
helm uninstall argocd -n argocd
kubectl delete namespace argocd
./scripts/bootstrap.sh