Feature Branch: 001-orbstack-argocd-bootstrap
Created: 2026-02-10
Status: Draft
Input: User description (translated from Turkish): "Check the local Kubernetes installation, activate ArgoCD, and make Kubernetes ready for application deployment."
- Q: Which ArgoCD version to use? → A: ArgoCD 3.1.x (user specified, overrides constitution default of 2.13.x)
- Target machine is macOS with Apple Silicon (M1/M2/M3/M4)
- OrbStack is the chosen local Kubernetes runtime (per constitution)
- Single-node cluster is sufficient for local development
- OrbStack may or may not already be installed — bootstrap must handle both cases
- No cloud provider resources required — everything runs locally
- User has Homebrew available for installing CLI tools
- Network access available for pulling Helm charts and container images
- ArgoCD 3.1.x with auto-sync and self-heal enabled (local development)
- Ingress via NGINX Ingress Controller with `*.k8s.orb.local` DNS (OrbStack-managed)
- Monitoring via kube-prometheus-stack (Prometheus + Grafana)
- Secrets via Sealed Secrets controller
Verify that OrbStack is installed and its built-in Kubernetes cluster is running on the local macOS ARM machine. Ensure all required CLI tools (kubectl, helm, kustomize, kubeseal, kubeconform) are installed and functional. If OrbStack or any CLI tool is missing, the bootstrap process must install them via Homebrew. The cluster must be accessible via kubectl with the OrbStack context and all system pods must be healthy.
Why this priority: Without a running Kubernetes cluster and proper CLI tooling, no subsequent infrastructure can be deployed. This is the absolute foundation.
Independent Validation: Run kubectl get nodes and confirm node is Ready. Run kubectl get pods -n kube-system and confirm all system pods are Running. Verify each CLI tool with --version checks.
Acceptance Criteria:
- Given a macOS ARM machine (with or without OrbStack), When the bootstrap prerequisites script is run, Then OrbStack is installed, Kubernetes is enabled, and the node reports `Ready` status within 60 seconds
- Given OrbStack Kubernetes is running, When `kubectl get pods -n kube-system` is executed, Then all system pods (CoreDNS, kube-proxy, etc.) are in `Running` state
- Given a fresh macOS environment, When the prerequisites script is run, Then `kubectl`, `helm`, `kustomize`, `kubeseal`, and `kubeconform` are all available in `$PATH` and return valid version output
- Given OrbStack Kubernetes is running, When `kubectl cluster-info` is executed, Then the Kubernetes control plane endpoint is reachable
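The tool checks above can be sketched as a small idempotent shell helper. Function and variable names here are illustrative, not taken from the actual bootstrap script, and the Homebrew formula names are assumed to match the binary names (which holds for these five tools):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the CLI-tool verification step.
set -euo pipefail

REQUIRED_TOOLS=(kubectl helm kustomize kubeseal kubeconform)

# Print each tool from "$@" that is not found on $PATH, one per line.
missing_tools() {
  local tool
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || printf '%s\n' "$tool"
  done
}

# Install anything missing via Homebrew; a no-op when everything is present,
# which keeps re-runs safe (FR-006).
ensure_tools() {
  local missing
  missing=$(missing_tools "${REQUIRED_TOOLS[@]}")
  if [ -n "$missing" ]; then
    echo "Installing via Homebrew: $missing"
    # shellcheck disable=SC2086
    brew install $missing
  fi
}
```

In practice the bootstrap would call `ensure_tools` before touching the cluster, then confirm each binary with a `--version` check as described in the validation steps above.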
Install ArgoCD into the local OrbStack cluster using a Helm chart and configure it as the GitOps controller. ArgoCD must be accessible via port-forward or ingress at argocd.k8s.orb.local. An App-of-Apps root Application must be created that points to this Git repository, enabling ArgoCD to manage all subsequent infrastructure and application deployments declaratively.
Why this priority: ArgoCD is the control plane for all GitOps-driven deployments. Without it, the cluster cannot self-manage infrastructure from Git. It depends on Scenario 1 (cluster running).
Independent Validation: Access ArgoCD UI via kubectl port-forward svc/argocd-server -n argocd 8080:443. Log in with the initial admin password. Verify ArgoCD server, repo-server, application-controller, and redis pods are healthy. Confirm the root App-of-Apps Application is synced.
Acceptance Criteria:
- Given a running OrbStack Kubernetes cluster (from IS1), When the ArgoCD bootstrap script is executed, Then ArgoCD is deployed in the `argocd` namespace with all components (server, repo-server, application-controller, redis, dex) in `Running` state
- Given ArgoCD is installed, When the admin password is retrieved from the `argocd-initial-admin-secret`, Then the user can log in to the ArgoCD UI successfully
- Given ArgoCD is running, When the root App-of-Apps Application is created, Then ArgoCD detects the Git repository and reports the application as `Synced` and `Healthy`
- Given ArgoCD is configured, When a new ArgoCD Application manifest is committed to Git, Then ArgoCD automatically syncs and deploys the resources within 3 minutes
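A root App-of-Apps Application along these lines would satisfy the criteria above. The `repoURL` and `path` are placeholders, not values from this spec:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/your/repo.git  # placeholder -- this Git repository
    targetRevision: main
    path: apps                                  # placeholder -- directory of child Applications
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true   # auto-sync and self-heal, as required for local development
```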
Deploy the core infrastructure components that make the cluster ready for application workloads: NGINX Ingress Controller, cert-manager (with self-signed ClusterIssuer), Sealed Secrets controller, and kube-prometheus-stack (Prometheus + Grafana). All components must be deployed via ArgoCD Applications pointing to Helm charts, following the constitution's Helm-first deployment strategy.
Why this priority: These components provide the foundational services (ingress routing, TLS, secret management, monitoring) that any application deployment requires. Depends on Scenario 2 (ArgoCD running).
Independent Validation: Verify each component independently: kubectl get pods -n ingress-nginx (ingress), kubectl get pods -n cert-manager (certs), kubectl get pods -n sealed-secrets (secrets), kubectl get pods -n monitoring (monitoring stack). Test ingress by deploying a httpbin service and reaching it via httpbin.k8s.orb.local. Verify Grafana is accessible at grafana.k8s.orb.local or port-forward.
Acceptance Criteria:
- Given ArgoCD is running (from IS2), When infrastructure ArgoCD Applications are synced, Then NGINX Ingress Controller pods are `Running` in the `ingress-nginx` namespace and the ingress class `nginx` is available
- Given NGINX Ingress is running, When an Ingress resource is created with host `test.k8s.orb.local`, Then the service is reachable via HTTP at `http://test.k8s.orb.local`
- Given ArgoCD syncs the cert-manager Application, When cert-manager pods are running, Then a self-signed ClusterIssuer is available and can issue certificates
- Given ArgoCD syncs the Sealed Secrets Application, When `kubeseal` encrypts a secret, Then the SealedSecret is decrypted in-cluster to a valid Kubernetes Secret
- Given ArgoCD syncs the kube-prometheus-stack Application, When Prometheus and Grafana pods are running in the `monitoring` namespace, Then Grafana is accessible and shows the cluster overview dashboard with live metrics
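Each infrastructure component would be declared as an ArgoCD Application against its upstream Helm repository, with the chart version pinned per FR-007. A sketch for ingress-nginx (the `targetRevision` shown is illustrative; pin whichever concrete chart version this repo standardizes on):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ingress-nginx
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://kubernetes.github.io/ingress-nginx
    chart: ingress-nginx
    targetRevision: 4.12.1   # example pin only -- FR-007 requires a fixed version
  destination:
    server: https://kubernetes.default.svc
    namespace: ingress-nginx   # dedicated namespace per FR-008
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```

Values overrides would live in Git alongside this manifest, per FR-014.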
Create a namespace template structure with default ResourceQuotas, LimitRanges, and NetworkPolicies (default deny-all ingress, allow from ingress-nginx and monitoring). Provide a sample application deployment to validate the full end-to-end flow: Git commit → ArgoCD sync → application running → accessible via ingress. This proves the cluster is fully ready for application onboarding.
Why this priority: This validates the entire pipeline from Git to running application. It is the "acceptance test" for the entire bootstrap. Depends on Scenario 3 (infrastructure components).
Independent Validation: Create a new namespace using the template. Deploy a sample nginx application via Kustomize through ArgoCD. Verify the pod runs, the service is reachable via ingress, metrics appear in Prometheus, and NetworkPolicies are enforced.
Acceptance Criteria:
- Given infrastructure components are deployed (from IS3), When a new project namespace is created using the template, Then ResourceQuota (2 CPU / 4Gi memory), LimitRange (default 100m/128Mi), and deny-all NetworkPolicy are automatically applied
- Given a templated namespace exists, When a sample application is deployed via an ArgoCD Kustomize Application, Then the application pod starts within the resource limits and reaches `Running` state
- Given a sample application is running, When the application's Ingress is configured with host `sample.k8s.orb.local`, Then the application is reachable via `http://sample.k8s.orb.local`
- Given the monitoring stack is running, When a sample application is deployed, Then application pod metrics appear in Prometheus within 2 minutes
- Given deny-all NetworkPolicy is applied, When a pod in one namespace tries to reach a pod in another namespace, Then the connection is blocked
Note: OrbStack uses Flannel CNI which does NOT enforce NetworkPolicies. NetworkPolicy manifests are created for portability to production clusters but will have no effect on OrbStack. SC-010 is not testable on OrbStack.
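The namespace template could be sketched as three manifests, using the figures from the acceptance criteria (resource names are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-quota
spec:
  hard:
    requests.cpu: "2"        # 2 CPU per namespace
    requests.memory: 4Gi     # 4Gi memory per namespace
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
    - type: Container
      default:               # applied when a container sets no limits
        cpu: 100m
        memory: 128Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}            # all pods in the namespace
  policyTypes:
    - Ingress                # deny all ingress by default
```

Separate allow policies for traffic from `ingress-nginx` and `monitoring` would accompany the deny-all policy; as noted above, none of these are enforced by OrbStack's Flannel CNI.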
- What happens when OrbStack Kubernetes is already running but in a degraded state (some system pods crashing)? The bootstrap script must detect unhealthy system pods, log warnings, and attempt to restart them via `orb restart k8s` before continuing.
- What happens when ArgoCD is already partially installed (e.g., from a previous failed bootstrap)? The Helm install must be idempotent — the `helm upgrade --install` pattern ensures no errors on re-run.
- How does the system handle insufficient local machine resources (low CPU/memory)? The bootstrap script checks available resources and warns if below recommended thresholds (4 CPU, 8 GiB RAM) but does not block.
- What happens when container images cannot be pulled (no network)? ArgoCD sync will report `Degraded` status. The user must restore network connectivity and ArgoCD will auto-retry.
- What happens when the user already has a different Kubernetes context active (Docker Desktop, minikube)? The bootstrap script must explicitly switch to the OrbStack context (`kubectl config use-context orbstack`) and verify before proceeding.
- What happens when Homebrew is not installed? The prerequisites script must check for Homebrew and provide a clear error message with installation instructions if missing.
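Two of these edge cases (the resource-threshold warning and the idempotent re-run) can be sketched as shell helpers. The function names are illustrative; `install_argocd` assumes the upstream `argo-helm` chart repository:

```shell
#!/usr/bin/env bash
# Illustrative edge-case handlers for the bootstrap script.
set -euo pipefail

# Warn -- but do not block -- when the machine is below the recommended
# 4 CPU / 8 GiB thresholds.
check_resources() {
  local cpus="$1" mem_gib="$2"
  if [ "$cpus" -lt 4 ] || [ "$mem_gib" -lt 8 ]; then
    echo "WARN: ${cpus} CPU / ${mem_gib} GiB is below the recommended 4 CPU / 8 GiB"
  else
    echo "OK: ${cpus} CPU / ${mem_gib} GiB"
  fi
}

# Idempotent ArgoCD install: `helm upgrade --install` succeeds whether or
# not a previous (even partially failed) release exists, and the context
# switch guards against an active Docker Desktop / minikube context.
install_argocd() {
  kubectl config use-context orbstack
  helm repo add argo https://argoproj.github.io/argo-helm
  helm upgrade --install argocd argo/argo-cd \
    --namespace argocd --create-namespace
}

# On macOS the inputs would come from sysctl, e.g.:
#   check_resources "$(sysctl -n hw.ncpu)" "$(( $(sysctl -n hw.memsize) / 1024 ** 3 ))"
```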
- FR-001: Bootstrap script MUST verify OrbStack installation and enable Kubernetes if not already active
- FR-002: Bootstrap script MUST install all required CLI tools (`kubectl`, `helm`, `kustomize`, `kubeseal`, `kubeconform`) via Homebrew if not present
- FR-003: Bootstrap script MUST switch the kubectl context to OrbStack and verify cluster connectivity
- FR-004: Bootstrap script MUST install ArgoCD via Helm chart into the `argocd` namespace with auto-sync and self-heal enabled
- FR-005: Bootstrap script MUST create a root App-of-Apps ArgoCD Application pointing to this Git repository
- FR-006: Bootstrap script MUST be idempotent — safe to run multiple times without errors or side effects
- FR-007: Infrastructure components (ingress-nginx, cert-manager, sealed-secrets, kube-prometheus-stack) MUST be deployed as ArgoCD Applications with pinned Helm chart versions
- FR-008: Each infrastructure component MUST run in its own dedicated namespace
- FR-009: A namespace template MUST provide default ResourceQuota, LimitRange, and NetworkPolicy for new project namespaces
- FR-010: Ingress MUST support `*.k8s.orb.local` domain routing via OrbStack's built-in DNS
- FR-011: Grafana MUST be accessible via port-forward or ingress and display cluster-level dashboards
- FR-012: Sealed Secrets controller MUST be operational and `kubeseal` MUST be able to encrypt secrets
- FR-013: cert-manager MUST have a self-signed ClusterIssuer configured for local TLS
- FR-014: All Helm values overrides MUST be stored in Git under the repository's values structure
- FR-015: Bootstrap script MUST output access information (ArgoCD URL, admin password retrieval command, Grafana URL) upon completion
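The FR-015 completion summary might look like the following sketch. The hostnames follow the `*.k8s.orb.local` convention used throughout this spec, and the password-retrieval command is the standard ArgoCD initial-admin-secret lookup:

```shell
#!/usr/bin/env bash
# Illustrative end-of-bootstrap summary (FR-015).
print_access_info() {
  cat <<'EOF'
Bootstrap complete.

ArgoCD UI:      https://argocd.k8s.orb.local
Admin password: kubectl -n argocd get secret argocd-initial-admin-secret \
                  -o jsonpath='{.data.password}' | base64 -d
Grafana:        http://grafana.k8s.orb.local
EOF
}

print_access_info
```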
Required Components (all via ArgoCD + Helm, per constitution):
| Component | Namespace | Helm Chart | Purpose |
|---|---|---|---|
| ArgoCD | argocd | argo-cd | GitOps controller |
| NGINX Ingress | ingress-nginx | ingress-nginx | Ingress routing |
| cert-manager | cert-manager | cert-manager | TLS certificates |
| Sealed Secrets | sealed-secrets | sealed-secrets | Git-safe secrets |
| kube-prometheus-stack | monitoring | kube-prometheus-stack | Monitoring & alerting |
Custom Applications: None in this feature (infrastructure-only). Sample app for validation only.
- OrbStack Kubernetes: Single-node local cluster providing the compute platform. Managed entirely by OrbStack — no manual kubeadm or node configuration required.
- ArgoCD: GitOps controller that reconciles desired state from this Git repository to the cluster. Deployed via Helm as the bootstrap entry point. All subsequent components deployed through ArgoCD.
- App-of-Apps Root Application: Single ArgoCD Application that manages all child Application definitions. Enables declarative management of the entire infrastructure stack.
- NGINX Ingress Controller: Handles HTTP/HTTPS routing from `*.k8s.orb.local` domains to cluster services. OrbStack's DNS automatically resolves `*.k8s.orb.local` to the cluster.
- cert-manager: Manages TLS certificates. Configured with a self-signed ClusterIssuer for local development (no external CA needed).
- Sealed Secrets: Enables committing encrypted secrets to Git. The controller decrypts SealedSecrets into Kubernetes Secrets in-cluster.
- kube-prometheus-stack: Provides Prometheus (metrics collection), Grafana (visualization), and AlertManager (alerting). Includes pre-built dashboards for cluster and workload monitoring.
- Namespace Template: A reusable set of ResourceQuota, LimitRange, and NetworkPolicy manifests that enforce resource limits and network isolation for each project namespace.
- SC-001: Full bootstrap (from prerequisites to all components synced) completes in under 10 minutes on a machine with 8 GiB RAM and 4 CPU cores
- SC-002: All OrbStack Kubernetes system pods reach `Running` state within 60 seconds of cluster start
- SC-003: ArgoCD UI is accessible and shows all infrastructure Applications as `Synced` and `Healthy`
- SC-004: NGINX Ingress Controller routes traffic to services via `*.k8s.orb.local` domains successfully
- SC-005: Grafana displays live cluster metrics (CPU, memory, pod count) within 2 minutes of monitoring stack deployment
- SC-006: A sample application deployed via ArgoCD is reachable via ingress within 3 minutes of Git commit
- SC-007: Bootstrap script is idempotent — running it twice produces no errors and no duplicated resources
- SC-008: Sealed Secrets workflow (encrypt → commit → decrypt) completes successfully end-to-end
- SC-009: ResourceQuota enforcement prevents a namespace from exceeding its allocated CPU/memory limits
- SC-010: NetworkPolicy manifests are defined and applied in each namespace (enforcement depends on CNI — OrbStack/Flannel does not enforce; policies exist for portability)