## Summary

When running this chart on a multi-node Kubernetes cluster with a ReadWriteOnce (RWO) PVC, enabling `worker` (and/or `webhook`) can lead to a Multi-Attach error, because multiple pods try to mount/attach the same PVC while being scheduled on different nodes. This leaves pods stuck in `Pending` with events like:

```
Multi-Attach error for volume "pvc-..." Volume is already used by pod(s) ...
```
## Environment
- Chart version: 2.0.1
- n8n version/image: latest
- Kubernetes version: v1.28.8
- StorageClass / CSI driver: hetzner-cloud
- PVC access mode: ReadWriteOnce (RWO)
- Cluster: multi-node (pods can be scheduled across different nodes)
## Values (relevant parts)

Example configuration (simplified):

```yaml
main:
  persistence:
    enabled: true
    # uses an RWO StorageClass

worker:
  enabled: true
  replicaCount: 1

# webhook:
#   enabled: true
```
## Steps to reproduce

- Install the chart with `main.persistence.enabled=true` using an RWO StorageClass.
- Enable `worker` (and/or `webhook`) so that additional pods are created.
- Let the scheduler place the pods on different nodes (the default behavior in multi-node clusters).
- Observe the worker/webhook pod stuck in `Pending`, with Multi-Attach errors in its events.
## Actual behavior

- The n8n main pod mounts the PVC on node A
- The `n8n-worker` (or webhook) pod is scheduled on node B
- Kubernetes/CSI attempts to attach the same RWO volume to node B → fails with Multi-Attach
- The worker/webhook pod stays `Pending`
## Expected behavior

One of the following (depending on chart intent):

- The chart should not mount the same PVC into worker/webhook by default, OR
- The chart should support separate PVCs per component (main/worker/webhook), OR
- The chart should clearly document that enabling worker/webhook together with `main.persistence.enabled` requires ReadWriteMany (RWX) storage or single-node scheduling constraints, OR
- The chart should provide an option to configure `podAffinity`/`nodeAffinity` so that all pods mounting the same RWO PVC land on the same node.
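For illustration, the last option could look roughly like the snippet below. This is a sketch only: the chart does not currently expose such a value, and the `app.kubernetes.io/name: n8n` label selector is an assumption about how the main pod is labeled.

```yaml
# Hypothetical affinity block for the worker/webhook pod template:
# require scheduling onto the same node as the main n8n pod, so the
# RWO PVC only ever needs to be attached to one node.
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: n8n   # assumed label on the main pod
        topologyKey: kubernetes.io/hostname
```

Note that hard co-location like this trades away scheduling flexibility; it only works as long as the target node has capacity for all co-located pods.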
## Why this matters
In queue mode, main/worker/webhook often run as separate deployments and can be scheduled across nodes. If the chart shares the same persistence claim across components, RWO becomes a frequent footgun on multi-node clusters.
## Suggested improvement

- Add values to configure separate persistence for `worker` and `webhook` (or disable the volume mounts there)
- Alternatively, add an explicit guard/validation/warning when:
  - `main.persistence.enabled=true`
  - worker/webhook are enabled
  - the access mode is RWO (or RWX is not configured)
- Improve documentation for queue-mode deployments regarding persistence and storage recommendations.
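A possible shape for per-component persistence values is sketched below. The key names are illustrative only and are not part of the current chart schema:

```yaml
# Hypothetical values layout: each component controls its own
# persistence instead of implicitly sharing main's RWO PVC.
worker:
  enabled: true
  persistence:
    enabled: false      # do not mount main's PVC into worker pods

webhook:
  enabled: true
  persistence:
    enabled: false
```

Defaulting these to `false` would make multi-node scheduling safe out of the box, while still letting RWX users opt in to a shared volume.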
## Additional context
I can provide full values.yaml, pod manifests, and event logs if needed.