## Proposal

Add a `trafficDistribution` field to the `ServiceExpose` struct so users can
configure `spec.trafficDistribution` on all operator-managed Services (HAProxy
primary, HAProxy replicas, PXC per-node, ProxySQL) via the CR and Helm values.

Kubernetes has supported `spec.trafficDistribution` on `v1/Service` since v1.30
(alpha); the field went GA in v1.33, and the `PreferSameZone` and
`PreferSameNode` values are beta and enabled by default since v1.34. The
operator's `ServiceExpose` struct already exposes `internalTrafficPolicy` and
`externalTrafficPolicy`; `trafficDistribution` follows the same pattern.
## Implementation

1. `pkg/apis/pxc/v1/pxc_types.go`:

   ```go
   type ServiceExpose struct {
       // ... existing fields
       TrafficDistribution *string `json:"trafficDistribution,omitempty"` // NEW
   }
   ```

2. `pkg/pxc/service.go`, in `NewServiceHAProxy()` and `NewServiceHAProxyReplicas()`
   (the replicas Service would read from `ExposeReplicas` instead of `ExposePrimary`):

   ```go
   if cr.CompareVersionWith("1.20.0") >= 0 {
       if cr.Spec.HAProxy != nil && cr.Spec.HAProxy.ExposePrimary.TrafficDistribution != nil {
           obj.Spec.TrafficDistribution = cr.Spec.HAProxy.ExposePrimary.TrafficDistribution
       }
   }
   ```
3. Helm `values.yaml`:

   ```yaml
   haproxy:
     exposePrimary:
       trafficDistribution: PreferSameNode
     exposeReplicas:
       trafficDistribution: PreferSameNode
   ```
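The copy logic in step 2 can be sketched with simplified stand-in types (these are not the operator's actual structs; `ServiceSpec` here stands in for `corev1.ServiceSpec`, and `applyTrafficDistribution` is a name invented for this sketch):

```go
package main

import "fmt"

// Simplified stand-ins for illustration only; the real types live in
// pkg/apis/pxc/v1 and k8s.io/api/core/v1.
type ServiceExpose struct {
	TrafficDistribution *string
}

type ServiceSpec struct {
	TrafficDistribution *string
}

// applyTrafficDistribution copies the field only when the user actually set
// it, leaving the Service spec untouched otherwise, so existing CRs that
// omit the field keep their current behavior.
func applyTrafficDistribution(expose *ServiceExpose, spec *ServiceSpec) {
	if expose == nil || expose.TrafficDistribution == nil {
		return
	}
	spec.TrafficDistribution = expose.TrafficDistribution
}

func main() {
	td := "PreferSameNode"
	set, unset := &ServiceSpec{}, &ServiceSpec{}

	applyTrafficDistribution(&ServiceExpose{TrafficDistribution: &td}, set)
	applyTrafficDistribution(&ServiceExpose{}, unset)

	fmt.Println(*set.TrafficDistribution)         // PreferSameNode
	fmt.Println(unset.TrafficDistribution == nil) // true
}
```

Keeping the field a `*string` means "unset" and "set to a value" stay distinguishable, which is why the nil guard alone is enough for backward compatibility.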
## CR Example

```yaml
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  name: mysql
spec:
  haproxy:
    enabled: true
    exposePrimary:
      trafficDistribution: PreferSameNode
    exposeReplicas:
      trafficDistribution: PreferSameNode
```
## Scope

| CR path | Service affected |
|---|---|
| `haproxy.exposePrimary.trafficDistribution` | `<cluster>-haproxy` |
| `haproxy.exposeReplicas.trafficDistribution` | `<cluster>-haproxy-replicas` |
| `pxc.expose.trafficDistribution` | `<cluster>-pxc-N` (per-node) |
| `proxysql.expose.trafficDistribution` | `<cluster>-proxysql` |
## Use Case

In latency-sensitive deployments where application pods (e.g. TYPO3 CMS),
HAProxy, and PXC are co-located on the same node via soft pod affinity rules,
`trafficDistribution: PreferSameNode` eliminates inter-node network hops for
database traffic.

Without `trafficDistribution`, even with perfect affinity-based co-location,
kube-proxy still round-robins across all HAProxy endpoints:

```
TYPO3 (node-A) → ClusterIP → kube-proxy round-robin → HAProxy (node-B) → PXC (node-C)
```

With `trafficDistribution: PreferSameNode`, kube-proxy prefers the local
HAProxy endpoint, falling back to a remote one only when no local endpoint
exists:

```
TYPO3 (node-A) → ClusterIP → kube-proxy prefers local → HAProxy (node-A) → PXC (node-A)
```

Two inter-node hops become zero. For a CMS portal generating hundreds of DB
queries per page render, this significantly reduces latency and network
overhead.
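The prefer-then-fall-back semantics can be illustrated with a toy model of endpoint selection (this models the documented behavior only; `Endpoint` and `pickEndpoints` are invented for this sketch and bear no relation to kube-proxy's actual implementation):

```go
package main

import "fmt"

// Endpoint is a hypothetical stand-in for an EndpointSlice entry.
type Endpoint struct {
	Pod  string
	Node string
}

// pickEndpoints models PreferSameNode semantics: route only to endpoints on
// the client's node when any exist, otherwise fall back to all endpoints.
// Traffic is never dropped; internalTrafficPolicy: Local, by contrast,
// would return nothing when no local endpoint exists.
func pickEndpoints(clientNode string, all []Endpoint) []Endpoint {
	var local []Endpoint
	for _, ep := range all {
		if ep.Node == clientNode {
			local = append(local, ep)
		}
	}
	if len(local) > 0 {
		return local
	}
	return all // soft fallback: remote endpoints still receive traffic
}

func main() {
	haproxy := []Endpoint{{"haproxy-0", "node-A"}, {"haproxy-1", "node-B"}}

	// A client on node-A is steered to the node-A endpoint only.
	fmt.Println(pickEndpoints("node-A", haproxy))

	// A client on a node with no HAProxy pod falls back to all endpoints.
	fmt.Println(len(pickEndpoints("node-C", haproxy))) // 2
}
```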
## Current workaround

We create a second, manually managed Service with the same selector and ports as
the operator-managed `<cluster>-haproxy` Service, adding `trafficDistribution`
ourselves:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-haproxy-local
spec:
  type: ClusterIP
  trafficDistribution: PreferSameNode
  selector:
    app.kubernetes.io/component: haproxy
    app.kubernetes.io/instance: mysql
    app.kubernetes.io/managed-by: percona-xtradb-cluster-operator
    app.kubernetes.io/name: percona-xtradb-cluster
    app.kubernetes.io/part-of: percona-xtradb-cluster
  ports:
    - name: mysql
      port: 3306
      targetPort: 3306
    # ... remaining ports duplicated from operator service
```
The application then connects to this custom Service instead of the
operator-managed one. This works, but it must be kept in sync manually: any port
or selector change made by the operator is not reflected automatically.
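Until the field is supported natively, the sync burden can at least be detected with a small drift check run periodically; a minimal sketch (the `selectorsMatch` helper and the diffing idea are ours, not part of the operator; a real check would fetch both Services via client-go and also compare ports):

```go
package main

import "fmt"

// selectorsMatch reports whether the manually managed Service still uses
// exactly the same label selector as the operator-managed one.
func selectorsMatch(operator, manual map[string]string) bool {
	if len(operator) != len(manual) {
		return false
	}
	for k, v := range operator {
		if manual[k] != v {
			return false
		}
	}
	return true
}

func main() {
	op := map[string]string{
		"app.kubernetes.io/component": "haproxy",
		"app.kubernetes.io/instance":  "mysql",
	}
	manual := map[string]string{
		"app.kubernetes.io/component": "haproxy",
		"app.kubernetes.io/instance":  "mysql",
	}
	fmt.Println(selectorsMatch(op, manual)) // true

	manual["app.kubernetes.io/instance"] = "mysql-old"
	fmt.Println(selectorsMatch(op, manual)) // false: the manual Service drifted
}
```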
## Is this a feature you are interested in implementing yourself?
No
## Anything else?

- `trafficDistribution` is soft by design: kube-proxy falls back to remote
  endpoints if no local endpoint is available. There is no risk of dropping
  traffic (unlike `internalTrafficPolicy: Local`).
- The field is a `*string` upstream, validated by the Kubernetes API server, so
  no custom validation is needed in the operator.
- Gating behind `cr.CompareVersionWith()` ensures backward compatibility per
  existing operator conventions.
- The operator already supports `internalTrafficPolicy` and
  `externalTrafficPolicy` on `ServiceExpose`; this is the same pattern.