Report
I am trying to build a Percona XtraDB Cluster with 3 replicas on an IPv6-only Kubernetes cluster. The cluster never reaches the synced state because state transfer fails.
At first the setup was not listening on the IPv6 sockets. After adjusting the configuration I was able to make the TCP ports listen on the IPv6 sockets.
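For reference, these are the MySQL-side adjustments from the pxc configuration in the full CR further down (the HAProxy frontends were similarly changed to bind [::]):

```ini
[mysqld]
# Listen on the IPv6 wildcard instead of the IPv4 default
bind_address=::
# Make the Galera group-communication port bind to IPv6 as well
wsrep_provider_options="gmcast.listen_addr=tcp://[::]:4567"
```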
root@node1:~# kubectl -n openstack get pods -o wide|grep perc
percona-xtradb-haproxy-0   2/2   Running            0                 10h   fd40:10::2c2   node3   <none>   <none>
percona-xtradb-haproxy-1   2/2   Running            0                 10h   fd40:10::190   node2   <none>   <none>
percona-xtradb-haproxy-2   2/2   Running            0                 10h   fd40:10::dd    node1   <none>   <none>
percona-xtradb-pxc-0       2/2   Running            0                 10h   fd40:10::26f   node3   <none>   <none>
percona-xtradb-pxc-1       1/2   CrashLoopBackOff   117 (2m50s ago)   10h   fd40:10::13    node1   <none>   <none>
root@node1:~#
root@node1:~# kubectl -n openstack get perconaxtradbcluster
NAME             ENDPOINT                           STATUS         PXC   PROXYSQL   HAPROXY   AGE
percona-xtradb   percona-xtradb-haproxy.openstack   initializing   1                3         10h
The pxc-0 pod has the correct wsrep_node_address configured:
root@node1:~# kubectl -n openstack exec -it percona-xtradb-pxc-0 -- egrep -Ri wsrep_node_address /etc/mysql
/etc/mysql/node.cnf:wsrep_node_address=fd40:10::26f
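As an aside, Galera address options generally expect IPv6 literals wrapped in square brackets so their colons are not parsed as a port separator, which may make the unbracketed value above worth double-checking. A small sketch of that formatting rule (the `galera_addr` helper is hypothetical, not part of the operator):

```python
import ipaddress


def galera_addr(host, port=None):
    """Format a host for Galera address options: IPv6 literals are
    bracketed so their colons are not mistaken for a port separator."""
    try:
        if isinstance(ipaddress.ip_address(host), ipaddress.IPv6Address):
            host = f"[{host}]"
    except ValueError:
        pass  # not an IP literal (e.g. a hostname): leave unchanged
    return f"{host}:{port}" if port is not None else host


print(galera_addr("fd40:10::26f", 4568))  # [fd40:10::26f]:4568
print(galera_addr("node1", 4567))         # node1:4567
```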
More about the problem
Using the configuration below, I get the following error:
2026-01-06T21:15:59.262692Z 2 [Warning] [MY-000000] [Galera] Failed to prepare for incremental state transfer: Failed to open IST listener at tcp://[AUTO]:4568', asio error 'Failed to listen: resolve: Host not found (authoritative): System error: 1 (Operation not permitted)
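If the underlying problem is that Galera cannot resolve the [AUTO] placeholder for the IST listener on an IPv6-only node, one untested avenue would be to pin the IST receiver address explicitly via the ist.recv_addr provider option. A sketch only; the address shown is pxc-1's pod IP from the listing above and would have to be templated per pod:

```ini
[mysqld]
# Hypothetical workaround (untested): pin the IST receiver address to the
# pod's own IPv6 address instead of letting Galera resolve [AUTO].
wsrep_provider_options="gmcast.listen_addr=tcp://[::]:4567;ist.recv_addr=[fd40:10::13]:4568"
```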
The full log is:
2026-01-06T21:15:57.937670Z 0 [Note] [MY-000000] [Galera] Flow-control interval: [141, 141]
2026-01-06T21:15:57.937690Z 0 [Note] [MY-000000] [Galera] Shifting OPEN -> PRIMARY (TO: 1232)
2026-01-06T21:15:57.937819Z 2 [Note] [MY-000000] [Galera] ####### processing CC 1232, local, ordered
2026-01-06T21:15:57.937886Z 2 [Note] [MY-000000] [Galera] Maybe drain monitors from -1 upto current CC event 1232 upto:-1
2026-01-06T21:15:57.937936Z 2 [Note] [MY-000000] [Galera] Drain monitors from -1 up to -1
2026-01-06T21:15:57.937979Z 2 [Note] [MY-000000] [Galera] Process first view: a1490146-e8c8-11f0-befb-978a5f67f81f my uuid: e451151f-eb44-11f0-b1ca-2f7dab2d9415
2026-01-06T21:15:57.938036Z 2 [Note] [MY-000000] [Galera] Server percona-xtradb-pxc-1 connected to cluster at position a1490146-e8c8-11f0-befb-978a5f67f81f:1232 with ID e451151f-eb44-11f0-b1ca-2f7dab2d9415
2026-01-06T21:15:57.938074Z 2 [Note] [MY-000000] [WSREP] Server status change disconnected -> connected
2026-01-06T21:15:57.960736Z 2 [Note] [MY-000000] [Galera] ####### My UUID: e451151f-eb44-11f0-b1ca-2f7dab2d9415
2026-01-06T21:15:57.960841Z 2 [Note] [MY-000000] [Galera] Cert index reset to 00000000-0000-0000-0000-000000000000:-1 (proto: 11), state transfer needed: yes
2026-01-06T21:15:57.960930Z 0 [Note] [MY-000000] [Galera] Service thread queue flushed.
2026-01-06T21:15:57.960985Z 2 [Note] [MY-000000] [Galera] ####### Assign initial position for certification: 00000000-0000-0000-0000-000000000000:-1, protocol version: -1
2026-01-06T21:15:57.961011Z 2 [Note] [MY-000000] [Galera] State transfer required:
Group state: a1490146-e8c8-11f0-befb-978a5f67f81f:1232
Local state: 00000000-0000-0000-0000-000000000000:-1
2026-01-06T21:15:57.961028Z 2 [Note] [MY-000000] [WSREP] Server status change connected -> joiner
2026-01-06T21:15:57.984331Z 0 [Note] [MY-000000] [WSREP] Initiating SST/IST transfer on JOINER side (wsrep_sst_xtrabackup-v2 --role 'joiner' --address '[AUTO]' --datadir '/var/lib/mysql/' --basedir '/usr/' --plugindir '/usr/lib64/mysql/plugin/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '1' --mysqld-version '8.0.42-33.1' --binlog 'binlog' )
2026-01-06T21:15:58.756574Z 0 [Warning] [MY-000000] [WSREP-SST] Found a stale sst_in_progress file: /var/lib/mysql//sst_in_progress
2026-01-06T21:15:59.252883Z 2 [Note] [MY-000000] [WSREP] Prepared SST request: xtrabackup-v2|[AUTO]:4444/xtrabackup_sst//1
2026-01-06T21:15:59.253049Z 2 [Note] [MY-000000] [Galera] Check if state gap can be serviced using IST
2026-01-06T21:15:59.253112Z 2 [Note] [MY-000000] [Galera] Local UUID: 00000000-0000-0000-0000-000000000000 != Group UUID: a1490146-e8c8-11f0-befb-978a5f67f81f
2026-01-06T21:15:59.253162Z 2 [Note] [MY-000000] [Galera] ####### IST uuid:00000000-0000-0000-0000-000000000000 f: 0, l: 1232, STRv: 3
2026-01-06T21:15:59.253474Z 2 [Note] [MY-000000] [Galera] IST receiver addr using tcp://[AUTO]:4568
2026-01-06T21:15:59.262549Z 2 [Note] [MY-000000] [Galera] State gap can't be serviced using IST. Switching to SST
2026-01-06T21:15:59.262692Z 2 [Warning] [MY-000000] [Galera] Failed to prepare for incremental state transfer: Failed to open IST listener at tcp://[AUTO]:4568', asio error 'Failed to listen: resolve: Host not found (authoritative): System error: 1 (Operation not permitted)
at ../../../../percona-xtradb-cluster-galera/galerautils/src/gu_asio_stream_react.cpp:listen():922'
at ../../../../percona-xtradb-cluster-galera/galera/src/ist.cpp:prepare():357. IST will be unavailable.
2026-01-06T21:15:59.263790Z 0 [Note] [MY-000000] [Galera] Member 1.0 (percona-xtradb-pxc-1) requested state transfer from '*any*'. Selected 0.0 (percona-xtradb-pxc-0)(SYNCED) as donor.
2026-01-06T21:15:59.263853Z 0 [Note] [MY-000000] [Galera] Shifting PRIMARY -> JOINER (TO: 1232)
2026-01-06T21:15:59.263907Z 2 [Note] [MY-000000] [Galera] Requesting state transfer: success, donor: 0
2026-01-06T21:15:59.263962Z 2 [Note] [MY-000000] [Galera] Resetting GCache seqno map due to different histories.
2026-01-06T21:15:59.264008Z 2 [Note] [MY-000000] [Galera] GCache history reset: a1490146-e8c8-11f0-befb-978a5f67f81f:0 -> a1490146-e8c8-11f0-befb-978a5f67f81f:1232
2026-01-06T21:15:59.265594Z 0 [Warning] [MY-000000] [Galera] 0.0 (percona-xtradb-pxc-0): State transfer to 1.0 (percona-xtradb-pxc-1) failed: No message of desired type
2026-01-06T21:15:59.265639Z 0 [ERROR] [MY-000000] [Galera] ../../../../percona-xtradb-cluster-galera/gcs/src/gcs_group.cpp:gcs_group_handle_join_msg():1334: Will never receive state. Need to abort.
2026-01-06T21:15:59.265669Z 0 [Note] [MY-000000] [Galera] gcomm: terminating thread
2026-01-06T21:15:59.265717Z 0 [Note] [MY-000000] [Galera] gcomm: joining thread
2026-01-06T21:15:59.265879Z 0 [Note] [MY-000000] [Galera] gcomm: closing backend
2026-01-06T21:16:00.274161Z 0 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node
view (view_id(NON_PRIM,ca5c544d-8a16,2)
memb {
e451151f-b1ca,0
}
joined {
}
left {
}
partitioned {
ca5c544d-8a16,0
}
)
2026-01-06T21:16:00.274320Z 0 [Note] [MY-000000] [Galera] (e451151f-b1ca, 'tcp://[::]:4567') turning message relay requesting off
2026-01-06T21:16:00.274365Z 0 [Note] [MY-000000] [Galera] PC protocol downgrade 1 -> 0
2026-01-06T21:16:00.274385Z 0 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node
view ((empty))
2026-01-06T21:16:00.274765Z 0 [Note] [MY-000000] [Galera] gcomm: closed
2026-01-06T21:16:00.274818Z 0 [Note] [MY-000000] [Galera] mysqld: Terminated.
2026-01-06T21:16:00.274834Z 0 [Note] [MY-000000] [WSREP] Initiating SST cancellation
2026-01-06T21:16:00.274843Z 0 [Note] [MY-000000] [WSREP] Terminating SST process
Steps to reproduce
I ended up using the following configuration:
kind: PerconaXtraDBCluster
metadata:
  creationTimestamp: "2026-01-03T17:21:02Z"
  generation: 11
  name: percona-xtradb
  namespace: openstack
  resourceVersion: "1170568"
  uid: 50985ad0-e528-4503-bd79-6f65a71cfef3
spec:
  allowUnsafeConfigurations: true
  backup:
    image: docker.io/percona/percona-xtrabackup:8.0.35-33.1
  crVersion: 1.18.0
  enableVolumeExpansion: true
  haproxy:
    configuration: |
      global
        log stdout format raw local0
        maxconn 8192
        external-check
        insecure-fork-wanted
        hard-stop-after 10s
        stats socket /etc/haproxy/pxc/haproxy.sock mode 600 expose-fd listeners level admin
      defaults
        no option dontlognull
        log-format '{"time":"%t", "client_ip": "%ci", "client_port":"%cp", "backend_source_ip": "%bi", "backend_source_port": "%bp", "frontend_name": "%ft", "backend_name": "%b", "server_name":"%s", "tw": "%Tw", "tc": "%Tc", "Tt": "%Tt", "bytes_read": "%B", "termination_state": "%ts", "actconn": "%ac", "feconn" :"%fc", "beconn": "%bc", "srv_conn": "%sc", "retries": "%rc", "srv_queue": "%sq", "backend_queue": "%bq" }'
        default-server init-addr last,libc,none
        log global
        mode tcp
        retries 10
        timeout client 28800s
        timeout connect 100500
        timeout server 28800s
      resolvers kubernetes
        parse-resolv-conf
      frontend galera-in
        bind [::]:3309 accept-proxy
        bind [::]:3306
        mode tcp
        option clitcpka
        default_backend galera-nodes
      frontend galera-admin-in
        bind [::]:33062
        mode tcp
        option clitcpka
        default_backend galera-admin-nodes
      frontend galera-replica-in
        bind [::]:3307
        mode tcp
        option clitcpka
        default_backend galera-replica-nodes
      frontend galera-mysqlx-in
        bind [::]:33060
        mode tcp
        option clitcpka
        default_backend galera-mysqlx-nodes
      frontend stats
        bind [::]:8404
        mode http
        http-request use-service prometheus-exporter if { path /metrics }
    enabled: true
    image: docker.io/percona/haproxy:2.8.17
    nodeSelector:
      openstack-control-plane: enabled
    size: 3
  pxc:
    autoRecovery: true
    configuration: |
      [mysqld]
      bind_address=::
      wsrep_node_address=[AUTO]
      wsrep_provider_options="gmcast.listen_addr=tcp://[::]:4567"
      max_connections=8192
      innodb_buffer_pool_size=4096M
      # Skip reverse DNS lookup of clients
      skip-name-resolve
      pxc_strict_mode=MASTER
    image: docker.io/percona/percona-xtradb-cluster:8.0.42-33.1
    livenessProbes:
      failureThreshold: 100
      timeoutSeconds: 60
    nodeSelector:
      openstack-control-plane: enabled
    sidecars:
      - args:
          - --mysqld.username=monitor
          - --collect.info_schema.processlist
        env:
          - name: MYSQLD_EXPORTER_PASSWORD
            valueFrom:
              secretKeyRef:
                key: monitor
                name: percona-xtradb
        image: quay.io/prometheus/mysqld-exporter:v0.17.0
        name: exporter
        ports:
          - containerPort: 9104
            name: metrics
            protocol: TCP
        readinessProbe:
          httpGet:
            path: /
            port: metrics
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 160Gi
  secretsName: percona-xtradb
status:
  backup: {}
  conditions:
    - lastTransitionTime: "2026-01-03T17:21:02Z"
      status: disabled
      type: tls
    - lastTransitionTime: "2026-01-03T17:21:03Z"
      status: "True"
      type: initializing
    - lastTransitionTime: "2026-01-03T17:23:26Z"
      status: "True"
      type: ready
    - lastTransitionTime: "2026-01-06T17:06:25Z"
      status: "True"
      type: initializing
    - lastTransitionTime: "2026-01-06T17:07:28Z"
      status: "True"
      type: ready
    - lastTransitionTime: "2026-01-06T17:09:59Z"
      status: "True"
      type: initializing
  haproxy:
    labelSelectorPath: app.kubernetes.io/component=haproxy,app.kubernetes.io/instance=percona-xtradb,app.kubernetes.io/managed-by=percona-xtradb-cluster-operator,app.kubernetes.io/name=percona-xtradb-cluster,app.kubernetes.io/part-of=percona-xtradb-cluster
    size: 3
    status: initializing
  host: percona-xtradb-haproxy.openstack
  logcollector: {}
  observedGeneration: 11
  pmm: {}
  proxysql: {}
  pxc:
    image: docker.io/percona/percona-xtradb-cluster:8.0.42-33.1
    labelSelectorPath: app.kubernetes.io/component=pxc,app.kubernetes.io/instance=percona-xtradb,app.kubernetes.io/managed-by=percona-xtradb-cluster-operator,app.kubernetes.io/name=percona-xtradb-cluster,app.kubernetes.io/part-of=percona-xtradb-cluster
    size: 3
    status: initializing
    version: 8.0.42-33.1
  ready: 0
  size: 6
  state: initializing
Versions
- Kubernetes: 1.35.0
- Operator: 1.18.0
- Database: 8.0.42-33.1
Anything else?
No response