
K8SPSMDB-1606: Add CA key persistence for manual TLS to support safe cert re-signing on SAN changes #2277

Open
myJamong wants to merge 4 commits into percona:main from myJamong:NO-TICKET-YET-manual-tls-ca-persistence

Conversation


@myJamong myJamong commented Mar 9, 2026

CHANGE DESCRIPTION

Problem:
When cert-manager is not installed, the operator's manual TLS path has two issues: (1) it never detects SAN changes (e.g., when splitHorizons are added), because it skips reconciliation if TLS secrets already exist, and (2) if secrets are manually deleted to force regeneration, a completely new CA is generated without merging with the old one, causing TLS verification failures during SmartUpdate rolling restarts.

I also filed an issue for this: #2278

Cause:
The manual TLS code path (createSSLManually) only creates secrets when they don't exist and returns immediately if they do — there is no SAN change detection. Additionally, each call to tls.Issue() generates an independent CA, meaning ssl and ssl-internal use different CAs, and any regeneration produces a CA that existing pods cannot trust.

Solution:
Introduce a persistent CA secret ({name}-ca-cert) for manual TLS management, mirroring the cert-manager CA secret structure. The CA key is preserved so that when SANs change (e.g., splitHorizon additions), the operator re-signs TLS certificates using the same CA — no CA merge is needed and rolling restarts are safe. Key changes:

  • IssueCA() / IssueWithCA(): Split certificate generation so CA and TLS certs can be managed independently.
  • getOrCreateManualCA(): Creates or reuses the persistent CA secret.
  • needsManualSSLUpdate(): Detects SAN changes by comparing the current certificate's SANs with expected SANs.
  • updateSSLManually(): Re-signs TLS certs with the existing CA when SANs change.
  • Both ssl and ssl-internal now share the same CA (consistent with cert-manager behavior).

CHECKLIST

Jira

  • Is the Jira ticket created and referenced properly?
  • Does the Jira ticket have the proper statuses for documentation (Needs Doc) and QA (Needs QA)?
  • Does the Jira ticket link to the proper milestone (Fix Version field)?

Tests

  • Is an E2E test/test case added for the new feature/change?
  • Are unit tests added where appropriate?
  • Are OpenShift compare files changed for E2E tests (compare/*-oc.yml)?

Config/Logging/Testability

  • Are all needed new/changed options added to default YAML files?
  • Are all needed new/changed options added to the Helm Chart?
  • Did we add proper logging messages for operator actions?
  • Did we ensure compatibility with the previous version or cluster upgrade process?
  • Does the change support oldest and newest supported MongoDB version?
  • Does the change support oldest and newest supported Kubernetes version?

@pull-request-size pull-request-size bot added the size/XL 500-999 lines label Mar 9, 2026
@egegunes egegunes changed the title Add CA key persistence for manual TLS to support safe cert re-signing on SAN changes K8SPSMDB-1606: Add CA key persistence for manual TLS to support safe cert re-signing on SAN changes Mar 9, 2026
@egegunes egegunes added this to the v1.23.0 milestone Mar 9, 2026
Comment on lines 459 to 461
if cr.CompareVersion("1.17.0") < 0 {
secretObj.Labels = nil
caLabels = nil
}
Contributor

this condition can be removed

Author

Removed the condition - 0a1aa64

It seems like this line slipped in by accident.

Comment on lines +477 to +479
if cr.CompareVersion("1.17.0") < 0 {
secretLabels = nil
}
Contributor

we don't need this condition

Author

Removed the condition - 0a1aa64

It seems like this line slipped in by accident.

@JNKPercona
Collaborator

Test Name Result Time
arbiter passed 00:11:26
balancer passed 00:18:50
cross-site-sharded failure 00:10:39
custom-replset-name passed 00:10:37
custom-tls passed 00:14:02
custom-users-roles passed 00:10:14
custom-users-roles-sharded passed 00:11:29
data-at-rest-encryption failure 00:16:15
data-sharded passed 00:24:06
demand-backup passed 00:17:06
demand-backup-eks-credentials-irsa passed 00:00:08
demand-backup-fs passed 00:24:47
demand-backup-if-unhealthy failure 00:09:17
demand-backup-incremental-aws failure 00:10:53
demand-backup-incremental-azure passed 00:14:05
demand-backup-incremental-gcp-native passed 00:13:51
demand-backup-incremental-gcp-s3 passed 00:11:27
demand-backup-incremental-minio passed 00:25:20
demand-backup-incremental-sharded-aws passed 00:18:41
demand-backup-incremental-sharded-azure passed 00:17:50
demand-backup-incremental-sharded-gcp-native passed 00:17:43
demand-backup-incremental-sharded-gcp-s3 passed 00:24:12
demand-backup-incremental-sharded-minio passed 00:29:50
demand-backup-physical-parallel passed 00:08:19
demand-backup-physical-aws passed 00:12:20
demand-backup-physical-azure passed 00:14:50
demand-backup-physical-gcp-s3 passed 00:11:43
demand-backup-physical-gcp-native passed 00:11:41
demand-backup-physical-minio failure 00:08:32
demand-backup-physical-minio-native passed 00:26:24
demand-backup-physical-minio-native-tls passed 00:21:31
demand-backup-physical-sharded-parallel passed 00:11:39
demand-backup-physical-sharded-aws passed 00:18:31
demand-backup-physical-sharded-azure passed 00:17:39
demand-backup-physical-sharded-gcp-native passed 00:17:25
demand-backup-physical-sharded-minio failure 00:17:23
demand-backup-physical-sharded-minio-native passed 00:21:53
demand-backup-sharded passed 00:26:13
disabled-auth passed 00:16:46
expose-sharded passed 00:36:27
finalizer passed 00:10:36
ignore-labels-annotations passed 00:07:59
init-deploy failure 00:04:26
ldap passed 00:09:10
ldap-tls passed 00:15:41
limits passed 00:07:06
liveness passed 00:11:38
mongod-major-upgrade failure 00:13:20
mongod-major-upgrade-sharded passed 00:59:46
monitoring-2-0 passed 00:27:24
monitoring-pmm3 passed 00:27:33
multi-cluster-service passed 00:14:46
multi-storage passed 00:18:38
non-voting-and-hidden passed 00:17:27
one-pod passed 00:08:12
operator-self-healing-chaos passed 00:12:49
pitr passed 00:31:40
pitr-physical passed 01:14:12
pitr-sharded passed 00:26:12
pitr-to-new-cluster passed 00:26:38
pitr-physical-backup-source failure 00:36:50
preinit-updates passed 00:05:03
pvc-auto-resize passed 00:12:12
pvc-resize passed 00:16:50
recover-no-primary failure 00:17:08
replset-overrides passed 00:18:45
replset-remapping passed 00:16:49
replset-remapping-sharded passed 00:17:24
rs-shard-migration passed 00:14:27
scaling passed 00:11:37
scheduled-backup passed 00:17:50
security-context passed 00:08:59
self-healing-chaos failure 00:13:56
service-per-pod passed 00:18:46
serviceless-external-nodes passed 00:07:19
smart-update passed 00:08:27
split-horizon passed 00:13:29
split-horizon-manual-tls passed 00:11:43
stable-resource-version passed 00:04:56
storage passed 00:07:25
tls-issue-cert-manager failure 00:03:52
unsafe-psa passed 00:07:51
upgrade passed 00:10:22
upgrade-consistency passed 00:08:48
upgrade-consistency-sharded-tls passed 00:59:46
upgrade-sharded passed 00:18:59
upgrade-partial-backup failure 00:18:36
users passed 00:17:46
users-vault passed 00:58:21
version-service failure 00:32:25
Summary Value
Tests Run 90/90
Job Duration 03:55:53
Total Test Time 26:44:43

commit: 0a1aa64
image: perconalab/percona-server-mongodb-operator:PR-2277-0a1aa642f

