Observed behavior
When creating a KV mirror, it's impossible to enable per-message TTL support (AllowMsgTTL), even when the source KV bucket has this feature enabled. This creates a significant issue where:
- Source KV bucket can automatically clean up expired messages and tombstone markers
- Mirror KV bucket accumulates these expired messages indefinitely, requiring manual cleanup
The root cause is that mirrors cannot set the --marker-ttl flag (which enables SubjectDeleteMarkerTTL), and without this configuration, per-message TTL processing cannot be enabled on the mirror stream.
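At the stream level, the two settings involved map to the AllowMsgTTL and SubjectDeleteMarkerTTL fields of the JetStream stream configuration, the same fields that appear in the api.StreamConfig diff in the workaround below. As a fragment (field names taken from that diff; all surrounding fields omitted):

```go
// Fragment of the JetStream stream configuration (api.StreamConfig), not a
// complete config. AllowMsgTTL enables per-message TTL processing; setting
// SubjectDeleteMarkerTTL (the --marker-ttl flag) is what the server rejects
// on mirrors with err_code=10052.
api.StreamConfig{
	// ...
	AllowMsgTTL:            true,
	SubjectDeleteMarkerTTL: 10 * time.Second,
	// ...
}
```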
Step-by-Step Reproduction
Step 1: Create Source KV with Per-Message TTL Support
❯ nats kv add demo --js-domain hub --marker-ttl=10s
Information for Key-Value Store Bucket demo created 2025-07-28T09:00:40+02:00
Configuration:
Bucket Name: demo
History Kept: 1
Values Stored: 0
Compressed: false
Per-Key TTL Supported: true
Limit Marker TTL: 10.00s
Backing Store Kind: JetStream
Bucket Size: 0 B
Maximum Bucket Size: unlimited
Maximum Value Size: unlimited
Maximum Age: unlimited
JetStream Stream: KV_demo
Storage: File
Cluster Information:
Name:
Leader: SERVER_0
Source KV shows Per-Key TTL Supported: true
Step 2: Create Mirror KV (Problem Occurs Here)
❯ nats kv add demo2 --js-domain ksds --mirror demo --mirror-domain hub
Information for Key-Value Store Bucket demo2 created 2025-07-28T09:00:45+02:00
Configuration:
Bucket Name: demo2
History Kept: 1
Values Stored: 0
Compressed: false
Per-Key TTL Supported: false
Backing Store Kind: JetStream
Bucket Size: 0 B
Maximum Bucket Size: unlimited
Maximum Value Size: unlimited
Maximum Age: unlimited
JetStream Stream: KV_demo2
Storage: File
Mirror Information:
Origin Bucket: demo
External API: $JS.hub.API
Last Seen: never
Lag: 0
Cluster Information:
Name: leaf-server-ksds
Leader: leaf-server-ksds
Mirror KV shows Per-Key TTL Supported: false, and there is no way to enable it: setting --marker-ttl on the mirror results in nats: error: nats: API error: code=500 err_code=10052 description=subject delete markers forbidden on mirrors, which makes sense, since writing delete markers would create new messages and break the mirror's sequence numbering.
Step 3: Add Test Data
❯ nats kv put demo test test
test
Both buckets now have the data:
❯ nats kv ls
╭──────────────────────────────────────────────────────────────────────────╮
│ Key-Value Buckets │
├────────┬─────────────┬─────────────────────┬──────┬────────┬─────────────┤
│ Bucket │ Description │ Created │ Size │ Values │ Last Update │
├────────┼─────────────┼─────────────────────┼──────┼────────┼─────────────┤
│ demo │ │ 2025-07-28 09:00:40 │ 47 B │ 1 │ 29.35s │
╰────────┴─────────────┴─────────────────────┴──────┴────────┴─────────────╯
❯ nats kv ls --js-domain ksds
╭──────────────────────────────────────────────────────────────────────────╮
│ Key-Value Buckets │
├────────┬─────────────┬─────────────────────┬──────┬────────┬─────────────┤
│ Bucket │ Description │ Created │ Size │ Values │ Last Update │
├────────┼─────────────┼─────────────────────┼──────┼────────┼─────────────┤
│ demo2 │ │ 2025-07-28 09:00:45 │ 47 B │ 1 │ 1m3s │
╰────────┴─────────────┴─────────────────────┴──────┴────────┴─────────────╯
Step 4: Purge with TTL (Demonstrates the Problem)
❯ nats kv purge demo test --ttl=10s
? Purge key demo > test? Yes
Step 5: Observe Different Behavior Between Source and Mirror
Source KV (after TTL expires):
❯ nats kv ls
╭──────────────────────────────────────────────────────────────────────────╮
│ Key-Value Buckets │
├────────┬─────────────┬─────────────────────┬──────┬────────┬─────────────┤
│ Bucket │ Description │ Created │ Size │ Values │ Last Update │
├────────┼─────────────┼─────────────────────┼──────┼────────┼─────────────┤
│ demo │ │ 2025-07-28 09:00:40 │ 0 B │ 0 │ 13.50s │
╰────────┴─────────────┴─────────────────────┴──────┴────────┴─────────────╯
Mirror KV (same time):
❯ nats kv ls --js-domain ksds
╭───────────────────────────────────────────────────────────────────────────╮
│ Key-Value Buckets │
├────────┬─────────────┬─────────────────────┬───────┬────────┬─────────────┤
│ Bucket │ Description │ Created │ Size │ Values │ Last Update │
├────────┼─────────────┼─────────────────────┼───────┼────────┼─────────────┤
│ demo2 │ │ 2025-07-28 09:00:45 │ 120 B │ 1 │ 39.76s │
╰────────┴─────────────┴─────────────────────┴───────┴────────┴─────────────╯
Problem: the source cleaned up (0 B), but the mirror retains the expired marker (120 B).
Step 6: Watch Events Show the Issue
Source KV watch:
❯ nats kv watch demo
[2025-07-28 09:05:36] PUT demo > test: test
[2025-07-28 09:05:43] PURGE demo > test
[2025-07-28 09:05:53] PURGE demo > test
^C
❯ nats kv watch demo
^C
Mirror KV watch:
❯ nats kv watch demo2 --js-domain ksds
[2025-07-28 09:05:36] PUT demo2 > test: test
[2025-07-28 09:05:43] PURGE demo2 > test
[2025-07-28 09:05:53] PURGE demo2 > test
^C
❯ nats kv watch demo2 --js-domain ksds
[2025-07-28 09:05:53] PURGE demo2 > test
^C
Notice: Mirror still shows the PURGE event, while source doesn't (it was cleaned up).
Step 7: Stream Configuration Comparison
Source Stream (KV_demo):
❯ nats stream info KV_demo
Information for Stream KV_demo created 2025-07-28 09:05:09
Subjects: $KV.demo.>
Replicas: 1
Storage: File
Options:
Retention: Limits
Acknowledgments: true
Discard Policy: New
Duplicate Window: 2m0s
Direct Get: true
Allows Msg Delete: false
Allows Purge: true
Allows Per-Message TTL: true
Subject Delete Markers TTL: 10.00s
Allows Rollups: true
State:
Host Version: 2.11.6
Required API Level: 1 hosted at level 1
Messages: 0
Bytes: 0 B
First Sequence: 4
Last Sequence: 3 @ 2025-07-28 09:05:53
Active Consumers: 0
Mirror Stream (KV_demo2):
❯ nats stream info KV_demo2 --js-domain ksds
Information for Stream KV_demo2 created 2025-07-28 09:05:14
Replicas: 1
Storage: File
Options:
Retention: Limits
Acknowledgments: true
Discard Policy: New
Duplicate Window: 2m0s
Direct Get: true
Mirror Direct Get: true
Allows Msg Delete: false
Allows Purge: true
Allows Per-Message TTL: false
Allows Rollups: true
State:
Host Version: 2.11.6
Required API Level: 0 hosted at level 1
Messages: 1
Bytes: 120 B
First Sequence: 3 @ 2025-07-28 09:05:53
Last Sequence: 3 @ 2025-07-28 09:05:53
Active Consumers: 0
Number of Subjects: 1
Key Difference:
- Source:
Allows Per-Message TTL: true, Messages: 0, Bytes: 0 B
- Mirror:
Allows Per-Message TTL: false, Messages: 1, Bytes: 120 B
Expected behavior
Mirror KV buckets should either inherit per-message TTL support from their source or, at the very least, allow enabling it explicitly, so that they can:
- Process Nats-TTL headers on mirrored messages
- Automatically clean up expired tombstone markers
- Maintain storage efficiency like their source buckets
Without this fix, KV mirrors become storage-inefficient over time, accumulating expired tombstone markers and requiring periodic manual compaction to prevent unbounded growth.
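As a concrete sketch of what "explicitly allow enabling" might look like: a hypothetical --allow-msg-ttl flag on nats kv add, borrowed from nats stream edit (where it exists today, per the workaround below). Its presence on nats kv add is an assumption for illustration, not current CLI behavior:

```shell
# Hypothetical: opt the mirror in to per-message TTL processing at creation
# time. --allow-msg-ttl is NOT a current 'nats kv add' flag; it is borrowed
# from 'nats stream edit' to illustrate the requested behavior.
nats kv add demo2 --js-domain ksds --mirror demo --mirror-domain hub --allow-msg-ttl
```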
Workaround: Using Stream-Level Operations
The desired behavior can be achieved by directly editing the underlying JetStream stream, though this is explicitly marked as unsupported and dangerous:
❯ nats stream edit KV_demo2 --js-domain ksds --allow-msg-ttl
Differences (-old +new):
api.StreamConfig{
... // 29 identical fields
FirstSeq: 0,
Metadata: nil,
- AllowMsgTTL: false,
+ AllowMsgTTL: true,
SubjectDeleteMarkerTTL: s"0s",
ConsumerLimits: {},
}
WARNING: Operating on the underlying stream of a Key-Value bucket is dangerous.
Key-Value stores are an abstraction above JetStream Streams and as such require particular
configuration to be set. Interacting with KV buckets outside of the 'nats kv' subcommand can lead
unexpected outcomes, data loss and, technically, will mean your KV bucket is no longer a KV bucket.
Continuing this operation is an unsupported action.
? Really operate on the KV stream? Yes
? Really edit Stream KV_demo2 Yes
Stream KV_demo2 was updated
Information for Stream KV_demo2 created 2025-07-28 09:05:14
Options:
Retention: Limits
Acknowledgments: true
Discard Policy: New
Duplicate Window: 2m0s
Direct Get: true
Mirror Direct Get: true
Allows Msg Delete: false
Allows Purge: true
Allows Per-Message TTL: true
Allows Rollups: true
State:
Host Version: 2.11.6
Required API Level: 1 hosted at level 1
Messages: 1
Bytes: 120 B
First Sequence: 3 @ 2025-07-28 09:05:53
Last Sequence: 3 @ 2025-07-28 09:05:53
Active Consumers: 0
Number of Subjects: 1
After applying this workaround, initial testing shows that the mirror now processes per-message TTLs as expected. Expired messages and markers are cleaned up properly. However, I'm uncertain about the safety and long-term implications of this approach.
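To re-verify the workaround, the purge-and-list sequence from Steps 3-5 can be repeated against the edited mirror. This is a session sketch rather than a standalone script: it assumes the same two-domain hub/ksds setup and a running server, with commands exactly as used above.

```shell
# Put a fresh value so there is something to purge again.
nats kv put demo test test

# Purge with a 10s TTL on the delete marker (answer Yes at the prompt).
nats kv purge demo test --ttl=10s

# Wait for the marker TTL to elapse, with some slack.
sleep 15

# With AllowMsgTTL now true on KV_demo2, the mirror should also drop the
# expired marker: Size back to 0 B and Values 0, matching the source.
nats kv ls --js-domain ksds
```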
Server and client version
- NATS Server: v2.11.6
- NATS CLI: v0.2.3
Host environment
No response
Steps to reproduce
No response