Problem
When a block's system chunk processes scheduled transactions, the first system transaction (which invokes FlowTransactionScheduler) determines which transactions to execute, emits a PendingExecution event for each one, and marks them all as Executed at that point. The actual execution of each scheduled transaction happens afterward — still within the same block and collection — as the execution node reads those events and constructs individual transactions.
This creates a race condition: if any code in the same block queries getStatus() on a scheduled transaction after the scheduler system transaction has run but before that transaction has actually executed, it will see Status.Executed even though execution hasn't started yet.
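A rough sketch of the racy pattern in Cadence (the exact `getStatus(id:)` signature and `Status` enum shape are assumptions based on the identifiers above, not a confirmed API):

```cadence
// Sketch only: within the same block, the scheduler system transaction has
// already marked this id Executed, but the scheduled transaction itself
// may not have run yet.
access(all) fun checkScheduledTx(id: UInt64) {
    // Assumed query; the source references getStatus() and Status.Executed
    let status = FlowTransactionScheduler.getStatus(id: id)
    if status == FlowTransactionScheduler.Status.Executed {
        // This branch can be reached before the transaction actually
        // executes. Acting here (e.g. treating the handler as done, or
        // skipping a reschedule) is premature for the rest of this block.
    }
}
```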
Scope of the race condition
- Bounded to within one block
- A grace period of one block after the Executed status appears is sufficient to guarantee the transaction has actually run
- The scheduled timestamp does not reflect the actual execution block — a transaction may run later than its scheduled time due to congestion, so callers cannot reliably infer the execution block from the timestamp alone
Impact
- Any contract that checks the status of a scheduled transaction (e.g. for panic recovery / rescheduling) and does so within the same block the transaction is scheduled to execute may get a false Executed result
- Workarounds require adding a grace-period delay (checking N blocks after Executed appears), which adds fragility and complexity to callers
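The grace-period workaround can be sketched as a small watcher that records the block height at which Executed was first observed and only trusts the status one block later. This is a minimal sketch, assuming the `getStatus(id:)` call and `Status.Executed` value referenced above; the field and function names here are hypothetical:

```cadence
access(all) contract ScheduledTxWatcher {

    // Hypothetical bookkeeping: block height at which Executed was
    // first observed, keyed by scheduled transaction id.
    access(self) let executedSeenAt: {UInt64: UInt64}

    // Returns true only once Executed has been visible for at least one
    // block, which (per the scope above) guarantees the scheduled
    // transaction has actually run.
    access(all) fun hasDefinitelyExecuted(id: UInt64): Bool {
        let status = FlowTransactionScheduler.getStatus(id: id)
        if status != FlowTransactionScheduler.Status.Executed {
            return false
        }
        let height = getCurrentBlock().height
        if let seenAt = self.executedSeenAt[id] {
            // Only trust Executed once we are past the block where we
            // first saw it.
            return height > seenAt
        }
        // First observation: remember the height and wait a block.
        self.executedSeenAt[id] = height
        return false
    }

    init() {
        self.executedSeenAt = {}
    }
}
```

Note the fragility this imposes on callers: every status consumer needs this extra state and an extra block of latency, which is exactly the complexity the proposed fixes aim to remove.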
Proposed Fixes
Two implementation approaches, each with a cost:
Option A — Update status individually per transaction, inline during execution
- Status is set accurately at the moment each transaction finishes
- Con: makes concurrent execution of scheduled transactions impossible — the status update would have to happen sequentially after each transaction, eliminating any parallelism in the batch.
- Con: status can still be wrong for failed transactions. If a transaction fails, its status is never updated to Executed, so in the same block and the following block it remains marked as pending execution even though it failed. That is inaccurate and confusing to any code relying on correct status reporting.
Option B — Single trailing system transaction marks all as Executed after the batch
- Preserves parallelism across the batch
- Con: adds one additional system transaction per block. This could slow the overall block rate, and that trailing transaction cannot itself be parallelized
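For concreteness, the trailing system transaction in Option B might look roughly like the following. This is purely illustrative: `markExecuted(id:)` is a hypothetical internal entry point (the real scheduler contract would restrict it to the system account), and the batching shape is an assumption:

```cadence
// Hypothetical trailing system transaction (sketch): runs after every
// scheduled transaction in the batch has finished, and marks each one
// Executed in a single pass. Because it runs once per block, the per-id
// updates no longer serialize the batch itself.
transaction(ids: [UInt64]) {
    execute {
        for id in ids {
            // markExecuted is NOT a confirmed API; it stands in for
            // whatever system-only status update the scheduler exposes.
            FlowTransactionScheduler.markExecuted(id: id)
        }
    }
}
```

The design trade is visible here: the loop is sequential, but it is a single cheap transaction appended to the block rather than a serialization point between the scheduled transactions themselves.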
Key question
Which cost is worse: losing concurrent execution of scheduled transactions and correct reporting of failed transactions (Option A), or adding a fixed per-block system transaction overhead (Option B)?
Additional Notes
- Both changes are pretty easy from a Cadence perspective
- This is not currently blocking any major use cases — it requires more defensive code in callers to handle the edge case
- Any fix is expected to be a non-breaking Cadence contract change; existing scheduled transaction integrations should continue to work
- The PendingExecution event already intentionally emits an empty transactionHandlerTypeIdentifier (to avoid failures if the handler contract is broken); this is unrelated but worth noting for anyone reviewing the system transaction logic