
Add StreamingBehavior to AsyncBufferSequence for throughput/latency control#228

Open
dannys42 wants to merge 3 commits into swiftlang:main from dannys42-contrib:FEATURE-202602-streaming_behavior

Conversation

@dannys42

Motivation

When streaming subprocess output, users previously had no control over the trade-off between throughput and latency. The sequence always batched data for maximum throughput, which limits its usefulness in interactive cases where users want to receive output as soon as it's available.

Changes

  • Add AsyncBufferSequence.StreamingBehavior enum with three modes:
    • .throughput (default): batch data for best performance (preserves the existing behavior)
    • .balanced: batch data with guaranteed 250ms max delivery interval
    • .latency: deliver data as soon as it's available (also bounded by 250ms)
  • Add streamingBehavior parameter to all run overloads that expose an AsyncBufferSequence, defaulting to .throughput for backwards compatibility
  • Add unit tests verifying that data is delivered progressively under .latency and .balanced behaviors
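The three modes reduce to a simple "flush or keep buffering" decision. A standalone sketch of that decision (the mode names and the 250 ms bound come from this PR's description; `shouldFlush` and its parameters are hypothetical illustrations, not the actual implementation):

```swift
/// Standalone model of the proposed enum (the real type would live on
/// AsyncBufferSequence in swift-subprocess).
enum StreamingBehavior {
    case throughput  // batch for maximum throughput (existing behavior)
    case balanced    // batch, but flush at least every 250 ms
    case latency     // deliver data as soon as it is available
}

/// Hypothetical flush rule: should the currently buffered bytes be
/// delivered to the consumer now?
func shouldFlush(_ behavior: StreamingBehavior,
                 buffered: Int,
                 bufferSize: Int,
                 sinceLastDelivery elapsed: Duration) -> Bool {
    guard buffered > 0 else { return false }  // nothing to deliver
    switch behavior {
    case .throughput:
        return buffered >= bufferSize         // full batches only (or EOF)
    case .balanced:
        return buffered >= bufferSize || elapsed >= .milliseconds(250)
    case .latency:
        return true                           // any data goes out immediately
    }
}
```

Under this model, `.throughput` never flushes a partial batch, `.balanced` flushes a partial batch once 250 ms have elapsed, and `.latency` flushes on every arrival.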

@dannys42 dannys42 requested a review from iCharlesHu as a code owner March 20, 2026 23:36
@iCharlesHu
Contributor

Hi @dannys42, thanks for the PR! I do like the idea of StreamingBehavior, and I have a few questions:

  • How does it work with the existing preferredBufferSize parameter? Are you proposing we replace it? preferredBufferSize was introduced to serve a similar purpose. Developers can increase or decrease the buffer size based on how frequently they want the data to be delivered.
  • Subprocess is a cross-platform package, and I see your implementation only covers Dispatch. Can you make sure this proposed API can be implemented on Windows and Linux as well? We don't want platform-specific APIs outside of PlatformOptions, and StreamingBehavior doesn't belong in PlatformOptions. Can the notion of "batch data with a guaranteed 250 ms max delivery interval" be efficiently implemented on top of Linux epoll and Windows I/O Completion Ports?
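For what it's worth, both `epoll_wait` on Linux and `GetQueuedCompletionStatus` on Windows accept a millisecond timeout, so one way to honor a 250 ms bound is to cap each wait by the time remaining in the current batch. A hedged sketch of that timeout computation (`pollTimeout` is a hypothetical helper, not Subprocess code):

```swift
/// Compute the timeout (in ms) to pass to epoll_wait / GetQueuedCompletionStatus
/// so that a partially filled batch is flushed no later than `maxInterval`
/// after its first byte arrived. Hypothetical helper, not Subprocess code.
func pollTimeout(batchStartedAt start: ContinuousClock.Instant?,
                 now: ContinuousClock.Instant,
                 maxInterval: Duration = .milliseconds(250)) -> Int32 {
    guard let start else { return -1 }             // empty batch: block indefinitely
    let elapsed = start.duration(to: now)
    guard elapsed < maxInterval else { return 0 }  // deadline passed: return at once
    let remaining = maxInterval - elapsed
    // Duration exposes (seconds, attoseconds); convert to whole milliseconds.
    let ms = remaining.components.seconds * 1000
           + remaining.components.attoseconds / 1_000_000_000_000_000
    return Int32(ms)
}
```

When the wait returns with no new data and the deadline has passed, the buffered bytes are delivered anyway, which is the `.balanced` guarantee.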

@dannys42
Author

Thanks @iCharlesHu!

I can try to take a look at Linux, but I don't really have a way of testing Windows.

preferredBufferSize doesn't resolve this problem because the dispatchIO.read() call only resumes when done == true, which happens only after length (i.e. preferredBufferSize) bytes have been read or EOF is reached. So even with a value of 1, it will hold onto that last byte.
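The `done` semantics described above can be seen in a standalone DispatchIO demo (illustrative only, not Subprocess code): with `length: 1024` and just 2 bytes written, the handler reports `done == true` only once the write end closes and EOF is reached.

```swift
import Dispatch
#if canImport(Glibc)
import Glibc
#else
import Darwin
#endif

// Standalone demo: fewer than `length` bytes are ever written, so the
// read handler's final invocation (done == true) only arrives at EOF,
// when the writer closes its end of the pipe.
var fds: [Int32] = [0, 0]
precondition(pipe(&fds) == 0)

let queue = DispatchQueue(label: "demo.io")
let channel = DispatchIO(type: .stream, fileDescriptor: fds[0], queue: queue) { _ in }

let sem = DispatchSemaphore(value: 0)
var totalBytes = 0
var sawDone = false
channel.read(offset: 0, length: 1024, queue: queue) { done, data, _ in
    totalBytes += data?.count ?? 0
    if done {
        sawDone = true
        sem.signal()
    }
}

_ = "hi".withCString { write(fds[1], $0, 2) }
usleep(100_000)   // done is still false here: only 2 of 1024 bytes read
close(fds[1])     // EOF finally completes the read with done == true
sem.wait()
print(sawDone, totalBytes)   // true 2
```

(DispatchIO can also be nudged into delivering partial chunks via `setLimit(lowWater:)`, but that tunes the channel's delivery granularity rather than the sequence's resume condition discussed here.)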

