Add Streaming Support to AI Workflows
Summary
We currently support only fully buffered `generate`-style responses for AI workflows. For improved UX and faster time-to-first-token, we should explore adding client-side streaming support where appropriate.
Problem
Some workflows can take multiple seconds to complete end-to-end. For a responsive UX, users should not have to wait for the full response before seeing any output.
Proposed Direction
- Introduce an option to choose between:
  - `generate` (fully buffered)
  - `stream` (incremental tokens)
- Use inferred conditional types so return types are correctly narrowed based on the selected mode.
- Evaluate which existing workflows can safely and usefully support streaming.
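As a rough sketch of the conditional-type idea, the snippet below shows how a mode parameter could narrow the return type at the call site. All names (`runWorkflow`, `Mode`, `RunOptions`) are hypothetical placeholders, not the actual API:

```typescript
// Hypothetical API sketch -- names and shapes are illustrative only.
type Mode = "generate" | "stream";

// Inferred conditional type: the return type is narrowed by the chosen mode.
type RunResult<M extends Mode> = M extends "stream"
  ? AsyncIterable<string> // incremental tokens
  : string;               // fully buffered response

interface RunOptions<M extends Mode> {
  mode: M;
  prompt: string;
}

// Stub implementation so the type narrowing can be exercised.
function runWorkflow<M extends Mode>(opts: RunOptions<M>): RunResult<M> {
  if (opts.mode === "stream") {
    async function* tokens(): AsyncIterable<string> {
      for (const t of ["Hello", ", ", "world"]) yield t;
    }
    return tokens() as RunResult<M>;
  }
  return "Hello, world" as RunResult<M>;
}

// Buffered call: `full` is inferred as string.
const full = runWorkflow({ mode: "generate", prompt: "hi" });

// Streaming call: `chunks` is inferred as AsyncIterable<string>.
const chunks = runWorkflow({ mode: "stream", prompt: "hi" });
```

Existing `generate` callers keep their current string-returning signature, which is what makes this shape non-breaking.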
Success Criteria
- At least one workflow successfully supports streaming end-to-end.
- Clean, type-safe developer API.
- No breaking changes for existing `generate` consumers.