✨ Is your feature request related to a problem? Please describe.
Current inference processing in SceneScape does not leverage batching across streams from multiple cameras, resulting in suboptimal hardware utilization for scenarios with several cameras.
💡 Describe the solution you'd like
Implement cross-stream batching, allowing frames from multiple video streams (cameras) to be grouped and processed together for inference. This should make more efficient use of GPU resources and improve performance for users with many cameras.
- Ideally, batching should be transparent to users, requiring minimal manual configuration or stream interruptions.
- In the MVP phase, a simpler approach with some required manual configuration and stream interruptions (e.g., when adding or removing a camera) is acceptable, with a roadmap to reduce these drawbacks in the final implementation.
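The batching behavior described above could be sketched roughly as follows. This is a minimal illustration, not SceneScape code: `CrossStreamBatcher` and its methods are hypothetical names, and the timeout-bounded collection loop is one possible way to keep a slow or stalled camera from blocking the other streams.

```python
import queue
import time


class CrossStreamBatcher:
    """Hypothetical sketch: groups frames from multiple camera streams
    into a single batch for one inference call."""

    def __init__(self, batch_size=4, timeout_s=0.05):
        self.batch_size = batch_size  # max frames per inference batch
        self.timeout_s = timeout_s    # max wait, so one slow camera cannot stall the rest
        self._q = queue.Queue()

    def submit(self, camera_id, frame):
        """Called by each per-camera capture thread as frames arrive."""
        self._q.put((camera_id, frame))

    def next_batch(self):
        """Collect up to batch_size frames, waiting at most timeout_s overall.

        Returns a (possibly shorter) list of (camera_id, frame) tuples,
        so partially filled batches still get processed promptly.
        """
        batch = []
        deadline = time.monotonic() + self.timeout_s
        while len(batch) < self.batch_size:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(self._q.get(timeout=remaining))
            except queue.Empty:
                break
        return batch


# Usage sketch: three cameras each submit one frame; the batcher
# hands them to the inference engine as a single group.
batcher = CrossStreamBatcher(batch_size=4, timeout_s=0.1)
for cam_id in range(3):
    batcher.submit(cam_id, f"frame-from-cam-{cam_id}")
batch = batcher.next_batch()
```

A design note on the timeout: it trades a small, bounded latency per frame for higher GPU occupancy, which is the core tension any cross-stream batching implementation would need to tune.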
🔄 Describe alternatives you've considered
- Keeping per-stream (per-camera) inference – this fragments hardware utilization and lowers overall throughput.
- Manual scripting or batch management outside the platform – not user-friendly.
📄 Additional context