
Repository of Streaming Large Language Models

🚀 Latest News

[2026-03] We released the first survey paper on Streaming LLMs/MLLMs, covering text, speech, and video streams.

[2026-02] Think-as-You-See is accepted by CVPR 2026.

[2026-01] We released the paper Speak-While-Watching.

[2026-01] StreamingThinker is accepted by ICLR 2026.

[2025-05] StreamingLLM_GPE is accepted by Findings of ACL 2025.

1. TL;DR

This repository collects the works of EIT-NLP Lab on streaming LLMs/MLLMs.

2. Content

3. What are streaming LLMs?

Streaming LLMs refer to large language models that support both the progressive processing of incoming information (streaming input) and the step-by-step generation of outputs (streaming output). Building upon this foundation, we further focus on scenarios where the model performs streaming input and output simultaneously. The formal definition and taxonomy of streaming LLMs/MLLMs can be found in our survey paper.


Here is an example of streaming reasoning (text-to-text streaming):

[figure: streaming reasoning]

Here is an example of streaming speech recognition (speech-to-text streaming):

[figure: streaming speech recognition]
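The simultaneous streaming input/output described above can be sketched as a toy generator: it consumes input tokens as they arrive and interleaves output emission with input consumption, rather than waiting for the full input. This is only an illustrative sketch (the function name, token format, and `chunk_size` policy are hypothetical), not the method of any specific paper in this repository.

```python
from typing import Iterable, Iterator

def streaming_model(input_stream: Iterable[str], chunk_size: int = 2) -> Iterator[str]:
    """Toy stand-in for a streaming LLM: after every `chunk_size` input
    tokens arrive, emit one output token based on the prefix seen so far."""
    seen = []
    for token in input_stream:
        seen.append(token)                 # streaming input: process tokens incrementally
        if len(seen) % chunk_size == 0:
            yield f"<out:{len(seen)}>"     # streaming output: emit before input is complete
    yield f"<out:{len(seen)}:final>"       # flush a final output when the stream ends

# Usage: outputs interleave with input consumption.
outputs = list(streaming_model(["I", "saw", "a", "cat", "run"]))
```

In contrast, a conventional (offline) model would only produce output after the entire input sequence has been received.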

Contact

If you have any questions, please contact: jl-tong@sjtu.edu.cn