The AudioBuffer / WaveformBuffer design is now close to the point where it would no longer be necessary to keep up to 30 s of samples in the AudioBuffer. When a backward seek is performed, a mechanism allows holding WaveformImage rendering until enough samples are available. This means that we could also remove the code that merges existing samples with incoming ones.
Considering that, the current design could transition to this:
- When a new segment is received outside of the currently rendered boundaries (or if nothing is rendered yet), clear the AudioBuffer and the WaveformBuffers / WaveformImages. Add the incoming GstBuffer to the AudioBuffer's VecDeque with flags indicating which WaveformBuffer has already handled the buffer.
- When a new segment is contained within the currently rendered boundaries (e.g. an in-window seek), set the new segment reference, but drop the buffer.
- When a new buffer inside the current segment is received and the buffer is contained within the currently rendered boundaries, update the cumulative offset, but drop the buffer.
- When a new buffer inside the current segment is received and the buffer is not contained within the currently rendered boundaries, update the cumulative offset, add the buffer to the VecDeque and mark it as not handled yet.
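The queue-with-flags idea above could be sketched like this. This is only an illustration under assumptions: `SegmentBuffer` is a hypothetical stand-in for a reference-counted GstBuffer, and the flag constants and method names are invented for the example.

```rust
use std::collections::VecDeque;

// Hypothetical stand-in for a gst::Buffer; the real code would hold a
// reference-counted GstBuffer instead of owning the samples.
struct SegmentBuffer {
    samples: Vec<f32>,
}

// Flags tracking which WaveformBuffer has already handled a buffer.
const HANDLED_BY_FIRST: u8 = 0b01;
const HANDLED_BY_SECOND: u8 = 0b10;
const HANDLED_BY_BOTH: u8 = HANDLED_BY_FIRST | HANDLED_BY_SECOND;

struct AudioBuffer {
    // Each entry pairs an incoming buffer with its handling status.
    queue: VecDeque<(SegmentBuffer, u8)>,
}

impl AudioBuffer {
    fn new() -> Self {
        AudioBuffer { queue: VecDeque::new() }
    }

    // New segment outside the rendered boundaries: clear everything.
    fn reset(&mut self) {
        self.queue.clear();
    }

    // Queue an incoming buffer, marked as not handled yet.
    fn push(&mut self, buffer: SegmentBuffer) {
        self.queue.push_back((buffer, 0));
    }

    // Record that one WaveformBuffer handled the front buffer, then drop
    // every leading buffer that both WaveformBuffers have processed.
    fn mark_handled(&mut self, flag: u8) {
        if let Some((_, handled)) = self.queue.front_mut() {
            *handled |= flag;
        }
        while matches!(self.queue.front(), Some((_, h)) if *h == HANDLED_BY_BOTH) {
            self.queue.pop_front();
        }
    }
}
```

A buffer thus stays queued exactly as long as at least one WaveformBuffer still needs it, which is what makes keeping a long rolling sample history unnecessary.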
Then, the DoubleAudioBuffer could select whether or not to swap the WaveformBuffers depending on the last buffer's handling status. The current working buffer could be filled (converting samples to pixels on the fly) until the exposed buffer's window range becomes too short; the buffer would then be marked as handled by the working buffer and the WaveformBuffers would be swapped. When a buffer has been handled by both WaveformBuffers, it is dropped.
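The swap decision could look roughly like this. The sample-index fields and the "remaining window" threshold are assumptions made for the sketch, not the actual criterion used by the DoubleAudioBuffer:

```rust
// Minimal sketch: each WaveformBuffer tracks the sample range it has
// rendered; the exposed/working pair is swapped when the exposed buffer
// can no longer cover the requested window past the current position.
struct WaveformBuffer {
    first_sample: usize,
    last_sample: usize,
}

struct DoubleAudioBuffer {
    exposed: WaveformBuffer,
    working: WaveformBuffer,
}

impl DoubleAudioBuffer {
    // Hypothetical criterion: the exposed window range is "too short".
    fn must_swap(&self, position: usize, window: usize) -> bool {
        self.exposed.last_sample.saturating_sub(position) < window
    }

    fn swap(&mut self) {
        std::mem::swap(&mut self.exposed, &mut self.working);
    }
}
```

Swapping by `mem::swap` keeps both buffers allocated; only the roles change, so no pixel data is copied at the swap point.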
This would avoid copying samples into the AudioBuffer (we would just keep a reference on the GstBuffer), saving CPU cycles and memory. The only new cost would come from the double samples -> pixels conversion. It should be possible to use SIMD to handle multiple samples at once (either several channels or several consecutive samples).
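For reference, the samples -> pixels conversion is a simple per-sample affine map, which is exactly the kind of loop that vectorizes well. A scalar sketch (the i16 sample format and the `height` parameter are assumptions for the example; an explicit SIMD version would process several samples per instruction):

```rust
// Map each i16 sample to a vertical pixel coordinate in [0, height]:
// i16::MAX maps to the top (0.0), i16::MIN to slightly below `height`.
// Written over a slice with no branches so the compiler can auto-vectorize.
fn samples_to_pixels(samples: &[i16], height: f32) -> Vec<f32> {
    samples
        .iter()
        .map(|&s| (1.0 - s as f32 / i16::MAX as f32) * height / 2.0)
        .collect()
}
```

Since every sample uses the same multiply-add, consecutive samples (or the channels of one frame) can be packed into one SIMD register and converted together, which is the optimization suggested above.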