Reparo: Loss-Resilient Generative Codec for Video Conferencing

Packet loss during video conferencing often leads to poor quality and video freezes. Retransmitting lost packets is often impractical because playback must happen in real time, and Forward Error Correction (FEC) is challenging to apply because the appropriate redundancy level is difficult to determine in advance. To address these issues, we introduce Reparo -- a loss-resilient video conferencing framework based on generative deep learning models. Our approach generates the missing information when a frame, or part of a frame, is lost. This generation is conditioned on the data received thus far, drawing on the model's learned understanding of how people and objects appear and interact in a visual scene. Experimental results on publicly available video conferencing datasets demonstrate that Reparo outperforms state-of-the-art FEC-based video conferencing solutions in both video quality (measured by PSNR, SSIM, and LPIPS) and the frequency of video freezes.
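The core idea -- fill in a lost region of a frame using whatever has been received so far -- can be illustrated with a toy, non-learned concealment routine. This is only a sketch of the problem setup, not Reparo's generative model: the function name, frame representation, and fill strategy (copying from the previous frame) are illustrative assumptions; a real system would condition a neural generator on the received pixels instead.

```python
def conceal_loss(prev_frame, curr_frame, lost_mask):
    """Fill lost pixels of curr_frame (where lost_mask is True) from prev_frame.

    A naive stand-in for generative concealment: frames are 2D lists of
    pixel values, and the 'generation' is just temporal copying.
    """
    h, w = len(curr_frame), len(curr_frame[0])
    return [[prev_frame[r][c] if lost_mask[r][c] else curr_frame[r][c]
             for c in range(w)] for r in range(h)]

# Example: a 4x4 frame whose top-left 2x2 block was lost in transit.
prev = [[1] * 4 for _ in range(4)]          # last fully received frame
curr = [[2] * 4 for _ in range(4)]          # current frame, partially lost
mask = [[r < 2 and c < 2 for c in range(4)] for r in range(4)]

recovered = conceal_loss(prev, curr, mask)
# The lost block is filled from the previous frame; received pixels are kept.
```

The gap a learned model closes is exactly where this baseline fails: when content moves or changes between frames, copying stale pixels produces artifacts, whereas a generative model can synthesize plausible new content conditioned on the received context.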

