Lab Group

Image and Video Enhancement



Most existing works address the problem of generating high frame-rate sharp videos by learning the frame deblurring and frame interpolation modules separately. Many of these approaches rest on the strong prior assumption that all input frames are blurry, whereas in real-world settings the quality of frames varies. Moreover, such approaches are trained to perform only one of the two tasks, deblurring or interpolation, in isolation, while many practical situations call for both. Further, existing works in video synthesis focus on generating videos via adversarial learning. Despite their success, these methods often require an input reference frame or fail to generate diverse videos from the given data distribution, with little to no uniformity in the quality of the generated videos. We explore methods that optimize the latent space jointly with the network weights, which allows us to generate videos in a controlled manner and address these issues.
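The idea of optimizing latent codes together with network weights can be illustrated with a toy sketch: assign each training sample its own latent vector and update both the per-sample latents and a generator's parameters by gradient descent on the reconstruction loss. The snippet below is a minimal, hypothetical NumPy version with a linear "generator"; the names, dimensions, and linear model are illustrative only, not the actual networks used in the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 "videos", each flattened to a vector of length 6 (illustrative only).
X = rng.normal(size=(4, 6))

# Jointly learned parameters:
#   Z - one latent code per sample (optimized like model weights)
#   W - a linear "generator": X_hat = Z @ W
Z = rng.normal(size=(4, 2)) * 0.1
W = rng.normal(size=(2, 6)) * 0.1

init_loss = 0.5 * np.sum((Z @ W - X) ** 2)

lr = 0.02
for _ in range(1000):
    err = Z @ W - X               # reconstruction residual
    # Gradients of 0.5 * ||Z W - X||^2 with respect to W and Z.
    grad_W = Z.T @ err
    grad_Z = err @ W.T
    W -= lr * grad_W
    Z -= lr * grad_Z              # latents are updated alongside the weights

loss = 0.5 * np.sum((Z @ W - X) ** 2)
```

After training, new samples can in principle be generated by decoding points in the learned latent space, which is what makes this non-adversarial setup controllable.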

Video Enhancement

  • Sample Publications
    • ALANET: Adaptive Latent Attention Network for Joint Video Deblurring and Interpolation 

    A. Gupta, A. Aich, A. Roy-Chowdhury, ACM International Conference on Multimedia (ACM-MM), 2020.

    • Non-Adversarial Video Synthesis with Learned Priors

    A. Aich*, A. Gupta*, R. Panda, R. Hyder, M. S. Asif, A. Roy-Chowdhury, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020. (* joint first authors)