Oral presentations at CVPR, ICML and ACM MM 2021
Oral papers on unsupervised multi-source domain adaptation without source data at CVPR 2021, on cross-domain imitation learning from observations at ICML 2021, and on adaptive video super-resolution at ACM Multimedia 2021
1. The paper Unsupervised Multi-source Domain Adaptation Without Access to Source Data, accepted as an oral at CVPR 2021, proposes Data frEe multi-sourCe unsupervISed domain adaptatiON (DECISION), which builds the target model by identifying the optimal blend of source models, without any source data, through the optimization of a carefully designed unsupervised loss. Under intuitive assumptions, it also establishes theoretical guarantees showing that the target model is consistently at least as good as deploying the single best source model, thus minimizing negative transfer. A minimal sketch of the core idea is given after the citation below.
Title: Unsupervised Multi-source Domain Adaptation Without Access to Source Data
Authors: Sk Miraj Ahmed, Dripta S. Raychaudhuri, Sujoy Paul, Samet Oymak, Amit K. Roy-Chowdhury (CVPR, 2021) (* joint first authors)
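The sketch below illustrates the core idea under simplifying assumptions: several frozen source classifiers are blended with learnable simplex weights, and the weights are fit on unlabeled target data by minimizing a prediction-entropy loss. The entropy objective, the helper names (`aggregate`, `adapt_weights`), and the toy setup are illustrative stand-ins, not the exact objective or architecture of the paper (which also adapts the source feature extractors).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def aggregate(source_models, weights, x):
    """Blend the softmax outputs of frozen source models with learnable weights."""
    probs = torch.stack([F.softmax(m(x), dim=1) for m in source_models])  # (S, B, C)
    w = F.softmax(weights, dim=0).view(-1, 1, 1)                          # weights on the simplex
    return (w * probs).sum(dim=0)                                         # (B, C)

def adapt_weights(source_models, target_loader, epochs=5, lr=1e-3, device="cpu"):
    """Fit the source-combination weights on unlabeled target data only,
    using prediction entropy as an illustrative unsupervised loss."""
    for m in source_models:
        m.eval().to(device)
        for p in m.parameters():
            p.requires_grad_(False)        # no access to source data or source labels
    weights = torch.zeros(len(source_models), requires_grad=True, device=device)
    opt = torch.optim.Adam([weights], lr=lr)
    for _ in range(epochs):
        for batch in target_loader:
            x = batch[0] if isinstance(batch, (list, tuple)) else batch
            p = aggregate(source_models, weights, x.to(device))
            loss = -(p * torch.log(p + 1e-8)).sum(dim=1).mean()
            opt.zero_grad(); loss.backward(); opt.step()
    return F.softmax(weights, dim=0).detach()

# Toy usage (hypothetical models and data):
# sources = [nn.Linear(2, 3) for _ in range(4)]
# loader = torch.utils.data.DataLoader(
#     torch.utils.data.TensorDataset(torch.randn(256, 2)), batch_size=32)
# blend = adapt_weights(sources, loader)
```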
2. The paper Cross-Domain Imitation from Observations, accepted for a long talk at ICML 2021, proposes an approach to learn policies from a limited number of expert demonstrations collected in a domain different from the agent's own, where the discrepancies may include differing dynamics, viewpoints, or morphology. This work is a collaboration between UCR and MERL (Mitsubishi Electric Research Laboratories). A generic sketch of the learning-from-observations setting appears after the links below.
Title: Cross-domain Imitation from Observations
Authors: Dripta S. Raychaudhuri, Sujoy Paul, Jeroen van Baar, Amit K. Roy-Chowdhury (ICML, 2021) (* joint first authors)
Project page: https://driptarc.github.
Link: https://arxiv.org/abs/2105.
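The sketch below is a generic illustration of imitation from cross-domain observations, not the method proposed in the paper: expert-domain state trajectories are translated into the agent's domain by a separately learned correspondence, consecutive states are labeled with pseudo-actions by an inverse dynamics model trained on the agent's own experience, and the policy is then fit by regression. `domain_map`, `InverseDynamics`, and `imitate_from_observations` are hypothetical names introduced only for this illustration.

```python
import torch
import torch.nn as nn

class InverseDynamics(nn.Module):
    """Predicts the agent's action from a pair of consecutive states; assumed to be
    trained beforehand on the agent's own (state, action, next_state) experience."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim))

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))

def imitate_from_observations(expert_states, domain_map, inv_dyn, policy,
                              epochs=50, lr=1e-3):
    """expert_states: (T, expert_state_dim) trajectory with no actions.
    domain_map translates it into the agent's state space, inv_dyn provides
    pseudo-actions, and the policy is fit to those pseudo-actions by regression."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    with torch.no_grad():
        s = domain_map(expert_states)      # states expressed in the agent's domain
        a_hat = inv_dyn(s[:-1], s[1:])     # pseudo-actions for each transition
    for _ in range(epochs):
        loss = ((policy(s[:-1]) - a_hat) ** 2).mean()   # behavioral cloning loss
        opt.zero_grad(); loss.backward(); opt.step()
    return policy
```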
3. The paper Ada-VSR: Adaptive Video Super-Resolution with Meta-Learning has been accepted for oral presentation at ACM Multimedia 2021. It addresses blind spatio-temporal super-resolution by learning model parameters that can easily adapt to unseen degradation conditions. The proposed Ada-VSR approach uses meta-learning on a large-scale external dataset to obtain model parameters that adapt quickly to the novel degradation of a given test video, thereby exploiting both external and internal information of a video for super-resolution. A simplified meta-learning sketch is given after the project page link below.
Title: Ada-VSR: Adaptive Video Super-Resolution with Meta-Learning
Authors: Akash Gupta, Padmaja Jonnalagedda, Bir Bhanu, Amit K. Roy-Chowdhury
Project Page: https://akashagupta.com/publication/acm2021_adavsr/
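The sketch below illustrates this external-plus-internal recipe under simplifying assumptions. It uses Reptile-style first-order meta-learning as a stand-in for the paper's meta-training, assumes a generic spatial super-resolution network `model` that upscales frames of shape (T, C, H, W) by a fixed factor (with H and W divisible by that factor), and adapts at test time with a self-supervised internal loss: downscale the given low-resolution frames and reconstruct them. The function names and the `task_sampler` yielding paired (low-resolution, high-resolution) clips for synthetic degradations are hypothetical.

```python
import copy
import torch
import torch.nn.functional as F

def internal_loss(model, lr_frames, scale=2):
    """Self-supervised loss on a single video: downscale the given low-resolution
    frames further and ask the model to recover them (internal information)."""
    lower = F.interpolate(lr_frames, scale_factor=1 / scale, mode="bicubic",
                          align_corners=False)
    return F.l1_loss(model(lower), lr_frames)   # assumes model upscales by `scale`

def meta_train(model, task_sampler, meta_steps=1000, inner_steps=5,
               inner_lr=1e-4, meta_lr=1e-3):
    """Reptile-style first-order meta-learning on synthetic degradation tasks:
    adapt a copy of the model to each sampled task, then move the
    meta-parameters toward the adapted ones."""
    for _ in range(meta_steps):
        lr_frames, hr_frames = task_sampler()            # one synthetic degradation task
        fast = copy.deepcopy(model)
        opt = torch.optim.Adam(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            loss = F.l1_loss(fast(lr_frames), hr_frames) # external, paired supervision
            opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            for p, q in zip(model.parameters(), fast.parameters()):
                p += meta_lr * (q - p)                   # Reptile meta-update
    return model

def adapt_to_test_video(model, lr_frames, steps=10, lr=1e-4):
    """Quick test-time adaptation to the unseen degradation of a given video,
    using only its own frames."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        loss = internal_loss(model, lr_frames)
        opt.zero_grad(); loss.backward(); opt.step()
    return model
```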