Adversarial Machine Vision
Deep Neural Networks (DNNs) are state-of-the-art tools for a wide range of tasks. However, recent studies have found that DNNs are vulnerable to adversarial perturbation attacks: perturbations that are hardly perceptible to humans yet cause misclassification in DNN-based decision-making systems, e.g., image classifiers. The majority of existing attack mechanisms target the misclassification of specific objects and activities. However, most scenes contain multiple objects, and there is usually some relationship among the objects in a scene, e.g., certain objects co-occur more frequently than others. This is often referred to as context in computer vision and is related to top-down feedback in human vision; the idea has been widely used in recognition problems. However, context has not yet played a significant role in the design of adversarial attacks. We are studying how to develop better methods for both adversarial attacks and defenses using spatio-temporal context information.
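To make the idea of an adversarial perturbation concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the canonical perturbation attacks described in the literature. For simplicity it attacks a hand-written logistic-regression classifier rather than a DNN, and uses an exaggerated perturbation budget `eps` so the effect is visible on a 3-dimensional toy input; the function and variable names are illustrative, not from any specific paper above.

```python
import numpy as np

def sigmoid(z):
    """Logistic function, the toy classifier's output nonlinearity."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps=0.1):
    """FGSM: move x by eps in the sign of the loss gradient w.r.t. x.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input is (p - y_true) * w, where p is the
    predicted probability of class 1.
    """
    p = sigmoid(w @ x + b)        # classifier's predicted probability
    grad_x = (p - y_true) * w     # d(loss)/dx, computed analytically
    return x + eps * np.sign(grad_x)

# A fixed linear classifier and an input it classifies correctly.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.6, -0.4, 0.2])    # w @ x + b = 1.5 > 0, so class 1
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.7)

print(sigmoid(w @ x + b) > 0.5)       # original input: classified as 1
print(sigmoid(w @ x_adv + b) > 0.5)   # perturbed input: flipped to 0
```

Each coordinate of the input moves by at most `eps`, yet the decision flips; in an image classifier the same mechanism operates per pixel, which is why the perturbation can remain nearly imperceptible while still changing the predicted label.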
Sample Publications
S. Li, S. Zhu, S. Paul, A. Roy-Chowdhury, C. Song, S. Krishnamurthy, A. Swami, K. S. Chan, European Conference on Computer Vision (ECCV), 2020.
S. Li, A. Neupane, S. Paul, C. Song, S. Krishnamurthy, A. Roy-Chowdhury, and A. Swami, Network and Distributed System Security Symposium (NDSS), 2019.
J. H. Bappy, C. Simons, L. Nataraj, B. S. Manjunath, A. Roy-Chowdhury, IEEE Trans. on Image Processing (T-IP), 2019.
J. H. Bappy, A. Roy-Chowdhury, J. Bunk, L. Nataraj and B. S. Manjunath, International Conference on Computer Vision (ICCV), 2017.