1 code implementation • 12 Jun 2023 • Harshitha Machiraju, Michael H. Herzog, Pascal Frossard
In response, researchers have developed image corruption datasets to evaluate the performance of deep neural networks in handling such corruptions.
1 code implementation • 6 Oct 2022 • Ke Wang, Harshitha Machiraju, Oh-Hyeon Choung, Michael Herzog, Pascal Frossard
Convolutional neural networks (CNNs) have achieved superhuman performance in multiple vision tasks, especially image classification.
no code implementations • 2 Aug 2022 • Ben Lonnqvist, Harshitha Machiraju, Michael H. Herzog
In a recent article, Guo et al. [arXiv:2206.11228] report that adversarially trained neural representations in deep networks may already be as robust as corresponding primate IT neural representations.
1 code implementation • 18 May 2022 • Harshitha Machiraju, Oh-Hyeon Choung, Michael H. Herzog, Pascal Frossard
There are continuous attempts to use features of the human visual system to improve the robustness of neural networks to data perturbations.
no code implementations • 16 Mar 2021 • Harshitha Machiraju, Oh-Hyeon Choung, Pascal Frossard, Michael H. Herzog
Many studies have tried to add features of the human visual system to DCNNs to make them robust against adversarial attacks.
2 code implementations • 16 Jan 2020 • Harshitha Machiraju, Vineeth N. Balasubramanian
Small, carefully crafted perturbations called adversarial perturbations can easily fool neural networks.
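The idea of a small perturbation that flips a network's decision can be sketched with a toy example. Below is a minimal FGSM-style step on a hypothetical fixed linear classifier (weights and inputs are illustrative, not from the paper): the input is nudged by epsilon in the direction that most increases the loss, i.e. against the sign of the gradient with respect to the input.

```python
import numpy as np

# Toy linear classifier f(x) = w @ x; positive score means class "positive".
# For a linear model the input gradient is just w, so the FGSM step for a
# positive-labeled input is x_adv = x - epsilon * sign(w).
w = np.array([1.0, -2.0, 0.5])    # hypothetical fixed weights
x = np.array([0.2, -0.1, 0.3])    # input correctly classified as positive

score = w @ x                     # 0.55 > 0: correct classification
epsilon = 0.3                     # per-coordinate perturbation budget
x_adv = x - epsilon * np.sign(w)  # FGSM step against the positive label
adv_score = w @ x_adv             # -0.5 < 0: prediction flipped
```

Each coordinate moves by at most 0.3, yet the sign of the score flips — the same principle, applied through backpropagated input gradients, produces the visually imperceptible perturbations that fool deep networks.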
1 code implementation • 13 May 2019 • Mayank Singh, Abhishek Sinha, Nupur Kumari, Harshitha Machiraju, Balaji Krishnamurthy, Vineeth N. Balasubramanian
We analyze the adversarially trained robust models to study their vulnerability against adversarial attacks at the level of the latent layers.
1 code implementation • 2018 25th IEEE International Conference on Image Processing (ICIP) • Harshitha Machiraju, Sumohana S. Channappayya
An autonomous navigation system relies on a number of sensors including radar, LIDAR and a visible light camera for its operation.