1 code implementation • CVPR 2024 • Hossein Mirzaei, Mojtaba Nafez, Mohammad Jafari, Mohammad Bagher Soltani, Mohammad Azizmalayeri, Jafar Habibi, Mohammad Sabokrou, Mohammad Hossein Rohban
More precisely, for novelty detection, distribution shifts may occur in the training set or the test set.
1 code implementation • 21 May 2024 • Mohammad Azizmalayeri, Ameen Abu-Hanna, Giovanni Cinà
Detecting out-of-distribution (OOD) instances is crucial for the reliable deployment of machine learning models in real-world scenarios.
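A common baseline for flagging OOD instances (not the specific method of this paper) is the maximum softmax probability score: inputs on which the classifier is least confident are treated as OOD. A minimal sketch, with the threshold value chosen purely for illustration:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability: higher means more in-distribution-like."""
    return softmax(logits).max(axis=-1)

def detect_ood(logits, threshold=0.5):
    """Flag inputs whose top-class confidence falls below the threshold."""
    return msp_score(logits) < threshold
```

In practice the threshold is tuned on held-out in-distribution data to hit a target false-positive rate.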
no code implementations • 31 Oct 2023 • Mohammad Azizmalayeri, Reza Abbasi, Amir Hosein Haji Mohammad Rezaie, Reihaneh Zohrabi, Mahdi Amiri, Mohammad Taghi Manzuri, Mohammad Hossein Rohban
A promising solution to this problem is last-layer retraining, which involves retraining the linear classifier head on a small subset of data without spurious cues.
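Last-layer retraining as described above can be sketched as fitting a fresh softmax-regression head on features from the frozen backbone, using only the small subset believed free of spurious cues. The setup below is a hypothetical illustration, not the paper's implementation:

```python
import numpy as np

def retrain_last_layer(features, labels, n_classes, lr=0.1, epochs=200, seed=0):
    """Fit a new linear head on frozen backbone features.

    `features` would come from the frozen feature extractor applied to
    a small curated subset without spurious cues (hypothetical setup).
    """
    rng = np.random.default_rng(seed)
    n, d = features.shape
    W = rng.normal(scale=0.01, size=(d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[labels]              # one-hot targets
    for _ in range(epochs):
        logits = features @ W + b
        z = logits - logits.max(axis=1, keepdims=True)
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - Y) / n                     # cross-entropy gradient
        W -= lr * features.T @ grad            # update only the head;
        b -= lr * grad.sum(axis=0)             # the backbone stays frozen
    return W, b
```

Because only the linear head is updated, even a few hundred labeled examples can suffice to correct a spuriously-biased decision boundary.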
no code implementations • 29 Oct 2023 • Mahdi Salmani, Alireza Dehghanpour Farashah, Mohammad Azizmalayeri, Mahdi Amiri, Navid Eslami, Mohammad Taghi Manzuri, Mohammad Hossein Rohban
Despite the remarkable success achieved by deep learning algorithms in various domains, such as computer vision, they remain vulnerable to adversarial perturbations.
no code implementations • 15 Oct 2023 • Arshia Soltani Moakhar, Mohammad Azizmalayeri, Hossein Mirzaei, Mohammad Taghi Manzuri, Mohammad Hossein Rohban
Despite considerable theoretical progress in the training of neural networks viewed as a multi-agent system of neurons, particularly concerning biological plausibility and decentralized training, their applicability to real-world problems remains limited due to scalability issues.
1 code implementation • 28 Sep 2023 • Mohammad Azizmalayeri, Ameen Abu-Hanna, Giovanni Cinà
Despite their success, Machine Learning (ML) models do not generalize effectively to data not originating from the training distribution.
no code implementations • 25 Jan 2023 • Mohammad Azizmalayeri, Arman Zarei, Alireza Isavand, Mohammad Taghi Manzuri, Mohammad Hossein Rohban
For this purpose, we first demonstrate that existing model-based methods can be equivalent to applying smaller perturbations or optimization weights to hard training examples.
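The idea of assigning smaller perturbation budgets to hard examples can be illustrated by scaling a per-example epsilon inversely with the training loss. The interpolation scheme and budget values below are hypothetical, chosen only to show the shape of the mapping:

```python
import numpy as np

def per_example_epsilons(losses, eps_max=8 / 255, eps_min=1 / 255):
    """Map higher per-example loss (a harder example) to a smaller
    perturbation budget, interpolating linearly between eps_max and
    eps_min. Purely illustrative, not the paper's weighting rule."""
    losses = np.asarray(losses, dtype=float)
    span = losses.max() - losses.min()
    hardness = (losses - losses.min()) / span if span > 0 else np.zeros_like(losses)
    return eps_max - hardness * (eps_max - eps_min)
```

Easy examples (low loss) receive the full budget, while the hardest example is perturbed least.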
1 code implementation • 30 Sep 2022 • Mohammad Azizmalayeri, Arshia Soltani Moakhar, Arman Zarei, Reihaneh Zohrabi, Mohammad Taghi Manzuri, Mohammad Hossein Rohban
Therefore, unlike OOD detection in the standard setting, access to OOD samples, as well as in-distribution samples, seems necessary in the adversarial training setup.
Out-of-Distribution (OOD) Detection
1 code implementation • 14 Jul 2022 • Simin Shekarpaz, Mohammad Azizmalayeri, Mohammad Hossein Rohban
In this paper, we propose the physics-informed adversarial training (PIAT) of neural networks for solving nonlinear differential equations (NDEs).
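Physics-informed training minimizes the residual of the differential equation at collocation points; an adversarial variant can shift those points toward where the residual is worst. The toy ODE u'(t) = u(t) with u(0) = 1, the finite-difference surrogate, and the single FGSM-style step below are all illustrative assumptions, not the authors' PIAT implementation:

```python
import numpy as np

def residual(u, t, h=1e-4):
    """Residual of the toy ODE u'(t) = u(t), via central differences.
    `u` is any callable standing in for the network (hypothetical)."""
    du = (u(t + h) - u(t - h)) / (2 * h)
    return du - u(t)

def adversarial_collocation(u, t, eps=0.05, h=1e-4):
    """One sign-gradient step on the collocation points: move each point
    toward larger squared residual (a minimal PIAT-flavored sketch)."""
    g = (residual(u, t + h, h) ** 2 - residual(u, t - h, h) ** 2) / (2 * h)
    return t + eps * np.sign(g)

def piat_loss(u, t, eps=0.05):
    """Residual loss at adversarially shifted collocation points, plus
    the initial-condition penalty for u(0) = 1."""
    t_adv = adversarial_collocation(u, t, eps)
    return np.mean(residual(u, t_adv) ** 2) + (u(np.array([0.0]))[0] - 1.0) ** 2
```

The exact solution u(t) = exp(t) drives this loss to near zero, while any other candidate is penalized more heavily at the shifted points than at the original grid.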
no code implementations • 9 Jun 2022 • Mohammad Azizmalayeri, Mohammad Hossein Rohban
Despite advances in image classification methods, detecting samples that do not belong to the training classes remains a challenging problem.
1 code implementation • 29 Mar 2021 • Mohammad Azizmalayeri, Mohammad Hossein Rohban
However, it usually fails against other attacks, i.e., the model overfits to the training attack scheme.
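Overfitting to one attack scheme is why robustness is usually evaluated under several attacks. The toy linear classifier below, with single-step (FGSM-style) and iterated, projected (PGD-style) attacks, is a hypothetical illustration; on a linear model the two coincide, but on nonlinear models the iterated attack is typically stronger and exposes such overfitting:

```python
import numpy as np

def margin(x, w, b, y):
    """Signed margin y * (w.x + b): positive means correctly classified."""
    return y * (x @ w + b)

def fgsm(x, w, b, y, eps):
    """Single sign-gradient step that decreases the margin."""
    return x + eps * np.sign(-y * w)

def pgd(x, w, b, y, eps, alpha=None, steps=10):
    """Iterated sign-gradient steps, projected back onto the L-inf
    ball of radius eps around the clean input after every step."""
    alpha = alpha if alpha is not None else eps / 4
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(-y * w)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # projection step
    return x_adv
```

Reporting the margin (or accuracy) under both attacks, rather than only the one used during training, is the standard guard against the overfitting described above.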