no code implementations • 18 Feb 2024 • Vijaya Raghavan T Ramkumar, Bahram Zonooz, Elahe Arani
However, a key challenge of AT is robust overfitting, where the network's robust performance on test data deteriorates with further training, thus hindering generalization.
1 code implementation • 26 Jan 2024 • Shruthi Gowda, Bahram Zonooz, Elahe Arani
Adversarial training improves the robustness of neural networks against adversarial attacks, albeit at the expense of the trade-off between standard and robust generalization.
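Adversarial training is conventionally framed as a min-max problem: an inner step crafts a loss-increasing perturbation of the input, and the outer step trains on the perturbed input. A minimal sketch of the inner step via FGSM on a toy 1-D linear model (the model, loss, and epsilon here are illustrative assumptions, not this paper's setup):

```python
def sq_loss(w, x, y):
    # squared error of a 1-D linear model: (w*x - y)^2
    return (w * x - y) ** 2

def grad_wrt_input(w, x, y):
    # derivative of the squared error with respect to the input x
    return 2 * (w * x - y) * w

def fgsm_perturb(w, x, y, eps=0.1):
    """One FGSM step: nudge the input by eps in the sign of the loss
    gradient, i.e. the direction that increases the loss."""
    g = grad_wrt_input(w, x, y)
    sign = (g > 0) - (g < 0)
    return x + eps * sign
```

Training on `fgsm_perturb`-ed inputs instead of (or alongside) clean ones is what trades standard accuracy against robust accuracy.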
1 code implementation • 16 Dec 2023 • Hemang Chawla, Arnav Varma, Elahe Arani, Bahram Zonooz
Transformers have revolutionized deep learning-based computer vision with improved performance as well as robustness to natural corruptions and adversarial attacks.
1 code implementation • 4 Nov 2023 • Hemang Chawla, Arnav Varma, Elahe Arani, Bahram Zonooz
Spatial scene understanding, including monocular depth estimation, is an important problem in various applications, such as robotics and autonomous driving.
2 code implementations • 17 Oct 2023 • Shruthi Gowda, Bahram Zonooz, Elahe Arani
Artificial neural networks (ANNs) exhibit a narrow scope of expertise on stationary independent data.
1 code implementation • NeurIPS 2023 • Preetha Vijayan, Prashant Bhat, Elahe Arani, Bahram Zonooz
Continual learning (CL) has remained a persistent challenge for deep neural networks due to catastrophic forgetting (CF) of previously learned tasks.
no code implementations • 30 Jun 2023 • Fahad Sarfraz, Elahe Arani, Bahram Zonooz
As our understanding of the mechanisms of brain function is enhanced, the value of insights gained from neuroscience to the development of AI algorithms deserves further consideration.
no code implementations • 15 May 2023 • Ibrahim Batuhan Akkaya, Senthilkumar S. Kathiresan, Elahe Arani, Bahram Zonooz
Vision transformers (ViTs) achieve remarkable performance on large datasets, but tend to perform worse than convolutional neural networks (CNNs) when trained from scratch on smaller datasets, possibly due to a lack of local inductive bias in the architecture.
1 code implementation • 8 May 2023 • Kishaan Jeeveswaran, Prashant Bhat, Bahram Zonooz, Elahe Arani
The ability of deep neural networks to continually learn and adapt to a sequence of tasks has remained challenging due to catastrophic forgetting of previously learned tasks.
1 code implementation • 30 Apr 2023 • Naresh Kumar Gurulingan, Bahram Zonooz, Elahe Arani
In the task learning phase, each network specializes in the corresponding task.
no code implementations • 13 Apr 2023 • Deepan Chakravarthi Padmanabhan, Shruthi Gowda, Elahe Arani, Bahram Zonooz
Few-shot learning (FSL) techniques seek to learn the underlying patterns in data using fewer samples, analogous to how humans learn from limited experience.
1 code implementation • 13 Apr 2023 • Fahad Sarfraz, Elahe Arani, Bahram Zonooz
Humans excel at continually acquiring, consolidating, and retaining information from an ever-changing environment, whereas artificial neural networks (ANNs) exhibit catastrophic forgetting.
1 code implementation • 18 Mar 2023 • Vijaya Raghavan T. Ramkumar, Elahe Arani, Bahram Zonooz
Deep neural networks (DNNs) are often trained on the premise that the complete training data set is provided ahead of time.
1 code implementation • 14 Feb 2023 • Prashant Bhat, Bahram Zonooz, Elahe Arani
Thus, inspired by the Global Workspace Theory of conscious information access in the brain, we propose TAMiL, a continual learning method that entails task-attention modules to capture task-specific information from the common representation space.
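The core idea of a task-attention module can be sketched as a per-task gate applied to a shared representation; the gate values, shapes, and dictionary layout below are illustrative assumptions, not TAMiL's actual learned modules:

```python
def task_attention(shared_repr, task_gate):
    """Gate a shared representation with a per-task attention vector in
    [0, 1], so each task reads out only the features relevant to it
    (a simplified sketch; real task-attention modules are learned)."""
    assert len(shared_repr) == len(task_gate)
    return [r * g for r, g in zip(shared_repr, task_gate)]

# one gate per task, stored alongside the shared backbone
task_gates = {
    "task_a": [1.0, 1.0, 0.0, 0.0],  # task A attends to the first two features
    "task_b": [0.0, 0.0, 1.0, 1.0],  # task B attends to the last two
}
```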
1 code implementation • 14 Feb 2023 • Fahad Sarfraz, Elahe Arani, Bahram Zonooz
To this end, we propose \textit{ESMER} which employs a principled mechanism to modulate error sensitivity in a dual-memory rehearsal-based system.
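One way to picture error-sensitivity modulation is a per-sample weight that shrinks when a sample's loss far exceeds a running average, so sudden large errors do not dominate the update. The margin and momentum values below are illustrative assumptions, not ESMER's actual mechanism:

```python
class ErrorSensitivity:
    """Down-weight samples whose loss far exceeds a running average
    (a hedged sketch of error-sensitivity modulation)."""

    def __init__(self, momentum=0.99, margin=1.5):
        self.running_avg = None
        self.momentum = momentum
        self.margin = margin

    def weight(self, sample_loss):
        if self.running_avg is None:
            self.running_avg = sample_loss
        # full weight for ordinary losses, shrunken weight for outliers
        if sample_loss <= self.margin * self.running_avg:
            w = 1.0
        else:
            w = self.running_avg / sample_loss
        self.running_avg = (self.momentum * self.running_avg
                            + (1 - self.momentum) * sample_loss)
        return w
```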
1 code implementation • 2 Jan 2023 • Arnav Varma, Elahe Arani, Bahram Zonooz
Real-world applications often require learning continuously from a stream of data under ever-changing conditions.
1 code implementation • 28 Dec 2022 • Fahad Sarfraz, Elahe Arani, Bahram Zonooz
Efficient continual learning in humans is enabled by a rich set of neurophysiological mechanisms and interactions between multiple memory systems.
no code implementations • 7 Oct 2022 • Haris Iqbal, Hemang Chawla, Arnav Varma, Terence Brouns, Ahmed Badar, Elahe Arani, Bahram Zonooz
Road infrastructure maintenance inspection is typically a labor-intensive and critical task to ensure the safety of all road users.
1 code implementation • 5 Oct 2022 • Hemang Chawla, Kishaan Jeeveswaran, Elahe Arani, Bahram Zonooz
Self-supervised monocular depth estimation is a salient task for 3D scene understanding.
no code implementations • 23 Aug 2022 • Elahe Arani, Shruthi Gowda, Ratnajit Mukherjee, Omar Magdy, Senthilkumar Kathiresan, Bahram Zonooz
Our extensive empirical study can act as a guideline for the industrial community to make an informed choice on the existing networks.
1 code implementation • 19 Aug 2022 • Naresh Kumar Gurulingan, Elahe Arani, Bahram Zonooz
However, increased sharing exposes more parameters to task interference, which likely hinders both generalization and robustness.
1 code implementation • 11 Aug 2022 • Vijaya Raghavan T. Ramkumar, Elahe Arani, Bahram Zonooz
SCD is challenging due to noisy changes in illumination, seasonal variations, and perspective differences across a pair of views.
1 code implementation • 14 Jul 2022 • Hemang Chawla, Arnav Varma, Elahe Arani, Bahram Zonooz
While studies evaluating the impact of adversarial attacks on monocular depth estimation exist, a systematic demonstration and analysis of adversarial perturbations against pose estimation are lacking.
1 code implementation • 13 Jul 2022 • Prashant Bhat, Bahram Zonooz, Elahe Arani
Furthermore, the domain shift between pre-training data distribution and the task distribution reduces the generalizability of the learned representations.
1 code implementation • 11 Jul 2022 • Prashant Bhat, Bahram Zonooz, Elahe Arani
Therefore, we examine the role of consistency regularization in ER framework under various continual learning scenarios.
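A simple instance of consistency regularization in experience replay (ER) penalizes the model when its current predictions on replayed samples drift away from the logits stored when those samples entered the buffer. The MSE form and the weight `alpha` below are illustrative choices; other divergences can play the same role:

```python
def mse(a, b):
    # mean squared error between two equal-length vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def er_loss_with_consistency(task_loss, current_logits, buffered_logits,
                             alpha=0.5):
    """Experience-replay loss plus a consistency term that anchors the
    current predictions on buffered samples to their stored logits."""
    return task_loss + alpha * mse(current_logits, buffered_logits)
```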
1 code implementation • 12 Jun 2022 • Shruthi Gowda, Bahram Zonooz, Elahe Arani
Compared to deep neural networks, humans rely less on spurious correlations and trivial cues, such as texture, which leads to better generalization and robustness.
1 code implementation • 8 Jun 2022 • Fahad Sarfraz, Elahe Arani, Bahram Zonooz
Continual learning (CL) in the brain is facilitated by a complex set of mechanisms.
1 code implementation • 7 Feb 2022 • Arnav Varma, Hemang Chawla, Bahram Zonooz, Elahe Arani
While recent works have compared transformers against their CNN counterparts for tasks such as image classification, no study exists that investigates the impact of using transformers for self-supervised monocular depth estimation.
1 code implementation • ICLR 2022 • Elahe Arani, Fahad Sarfraz, Bahram Zonooz
Humans excel at continually learning from an ever-changing environment, whereas it remains a challenge for deep neural networks, which exhibit catastrophic forgetting.
no code implementations • 21 Jan 2022 • Kishaan Jeeveswaran, Senthilkumar Kathiresan, Arnav Varma, Omar Magdy, Bahram Zonooz, Elahe Arani
Convolutional Neural Networks (CNNs), architectures consisting of convolutional layers, have been the standard choice in vision tasks.
1 code implementation • 9 Nov 2021 • Shruthi Gowda, Bahram Zonooz, Elahe Arani
To overcome these challenges, we explore the idea of leveraging a different data modality that is disparate yet complementary to the visual data.
1 code implementation • 10 Aug 2021 • Naresh Kumar Gurulingan, Elahe Arani, Bahram Zonooz
Scene understanding is crucial for autonomous systems which intend to operate in the real world.
no code implementations • 30 Jun 2021 • Hamid Tabani, Ajay Balasubramaniam, Shabbir Marzban, Elahe Arani, Bahram Zonooz
Transformers provide promising accuracy and have become popular and widely used in various domains, such as natural language processing and computer vision.
no code implementations • 6 Jun 2021 • Ahmed Badar, Arnav Varma, Adrian Staniec, Mahmoud Gamal, Omar Magdy, Haris Iqbal, Elahe Arani, Bahram Zonooz
We highlight that there is a need to rethink the design and evaluation of CNNs to alleviate the issue of research bias and carbon emissions.
no code implementations • 4 Jun 2021 • Ratnajit Mukherjee, Haris Iqbal, Shabbir Marzban, Ahmed Badar, Terence Brouns, Shruthi Gowda, Elahe Arani, Bahram Zonooz
Road infrastructure maintenance inspection is typically a labour-intensive and critical task to ensure the safety of all the road users.
no code implementations • 6 May 2021 • Hamid Tabani, Ajay Balasubramaniam, Elahe Arani, Bahram Zonooz
From computer vision and speech recognition to forecasting trajectories in autonomous vehicles, deep learning approaches are at the forefront of so many domains.
1 code implementation • 20 Apr 2021 • Prashant Bhat, Elahe Arani, Bahram Zonooz
To address the issue of self-supervised pre-training of smaller models, we propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using single-stage online knowledge distillation to improve the representation quality of the smaller models.
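The single-stage online distillation idea can be sketched as a student loss that mixes its own task loss with a temperature-softened KL divergence toward a peer's current outputs, computed while both models are still training. The temperature `T` and mixing weight `beta` below are illustrative assumptions, not DoGo's exact formulation:

```python
import math

def softmax(logits, T=1.0):
    # temperature-scaled softmax
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    # KL divergence between two discrete distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def online_distill_loss(task_loss, student_logits, peer_logits,
                        T=2.0, beta=0.5):
    """Student loss for single-stage online distillation: task loss blended
    with a softened KL toward the peer's current (not pretrained) outputs."""
    p = softmax(peer_logits, T)
    q = softmax(student_logits, T)
    return (1 - beta) * task_loss + beta * (T * T) * kl(p, q)
```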
1 code implementation • 20 Apr 2021 • Daniel Koguciuk, Elahe Arani, Bahram Zonooz
We use an additional photometric distortion step in the synthetic COCO dataset generation to better represent the illumination variation of the real-world scenarios.
Ranked #1 on Homography Estimation on PDS-COCO
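A photometric distortion step of this kind amounts to random brightness/contrast jitter applied during synthetic data generation; the jitter ranges below are illustrative assumptions, not the exact values used for PDS-COCO:

```python
import random

def photometric_distort(pixels, max_brightness=0.2, max_contrast=0.2, seed=0):
    """Random brightness/contrast jitter on intensities in [0, 1],
    clamped back into range, to mimic real-world illumination variation."""
    rng = random.Random(seed)
    b = rng.uniform(-max_brightness, max_brightness)   # additive brightness shift
    c = 1.0 + rng.uniform(-max_contrast, max_contrast)  # multiplicative contrast
    return [min(1.0, max(0.0, c * p + b)) for p in pixels]
```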
1 code implementation • 3 Mar 2021 • Hemang Chawla, Arnav Varma, Elahe Arani, Bahram Zonooz
Dense depth estimation is essential to scene-understanding for autonomous driving.
no code implementations • 15 Dec 2020 • Hemang Chawla, Matti Jukola, Shabbir Marzban, Elahe Arani, Bahram Zonooz
Here, we propose a system for practical monocular onboard camera auto-calibration from crowdsourced videos.
2 code implementations • 17 Sep 2020 • Fahad Sarfraz, Elahe Arani, Bahram Zonooz
Thus, we propose Noisy Concurrent Training (NCT) which leverages collaborative learning to use the consensus between two models as an additional source of supervision.
Ranked #34 on Image Classification on mini WebVision 1.0
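The collaborative-consensus idea can be sketched as follows: the two models' predictive distributions are averaged into a soft target, which supplements the possibly noisy labels. The mixing weight `gamma` and function names are illustrative assumptions, not NCT's exact objective:

```python
def consensus_target(probs_a, probs_b):
    """Element-wise mean of two models' predictive distributions,
    used as an extra soft supervision signal."""
    return [(a + b) / 2.0 for a, b in zip(probs_a, probs_b)]

def nct_loss(ce_label, ce_consensus, gamma=0.5):
    # blend the (possibly noisy) label loss with the consensus loss
    return (1 - gamma) * ce_label + gamma * ce_consensus
```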
1 code implementation • 16 Aug 2020 • Elahe Arani, Fahad Sarfraz, Bahram Zonooz
Adversarial training has been proven to be an effective technique for improving the adversarial robustness of models.
1 code implementation • 25 Jul 2020 • Hemang Chawla, Matti Jukola, Terence Brouns, Elahe Arani, Bahram Zonooz
The ability to efficiently utilize crowdsourced visual data carries immense potential for the domains of large scale dynamic mapping and autonomous driving.
no code implementations • 9 Jul 2020 • Hemang Chawla, Matti Jukola, Elahe Arani, Bahram Zonooz
Crowdsourced mapping of these landmarks such as traffic sign positions provides an appealing alternative.
no code implementations • 3 Jul 2020 • Fahad Sarfraz, Elahe Arani, Bahram Zonooz
Knowledge distillation (KD) is commonly deemed an effective model compression technique in which a compact model (student) is trained under the supervision of a larger pretrained model or an ensemble of models (teacher).
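The distillation term of Hinton-style KD is the cross-entropy between the teacher's temperature-softened distribution and the student's, scaled by T²; a minimal sketch (in practice it is combined with a standard cross-entropy on the hard labels):

```python
import math

def softmax(logits, T=1.0):
    # temperature-scaled softmax; higher T produces softer distributions
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between softened teacher and student distributions,
    scaled by T^2 to keep gradient magnitudes comparable across temperatures."""
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)  # student predictions
    return (T * T) * -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```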
no code implementations • 3 Dec 2019 • Elahe Arani, Shabbir Marzban, Andrei Pata, Bahram Zonooz
We propose a real-time general purpose semantic segmentation architecture, RGPNet, which achieves significant performance gain in complex environments.
no code implementations • 11 Oct 2019 • Elahe Arani, Fahad Sarfraz, Bahram Zonooz
In doing so, we propose three different methods that target the common challenges in deep neural networks: minimizing the performance gap between a compact model and large model (Fickle Teacher), training high performance compact adversarially robust models (Soft Randomization), and training models efficiently under label noise (Messy Collaboration).