no code implementations • 11 Mar 2024 • Ali Abbasi, Ashkan Shahbazi, Hamed Pirsiavash, Soheil Kolouri
However, traditional dataset distillation approaches often struggle to scale effectively with high-resolution images and more complex architectures due to the limitations in bi-level optimization.
1 code implementation • 5 Dec 2023 • Soroush Abbasi Koohpayegani, Anuj Singh, K L Navaneet, Hadi Jamali-Rad, Hamed Pirsiavash
To achieve this, inspired by recent diffusion-based image editing techniques, we limit the number of diffusion iterations so that the generated image retains low-level and background features of the source image while depicting the target category, yielding a hard negative sample for the source category.
1 code implementation • 30 Nov 2023 • KL Navaneet, Kossar Pourahmadi Meibodi, Soroush Abbasi Koohpayegani, Hamed Pirsiavash
3D Gaussian Splatting is a new method for modeling and rendering 3D radiance fields that achieves much faster training and rendering than state-of-the-art NeRF methods.
no code implementations • 20 Nov 2023 • Ali Abbasi, Parsa Nooralinejad, Hamed Pirsiavash, Soheil Kolouri
Continual learning has gained substantial attention within the deep learning community, offering promising solutions to the challenging problem of sequential learning.
1 code implementation • 4 Oct 2023 • Soroush Abbasi Koohpayegani, KL Navaneet, Parsa Nooralinejad, Soheil Kolouri, Hamed Pirsiavash
For instance, in larger models, even a rank one decomposition might exceed the number of parameters truly needed for adaptation.
1 code implementation • 4 Oct 2023 • KL Navaneet, Soroush Abbasi Koohpayegani, Essam Sleiman, Hamed Pirsiavash
We show that such models can be vulnerable to a universal adversarial patch attack, where the attacker optimizes for a patch that, when pasted on any image, increases the compute and power consumption of the model.
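As an illustration of the threat model above, here is a minimal NumPy sketch (not the paper's attack): a toy input-dynamic model whose compute grows with per-token energy, and a universal patch that inflates that compute on any image it is pasted onto. The model, thresholds, and sizes are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def paste_patch(images, patch, y=0, x=0):
    """Paste the same (universal) patch onto every image in the batch."""
    out = images.copy()
    ph, pw = patch.shape
    out[:, y:y + ph, x:x + pw] = patch
    return out

# Toy "dynamic" model: it processes only the tokens (4x4 tiles) whose
# activation energy exceeds a threshold, so its compute depends on the input.
def tokens_processed(images, thresh=0.5):
    b, h, w = images.shape
    tiles = images.reshape(b, h // 4, 4, w // 4, 4)
    energy = np.abs(tiles).mean(axis=(2, 4))   # per-token energy
    return int((energy > thresh).sum())        # proxy for compute cost

images = rng.uniform(0.0, 0.3, size=(8, 16, 16))  # low-energy inputs
baseline = tokens_processed(images)

# A maximal-energy patch drives every covered token above the threshold,
# inflating the model's compute on *any* image it is pasted on.
patch = np.ones((8, 8))
attacked = tokens_processed(paste_patch(images, patch))
```

Here `baseline` is 0 while `attacked` counts four forced tokens per image, which is the essence of a compute-inflating universal patch.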
no code implementations • 24 Apr 2023 • Randall Balestriero, Mark Ibrahim, Vlad Sobal, Ari Morcos, Shashank Shekhar, Tom Goldstein, Florian Bordes, Adrien Bardes, Gregoire Mialon, Yuandong Tian, Avi Schwarzschild, Andrew Gordon Wilson, Jonas Geiping, Quentin Garrido, Pierre Fernandez, Amir Bar, Hamed Pirsiavash, Yann Lecun, Micah Goldblum
Self-supervised learning, dubbed the dark matter of intelligence, is a promising path to advance machine learning.
1 code implementation • CVPR 2023 • Ajinkya Tejankar, Maziar Sanjabi, Qifan Wang, Sinong Wang, Hamed Firooz, Hamed Pirsiavash, Liang Tan
It was shown that an adversary can poison a small part of the unlabeled data so that when a victim trains an SSL model on it, the final model will have a backdoor that the adversary can exploit.
no code implementations • 26 Oct 2022 • Zihao Wu, Huy Tran, Hamed Pirsiavash, Soheil Kolouri
Moreover, it is conceivable that, when learning from multiple tasks, a small subset of those tasks could act as adversarial tasks and reduce overall performance in a multi-task setting.
1 code implementation • 17 Jun 2022 • Soroush Abbasi Koohpayegani, Hamed Pirsiavash
Recently, vision transformers have become very popular.
1 code implementation • 16 Jun 2022 • Akshayvarun Subramanya, Aniruddha Saha, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash
Vision Transformers (ViT) have recently demonstrated exemplary performance on a variety of vision tasks and are being used as an alternative to CNNs.
2 code implementations • ICCV 2023 • Parsa Nooralinejad, Ali Abbasi, Soroush Abbasi Koohpayegani, Kossar Pourahmadi Meibodi, Rana Muhammad Shahroz Khan, Soheil Kolouri, Hamed Pirsiavash
We demonstrate that a deep model can be reparametrized as a linear combination of several randomly initialized and frozen deep models in the weight space.
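A minimal NumPy sketch of this reparametrization idea, under toy assumptions (a linear "model" and a target weight vector that happens to lie in the basis span, as a stand-in for a real task loss): only the combination coefficients are trained, while the randomly initialized basis stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 100, 10                       # full weight dimension, number of basis models
basis = rng.standard_normal((k, d))  # k randomly initialized, *frozen* models
alpha = np.zeros(k)                  # the only trainable parameters

def weights(alpha):
    # The deployed model's weights are a linear combination of the frozen basis.
    return alpha @ basis

# Toy task: recover a target weight vector; for illustration we pick a target
# that lies in the basis span so the combination can match it exactly.
target = rng.standard_normal(k) @ basis
for _ in range(300):
    grad = basis @ (weights(alpha) - target)   # grad of 0.5 * ||w - target||^2
    alpha -= 0.005 * grad

err = np.linalg.norm(weights(alpha) - target) / np.linalg.norm(target)
```

Only the `k` coefficients in `alpha` need to be stored or communicated, rather than the `d` full weights.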
no code implementations • 11 Apr 2022 • Akshayvarun Subramanya, Hamed Pirsiavash
Few-shot image classification, where the goal is to generalize to tasks with limited labeled data, has seen great progress over the years.
no code implementations • 12 Mar 2022 • Ali Abbasi, Parsa Nooralinejad, Vladimir Braverman, Hamed Pirsiavash, Soheil Kolouri
Overcoming catastrophic forgetting in deep neural networks has become an active field of research in recent years.
no code implementations • 18 Feb 2022 • Saeed Damadi, Erfan Nouri, Hamed Pirsiavash
ASNI-II learns a sparse network together with an initialization that is quantized and compressed, and from which the sparse network can be trained.
1 code implementation • 13 Jan 2022 • K L Navaneet, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash
Feature regression is a simple way to distill large neural network models to smaller ones.
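A minimal sketch of feature regression as distillation, with both teacher and student reduced to linear maps purely for brevity (in practice the student is a smaller network and the features come from deep backbones):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "teacher" feature extractor (stand-in for a large pretrained model).
W_teacher = rng.standard_normal((32, 8))
teacher = lambda x: x @ W_teacher

# "Student" trained to regress the teacher's features with plain MSE.
W_student = np.zeros((32, 8))
x_train = rng.standard_normal((256, 32))
for _ in range(300):
    diff = x_train @ W_student - teacher(x_train)
    W_student -= 0.5 * x_train.T @ diff / len(x_train)   # MSE gradient step

# After training, the student reproduces the teacher's features on held-out data.
x_test = rng.standard_normal((64, 32))
mse = float(np.mean((x_test @ W_student - teacher(x_test)) ** 2))
```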
no code implementations • 27 Dec 2021 • Ajinkya Tejankar, Maziar Sanjabi, Bichen Wu, Saining Xie, Madian Khabsa, Hamed Pirsiavash, Hamed Firooz
In this paper, we focus on teasing out what parts of the language supervision are essential for training zero-shot image classification models.
1 code implementation • 8 Dec 2021 • Rex Liu, Huanle Zhang, Hamed Pirsiavash, Xin Liu
We propose MASTAF, a Model-Agnostic Spatio-Temporal Attention Fusion network for few-shot video classification.
1 code implementation • 8 Dec 2021 • KL Navaneet, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Kossar Pourahmadi, Akshayvarun Subramanya, Hamed Pirsiavash
On the other hand, far-away NNs may not be semantically related to the query.
1 code implementation • 30 Nov 2021 • Mohsen Fayyaz, Soroush Abbasi Koohpayegani, Farnoush Rezaei Jafari, Sunando Sengupta, Hamid Reza Vaezi Joze, Eric Sommerlade, Hamed Pirsiavash, Juergen Gall
Since ATS is a parameter-free module, it can be added to off-the-shelf pre-trained vision transformers as a plug-and-play module, reducing their GFLOPs without any additional training.
Ranked #13 on Efficient ViTs on ImageNet-1K (with DeiT-S)
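A simplified sketch of attention-based token pruning; ATS itself samples tokens via inverse transform sampling over the attention distribution, so the top-k rule below is only a stand-in that shows the parameter-free interface (all shapes are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Parameter-free token pruning: score every patch token by the CLS token's
# attention to it, then keep only the highest-scoring tokens.
def prune_tokens(tokens, q_cls, keep):
    attn = softmax(q_cls @ tokens.T / np.sqrt(tokens.shape[1]))
    top = np.argsort(attn)[::-1][:keep]
    return tokens[np.sort(top)]            # keep top-`keep`, preserve order

tokens = rng.standard_normal((196, 64))    # 14x14 patch tokens of a ViT block
q_cls = rng.standard_normal(64)            # query vector of the CLS token
pruned = prune_tokens(tokens, q_cls, keep=49)   # 4x fewer tokens downstream
```

Because no weights are introduced, such a module can be slotted between layers of a pretrained transformer without retraining.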
1 code implementation • 22 Oct 2021 • Kossar Pourahmadi, Parsa Nooralinejad, Hamed Pirsiavash
However, most such methods assume that a large subset of the data can be annotated.
no code implementations • 19 Oct 2021 • Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Hamed Pirsiavash
Inspired by recent success of self-supervised learning (SSL), we develop a non-contrastive representation learning method that can exploit additional knowledge.
1 code implementation • CVPR 2022 • Vipin Pillai, Soroush Abbasi Koohpayegani, Ashley Ouligian, Dennis Fong, Hamed Pirsiavash
We show that our method, Contrastive Grad-CAM Consistency (CGC), results in Grad-CAM interpretation heatmaps that are more consistent with human annotations while still achieving comparable classification accuracy.
1 code implementation • CVPR 2022 • Aniruddha Saha, Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Hamed Pirsiavash
We show that such methods are vulnerable to backdoor attacks, where an attacker poisons a small part of the unlabeled data by adding a trigger (an image patch chosen by the attacker) to the images.
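A minimal sketch of the poisoning step described above: pasting an attacker-chosen trigger patch onto a small fraction of an unlabeled dataset. Sizes, location, and poison rate are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_trigger(image, trigger, y, x):
    """Paste the attacker-chosen trigger patch at a fixed location."""
    poisoned = image.copy()
    th, tw = trigger.shape
    poisoned[y:y + th, x:x + tw] = trigger
    return poisoned

# Poison a small fraction of the unlabeled training set; a victim training
# an SSL model on it can end up associating the trigger with those images.
data = rng.uniform(size=(1000, 32, 32))
trigger = np.ones((4, 4))                            # attacker's trigger pattern
idx = rng.choice(len(data), size=5, replace=False)   # 0.5% poison rate
for i in idx:
    data[i] = add_trigger(data[i], trigger, y=28, x=28)
```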
1 code implementation • ICCV 2021 • Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash
Most recent self-supervised learning (SSL) algorithms learn features by contrasting between instances of images or by clustering the images and then contrasting between the image clusters.
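For reference, a minimal NumPy version of the instance-contrastive (InfoNCE-style) objective such methods build on, assuming L2-normalized embeddings and an invented temperature:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Instance-contrastive loss: pull the positive pair together and push
    the negatives away. Embeddings are assumed L2-normalized."""
    logits = np.concatenate([[anchor @ positive], negatives @ anchor]) / tau
    logits = logits - logits.max()          # numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))

# A well-separated positive gives a near-zero loss ...
e1, e2 = np.eye(8)[0], np.eye(8)[1]
negs = np.eye(8)[2:]
good = info_nce(e1, e1, negs)
# ... while a positive no more similar than the negatives is penalized.
bad = info_nce(e1, e2, np.concatenate([negs, e2[None]]))
```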
1 code implementation • ICCV 2021 • Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Vipin Pillai, Paolo Favaro, Hamed Pirsiavash
Hence, we introduce a self-supervised learning algorithm that uses a soft similarity for the negative images rather than a binary distinction between positive and negative pairs.
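A minimal sketch of the soft-similarity idea: replace the binary positive/negative split with a KL regression between teacher and student similarity distributions over a set of anchor embeddings. Temperature, dimensions, and the anchor set are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def sim_distribution(emb, anchors, tau=0.1):
    # Softmax over cosine similarities to the anchor set.
    z = anchors @ emb / tau
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def soft_negative_loss(student_emb, teacher_emb, anchors, tau=0.1):
    """Regress the student's similarity distribution onto the teacher's
    (a KL divergence), instead of treating every non-positive image as an
    equally hard negative."""
    p = sim_distribution(teacher_emb, anchors, tau)
    q = sim_distribution(student_emb, anchors, tau)
    return float(np.sum(p * (np.log(p) - np.log(q))))

anchors = normalize(rng.standard_normal((50, 16)))
teacher_emb = normalize(rng.standard_normal(16))
student_emb = normalize(rng.standard_normal(16))

perfect = soft_negative_loss(teacher_emb, teacher_emb, anchors)  # matches teacher
off = soft_negative_loss(student_emb, teacher_emb, anchors)      # does not
```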
1 code implementation • NeurIPS 2020 • Simon Ging, Mohammadreza Zolfaghari, Hamed Pirsiavash, Thomas Brox
Many real-world video-text tasks involve different levels of granularity, such as frames and words, clips and sentences, or videos and paragraphs, each with distinct semantics.
Ranked #4 on Video Captioning on ActivityNet Captions
1 code implementation • NeurIPS 2020 • Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash
To the best of our knowledge, this is the first time a self-supervised AlexNet has outperformed a supervised one on ImageNet classification.
no code implementations • 26 Dec 2019 • Ajinkya Tejankar, Hamed Pirsiavash
We show that removing this bias from the unlabeled data results in a large drop in performance of state-of-the-art methods, while our simple method is relatively robust.
3 code implementations • 30 Sep 2019 • Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash
Backdoor attacks are a form of adversarial attack on deep networks in which the attacker provides poisoned data for the victim to train the model on, and then activates the attack by presenting a specific small trigger pattern at test time.
1 code implementation • 30 Sep 2019 • Aniruddha Saha, Akshayvarun Subramanya, Koninika Patil, Hamed Pirsiavash
However, one can show that an adversary can design adversarial patches which do not overlap with any objects of interest in the scene and exploit contextual reasoning to fool standard detectors.
1 code implementation • CVPR 2020 • Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann
In this paper, we introduce a benchmark technique for detecting backdoor attacks (aka Trojan attacks) on deep convolutional neural networks (CNNs).
no code implementations • ICCV 2019 • Akshayvarun Subramanya, Vipin Pillai, Hamed Pirsiavash
Deep neural networks have been shown to be fooled rather easily using adversarial attack algorithms.
no code implementations • CVPR 2018 • Mehdi Noroozi, Ananth Vinjimoor, Paolo Favaro, Hamed Pirsiavash
We use this framework to design a novel self-supervised task, which achieves state-of-the-art performance on the common benchmarks in PASCAL VOC 2007, ILSVRC12 and Places by a significant margin.
2 code implementations • ICCV 2017 • Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro
In this paper, we use two image transformations in the context of counting: scaling and tiling.
no code implementations • CVPR 2017 • Ali Diba, Vivek Sharma, Ali Pazandeh, Hamed Pirsiavash, Luc van Gool
The final stage of both architectures is a part of a convolutional neural network that performs multiple instance learning on proposals extracted in the previous stage(s).
Ranked #2 on Weakly Supervised Object Detection on ImageNet
no code implementations • 27 Oct 2016 • Yusuf Aytar, Lluis Castrejon, Carl Vondrick, Hamed Pirsiavash, Antonio Torralba
Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval.
no code implementations • NeurIPS 2016 • Carl Vondrick, Hamed Pirsiavash, Antonio Torralba
We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g., action classification) and video generation tasks (e.g., future prediction).
no code implementations • CVPR 2016 • Ali Diba, Ali Mohammad Pazandeh, Hamed Pirsiavash, Luc van Gool
Moreover, we iterate between feature learning and patch clustering to purify the set of dedicated patches that we use.
no code implementations • CVPR 2016 • Lluis Castrejon, Yusuf Aytar, Carl Vondrick, Hamed Pirsiavash, Antonio Torralba
Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval.
no code implementations • 25 Apr 2016 • Arsalan Mousavian, Hamed Pirsiavash, Jana Kosecka
The proposed model is trained and evaluated on the NYUDepth V2 dataset, outperforming state-of-the-art methods on semantic segmentation and achieving comparable results on depth estimation.
no code implementations • CVPR 2016 • Carl Vondrick, Hamed Pirsiavash, Antonio Torralba
The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future.
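A toy NumPy illustration of regressing future *representations* rather than future pixels; the feature map, the dynamics, and all dimensions are invented stand-ins, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed feature map standing in for a pretrained deep representation.
W_phi = rng.standard_normal((16, 8))
phi = lambda x: np.tanh(x @ W_phi)

# Toy world: the future frame is a deterministic drift of the current frame.
drift = 0.9 * np.eye(16)
x_now = rng.standard_normal((512, 16))
x_future = x_now @ drift

# Regress the representation of the future frame from the current frame;
# phi discards low-level detail the predictor need not model.
W_pred, *_ = np.linalg.lstsq(x_now, phi(x_future), rcond=None)
err = float(np.mean((x_now @ W_pred - phi(x_future)) ** 2))
base = float(np.mean(phi(x_future) ** 2))   # error of predicting all zeros
```

The fitted predictor beats the trivial zero predictor (`err < base`), which is all this sketch is meant to show.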
1 code implementation • 19 Feb 2015 • Carl Vondrick, Aditya Khosla, Hamed Pirsiavash, Tomasz Malisiewicz, Antonio Torralba
We introduce algorithms to visualize feature spaces used by object detectors.
no code implementations • NeurIPS 2015 • Carl Vondrick, Hamed Pirsiavash, Aude Oliva, Antonio Torralba
Although the human visual system can recognize many concepts under challenging conditions, it still has some biases.
no code implementations • CVPR 2016 • Carl Vondrick, Deniz Oktay, Hamed Pirsiavash, Antonio Torralba
In this paper, we introduce the problem of predicting why a person has performed an action in images.
no code implementations • CVPR 2014 • Hamed Pirsiavash, Deva Ramanan
Real-world videos of human activities exhibit temporal structure at various scales; long videos are typically composed of multiple action instances, where each instance is itself composed of sub-actions with variable durations and orderings.
no code implementations • 25 Nov 2013 • Agata Lapedriza, Hamed Pirsiavash, Zoya Bylinskii, Antonio Torralba
When learning a new concept, not all training examples may prove equally useful for training: some may have higher or lower training value than others.
no code implementations • NeurIPS 2009 • Hamed Pirsiavash, Deva Ramanan, Charless C. Fowlkes
Bilinear classifiers are a discriminative variant of bilinear models, which capture the dependence of data on multiple factors.
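A minimal NumPy sketch of a bilinear classifier with a rank constraint, scoring a matrix-shaped input through two mode-specific factors; the dimensions and rank are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

d1, d2, r = 20, 30, 3               # the two data modes and the rank
U = rng.standard_normal((d1, r))    # factor acting on mode 1 (e.g. space)
V = rng.standard_normal((d2, r))    # factor acting on mode 2 (e.g. time)

def bilinear_score(X):
    # tr(W^T X) with W = U V^T, computed through the factors directly.
    return float(np.trace(U.T @ X @ V))

X = rng.standard_normal((d1, d2))
W = U @ V.T
same = np.isclose(bilinear_score(X), float(np.sum(W * X)))

# The factored form trains r*(d1+d2) = 150 parameters instead of
# d1*d2 = 600 for an unconstrained linear classifier on vec(X).
```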