no code implementations • 27 Jun 2024 • Chayne Thrash, Ali Abbasi, Parsa Nooralinejad, Soroush Abbasi Koohpayegani, Reed Andreas, Hamed Pirsiavash, Soheil Kolouri
Through extensive experiments in computer vision and natural language processing tasks, we demonstrate that our method, MCNC, significantly outperforms state-of-the-art baselines in terms of compression, accuracy, and/or model reconstruction time.
1 code implementation • CVPR 2024 • K L Navaneet, Soroush Abbasi Koohpayegani, Essam Sleiman, Hamed Pirsiavash
We show that such models can be vulnerable to a universal adversarial patch attack, where the attacker optimizes for a patch that, when pasted on any image, can increase the compute and power consumption of the model.
1 code implementation • 5 Dec 2023 • Soroush Abbasi Koohpayegani, Anuj Singh, K L Navaneet, Hamed Pirsiavash, Hadi Jamali-Rad
To achieve this, we adjust the noise level (equivalently, number of diffusion iterations) to ensure the generated image retains low-level and background features from the source image while representing the target category, resulting in a hard negative sample for the source category.
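The noise-level knob described above can be illustrated with the standard forward-diffusion interpolation. This is a toy sketch, not the paper's pipeline: the schedule and the `forward_diffuse` helper are illustrative assumptions, and a real hard-negative generator would denoise toward the target class afterward.

```python
import math
import random

random.seed(0)

def forward_diffuse(x0, t, T=1000):
    # Illustrative linear alpha-bar schedule (an assumption, not the paper's):
    # small t keeps most of the source signal, large t replaces it with noise.
    alpha_bar = 1.0 - t / T
    return [math.sqrt(alpha_bar) * v + math.sqrt(1.0 - alpha_bar) * random.gauss(0, 1)
            for v in x0]

x0 = [1.0] * 4                        # toy "image" as a flat vector
weak = forward_diffuse(x0, t=100)     # retains most low-level source structure
strong = forward_diffuse(x0, t=900)   # mostly noise; little source structure survives
```

Choosing an intermediate `t` is what lets the denoised result keep the source image's low-level and background features while the class content is steered toward the target category.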
1 code implementation • 30 Nov 2023 • KL Navaneet, Kossar Pourahmadi Meibodi, Soroush Abbasi Koohpayegani, Hamed Pirsiavash
3D Gaussian Splatting (3DGS) is a new method for modeling and rendering 3D radiance fields that achieves much faster learning and rendering than SOTA NeRF methods.
Ranked #6 on Novel View Synthesis on Mip-NeRF 360
1 code implementation • 4 Oct 2023 • Soroush Abbasi Koohpayegani, KL Navaneet, Parsa Nooralinejad, Soheil Kolouri, Hamed Pirsiavash
These methods can reduce the number of parameters needed to fine-tune an LLM by several orders of magnitude.
1 code implementation • 4 Oct 2023 • KL Navaneet, Soroush Abbasi Koohpayegani, Essam Sleiman, Hamed Pirsiavash
We show that such models can be vulnerable to a universal adversarial patch attack, where the attacker optimizes for a patch that, when pasted on any image, can increase the compute and power consumption of the model.
1 code implementation • 17 Jun 2022 • Soroush Abbasi Koohpayegani, Hamed Pirsiavash
Recently, vision transformers have become very popular.
1 code implementation • 16 Jun 2022 • Akshayvarun Subramanya, Aniruddha Saha, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash
Vision Transformers (ViT) have recently demonstrated exemplary performance on a variety of vision tasks and are being used as an alternative to CNNs.
2 code implementations • ICCV 2023 • Parsa Nooralinejad, Ali Abbasi, Soroush Abbasi Koohpayegani, Kossar Pourahmadi Meibodi, Rana Muhammad Shahroz Khan, Soheil Kolouri, Hamed Pirsiavash
We demonstrate that a deep model can be reparametrized as a linear combination of several randomly initialized and frozen deep models in the weight space.
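The reparametrization above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: a "model" is reduced to a flat weight vector, and all names (`basis`, `alphas`) are hypothetical.

```python
import random

random.seed(0)
d, k = 8, 3  # weight dimension; number of frozen random basis models

# Frozen, randomly initialized "models" in weight space. Because they are
# generated from a fixed seed, only the seed needs to be stored.
basis = [[random.gauss(0, 1) for _ in range(d)] for _ in range(k)]

# The mixing coefficients are the only trainable (and stored) parameters.
alphas = [0.5, -1.2, 0.3]

# Reconstruct the model as a linear combination of the frozen basis models.
weights = [sum(a * b[i] for a, b in zip(alphas, basis)) for i in range(d)]
```

The compression argument follows directly: checkpointing the model requires only the `k` coefficients and one RNG seed, rather than all `d` weights.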
1 code implementation • 13 Jan 2022 • K L Navaneet, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash
Feature regression is a simple way to distill large neural network models to smaller ones.
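The basic idea of feature regression can be shown with a toy gradient-descent loop: a student representation is trained to match a frozen teacher's features under an MSE loss. This is a deliberately simplified sketch (flat feature vectors, no networks); the variable names are illustrative.

```python
def mse(a, b):
    # Mean squared error between two equal-length feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

teacher_feat = [0.2, -0.5, 1.0]   # frozen teacher features
student_feat = [0.0, 0.0, 0.0]    # student starts from scratch
lr = 0.5

for _ in range(50):
    # Gradient of MSE w.r.t. each student coordinate: 2 * (s - t) / n.
    grads = [2.0 * (s - t) / len(teacher_feat)
             for s, t in zip(student_feat, teacher_feat)]
    student_feat = [s - lr * g for s, g in zip(student_feat, grads)]

# student_feat now closely approximates teacher_feat
```

Each step shrinks the error by a constant factor (here 2/3), so the student converges to the teacher's features; in practice the "student features" are produced by a smaller network, often through a learned projection head.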
1 code implementation • 8 Dec 2021 • KL Navaneet, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Kossar Pourahmadi, Akshayvarun Subramanya, Hamed Pirsiavash
On the other hand, far-away NNs may not be semantically related to the query.
1 code implementation • 30 Nov 2021 • Mohsen Fayyaz, Soroush Abbasi Koohpayegani, Farnoush Rezaei Jafari, Sunando Sengupta, Hamid Reza Vaezi Joze, Eric Sommerlade, Hamed Pirsiavash, Juergen Gall
Since ATS is a parameter-free module, it can be added to off-the-shelf pre-trained vision transformers as a plug-and-play module, thus reducing their GFLOPs without any additional training.
Ranked #13 on Efficient ViTs on ImageNet-1K (with DeiT-S)
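The parameter-free flavor of such token reduction can be sketched as follows. Note this is a simplification: it scores tokens by the CLS attention row and keeps a hard top-k, whereas the actual ATS module samples tokens adaptively rather than truncating; all names here are illustrative.

```python
def prune_tokens(tokens, cls_attention, k):
    # cls_attention[i] = attention weight of the CLS query on token i.
    # No learned parameters are involved: scores come straight from attention.
    order = sorted(range(len(tokens)), key=lambda i: -cls_attention[i])
    keep = sorted(order[:k])          # keep top-k tokens, preserving order
    return [tokens[i] for i in keep]

tokens = ["t0", "t1", "t2", "t3"]
scores = [0.1, 0.4, 0.2, 0.3]
print(prune_tokens(tokens, scores, 2))  # -> ['t1', 't3']
```

Because the scoring reuses quantities the transformer already computes, the module can be dropped into a pre-trained ViT to cut GFLOPs without retraining, at the cost of discarding low-scoring tokens.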
no code implementations • 19 Oct 2021 • Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Hamed Pirsiavash
Inspired by recent success of self-supervised learning (SSL), we develop a non-contrastive representation learning method that can exploit additional knowledge.
1 code implementation • CVPR 2022 • Vipin Pillai, Soroush Abbasi Koohpayegani, Ashley Ouligian, Dennis Fong, Hamed Pirsiavash
We show that our method, Contrastive Grad-CAM Consistency (CGC), results in Grad-CAM interpretation heatmaps that are more consistent with human annotations while still achieving comparable classification accuracy.
2 code implementations • CVPR 2022 • Aniruddha Saha, Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Hamed Pirsiavash
We show that such methods are vulnerable to backdoor attacks, where an attacker poisons a small part of the unlabeled data by adding a trigger (an image patch chosen by the attacker) to the images.
1 code implementation • ICCV 2021 • Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash
Most recent self-supervised learning (SSL) algorithms learn features by contrasting between instances of images or by clustering the images and then contrasting between the image clusters.
1 code implementation • ICCV 2021 • Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Vipin Pillai, Paolo Favaro, Hamed Pirsiavash
Hence, we introduce a self-supervised learning algorithm where we use a soft similarity for the negative images rather than a binary distinction between positive and negative pairs.
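One simple way to realize a soft similarity over negatives is a temperature-scaled softmax over anchor-negative similarities, as sketched below. This is an assumed illustration of the general idea, not the paper's exact weighting scheme.

```python
import math

def soft_weights(similarities, temperature=0.1):
    # Convert anchor-vs-"negative" similarities into soft weights instead of
    # treating every negative as equally (binarily) dissimilar.
    exps = [math.exp(s / temperature) for s in similarities]
    total = sum(exps)
    return [e / total for e in exps]

sims = [0.9, 0.1, -0.5]      # anchor vs. three candidate negative images
weights = soft_weights(sims)  # most weight lands on the semantically closest one
```

Semantically related "negatives" then receive treatment closer to positives, softening the harmful gradient a strict positive/negative split would apply to them.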
1 code implementation • NeurIPS 2020 • Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash
To the best of our knowledge, this is the first time a self-supervised AlexNet has outperformed a supervised one on ImageNet classification.