Search Results for author: Hamed Pirsiavash

Found 48 papers, 24 papers with code

One Category One Prompt: Dataset Distillation using Diffusion Models

no code implementations 11 Mar 2024 Ali Abbasi, Ashkan Shahbazi, Hamed Pirsiavash, Soheil Kolouri

However, traditional dataset distillation approaches often struggle to scale effectively with high-resolution images and more complex architectures due to the limitations in bi-level optimization.

Knowledge Distillation

GeNIe: Generative Hard Negative Images Through Diffusion

1 code implementation 5 Dec 2023 Soroush Abbasi Koohpayegani, Anuj Singh, KL Navaneet, Hadi Jamali-Rad, Hamed Pirsiavash

To achieve this, inspired by recent diffusion-based image editing techniques, we limit the number of diffusion iterations so that the generated image retains low-level and background features from the source image while representing the target category, yielding a hard negative sample for the source category.

Data Augmentation Image Generation

Compact3D: Compressing Gaussian Splat Radiance Field Models with Vector Quantization

1 code implementation 30 Nov 2023 KL Navaneet, Kossar Pourahmadi Meibodi, Soroush Abbasi Koohpayegani, Hamed Pirsiavash

3D Gaussian Splatting is a new method for modeling and rendering 3D radiance fields that achieves much faster learning and rendering time compared to SOTA NeRF methods.

Quantization
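
The vector-quantization step described above can be illustrated with a small k-means codebook over per-Gaussian parameter vectors: store one small codebook plus one index per Gaussian instead of full per-Gaussian parameters. This is a toy NumPy sketch of the general technique, not the paper's code; the data, codebook size, and feature dimension are made up:

```python
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Plain k-means; returns the codebook and each row's code index."""
    rng = np.random.default_rng(seed)
    codebook = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest code.
        d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)
        # Update each code as its cluster mean (skip empty clusters).
        for j in range(k):
            if (idx == j).any():
                codebook[j] = x[idx == j].mean(0)
    return codebook, idx

# Toy stand-in for per-Gaussian parameters (e.g. color/covariance features).
params = np.random.default_rng(1).standard_normal((1000, 8))
codebook, idx = kmeans(params, k=16)

# Storage drops from params.size floats to codebook.size floats
# plus one small integer index per Gaussian.
print(params.size, codebook.size, idx.size)
```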

BrainWash: A Poisoning Attack to Forget in Continual Learning

no code implementations 20 Nov 2023 Ali Abbasi, Parsa Nooralinejad, Hamed Pirsiavash, Soheil Kolouri

Continual learning has gained substantial attention within the deep learning community, offering promising solutions to the challenging problem of sequential learning.

Continual Learning Data Poisoning

NOLA: Networks as Linear Combination of Low Rank Random Basis

1 code implementation 4 Oct 2023 Soroush Abbasi Koohpayegani, KL Navaneet, Parsa Nooralinejad, Soheil Kolouri, Hamed Pirsiavash

For instance, in larger models, even a rank one decomposition might exceed the number of parameters truly needed for adaptation.
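The title's idea, reparametrizing an adapter as a learned linear combination of frozen random low-rank bases so that only the combination coefficients are trained, can be sketched in a few lines. The dimensions, scales, and names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, k = 64, 4, 8  # layer width, LoRA rank, number of random basis matrices

# Frozen random bases: generated once and never trained.
A_basis = rng.standard_normal((k, d, r))
B_basis = rng.standard_normal((k, r, d))

# The only trainable parameters are 2*k scalar coefficients.
alpha = rng.standard_normal(k) * 0.01
beta = rng.standard_normal(k) * 0.01

# Reconstruct the low-rank adapter from the coefficients.
A = np.tensordot(alpha, A_basis, axes=1)  # (d, r)
B = np.tensordot(beta, B_basis, axes=1)   # (r, d)
delta_W = A @ B                           # low-rank weight update

# 2*k scalars are stored instead of d*r + r*d adapter parameters.
print(2 * k, d * r + r * d)
```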

SlowFormer: Universal Adversarial Patch for Attack on Compute and Energy Efficiency of Inference Efficient Vision Transformers

1 code implementation 4 Oct 2023 KL Navaneet, Soroush Abbasi Koohpayegani, Essam Sleiman, Hamed Pirsiavash

We show that such models can be vulnerable to a universal adversarial patch attack, where the attacker optimizes for a patch that when pasted on any image, can increase the compute and power consumption of the model.

Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning

1 code implementation CVPR 2023 Ajinkya Tejankar, Maziar Sanjabi, Qifan Wang, Sinong Wang, Hamed Firooz, Hamed Pirsiavash, Liang Tan

It was shown that an adversary can poison a small part of the unlabeled data so that when a victim trains an SSL model on it, the final model will have a backdoor that the adversary can exploit.

Data Poisoning Self-Supervised Learning

Is Multi-Task Learning an Upper Bound for Continual Learning?

no code implementations 26 Oct 2022 Zihao Wu, Huy Tran, Hamed Pirsiavash, Soheil Kolouri

Moreover, it is imaginable that when learning from multiple tasks, a small subset of these tasks could behave as adversarial tasks reducing the overall learning performance in a multi-task setting.

Continual Learning Multi-Task Learning +1

Backdoor Attacks on Vision Transformers

1 code implementation 16 Jun 2022 Akshayvarun Subramanya, Aniruddha Saha, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash

Vision Transformers (ViT) have recently demonstrated exemplary performance on a variety of vision tasks and are being used as an alternative to CNNs.

Blocking

PRANC: Pseudo RAndom Networks for Compacting deep models

2 code implementations ICCV 2023 Parsa Nooralinejad, Ali Abbasi, Soroush Abbasi Koohpayegani, Kossar Pourahmadi Meibodi, Rana Muhammad Shahroz Khan, Soheil Kolouri, Hamed Pirsiavash

We demonstrate that a deep model can be reparametrized as a linear combination of several randomly initialized and frozen deep models in the weight space.

Image Classification
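
The reparametrization in the excerpt above (a deep model as a linear combination of frozen, randomly initialized models) implies that only the random seeds and the mixing coefficients need to be stored. A minimal NumPy sketch with made-up sizes and coefficients:

```python
import numpy as np

n_params, k = 10_000, 5  # flattened model size, number of frozen basis models

def basis_model(seed, n):
    # Each frozen basis model is fully determined by its seed, so storing
    # the seed is enough to regenerate it exactly.
    return np.random.default_rng(seed).standard_normal(n)

seeds = [10, 11, 12, 13, 14]
alpha = np.array([0.5, -0.2, 0.1, 0.7, -0.4])  # the only trained parameters

# Reconstruct the full weight vector from seeds + coefficients.
weights = sum(a * basis_model(s, n_params) for a, s in zip(alpha, seeds))

print(weights.shape)  # 10,000 weights recovered from 5 seeds + 5 floats
```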

A Simple Approach to Adversarial Robustness in Few-shot Image Classification

no code implementations 11 Apr 2022 Akshayvarun Subramanya, Hamed Pirsiavash

Few-shot image classification, where the goal is to generalize to tasks with limited labeled data, has seen great progress over the years.

Adversarial Robustness Few-Shot Image Classification +2

Amenable Sparse Network Investigator

no code implementations 18 Feb 2022 Saeed Damadi, Erfan Nouri, Hamed Pirsiavash

ASNI-II learns a sparse network together with an initialization that is quantized and compressed, and from which the sparse network can be trained.

Quantization

A Fistful of Words: Learning Transferable Visual Models from Bag-of-Words Supervision

no code implementations 27 Dec 2021 Ajinkya Tejankar, Maziar Sanjabi, Bichen Wu, Saining Xie, Madian Khabsa, Hamed Pirsiavash, Hamed Firooz

In this paper, we focus on teasing out what parts of the language supervision are essential for training zero-shot image classification models.

Classification Image Captioning +3

Adaptive Token Sampling For Efficient Vision Transformers

1 code implementation 30 Nov 2021 Mohsen Fayyaz, Soroush Abbasi Koohpayegani, Farnoush Rezaei Jafari, Sunando Sengupta, Hamid Reza Vaezi Joze, Eric Sommerlade, Hamed Pirsiavash, Juergen Gall

Since ATS is a parameter-free module, it can be added to off-the-shelf pre-trained vision transformers as a plug-and-play module, thus reducing their GFLOPs without any additional training.

Efficient ViTs Video Classification
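
A simplified sketch of attention-based token sampling: score each patch token by the attention the CLS token pays it, then keep the highest-scoring tokens. This deterministic top-k stand-in only illustrates the idea (ATS itself uses inverse-transform sampling), and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, keep = 197, 64  # ViT tokens (CLS + 196 patches), tokens to keep

# Toy attention row of the CLS token over all tokens (softmax-normalized).
logits = rng.standard_normal(n_tokens)
attn = np.exp(logits) / np.exp(logits).sum()

# Score each patch token by the CLS attention it receives and keep the
# top-scoring ones; the CLS token is always retained.
scores = attn[1:]
kept = 1 + np.argsort(scores)[::-1][:keep]   # patch indices, shifted past CLS
kept = np.concatenate(([0], np.sort(kept)))

print(kept.shape)  # CLS + 64 sampled patch tokens
```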

Constrained Mean Shift for Representation Learning

no code implementations 19 Oct 2021 Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Hamed Pirsiavash

Inspired by recent success of self-supervised learning (SSL), we develop a non-contrastive representation learning method that can exploit additional knowledge.

Representation Learning Self-Supervised Learning

Consistent Explanations by Contrastive Learning

1 code implementation CVPR 2022 Vipin Pillai, Soroush Abbasi Koohpayegani, Ashley Ouligian, Dennis Fong, Hamed Pirsiavash

We show that our method, Contrastive Grad-CAM Consistency (CGC), results in Grad-CAM interpretation heatmaps that are more consistent with human annotations while still achieving comparable classification accuracy.

Contrastive Learning Explainable Models +1

Backdoor Attacks on Self-Supervised Learning

1 code implementation CVPR 2022 Aniruddha Saha, Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Hamed Pirsiavash

We show that such methods are vulnerable to backdoor attacks - where an attacker poisons a small part of the unlabeled data by adding a trigger (image patch chosen by the attacker) to the images.

Inductive Bias Knowledge Distillation +1

Mean Shift for Self-Supervised Learning

1 code implementation ICCV 2021 Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash

Most recent self-supervised learning (SSL) algorithms learn features by contrasting between instances of images or by clustering the images and then contrasting between the image clusters.

Clustering Self-Supervised Learning
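
Mean Shift's grouping step can be sketched as pulling one view's embedding toward the nearest neighbors of the other view's embedding in a memory bank. A toy NumPy sketch in which random vectors stand in for a trained encoder's outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, bank_size, k = 32, 500, 5

def l2n(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

memory_bank = l2n(rng.standard_normal((bank_size, dim)))  # past target embeddings
target = l2n(rng.standard_normal(dim))  # embedding of one augmented view
query = l2n(rng.standard_normal(dim))   # embedding of the other view

# Find the k nearest neighbors of the target in the memory bank and pull
# the query toward each of them (the "mean shift" grouping step).
sims = memory_bank @ target
nn = memory_bank[np.argsort(sims)[::-1][:k]]

# Loss: negative mean cosine similarity between query and the k neighbors.
loss = -(nn @ query).mean()
print(loss)
```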

ISD: Self-Supervised Learning by Iterative Similarity Distillation

1 code implementation ICCV 2021 Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Vipin Pillai, Paolo Favaro, Hamed Pirsiavash

Hence, we introduce a self-supervised learning algorithm where we use a soft similarity for the negative images rather than a binary distinction between positive and negative pairs.

Contrastive Learning Self-Supervised Learning +1
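
The soft-similarity idea above can be sketched as matching the student's similarity distribution over a set of anchor embeddings to the teacher's, instead of pushing all negatives away uniformly. A toy NumPy sketch; the random vectors stand in for real embeddings and the temperature is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_anchors, tau = 32, 100, 0.1

def l2n(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

anchors = l2n(rng.standard_normal((n_anchors, dim)))  # "negative" embeddings
teacher = l2n(rng.standard_normal(dim))  # teacher embedding of one view
student = l2n(rng.standard_normal(dim))  # student embedding of another view

# Soft targets: the teacher's similarity distribution over the anchors.
p = softmax(anchors @ teacher / tau)  # target distribution
q = softmax(anchors @ student / tau)  # student distribution

kl = np.sum(p * np.log(p / q))  # distillation loss (KL divergence)
print(kl)
```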

COOT: Cooperative Hierarchical Transformer for Video-Text Representation Learning

1 code implementation NeurIPS 2020 Simon Ging, Mohammadreza Zolfaghari, Hamed Pirsiavash, Thomas Brox

Many real-world video-text tasks involve different levels of granularity, such as frames and words, clips and sentences, or videos and paragraphs, each with distinct semantics.

Cross-Modal Retrieval Representation Learning +2

A simple baseline for domain adaptation using rotation prediction

no code implementations 26 Dec 2019 Ajinkya Tejankar, Hamed Pirsiavash

We show that removing this bias from the unlabeled data results in a large drop in performance of state-of-the-art methods, while our simple method is relatively robust.

Domain Adaptation Self-Supervised Learning +1

Hidden Trigger Backdoor Attacks

3 code implementations 30 Sep 2019 Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash

Backdoor attacks are a form of adversarial attacks on deep networks where the attacker provides poisoned data to the victim to train the model with, and then activates the attack by showing a specific small trigger pattern at test time.

Backdoor Attack Image Classification
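
The poison-crafting step of such a hidden-trigger attack can be sketched as projected gradient descent: make the poison match a triggered source image in feature space while staying within a small pixel-space ball around a target-class image, so the trigger itself never appears in the training data. A toy NumPy sketch, with a frozen random linear map standing in for a network's feature extractor:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64  # flattened toy "image"

W = rng.standard_normal((16, dim)) / np.sqrt(dim)  # frozen toy feature extractor

def feat(x):
    return W @ x

source = rng.standard_normal(dim)  # source-class image
target = rng.standard_normal(dim)  # target-class image
trigger = np.zeros(dim)
trigger[:8] = 2.0                  # small patch, only pasted at test time
patched = source + trigger         # triggered source image

# Optimize the poison: match the triggered source in FEATURE space while
# staying within an eps-ball of the target image in PIXEL space.
eps, lr = 0.3, 0.05
poison = target.copy()
for _ in range(500):
    grad = 2 * W.T @ (feat(poison) - feat(patched))  # grad of ||f(x)-f(patched)||^2
    poison -= lr * grad
    poison = np.clip(poison, target - eps, target + eps)  # pixel constraint

print(np.abs(poison - target).max())  # stays within eps of the target image
```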

Role of Spatial Context in Adversarial Robustness for Object Detection

1 code implementation 30 Sep 2019 Aniruddha Saha, Akshayvarun Subramanya, Koninika Patil, Hamed Pirsiavash

However, one can show that an adversary can design adversarial patches which do not overlap with any objects of interest in the scene and exploit contextual reasoning to fool standard detectors.

Adversarial Attack Adversarial Robustness +3

Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs

1 code implementation CVPR 2020 Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann

In this paper, we introduce a benchmark technique for detecting backdoor attacks (aka Trojan attacks) on deep convolutional neural networks (CNNs).

Traffic Sign Recognition

Boosting Self-Supervised Learning via Knowledge Transfer

no code implementations CVPR 2018 Mehdi Noroozi, Ananth Vinjimoor, Paolo Favaro, Hamed Pirsiavash

We use this framework to design a novel self-supervised task, which achieves state-of-the-art performance on the common benchmarks in PASCAL VOC 2007, ILSVRC12 and Places by a significant margin.

object-detection Object Detection +2

Weakly Supervised Cascaded Convolutional Networks

no code implementations CVPR 2017 Ali Diba, Vivek Sharma, Ali Pazandeh, Hamed Pirsiavash, Luc van Gool

The final stage of both architectures is a part of a convolutional neural network that performs multiple instance learning on proposals extracted in the previous stage(s).

Multiple Instance Learning Object +3

Cross-Modal Scene Networks

no code implementations 27 Oct 2016 Yusuf Aytar, Lluis Castrejon, Carl Vondrick, Hamed Pirsiavash, Antonio Torralba

Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval.

Retrieval

Generating Videos with Scene Dynamics

no code implementations NeurIPS 2016 Carl Vondrick, Hamed Pirsiavash, Antonio Torralba

We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g., action classification) and video generation tasks (e.g., future prediction).

Action Classification Future prediction +7

DeepCAMP: Deep Convolutional Action & Attribute Mid-Level Patterns

no code implementations CVPR 2016 Ali Diba, Ali Mohammad Pazandeh, Hamed Pirsiavash, Luc van Gool

On the other hand, we let iterations of feature learning and patch clustering purify the set of dedicated patches that we use.

Attribute Clustering

Learning Aligned Cross-Modal Representations from Weakly Aligned Data

no code implementations CVPR 2016 Lluis Castrejon, Yusuf Aytar, Carl Vondrick, Hamed Pirsiavash, Antonio Torralba

Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval.

Retrieval

Joint Semantic Segmentation and Depth Estimation with Deep Convolutional Networks

no code implementations 25 Apr 2016 Arsalan Mousavian, Hamed Pirsiavash, Jana Kosecka

The proposed model is trained and evaluated on the NYUDepth V2 dataset, outperforming state-of-the-art methods on semantic segmentation and achieving comparable results on the task of depth estimation.

Depth Estimation Segmentation +1

Anticipating Visual Representations from Unlabeled Video

no code implementations CVPR 2016 Carl Vondrick, Hamed Pirsiavash, Antonio Torralba

The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future.

Learning visual biases from human imagination

no code implementations NeurIPS 2015 Carl Vondrick, Hamed Pirsiavash, Aude Oliva, Antonio Torralba

Although the human visual system can recognize many concepts under challenging conditions, it still has some biases.

Object Recognition

Predicting Motivations of Actions by Leveraging Text

no code implementations CVPR 2016 Carl Vondrick, Deniz Oktay, Hamed Pirsiavash, Antonio Torralba

In this paper, we introduce the problem of predicting why a person has performed an action in images.

Parsing Videos of Actions with Segmental Grammars

no code implementations CVPR 2014 Hamed Pirsiavash, Deva Ramanan

Real-world videos of human activities exhibit temporal structure at various scales; long videos are typically composed out of multiple action instances, where each instance is itself composed of sub-actions with variable durations and orderings.

Are all training examples equally valuable?

no code implementations 25 Nov 2013 Agata Lapedriza, Hamed Pirsiavash, Zoya Bylinskii, Antonio Torralba

When learning a new concept, not all training examples may prove equally useful for training: some may have higher or lower training value than others.

Bilinear classifiers for visual recognition

no code implementations NeurIPS 2009 Hamed Pirsiavash, Deva Ramanan, Charless C. Fowlkes

Bilinear classifiers are a discriminative variant of bilinear models, which capture the dependence of data on multiple factors.

Action Classification General Classification +1
