Search Results for author: Ajinkya Tejankar

Found 11 papers, 8 papers with code

Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning

1 code implementation CVPR 2023 Ajinkya Tejankar, Maziar Sanjabi, Qifan Wang, Sinong Wang, Hamed Firooz, Hamed Pirsiavash, Liang Tan

It was shown that an adversary can poison a small part of the unlabeled data so that when a victim trains an SSL model on it, the final model will have a backdoor that the adversary can exploit.

Data Poisoning Self-Supervised Learning

Backdoor Attacks on Vision Transformers

1 code implementation 16 Jun 2022 Akshayvarun Subramanya, Aniruddha Saha, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash

Vision Transformers (ViT) have recently demonstrated exemplary performance on a variety of vision tasks and are being used as an alternative to CNNs.

Blocking

A Fistful of Words: Learning Transferable Visual Models from Bag-of-Words Supervision

no code implementations 27 Dec 2021 Ajinkya Tejankar, Maziar Sanjabi, Bichen Wu, Saining Xie, Madian Khabsa, Hamed Pirsiavash, Hamed Firooz

In this paper, we focus on teasing out what parts of the language supervision are essential for training zero-shot image classification models.

Classification Image Captioning +3
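A hedged sketch of bag-of-words language supervision in a CLIP-style setup, as the title suggests: captions are reduced to multi-hot word vectors (word order discarded) and aligned with image features through a symmetric contrastive loss. The toy vocabulary, encoders, and temperature below are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: contrastive image-text training where the text side is a bag of words.
# All components here are placeholders for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

vocab = {"dog": 0, "cat": 1, "grass": 2, "ball": 3}   # toy vocabulary (assumption)

def bag_of_words(caption: str) -> torch.Tensor:
    """Multi-hot encoding of a caption; word order is thrown away."""
    vec = torch.zeros(len(vocab))
    for word in caption.lower().split():
        if word in vocab:
            vec[vocab[word]] = 1.0
    return vec

image_encoder = models.resnet18(num_classes=128)       # projects images to 128-d
text_encoder = nn.Linear(len(vocab), 128)              # projects BoW vectors to 128-d

images = torch.randn(4, 3, 224, 224)                   # stand-in image batch
captions = ["a dog on grass", "a cat with a ball", "dog and ball", "cat on grass"]
texts = torch.stack([bag_of_words(c) for c in captions])

img_feat = F.normalize(image_encoder(images), dim=1)
txt_feat = F.normalize(text_encoder(texts), dim=1)

# Symmetric InfoNCE over matched image-caption pairs; 0.07 temperature is an assumption.
logits = img_feat @ txt_feat.t() / 0.07
labels = torch.arange(len(images))
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
```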

Constrained Mean Shift for Representation Learning

no code implementations 19 Oct 2021 Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Hamed Pirsiavash

Inspired by recent success of self-supervised learning (SSL), we develop a non-contrastive representation learning method that can exploit additional knowledge.

Representation Learning Self-Supervised Learning
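A hedged sketch of one way "additional knowledge" could constrain a mean-shift style objective: nearest neighbors are searched only among memory-bank entries that share a constraint id (e.g., a noisy label or cluster assignment) with the query. The names, shapes, and the constraint itself are illustrative assumptions, not the paper's exact method.

```python
# Sketch: mean-shift style loss with the neighbor pool restricted by a constraint id.
# Assumes at least k bank entries share each id; everything here is illustrative.
import torch
import torch.nn.functional as F

def constrained_mean_shift_loss(query, target, bank, bank_ids, query_ids, k=5):
    query = F.normalize(query, dim=1)      # (B, D) online-encoder embeddings
    target = F.normalize(target, dim=1)    # (B, D) momentum-encoder embeddings
    bank = F.normalize(bank, dim=1)        # (M, D) memory bank of past targets

    sims = target @ bank.t()               # (B, M) cosine similarities
    # Mask out bank entries whose constraint id differs from the query's.
    mask = query_ids.unsqueeze(1) != bank_ids.unsqueeze(0)   # (B, M)
    sims = sims.masked_fill(mask, float("-inf"))

    _, nn_idx = sims.topk(k, dim=1)        # (B, k) neighbors within the constraint set
    neighbors = bank[nn_idx]               # (B, k, D)
    # Pull the query toward those constrained neighbors.
    return ((query.unsqueeze(1) - neighbors) ** 2).sum(dim=2).mean()
```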

Backdoor Attacks on Self-Supervised Learning

1 code implementation CVPR 2022 Aniruddha Saha, Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Hamed Pirsiavash

We show that such methods are vulnerable to backdoor attacks, where an attacker poisons a small part of the unlabeled data by adding a trigger (image patch chosen by the attacker) to the images.

Inductive Bias Knowledge Distillation +1
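Below is a minimal sketch of the trigger-patch poisoning step described in this abstract, assuming PIL images stored on disk; the poison rate, patch size, and random placement are illustrative choices, not the paper's exact recipe.

```python
# Sketch: paste a small trigger patch onto a small fraction of unlabeled images.
# Poison rate, patch size, and placement are assumptions for illustration.
import random
from PIL import Image

def poison_image(img: Image.Image, trigger: Image.Image, patch_size: int = 50) -> Image.Image:
    """Paste a resized trigger patch at a random location in the image."""
    img = img.copy()
    patch = trigger.resize((patch_size, patch_size))
    x = random.randint(0, max(0, img.width - patch_size))
    y = random.randint(0, max(0, img.height - patch_size))
    img.paste(patch, (x, y))
    return img

def poison_dataset(image_paths, trigger_path, poison_rate=0.005):
    """Overwrite a small random subset of an unlabeled image collection with poisoned copies."""
    trigger = Image.open(trigger_path).convert("RGB")
    chosen = random.sample(image_paths, int(len(image_paths) * poison_rate))
    for path in chosen:
        img = Image.open(path).convert("RGB")
        poison_image(img, trigger).save(path)
    return chosen
```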

Mean Shift for Self-Supervised Learning

1 code implementation ICCV 2021 Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash

Most recent self-supervised learning (SSL) algorithms learn features by contrasting between instances of images or by clustering the images and then contrasting between the image clusters.

Clustering Self-Supervised Learning
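A hedged sketch of a mean-shift style objective in the spirit of this paper: the query embedding is pulled toward the nearest neighbors of its momentum-encoder target in a memory bank, i.e., shifted toward their mean. The encoders, the bank, and the value of k are placeholders rather than the paper's exact configuration.

```python
# Sketch of a mean-shift style SSL loss over a memory bank of past target embeddings.
# Shapes, k, and the bank itself are illustrative assumptions.
import torch
import torch.nn.functional as F

def mean_shift_loss(query: torch.Tensor,        # (B, D) online-encoder embeddings
                    target: torch.Tensor,       # (B, D) momentum-encoder embeddings
                    memory_bank: torch.Tensor,  # (M, D) past target embeddings
                    k: int = 5) -> torch.Tensor:
    query = F.normalize(query, dim=1)
    target = F.normalize(target, dim=1)
    bank = F.normalize(memory_bank, dim=1)

    # Find the k nearest neighbors of each target embedding in the bank.
    sims = target @ bank.t()                  # (B, M) cosine similarities
    _, nn_idx = sims.topk(k, dim=1)           # (B, k)
    neighbors = bank[nn_idx]                  # (B, k, D)

    # Minimize the average squared distance between the query and those neighbors,
    # which (for unit vectors) shifts the query toward their mean.
    dist = ((query.unsqueeze(1) - neighbors) ** 2).sum(dim=2)  # (B, k)
    return dist.mean()
```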

ISD: Self-Supervised Learning by Iterative Similarity Distillation

1 code implementation ICCV 2021 Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Vipin Pillai, Paolo Favaro, Hamed Pirsiavash

Hence, we introduce a self-supervised learning algorithm where we use a soft similarity for the negative images rather than a binary distinction between positive and negative pairs.

Contrastive Learning Self-Supervised Learning +1
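A hedged sketch of the soft-similarity idea: the student matches the teacher's softened similarity distribution over a set of anchor embeddings instead of treating all negatives as equally dissimilar. The temperatures, shapes, and anchor set below are assumptions rather than the paper's exact values.

```python
# Sketch of similarity distillation: KL divergence between student and teacher
# similarity distributions over anchor embeddings (e.g., a memory bank).
import torch
import torch.nn.functional as F

def isd_loss(student_emb, teacher_emb, anchors, t_student=0.1, t_teacher=0.02):
    student_emb = F.normalize(student_emb, dim=1)   # (B, D)
    teacher_emb = F.normalize(teacher_emb, dim=1)   # (B, D)
    anchors = F.normalize(anchors, dim=1)           # (N, D) anchor/negative embeddings

    # Soft similarity distributions over the anchors.
    p_teacher = F.softmax(teacher_emb @ anchors.t() / t_teacher, dim=1)
    log_p_student = F.log_softmax(student_emb @ anchors.t() / t_student, dim=1)

    # KL divergence pushes the student toward the teacher's soft similarities.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")
```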

A simple baseline for domain adaptation using rotation prediction

no code implementations 26 Dec 2019 Ajinkya Tejankar, Hamed Pirsiavash

We show that removing this bias from the unlabeled data results in a large drop in performance of state-of-the-art methods, while our simple method is relatively robust.

Domain Adaptation Self-Supervised Learning +1
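A minimal sketch of the rotation-prediction pretext task this baseline builds on: each unlabeled image is rotated by 0, 90, 180, or 270 degrees and a classifier predicts which rotation was applied. The backbone and batch below are illustrative stand-ins, not the paper's exact training setup.

```python
# Sketch: rotation prediction as a self-supervised signal on unlabeled images.
# Backbone, batch, and image size are placeholders for illustration.
import torch
import torch.nn as nn
import torchvision.models as models

def rotate_batch(images: torch.Tensor):
    """Return all four 90-degree rotations of a batch and their rotation labels."""
    rotated, labels = [], []
    for k in range(4):                          # k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

backbone = models.resnet18(num_classes=4)       # 4-way rotation classifier head
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)            # stand-in for unlabeled target-domain images
inputs, labels = rotate_batch(images)
loss = criterion(backbone(inputs), labels)      # self-supervised rotation loss
loss.backward()
```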
