Search Results for author: Samyadeep Basu

Found 15 papers, 1 paper with code

Rethinking Artistic Copyright Infringements in the Era of Text-to-Image Generative Models

no code implementations11 Apr 2024 Mazda Moayeri, Samyadeep Basu, Sriram Balasubramanian, Priyatham Kattakinda, Atoosa Chengini, Robert Brauneis, Soheil Feizi

Recent text-to-image generative models such as Stable Diffusion are extremely adept at mimicking and generating copyrighted content, raising concerns amongst artists that their unique styles may be improperly copied.

Artistic style classification

On Surgical Fine-tuning for Language Encoders

1 code implementation25 Oct 2023 Abhilasha Lodha, Gayatri Belapurkar, Saloni Chalkapurkar, Yuanming Tao, Reshmi Ghosh, Samyadeep Basu, Dmitrii Petrov, Soundararajan Srinivasan

We show evidence that for different downstream language tasks, fine-tuning only a subset of layers is sufficient to obtain performance that is close to and often better than fine-tuning all the layers in the language encoder.
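The layer-subset selection behind this "surgical" fine-tuning can be sketched in a few lines; the parameter names and 12-layer encoder below are hypothetical placeholders, not taken from the paper.

```python
# Hypothetical parameter names mimicking a 12-layer transformer encoder
# (names and layer count are illustrative, not from the paper).
param_names = [
    f"encoder.layer.{i}.{part}"
    for i in range(12)
    for part in ("attention.weight", "ffn.weight", "layernorm.weight")
]

def surgical_mask(names, layers_to_tune):
    """Mark only the chosen layers as trainable; freeze everything else."""
    return {
        name: any(f"encoder.layer.{i}." in name for i in layers_to_tune)
        for name in names
    }

# One "surgical" choice: fine-tune only the top two layers.
mask = surgical_mask(param_names, layers_to_tune={10, 11})
trainable = sorted(name for name, keep in mask.items() if keep)
```

In a real framework the mask would set `requires_grad` per parameter tensor; the sketch only isolates the selection logic.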

Localizing and Editing Knowledge in Text-to-Image Generative Models

no code implementations20 Oct 2023 Samyadeep Basu, Nanxuan Zhao, Vlad Morariu, Soheil Feizi, Varun Manjunatha

We adapt Causal Mediation Analysis for text-to-image models and trace knowledge about distinct visual attributes to various (causal) components in the (i) UNet and (ii) text-encoder of the diffusion model.

Attribute Image Generation +1

EditVal: Benchmarking Diffusion Based Text-Guided Image Editing Methods

no code implementations3 Oct 2023 Samyadeep Basu, Mehrdad Saberi, Shweta Bhardwaj, Atoosa Malemir Chegini, Daniela Massiceti, Maziar Sanjabi, Shell Xu Hu, Soheil Feizi

From both the human study and automated evaluation, we find that: (i) Instruct-Pix2Pix, Null-Text and SINE are the top-performing methods averaged across different edit types, however only Instruct-Pix2Pix and Null-Text are able to preserve original image properties; (ii) Most of the editing methods fail at edits involving spatial operations (e.g., changing the position of an object).

Benchmarking text-guided-image-editing

Augmenting CLIP with Improved Visio-Linguistic Reasoning

no code implementations18 Jul 2023 Samyadeep Basu, Maziar Sanjabi, Daniela Massiceti, Shell Xu Hu, Soheil Feizi

On the challenging Winoground compositional reasoning benchmark, our method improves the absolute visio-linguistic performance of different CLIP models by up to 7%, while on the ARO dataset, our method improves the visio-linguistic performance by up to 3%.

Retrieval Text Retrieval +2

Strong Baselines for Parameter Efficient Few-Shot Fine-tuning

no code implementations4 Apr 2023 Samyadeep Basu, Daniela Massiceti, Shell Xu Hu, Soheil Feizi

Through our controlled empirical study, we have two main findings: (i) Fine-tuning just the LayerNorm parameters (which we call LN-Tune) during few-shot adaptation is an extremely strong baseline across ViTs pre-trained with both self-supervised and supervised objectives; (ii) For self-supervised ViTs, we find that simply learning a set of scaling parameters for each attention matrix (which we call AttnScale) along with a domain-residual adapter (DRA) module leads to state-of-the-art performance (while being ~9x more parameter-efficient) on MD.
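The LN-Tune idea — updating only LayerNorm parameters — can be sketched by counting how small that trainable subset is; the ViT parameter names, block count, and hidden size below are hypothetical, chosen only to illustrate the selection.

```python
# Hypothetical ViT parameter names (illustrative only, not from the paper).
vit_params = [
    f"blocks.{i}.{part}"
    for i in range(12)
    for part in ("attn.qkv.weight", "mlp.fc1.weight",
                 "norm1.weight", "norm1.bias", "norm2.weight", "norm2.bias")
]

def ln_tune_selection(names):
    """LN-Tune: keep only LayerNorm parameters trainable."""
    return [n for n in names if ".norm" in n]

d = 768  # hypothetical hidden size

def n_elements(name):
    # Weight matrices are d x d; LayerNorm weights/biases are vectors of size d.
    return d if ".norm" in name else d * d

ln_params = ln_tune_selection(vit_params)
tuned = sum(n_elements(n) for n in ln_params)
total = sum(n_elements(n) for n in vit_params)
```

Even with every LayerNorm tensor trainable, the tuned fraction of parameters stays well under 1% in this toy count, which is the source of the method's parameter efficiency.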

Few-Shot Image Classification

Topic Segmentation in the Wild: Towards Segmentation of Semi-structured & Unstructured Chats

no code implementations27 Nov 2022 Reshmi Ghosh, Harjeet Singh Kajal, Sharanya Kamath, Dhuri Shrivastava, Samyadeep Basu, Soundararajan Srinivasan

Breaking down a document or a conversation into multiple contiguous segments based on its semantic structure is an important and challenging problem in NLP, which can assist many downstream tasks.

Segmentation

On Hard Episodes in Meta-Learning

no code implementations21 Oct 2021 Samyadeep Basu, Amr Sharaf, Nicolo Fusi, Soheil Feizi

To address the issue of sub-par performance on hard episodes, we investigate and benchmark different meta-training strategies based on adversarial training and curriculum learning.

Meta-Learning

Influence Functions in Deep Learning Are Fragile

no code implementations ICLR 2021 Samyadeep Basu, Philip Pope, Soheil Feizi

Influence functions approximate the effect of training samples in test-time predictions and have a wide variety of applications in machine learning interpretability and uncertainty estimation.
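The influence-function quantity being analyzed can be illustrated on a toy ridge-regression problem, where the Hessian is available in closed form; this is a generic sketch of the standard definition I(z_i, z_test) = -grad L(z_test)^T H^{-1} grad L(z_i), not the paper's deep-network setting (where the paper shows the approximation becomes fragile).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ridge-regression problem (illustrative; dimensions are arbitrary).
n, d, lam = 50, 5, 0.1
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = X @ theta_true + 0.1 * rng.normal(size=n)

# Hessian of the regularized empirical risk and its closed-form minimizer.
H = X.T @ X / n + lam * np.eye(d)
theta = np.linalg.solve(H, X.T @ y / n)

def grad_loss(x, y_, th):
    """Gradient of the per-example squared loss 0.5 * (x @ th - y)^2."""
    return (x @ th - y_) * x

# Influence of each training point on the loss at one test point.
x_test, y_test = rng.normal(size=d), 0.0
g_test = grad_loss(x_test, y_test, theta)
influences = np.array([
    -g_test @ np.linalg.solve(H, grad_loss(X[i], y[i], theta))
    for i in range(n)
])
```

In deep networks the Hessian solve must itself be approximated (e.g. with iterative Hessian-vector products), which is one source of the fragility the paper studies.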

Privacy Leakage Avoidance with Switching Ensembles

no code implementations18 Nov 2019 Rauf Izmailov, Peter Lin, Chris Mesterharm, Samyadeep Basu

We consider membership inference attacks, one of the main privacy issues in machine learning.

BIG-bench Machine Learning

On Second-Order Group Influence Functions for Black-Box Predictions

no code implementations ICML 2020 Samyadeep Basu, Xuchen You, Soheil Feizi

Often we want to identify an influential group of training samples for a particular test prediction of a given machine learning model.

BIG-bench Machine Learning

Membership Model Inversion Attacks for Deep Networks

no code implementations9 Oct 2019 Samyadeep Basu, Rauf Izmailov, Chris Mesterharm

With the increasing adoption of AI, inherent security and privacy vulnerabilities for machine learning systems are being discovered.

Optical Character Recognition (OCR)
