Search Results for author: Mazda Moayeri

Found 9 papers, 1 paper with code

Embracing Diversity: Interpretable Zero-shot classification beyond one vector per class

no code implementations • 25 Apr 2024 • Mazda Moayeri, Michael Rabbat, Mark Ibrahim, Diane Bouchacourt

We propose a method to encode and account for diversity within a class using inferred attributes, still in the zero-shot setting without retraining.
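
A minimal sketch of the idea, assuming a CLIP-style encoder pair: instead of one text vector per class, score an image against several attribute-specific prompts per class and take the best match. The stub encoders, prompt lists, and max-aggregation below are illustrative assumptions, not the paper's exact method.

```python
import hashlib
import numpy as np

def embed_text(prompt: str) -> np.ndarray:
    # Stub encoder: hash the prompt into a deterministic unit vector.
    seed = int(hashlib.md5(prompt.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).standard_normal(512)
    return v / np.linalg.norm(v)

def embed_image(image_seed: int) -> np.ndarray:
    # Stub encoder: a random unit vector standing in for an image embedding.
    v = np.random.default_rng(image_seed).standard_normal(512)
    return v / np.linalg.norm(v)

# Several inferred-attribute prompts per class instead of one class vector.
class_prompts = {
    "dog": ["a photo of a puppy", "a photo of a large dog", "a sketch of a dog"],
    "cat": ["a photo of a kitten", "a photo of a black cat", "a sketch of a cat"],
}

def classify(image_seed: int) -> str:
    z = embed_image(image_seed)
    # Score each class by its best-matching attribute prompt.
    scores = {
        cls: max(float(z @ embed_text(p)) for p in prompts)
        for cls, prompts in class_prompts.items()
    }
    return max(scores, key=scores.get)

print(classify(42))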

Rethinking Artistic Copyright Infringements in the Era of Text-to-Image Generative Models

no code implementations • 11 Apr 2024 • Mazda Moayeri, Samyadeep Basu, Sriram Balasubramanian, Priyatham Kattakinda, Atoosa Chengini, Robert Brauneis, Soheil Feizi

Recent text-to-image generative models such as Stable Diffusion are extremely adept at mimicking and generating copyrighted content, raising concerns amongst artists that their unique styles may be improperly copied.

Artistic style classification

PRIME: Prioritizing Interpretability in Failure Mode Extraction

no code implementations • 29 Sep 2023 • Keivan Rezaei, Mehrdad Saberi, Mazda Moayeri, Soheil Feizi

To improve on these shortcomings, we propose a novel approach that prioritizes interpretability in this problem: we start by obtaining human-understandable concepts (tags) of images in the dataset and then analyze the model's behavior based on the presence or absence of combinations of these tags.

Image Classification
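
A hedged sketch of the tag-combination analysis: given per-image concept tags and the model's correctness on each image, surface tag combinations whose error rate exceeds the base rate. The tags, support threshold, and scoring below are illustrative, not the paper's exact procedure.

```python
from itertools import combinations

records = [
    # (tags present in the image, whether the model classified it correctly)
    ({"snow", "dog"}, False),
    ({"snow", "dog"}, False),
    ({"grass", "dog"}, True),
    ({"snow", "wolf"}, True),
    ({"grass", "dog"}, True),
]

base_error = sum(not ok for _, ok in records) / len(records)
all_tags = set().union(*(tags for tags, _ in records))

for r in (1, 2):
    for combo in combinations(sorted(all_tags), r):
        hits = [ok for tags, ok in records if set(combo) <= tags]
        if len(hits) >= 2:  # require minimal support before reporting
            err = sum(not ok for ok in hits) / len(hits)
            if err > base_error:
                print(combo, f"error={err:.2f} (base {base_error:.2f})")
```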

Text-To-Concept (and Back) via Cross-Model Alignment

1 code implementation • 10 May 2023 • Mazda Moayeri, Keivan Rezaei, Maziar Sanjabi, Soheil Feizi

We observe that the mapping between an image's representation in one model to its representation in another can be learned surprisingly well with just a linear layer, even across diverse models.
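
A minimal sketch of the claim, using synthetic features: fit the linear map between two models' embedding spaces with ordinary least squares. The random paired features stand in for real representations of the same images from two image encoders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_a, d_b = 1000, 384, 512

feats_a = rng.standard_normal((n, d_a))                  # representations from model A
true_map = rng.standard_normal((d_a, d_b)) / d_a ** 0.5
feats_b = feats_a @ true_map + 0.01 * rng.standard_normal((n, d_b))  # model B

# Closed-form linear alignment: W = argmin ||feats_a @ W - feats_b||^2
W, *_ = np.linalg.lstsq(feats_a, feats_b, rcond=None)

pred_b = feats_a @ W
print("alignment residual:", np.linalg.norm(pred_b - feats_b) / np.linalg.norm(feats_b))
```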

Data-Centric Debugging: mitigating model failures via targeted data collection

no code implementations • 17 Nov 2022 • Sahil Singla, Atoosa Malemir Chegini, Mazda Moayeri, Soheil Feizi

Our Data-Centric Debugging (DCD) framework carefully creates a debug-train set by selecting images from $\mathcal{F}$ that are perceptually similar to the images in $\mathcal{E}_{sample}$.

Image Classification
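
A hedged sketch of the selection step: pick images from a candidate pool $\mathcal{F}$ whose embeddings are nearest to the failure cases in $\mathcal{E}_{sample}$. Cosine similarity in a feature space is one plausible proxy for perceptual similarity; the paper's actual metric and encoder may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
pool = rng.standard_normal((10000, 256))      # embeddings of candidate images in F
failures = rng.standard_normal((50, 256))     # embeddings of E_sample

pool_n = pool / np.linalg.norm(pool, axis=1, keepdims=True)
fail_n = failures / np.linalg.norm(failures, axis=1, keepdims=True)

k = 20
sims = fail_n @ pool_n.T                      # cosine similarity, shape (50, 10000)
nearest = np.argsort(-sims, axis=1)[:, :k]    # top-k pool indices per failure case
debug_train = np.unique(nearest)              # union of neighbors -> debug-train set
print(f"selected {debug_train.size} images for the debug-train set")
```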

Explicit Tradeoffs between Adversarial and Natural Distributional Robustness

no code implementations • 15 Sep 2022 • Mazda Moayeri, Kiarash Banihashem, Soheil Feizi

In this setting, through theoretical and empirical analysis, we show that (i) adversarial training with $\ell_1$ and $\ell_2$ norms increases the model's reliance on spurious features; (ii) for $\ell_\infty$ adversarial training, spurious reliance only occurs when the scale of the spurious features is larger than that of the core features; (iii) adversarial training can have an unintended consequence in reducing distributional robustness, specifically when spurious correlations are changed in the new test domain.

Adversarial Robustness

Core Risk Minimization using Salient ImageNet

no code implementations • 28 Mar 2022 • Sahil Singla, Mazda Moayeri, Soheil Feizi

Deep neural networks can be unreliable in the real world especially when they heavily use spurious features for their predictions.

A Comprehensive Study of Image Classification Model Sensitivity to Foregrounds, Backgrounds, and Visual Attributes

no code implementations • CVPR 2022 • Mazda Moayeri, Phillip Pope, Yogesh Balaji, Soheil Feizi

While datasets with single-label supervision have propelled rapid advances in image classification, additional annotations are necessary in order to quantitatively assess how models make predictions.

Image Classification

Sample Efficient Detection and Classification of Adversarial Attacks via Self-Supervised Embeddings

no code implementations • ICCV 2021 • Mazda Moayeri, Soheil Feizi

In this paper, we propose a self-supervised method to detect adversarial attacks and classify them to their respective threat models, based on a linear model operating on the embeddings from a pre-trained self-supervised encoder.

Adversarial Robustness
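
A minimal sketch, assuming frozen self-supervised embeddings are already computed: train a linear probe that separates clean inputs from two attack families. The synthetic Gaussian features and the three-way label set are stand-ins for real embeddings of clean and attacked images.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n_per = 512, 200
centers = {"clean": 0.0, "linf_attack": 0.5, "l2_attack": -0.5}

X, y = [], []
for label, shift in centers.items():
    X.append(rng.standard_normal((n_per, d)) + shift)  # embeddings for this class
    y += [label] * n_per
X = np.vstack(X)

probe = LogisticRegression(max_iter=1000).fit(X, y)    # the linear model
print("train accuracy:", probe.score(X, y))
```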
