Search Results for author: Piotr Teterwak

Found 14 papers, 9 papers with code

Vision-LLMs Can Fool Themselves with Self-Generated Typographic Attacks

1 code implementation • 1 Feb 2024 • Maan Qraitem, Nazia Tasnim, Piotr Teterwak, Kate Saenko, Bryan A. Plummer

Furthermore, prior work's typographic attacks against CLIP randomly sample a misleading class from a predefined set of categories.


CLAMP: Contrastive LAnguage Model Prompt-tuning

no code implementations • 4 Dec 2023 • Piotr Teterwak, Ximeng Sun, Bryan A. Plummer, Kate Saenko, Ser-Nam Lim

Our results show that LLMs can, indeed, achieve good image classification performance when adapted this way.

Contrastive Learning • Image Captioning • +5

Learning to Compose SuperWeights for Neural Parameter Allocation Search

1 code implementation • 3 Dec 2023 • Piotr Teterwak, Soren Nelson, Nikoli Dryden, Dina Bashkirova, Kate Saenko, Bryan A. Plummer

To address this, we generate layer weights by learning to compose sets of SuperWeights, which represent a group of trainable parameters.
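The template-combination idea above can be sketched as follows. This is a toy illustration, not the authors' implementation; the bank size, SuperWeight shape, and tiling scheme are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared bank of SuperWeights: groups of trainable parameters
# reused across layers (shapes chosen for illustration only).
num_superweights, sw_size = 4, 64
superweight_bank = rng.standard_normal((num_superweights, sw_size))

def compose_layer_weights(mix_coeffs, out_shape):
    """Build one layer's weights as a learned mixture of shared SuperWeights."""
    flat = mix_coeffs @ superweight_bank  # (sw_size,)
    n = int(np.prod(out_shape))
    reps = -(-n // sw_size)               # ceiling division
    return np.tile(flat, reps)[:n].reshape(out_shape)

# Two layers of different shapes draw on the same bank, each with its own
# learned mixing coefficients, so parameters are shared across layers.
w1 = compose_layer_weights(rng.standard_normal(num_superweights), (8, 8))
w2 = compose_layer_weights(rng.standard_normal(num_superweights), (16, 4))
```

Because both layers reuse one bank, adding a layer costs only a handful of mixing coefficients rather than a full weight tensor.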

MixtureGrowth: Growing Neural Networks by Recombining Learned Parameters

1 code implementation • 7 Nov 2023 • Chau Pham, Piotr Teterwak, Soren Nelson, Bryan A. Plummer

Newly grown layer weights are generated by using a new linear combination of existing templates for a layer.

ERM++: An Improved Baseline for Domain Generalization

1 code implementation • 4 Apr 2023 • Piotr Teterwak, Kuniaki Saito, Theodoros Tsiligkaridis, Kate Saenko, Bryan A. Plummer

We call the resulting method ERM++, and show that it improves domain generalization (DG) performance on five multi-source datasets by over 5% compared to standard ERM, beating the state of the art while being less computationally expensive.

Domain Generalization

Mind the Backbone: Minimizing Backbone Distortion for Robust Object Detection

1 code implementation • 26 Mar 2023 • Kuniaki Saito, Donghyun Kim, Piotr Teterwak, Rogerio Feris, Kate Saenko

We propose to use Relative Gradient Norm (RGN) as a way to measure the vulnerability of a backbone to feature distortion, and show that high RGN is indeed correlated with lower OOD performance.

Object Detection • Robust Object Detection
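As a rough sketch, RGN can be thought of as comparing the gradient magnitude reaching the backbone against that of the detection head; the exact normalization in the paper may differ, and the inputs below are toy arrays:

```python
import numpy as np

def relative_gradient_norm(backbone_grads, head_grads, eps=1e-12):
    """Toy Relative Gradient Norm (RGN): backbone gradient magnitude relative
    to the detection head's. High values suggest the backbone's pretrained
    features are being distorted by disproportionately large updates."""
    bb = np.sqrt(sum(float(np.sum(g ** 2)) for g in backbone_grads))
    hd = np.sqrt(sum(float(np.sum(g ** 2)) for g in head_grads))
    return bb / (hd + eps)

# Equal-magnitude gradients give RGN close to 1; a much larger backbone
# gradient (high RGN) would flag a risk of feature distortion.
rgn = relative_gradient_norm([np.ones((4, 4))], [np.ones((4, 4))])
```

In practice the gradient lists would come from the two parameter groups of a detector after a backward pass.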

MixtureEnsembles: Leveraging Parameter Sharing for Efficient Ensembles

no code implementations • 29 Sep 2021 • Piotr Teterwak, Nikoli Dryden, Dina Bashkirova, Kate Saenko, Bryan A. Plummer

We improve on these methods with MixtureEnsembles, which learns to factorize ensemble members with shared parameters by constructing each layer with a linear combination of templates.

OCONet: Image Extrapolation by Object Completion

no code implementations • CVPR 2021 • Richard Strong Bowen, Huiwen Chang, Charles Herrmann, Piotr Teterwak, Ce Liu, Ramin Zabih

Existing methods tend to work well on indoor/outdoor scenes, but struggle to extrapolate images with salient objects in the foreground, or are limited to very specific object classes such as humans.


Understanding Invariance via Feedforward Inversion of Discriminatively Trained Classifiers

no code implementations • 15 Mar 2021 • Piotr Teterwak, Chiyuan Zhang, Dilip Krishnan, Michael C. Mozer

We use our reconstruction model as a tool for exploring the nature of representations, including: the influence of model architecture and training objectives (specifically robust losses), the forms of invariance that networks achieve, representational differences between correctly and incorrectly classified images, and the effects of manipulating logits and images.

Supervised Contrastive Learning

23 code implementations • NeurIPS 2020 • Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan

Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models.

Class Incremental Learning • Contrastive Learning • +4
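The supervised contrastive objective can be sketched in a few lines of NumPy. This is a simplified single-view version; the released implementation handles augmented multi-view batches, and the temperature here is illustrative:

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Simplified supervised contrastive loss: pull together normalized
    embeddings that share a label, push apart all others in the batch."""
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature  # pairwise cosine similarities, scaled
    n = len(labels)
    loss = 0.0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors with no same-label partner contribute nothing
        others = [a for a in range(n) if a != i]
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        # average -log softmax over this anchor's positives
        loss += -np.mean([sim[i, p] - log_denom for p in positives])
    return loss / n

# Toy batch: two points per class, roughly clustered by label.
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
loss = supcon_loss(feats, labels=[0, 0, 1, 1])
```

Unlike the self-supervised InfoNCE loss, each anchor here can have multiple positives (all same-label samples), which is the paper's key generalization.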
