Search Results for author: Sayna Ebrahimi

Found 27 papers, 13 papers with code

Gradient-free Policy Architecture Search and Adaptation

no code implementations • 16 Oct 2017 • Sayna Ebrahimi, Anna Rohrbach, Trevor Darrell

We develop a method for policy architecture search and adaptation via gradient-free optimization which can learn to perform autonomous driving tasks.

Autonomous Driving Neural Architecture Search
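
As a concrete illustration of the gradient-free optimization this method relies on, below is a minimal evolution-strategy sketch; the rollout function `evaluate`, the flat parameter vector, and all hyperparameters are hypothetical stand-ins, not the paper's actual search space or optimizer.

```python
# Generic evolution-strategy sketch of gradient-free policy search; illustrative only.
import numpy as np

def evolution_strategy(evaluate, theta0, sigma=0.1, lr=0.02, pop=50, iters=100):
    """evaluate(theta) -> scalar episode return; theta0: flat policy parameters."""
    theta = theta0.copy()
    for _ in range(iters):
        eps = np.random.randn(pop, theta.size)                # random perturbations
        rewards = np.array([evaluate(theta + sigma * e) for e in eps])
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        theta += lr / (pop * sigma) * eps.T @ rewards         # ES gradient estimate
    return theta
```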

Compositional GAN: Learning Image-Conditional Binary Composition

1 code implementation • 19 Jul 2018 • Samaneh Azadi, Deepak Pathak, Sayna Ebrahimi, Trevor Darrell

Generative Adversarial Networks (GANs) can produce images of remarkable complexity and realism but are generally structured to sample from a single latent source, ignoring the explicit spatial interaction between multiple entities that could be present in a scene.

Uncertainty-guided Lifelong Learning in Bayesian Networks

no code implementations • 27 Sep 2018 • Sayna Ebrahimi, Mohamed Elhoseiny, Trevor Darrell, Marcus Rohrbach

Sequential learning of tasks arriving in a continuous stream is a complex problem that becomes more challenging when the model has a fixed capacity.

Continual Learning

Cross-Linked Variational Autoencoders for Generalized Zero-Shot Learning

no code implementations • ICLR Workshop LLD 2019 • Edgar Schönfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, Zeynep Akata

While following the same direction, we also take artificial feature generation one step further and propose a model where a shared latent space of image features and class embeddings is learned by aligned variational autoencoders, for the purpose of generating latent features to train a softmax classifier.

Few-Shot Learning Generalized Zero-Shot Learning
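
A minimal sketch of the cross-alignment idea described above, assuming hypothetical encoder/decoder modules for image features and class embeddings; the paper's additional latent distribution-alignment term is omitted for brevity.

```python
# Hedged sketch of cross-aligned VAEs: two modality-specific VAEs share a latent
# space, and each decoder must also reconstruct its modality from the *other*
# modality's latent code. Module names are hypothetical.
import torch
import torch.nn.functional as F

def cross_aligned_loss(img_enc, img_dec, attr_enc, attr_dec, x_img, x_attr):
    mu_i, logvar_i = img_enc(x_img)
    mu_a, logvar_a = attr_enc(x_attr)
    z_i = mu_i + torch.randn_like(mu_i) * (0.5 * logvar_i).exp()  # reparameterize
    z_a = mu_a + torch.randn_like(mu_a) * (0.5 * logvar_a).exp()
    recon = F.l1_loss(img_dec(z_i), x_img) + F.l1_loss(attr_dec(z_a), x_attr)
    cross = F.l1_loss(img_dec(z_a), x_img) + F.l1_loss(attr_dec(z_i), x_attr)
    kld = -0.5 * torch.mean(1 + logvar_i - mu_i.pow(2) - logvar_i.exp()) \
          - 0.5 * torch.mean(1 + logvar_a - mu_a.pow(2) - logvar_a.exp())
    return recon + cross + kld
```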

Compositional GAN (Extended Abstract): Learning Image-Conditional Binary Composition

no code implementations • ICLR Workshop DeepGenStruct 2019 • Samaneh Azadi, Deepak Pathak, Sayna Ebrahimi, Trevor Darrell

Generative Adversarial Networks (GANs) can produce images of surprising complexity and realism but are generally structured to sample from a single latent source, ignoring the explicit spatial interaction between multiple entities that could be present in a scene.

Variational Adversarial Active Learning

6 code implementations • ICCV 2019 • Samarth Sinha, Sayna Ebrahimi, Trevor Darrell

Unlike conventional active learning algorithms, our approach is task-agnostic, i.e., it does not depend on the performance of the task for which we are trying to acquire labeled data.

Active Learning Image Classification +1
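
The task-agnostic criterion can be sketched as follows, assuming a trained VAE encoder that returns (mu, logvar) and a latent-space discriminator trained to separate labeled from unlabeled samples; the loader format and all names are hypothetical, not the paper's reference code.

```python
# VAAL-style selection sketch: pick the unlabeled points the discriminator is most
# confident are unlabeled (i.e., least "looks labeled"). Names are hypothetical.
import torch

@torch.no_grad()
def select_for_labeling(encoder, discriminator, unlabeled_loader, budget):
    scores, indices = [], []
    for idx, x in unlabeled_loader:          # assumed to yield (index, image) pairs
        mu, _logvar = encoder(x)             # VAE posterior mean as the latent code
        p_labeled = torch.sigmoid(discriminator(mu)).squeeze(1)
        scores.append(p_labeled)
        indices.append(idx)
    scores = torch.cat(scores)
    indices = torch.cat(indices)
    # Lowest "looks labeled" probability = most informative under this criterion.
    chosen = torch.topk(-scores, k=budget).indices
    return indices[chosen]
```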

WiCV 2019: The Sixth Women In Computer Vision Workshop

no code implementations • 23 Sep 2019 • Irene Amerini, Elena Balashova, Sayna Ebrahimi, Kathryn Leonard, Arsha Nagrani, Amaia Salvador

In this paper, we present the Women in Computer Vision Workshop (WiCV 2019), organized in conjunction with CVPR 2019.

Adversarial Continual Learning

1 code implementation • ECCV 2020 • Sayna Ebrahimi, Franziska Meier, Roberto Calandra, Trevor Darrell, Marcus Rohrbach

We show that shared features are significantly less prone to forgetting and propose a novel hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features required to solve a sequence of tasks.

Continual Learning Image Classification
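
One way to picture the disjoint shared/task-specific representation is an adversarial game in which a task discriminator tries to identify the task from shared features. The sketch below uses an entropy-maximization adversarial term and hypothetical module names; it is illustrative, not the paper's exact objective.

```python
# Shared/private split sketch: train the shared encoder so a task discriminator
# cannot tell which task produced its features; private encoders stay task-specific.
import torch
import torch.nn.functional as F

def acl_step(shared, private, head, task_disc, x, y, task_id, opt_model, opt_disc):
    z_shared = shared(x)
    z_private = private[task_id](x)
    logits = head[task_id](torch.cat([z_shared, z_private], dim=1))

    # 1) Discriminator learns to predict the task from shared features.
    opt_disc.zero_grad()
    targets = torch.full((x.size(0),), task_id, dtype=torch.long, device=x.device)
    d_loss = F.cross_entropy(task_disc(z_shared.detach()), targets)
    d_loss.backward()
    opt_disc.step()

    # 2) Model minimizes task loss while fooling the discriminator (maximizing
    #    its entropy over tasks keeps shared features task-invariant).
    opt_model.zero_grad()
    probs = F.softmax(task_disc(shared(x)), dim=1)
    adv_loss = (probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()  # neg. entropy
    loss = F.cross_entropy(logits, y) + adv_loss
    loss.backward()
    opt_model.step()
```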

Minimax Active Learning

no code implementations • 18 Dec 2020 • Sayna Ebrahimi, William Gan, Dian Chen, Giscard Biamby, Kamyar Salahi, Michael Laielli, Shizhan Zhu, Trevor Darrell

Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.

Active Learning Clustering +2

Self-Supervised Pretraining Improves Self-Supervised Pretraining

1 code implementation • 23 Mar 2021 • Colorado J. Reed, Xiangyu Yue, Ani Nrusimha, Sayna Ebrahimi, Vivek Vijaykumar, Richard Mao, Bo Li, Shanghang Zhang, Devin Guillory, Sean Metzger, Kurt Keutzer, Trevor Darrell

Through experimentation on 16 diverse vision datasets, we show HPT converges up to 80x faster, improves accuracy across tasks, and improves the robustness of the self-supervised pretraining process to changes in the image augmentation policy or amount of pretraining data.

Image Augmentation

Predicting with Confidence on Unseen Distributions

no code implementations • ICCV 2021 • Devin Guillory, Vaishaal Shankar, Sayna Ebrahimi, Trevor Darrell, Ludwig Schmidt

Our work connects techniques from domain adaptation and predictive uncertainty literature, and allows us to predict model accuracy on challenging unseen distributions without access to labeled data.

Domain Adaptation
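
In this spirit, a difference-of-confidences style estimator corrects source accuracy by the drop in the model's average confidence on the target distribution; the exact formulation below is illustrative, not necessarily the paper's final method.

```python
# Sketch: shifts in average model confidence track shifts in accuracy, so use the
# confidence gap between source and target to estimate unlabeled-target accuracy.
import numpy as np

def estimate_target_accuracy(source_probs, source_labels, target_probs):
    source_conf = source_probs.max(axis=1)
    source_acc = (source_probs.argmax(axis=1) == source_labels).mean()
    target_conf = target_probs.max(axis=1)
    # Difference of confidences: correct source accuracy by the confidence drop.
    return source_acc - (source_conf.mean() - target_conf.mean())
```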

Region-level Active Detector Learning

no code implementations • 20 Aug 2021 • Michael Laielli, Giscard Biamby, Dian Chen, Ritwik Gupta, Adam Loeffler, Phat Dat Nguyen, Ross Luo, Trevor Darrell, Sayna Ebrahimi

Active learning for object detection is conventionally achieved by applying techniques developed for classification in a way that aggregates individual detections into image-level selection criteria.

Active Learning Object +2
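
The conventional image-level aggregation that this paper moves beyond can be sketched as scoring each detection by predictive entropy and reducing the per-detection scores to a single acquisition score per image; names and the reduction rule are illustrative.

```python
# Illustrative baseline aggregation: per-detection class entropy reduced to one
# image-level acquisition score (the convention region-level methods improve on).
import numpy as np

def image_uncertainty(detection_class_probs, reduce="max"):
    """detection_class_probs: list of (num_classes,) arrays, one per detection."""
    if not detection_class_probs:
        return 0.0
    ent = [-(p * np.log(p + 1e-12)).sum() for p in detection_class_probs]
    return max(ent) if reduce == "max" else float(np.mean(ent))
```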

On-target Adaptation

1 code implementation • 2 Sep 2021 • Dequan Wang, Shaoteng Liu, Sayna Ebrahimi, Evan Shelhamer, Trevor Darrell

Domain adaptation seeks to mitigate the shift between training on the source domain and testing on the target domain.

Domain Adaptation

Zero-Shot Reward Specification via Grounded Natural Language

no code implementations • 29 Sep 2021 • Parsa Mahmoudieh, Sayna Ebrahimi, Deepak Pathak, Trevor Darrell

Reward signals in reinforcement learning can be expensive to obtain in many tasks and often require access to the direct state.

Reinforcement Learning (RL)

Differentiable Gradient Sampling for Learning Implicit 3D Scene Reconstructions from a Single Image

no code implementations • ICLR 2022 • Shizhan Zhu, Sayna Ebrahimi, Angjoo Kanazawa, Trevor Darrell

Existing approaches for single-object reconstruction impose supervision signals based on a loss over the signed distance values at all locations in a scene, posing difficulties when extending to real-world scenarios.

Indoor Scene Reconstruction Object Reconstruction +1

Contrastive Test-Time Adaptation

1 code implementation • CVPR 2022 • Dian Chen, Dequan Wang, Trevor Darrell, Sayna Ebrahimi

We propose a novel way to leverage self-supervised contrastive learning to facilitate target feature learning, along with an online pseudo labeling scheme with refinement that significantly denoises pseudo labels.

Contrastive Learning Test-time Adaptation +1
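
The pseudo-label refinement step can be sketched as soft-voting over nearest neighbors in a feature memory bank; tensor names and the voting rule here are illustrative rather than the paper's reference implementation.

```python
# Hedged sketch of nearest-neighbor pseudo-label refinement: denoise each target
# sample's pseudo label by averaging the predictions of its neighbors in a bank.
import torch

@torch.no_grad()
def refine_pseudo_labels(feats, bank_feats, bank_probs, k=5):
    feats = torch.nn.functional.normalize(feats, dim=1)
    bank = torch.nn.functional.normalize(bank_feats, dim=1)
    sim = feats @ bank.t()                       # cosine similarity to the bank
    nn_idx = sim.topk(k, dim=1).indices          # k nearest neighbors per sample
    refined = bank_probs[nn_idx].mean(dim=1)     # average neighbors' predictions
    return refined.argmax(dim=1)                 # denoised hard pseudo labels
```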

Test-Time Adaptation for Visual Document Understanding

no code implementations • 15 Jun 2022 • Sayna Ebrahimi, Sercan O. Arik, Tomas Pfister

For visual document understanding (VDU), self-supervised pretraining has been shown to successfully generate transferable representations, yet effective adaptation of such representations to distribution shifts at test time remains unexplored.

document understanding Language Modelling +5

Beyond Invariance: Test-Time Label-Shift Adaptation for Distributions with "Spurious" Correlations

1 code implementation • 28 Nov 2022 • Qingyao Sun, Kevin Murphy, Sayna Ebrahimi, Alexander D'Amour

However, we assume that the generative model for features $p(x|y, z)$ is invariant across domains.
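
This invariance assumption licenses a classic EM-style prior adjustment: if $p(x|y, z)$ is fixed, only the prior over the joint label $m = (y, z)$ shifts, so source-classifier posteriors can be re-weighted by an estimated target prior. A worked numpy sketch of that standard adjustment (illustrative, not the paper's full method):

```python
# EM re-estimation of the target prior from unlabeled target data, given
# (assumed calibrated) source posteriors p(m|x) and the source prior p(m).
import numpy as np

def em_label_shift(probs_src, prior_src, n_iter=100):
    """probs_src: (N, M) source posteriors p(m|x); prior_src: (M,) source prior."""
    prior_tgt = prior_src.copy()
    for _ in range(n_iter):
        w = probs_src * (prior_tgt / prior_src)        # re-weight by prior ratio
        probs_tgt = w / w.sum(axis=1, keepdims=True)   # E-step: adapted posteriors
        prior_tgt = probs_tgt.mean(axis=0)             # M-step: re-estimate prior
    return prior_tgt, probs_tgt
```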

ASPEST: Bridging the Gap Between Active Learning and Selective Prediction

1 code implementation • 7 Apr 2023 • Jiefeng Chen, Jinsung Yoon, Sayna Ebrahimi, Sercan O. Arik, Somesh Jha, Tomas Pfister

In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain while increasing accuracy and coverage.

Active Learning
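
The paradigm (though not ASPEST's specific algorithm) can be sketched as spending the labeling budget on the lowest-margin target samples while abstaining below a confidence threshold; the function name and margin criterion are illustrative.

```python
# Active selective prediction sketch: query the annotator on the least-certain
# target samples, and abstain on predictions below a confidence threshold.
import numpy as np

def query_and_select(target_probs, budget, threshold=0.8):
    sorted_p = np.sort(target_probs, axis=1)
    margin = sorted_p[:, -1] - sorted_p[:, -2]       # top-1 minus top-2 probability
    query_idx = np.argsort(margin)[:budget]         # smallest margins: to annotator
    accept = target_probs.max(axis=1) >= threshold   # selective prediction mask
    return query_idx, accept
```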

LANISTR: Multimodal Learning from Structured and Unstructured Data

no code implementations • 26 May 2023 • Sayna Ebrahimi, Sercan O. Arik, Yihe Dong, Tomas Pfister

To bridge this gap, we propose LANISTR, an attention-based framework to learn from LANguage, Image, and STRuctured data.

Time Series

PAITS: Pretraining and Augmentation for Irregularly-Sampled Time Series

1 code implementation • 25 Aug 2023 • Nicasia Beebe-Wang, Sayna Ebrahimi, Jinsung Yoon, Sercan O. Arik, Tomas Pfister

In this paper, we present PAITS (Pretraining and Augmentation for Irregularly-sampled Time Series), a framework for identifying suitable pretraining strategies for sparse and irregularly sampled time series datasets.

Time Series

Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs

no code implementations • 18 Oct 2023 • Jiefeng Chen, Jinsung Yoon, Sayna Ebrahimi, Sercan O. Arik, Tomas Pfister, Somesh Jha

Large language models (LLMs) have recently shown great advances in a variety of tasks, including natural language understanding and generation.

Decision Making Natural Language Understanding +1

TextGenSHAP: Scalable Post-hoc Explanations in Text Generation with Long Documents

no code implementations • 3 Dec 2023 • James Enouen, Hootan Nakhost, Sayna Ebrahimi, Sercan O. Arik, Yan Liu, Tomas Pfister

Given their nature as black boxes using complex reasoning processes on their inputs, it is inevitable that the demand for scalable and faithful explanations of LLM-generated content will continue to grow.

Question Answering Text Generation
