Search Results for author: Rogerio Feris

Found 74 papers, 46 papers with code

Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning

no code implementations6 Mar 2023 Zhen Wang, Rameswar Panda, Leonid Karlinsky, Rogerio Feris, Huan Sun, Yoon Kim

Prompt tuning, in which a base pretrained model is adapted to each task via conditioning on learned prompt vectors, has emerged as a promising approach for efficiently adapting large language models to multiple downstream tasks.

Transfer Learning
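As a concrete illustration of the prompt-tuning setup described in the abstract, here is a minimal numpy sketch: the pretrained weights stay frozen and only a small matrix of prompt vectors, prepended to the input sequence, would be trained. All names, dimensions, and the toy linear "model" are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, prompt_len, seq_len = 8, 4, 6

# Frozen pretrained weights (never updated during prompt tuning).
W_frozen = rng.normal(size=(d, d))

# The only trainable parameters: a small matrix of prompt vectors.
prompt = np.zeros((prompt_len, d))

def forward(token_embeds, prompt):
    # Prepend the learned prompt vectors to the input sequence, then
    # run the frozen model on the extended sequence.
    x = np.concatenate([prompt, token_embeds], axis=0)
    return x @ W_frozen

tokens = rng.normal(size=(seq_len, d))
out = forward(tokens, prompt)
assert out.shape == (prompt_len + seq_len, d)
```

Because only `prompt` receives gradients, the per-task parameter count is `prompt_len * d` rather than the full model size, which is what makes the approach parameter-efficient.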

Learning to Grow Pretrained Models for Efficient Transformer Training

no code implementations2 Mar 2023 Peihao Wang, Rameswar Panda, Lucas Torroba Hennigen, Philip Greengard, Leonid Karlinsky, Rogerio Feris, David Daniel Cox, Zhangyang Wang, Yoon Kim

Scaling transformers has led to significant breakthroughs in many domains, giving rise to a paradigm in which larger versions of existing models are trained and released on a periodic basis.

Synthetic Pre-Training Tasks for Neural Machine Translation

no code implementations19 Dec 2022 Zexue He, Graeme Blackwood, Rameswar Panda, Julian McAuley, Rogerio Feris

Our second approach explores the effect of pre-training on procedurally generated synthetic parallel data that does not depend on any real human language corpus.

Machine Translation Translation

Procedural Image Programs for Representation Learning

1 code implementation29 Nov 2022 Manel Baradad, Chun-Fu Chen, Jonas Wulff, Tongzhou Wang, Rogerio Feris, Antonio Torralba, Phillip Isola

Learning image representations using synthetic data allows training neural networks without some of the concerns associated with real images, such as privacy and bias.

Representation Learning

Exploring Consistency in Cross-Domain Transformer for Domain Adaptive Semantic Segmentation

no code implementations27 Nov 2022 Kaihong Wang, Donghyun Kim, Rogerio Feris, Kate Saenko, Margrit Betke

We propose to perform adaptation on attention maps with cross-domain attention layers that share features between the source and the target domains.

Semantic Segmentation Unsupervised Domain Adaptation

ConStruct-VL: Data-Free Continual Structured VL Concepts Learning

no code implementations17 Nov 2022 James Seale Smith, Paola Cascante-Bonilla, Assaf Arbelle, Donghyun Kim, Rameswar Panda, David Cox, Diyi Yang, Zsolt Kira, Rogerio Feris, Leonid Karlinsky

We therefore propose a data-free method built around a new approach, Adversarial Pseudo-Replay (APR), which generates adversarial reminders of past tasks from past task models.

C2KD: Cross-Lingual Cross-Modal Knowledge Distillation for Multilingual Text-Video Retrieval

no code implementations7 Oct 2022 Andrew Rouditchenko, Yung-Sung Chuang, Nina Shvetsova, Samuel Thomas, Rogerio Feris, Brian Kingsbury, Leonid Karlinsky, David Harwath, Hilde Kuehne, James Glass

Inspired by the fact that English text-video retrieval outperforms other languages, we train a student model using input text in different languages to match the cross-modal predictions from teacher models using input text in English.

Knowledge Distillation Retrieval +2
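The cross-lingual distillation idea above can be sketched as matching a student's cross-modal prediction distribution (non-English query vs. a set of videos) to a teacher's distribution computed from the English query, e.g. with a KL term. The scores below are made-up toy numbers, not results from the paper.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    # KL divergence between two discrete distributions.
    return float(np.sum(p * np.log(p / q)))

# Teacher similarity scores come from English text; the student sees the
# same videos but text in another language, and is trained so that its
# cross-modal predictions match the teacher's.
teacher_logits = np.array([2.0, 0.5, -1.0])  # English query vs. 3 videos
student_logits = np.array([1.0, 0.8, -0.5])  # e.g. German query vs. same videos

loss = kl(softmax(teacher_logits), softmax(student_logits))
assert loss > 0.0
```

Minimizing this loss pushes the student's retrieval ranking in the other language toward the (stronger) English ranking.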

FETA: Towards Specializing Foundation Models for Expert Task Applications

1 code implementation8 Sep 2022 Amit Alfassy, Assaf Arbelle, Oshri Halimi, Sivan Harary, Roei Herzig, Eli Schwartz, Rameswar Panda, Michele Dolfi, Christoph Auer, Kate Saenko, Peter W. J. Staar, Rogerio Feris, Leonid Karlinsky

However, as we show in this paper, FMs still have poor out-of-the-box performance on expert tasks (e.g., retrieval of technical illustrations from car manuals via language queries), whose data is either unseen or belongs to a long-tail part of the data distribution of the huge datasets used for FM pre-training.

Domain Generalization Image Retrieval +6

VALHALLA: Visual Hallucination for Machine Translation

no code implementations CVPR 2022 Yi Li, Rameswar Panda, Yoon Kim, Chun-Fu Chen, Rogerio Feris, David Cox, Nuno Vasconcelos

In particular, given a source sentence, an autoregressive hallucination transformer is used to predict a discrete visual representation from the input text, and the combined text and hallucinated representations are utilized to obtain the target translation.

Multimodal Machine Translation Translation

SimVQA: Exploring Simulated Environments for Visual Question Answering

no code implementations CVPR 2022 Paola Cascante-Bonilla, Hui Wu, Letao Wang, Rogerio Feris, Vicente Ordonez

By exploiting 3D and physics simulation platforms, we provide a pipeline to generate synthetic data to expand and replace type-specific questions and answers without risking the exposure of sensitive or personal data that might be present in real images.

Data Augmentation Question Answering +1

Everything at Once -- Multi-modal Fusion Transformer for Video Retrieval

1 code implementation8 Dec 2021 Nina Shvetsova, Brian Chen, Andrew Rouditchenko, Samuel Thomas, Brian Kingsbury, Rogerio Feris, David Harwath, James Glass, Hilde Kuehne

Multi-modal learning from video data has seen increased attention recently, as it allows training semantically meaningful embeddings without human annotation, enabling tasks like zero-shot retrieval and classification.

Action Localization Retrieval +2

Unsupervised Domain Generalization by Learning a Bridge Across Domains

1 code implementation CVPR 2022 Sivan Harary, Eli Schwartz, Assaf Arbelle, Peter Staar, Shady Abu-Hussein, Elad Amrani, Roei Herzig, Amit Alfassy, Raja Giryes, Hilde Kuehne, Dina Katabi, Kate Saenko, Rogerio Feris, Leonid Karlinsky

The ability to generalize learned representations across significantly different visual domains, such as between real photos, clipart, paintings, and sketches, is a fundamental capacity of the human visual system.

Domain Generalization Self-Supervised Learning

Targeted Supervised Contrastive Learning for Long-Tailed Recognition

1 code implementation CVPR 2022 Tianhong Li, Peng Cao, Yuan Yuan, Lijie Fan, Yuzhe Yang, Rogerio Feris, Piotr Indyk, Dina Katabi

This forces all classes, including minority classes, to maintain a uniform distribution in the feature space, improves class boundaries, and provides better generalization even in the presence of long-tail data.

Contrastive Learning Long-tail Learning

Dynamic Network Quantization for Efficient Video Inference

1 code implementation ICCV 2021 Ximeng Sun, Rameswar Panda, Chun-Fu Chen, Aude Oliva, Rogerio Feris, Kate Saenko

Deep convolutional networks have recently achieved great success in video recognition, yet their practical realization remains a challenge due to the large amount of computational resources required to achieve robust recognition.

Quantization Video Recognition

IA-RED$^2$: Interpretability-Aware Redundancy Reduction for Vision Transformers

no code implementations NeurIPS 2021 Bowen Pan, Rameswar Panda, Yifan Jiang, Zhangyang Wang, Rogerio Feris, Aude Oliva

The self-attention-based transformer model has recently become the leading backbone in the field of computer vision.

Dynamic Distillation Network for Cross-Domain Few-Shot Recognition with Unlabeled Data

1 code implementation NeurIPS 2021 Ashraful Islam, Chun-Fu Chen, Rameswar Panda, Leonid Karlinsky, Rogerio Feris, Richard J. Radke

As the base dataset and unlabeled dataset are from different domains, projecting the target images in the class-domain of the base dataset with a fixed pretrained model might be sub-optimal.

cross-domain few-shot learning

AdaMML: Adaptive Multi-Modal Learning for Efficient Video Recognition

1 code implementation ICCV 2021 Rameswar Panda, Chun-Fu Chen, Quanfu Fan, Ximeng Sun, Kate Saenko, Aude Oliva, Rogerio Feris

Specifically, given a video segment, a multi-modal policy network is used to decide what modalities should be used for processing by the recognition model, with the goal of improving both accuracy and efficiency.

Video Recognition

Spoken Moments: Learning Joint Audio-Visual Representations from Video Descriptions

no code implementations CVPR 2021 Mathew Monfort, SouYoung Jin, Alexander Liu, David Harwath, Rogerio Feris, James Glass, Aude Oliva

With this in mind, the descriptions people generate for videos of different dynamic events can greatly improve our understanding of the key information of interest in each video.

Contrastive Learning Retrieval +1

Pseudo-IoU: Improving Label Assignment in Anchor-Free Object Detection

1 code implementation29 Apr 2021 Jiachen Li, Bowen Cheng, Rogerio Feris, JinJun Xiong, Thomas S. Huang, Wen-mei Hwu, Humphrey Shi

Current anchor-free object detectors are quite simple and effective yet lack accurate label assignment methods, which limits their potential in competing with classic anchor-based models that are supported by well-designed assignment methods based on the Intersection-over-Union (IoU) metric.

object-detection Object Detection
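For reference, the IoU metric that anchor-based assignment methods rely on (and that Pseudo-IoU approximates for anchor-free detectors) is the ratio of the intersection area of two boxes to the area of their union. A minimal implementation for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle corners.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

assert iou((0, 0, 2, 2), (0, 0, 2, 2)) == 1.0   # identical boxes
assert iou((0, 0, 2, 2), (0, 0, 1, 2)) == 0.5   # half overlap
assert iou((0, 0, 1, 1), (2, 2, 3, 3)) == 0.0   # disjoint boxes
```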

Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos

1 code implementation ICCV 2021 Brian Chen, Andrew Rouditchenko, Kevin Duarte, Hilde Kuehne, Samuel Thomas, Angie Boggust, Rameswar Panda, Brian Kingsbury, Rogerio Feris, David Harwath, James Glass, Michael Picheny, Shih-Fu Chang

Multimodal self-supervised learning is receiving growing attention, as it allows not only training large networks without human supervision but also searching and retrieving data across various modalities.

Contrastive Learning Retrieval +4

Improved Techniques for Quantizing Deep Networks with Adaptive Bit-Widths

no code implementations2 Mar 2021 Ximeng Sun, Rameswar Panda, Chun-Fu Chen, Naigang Wang, Bowen Pan, Kailash Gopalakrishnan, Aude Oliva, Rogerio Feris, Kate Saenko

Second, to effectively transfer knowledge, we develop a dynamic block swapping method by randomly replacing the blocks in the lower-precision student network with the corresponding blocks in the higher-precision teacher network.

Image Classification Quantization +2
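The block-swapping idea described in the abstract can be sketched as follows: during training, each block in the forward path is drawn either from the lower-precision student or from the corresponding higher-precision teacher, chosen at random. The string labels and the swap probability here are hypothetical stand-ins for the paper's quantized residual blocks.

```python
import random

random.seed(0)

# Hypothetical stand-ins: each network is a list of "blocks" labelled by
# precision; the real method swaps quantized network blocks.
student = [f"student_block_{i}(low-bit)" for i in range(6)]
teacher = [f"teacher_block_{i}(high-bit)" for i in range(6)]

def swapped_forward_path(student, teacher, swap_prob=0.5):
    # Randomly replace student blocks with the corresponding
    # higher-precision teacher blocks during training.
    return [t if random.random() < swap_prob else s
            for s, t in zip(student, teacher)]

path = swapped_forward_path(student, teacher)
assert len(path) == len(student)
assert all(b in (s, t) for b, s, t in zip(path, student, teacher))
```

Running the mixed path lets gradients flow through teacher blocks, transferring the higher-precision behavior into the student.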

VA-RED$^2$: Video Adaptive Redundancy Reduction

no code implementations ICLR 2021 Bowen Pan, Rameswar Panda, Camilo Fosco, Chung-Ching Lin, Alex Andonian, Yue Meng, Kate Saenko, Aude Oliva, Rogerio Feris

An inherent property of real-world videos is the high correlation of information across frames which can translate into redundancy in either temporal or spatial feature maps of the models, or both.

Semi-Supervised Action Recognition with Temporal Contrastive Learning

1 code implementation CVPR 2021 Ankit Singh, Omprakash Chakraborty, Ashutosh Varshney, Rameswar Panda, Rogerio Feris, Kate Saenko, Abir Das

We approach this problem by learning a two-pathway temporal contrastive model using unlabeled videos at two different speeds leveraging the fact that changing video speed does not change an action.

Action Recognition Contrastive Learning
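The two-pathway idea above pairs representations of the same unlabeled video played at two speeds as positives in a contrastive loss. A toy InfoNCE-style sketch (the feature dimensions, temperature, and synthetic "encodings" are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def nce_loss(anchor, positive, negatives, tau=0.1):
    # InfoNCE-style loss: pull together representations of the same
    # video at two speeds, push apart representations of other videos.
    def sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
v_fast = rng.normal(size=16)                   # clip encoded at 2x speed
v_slow = v_fast + 0.05 * rng.normal(size=16)   # same clip at 1x speed
others = [rng.normal(size=16) for _ in range(4)]

loss = nce_loss(v_slow, v_fast, others)
assert loss > 0.0
```

The loss is small when the two-speed views of the same clip are close and other clips are far, which is exactly the invariance the method exploits.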

A Maximal Correlation Approach to Imposing Fairness in Machine Learning

no code implementations30 Dec 2020 Joshua Lee, Yuheng Bu, Prasanna Sattigeri, Rameswar Panda, Gregory Wornell, Leonid Karlinsky, Rogerio Feris

As machine learning algorithms grow in popularity and diversify to many industries, ethical and legal concerns regarding their fairness have become increasingly relevant.

BIG-bench Machine Learning Fairness

Addressing Feature Suppression in Unsupervised Visual Representations

no code implementations17 Dec 2020 Tianhong Li, Lijie Fan, Yuan Yuan, Hao He, Yonglong Tian, Rogerio Feris, Piotr Indyk, Dina Katabi

However, contrastive learning is susceptible to feature suppression, i.e., it may discard important information relevant to the task of interest, and learn irrelevant features.

Contrastive Learning Representation Learning

Select, Label, and Mix: Learning Discriminative Invariant Feature Representations for Partial Domain Adaptation

no code implementations6 Dec 2020 Aadarsh Sahoo, Rameswar Panda, Rogerio Feris, Kate Saenko, Abir Das

Partial domain adaptation which assumes that the unknown target label space is a subset of the source label space has attracted much attention in computer vision.

Partial Domain Adaptation

Deep Analysis of CNN-based Spatio-temporal Representations for Action Recognition

1 code implementation CVPR 2021 Chun-Fu Chen, Rameswar Panda, Kandan Ramakrishnan, Rogerio Feris, John Cohn, Aude Oliva, Quanfu Fan

In recent years, a number of approaches based on 2D or 3D convolutional neural networks (CNN) have emerged for video action recognition, achieving state-of-the-art results on several large-scale benchmark datasets.

Action Recognition Temporal Action Localization

Mitigating Dataset Imbalance via Joint Generation and Classification

1 code implementation12 Aug 2020 Aadarsh Sahoo, Ankit Singh, Rameswar Panda, Rogerio Feris, Abir Das

In this work we address these questions from the perspective of dataset imbalance resulting from severe under-representation of annotated training data for certain classes, and its effect on both deep classification and generation methods.

Classification General Classification

AR-Net: Adaptive Frame Resolution for Efficient Action Recognition

1 code implementation ECCV 2020 Yue Meng, Chung-Ching Lin, Rameswar Panda, Prasanna Sattigeri, Leonid Karlinsky, Aude Oliva, Kate Saenko, Rogerio Feris

Specifically, given a video frame, a policy network is used to decide what input resolution should be used for processing by the action recognition model, with the goal of improving both accuracy and efficiency.

Action Recognition
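A minimal sketch of the per-frame decision described above: a tiny policy "network" produces logits over candidate resolutions and routes each frame to the arg-max choice. The resolution list, feature dimension, and linear policy are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
resolutions = [224, 168, 112, 84]  # candidate input sizes (illustrative)

def policy(frame_feature, W):
    # Logits over candidate resolutions; at inference the arg-max
    # resolution is chosen for each frame.
    logits = W @ frame_feature
    return resolutions[int(np.argmax(logits))]

W = rng.normal(size=(len(resolutions), 16))
frames = rng.normal(size=(8, 16))   # 8 frames, 16-dim features each
chosen = [policy(f, W) for f in frames]

# Every frame is routed to exactly one candidate resolution.
assert all(r in resolutions for r in chosen)
```

Easy frames can then be processed at low resolution, reserving the expensive full-resolution pathway for frames the policy deems informative.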

NASTransfer: Analyzing Architecture Transferability in Large Scale Neural Architecture Search

no code implementations23 Jun 2020 Rameswar Panda, Michele Merler, Mayoore Jaiswal, Hui Wu, Kandan Ramakrishnan, Ulrich Finkler, Chun-Fu Chen, Minsik Cho, David Kung, Rogerio Feris, Bishwaranjan Bhattacharjee

The typical way of conducting large scale NAS is to search for an architectural building block on a small dataset (either using a proxy set from the large dataset or a completely different small scale dataset) and then transfer the block to a larger dataset.

Neural Architecture Search

Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation

1 code implementation CVPR 2020 Zhonghao Wang, Mo Yu, Yunchao Wei, Rogerio Feris, JinJun Xiong, Wen-mei Hwu, Thomas S. Huang, Humphrey Shi

In this work, we consider the problem of unsupervised domain adaptation for semantic segmentation by easing the domain shift between the source domain (synthetic data) and the target domain (real data).

Semantic Segmentation Unsupervised Domain Adaptation

TAFSSL: Task-Adaptive Feature Sub-Space Learning for few-shot classification

1 code implementation ECCV 2020 Moshe Lichtenstein, Prasanna Sattigeri, Rogerio Feris, Raja Giryes, Leonid Karlinsky

The field of Few-Shot Learning (FSL), or learning from very few (typically 1 or 5) examples per novel class (unseen during training), has received a lot of attention and significant performance advances in the recent literature.

Few-Shot Learning General Classification

A Broader Study of Cross-Domain Few-Shot Learning

2 code implementations ECCV 2020 Yunhui Guo, Noel C. Codella, Leonid Karlinsky, James V. Codella, John R. Smith, Kate Saenko, Tajana Rosing, Rogerio Feris

Extensive experiments on the proposed benchmark are performed to evaluate state-of-the-art meta-learning approaches, transfer learning approaches, and newer methods for cross-domain few-shot learning.

cross-domain few-shot learning Few-Shot Image Classification +1

MetAdapt: Meta-Learned Task-Adaptive Architecture for Few-Shot Classification

no code implementations1 Dec 2019 Sivan Doveh, Eli Schwartz, Chao Xue, Rogerio Feris, Alex Bronstein, Raja Giryes, Leonid Karlinsky

In this work, we propose to employ tools inspired by the Differentiable Neural Architecture Search (D-NAS) literature in order to optimize the architecture for FSL without over-fitting.

Classification Few-Shot Learning +2

Baby steps towards few-shot learning with multiple semantics

no code implementations5 Jun 2019 Eli Schwartz, Leonid Karlinsky, Rogerio Feris, Raja Giryes, Alex M. Bronstein

Learning from one or few visual examples is one of the key capabilities of humans since early infancy, but is still a significant challenge for modern AI systems.

Few-Shot Image Classification Few-Shot Learning

Fashion IQ: A New Dataset Towards Retrieving Images by Natural Language Feedback

2 code implementations CVPR 2021 Hui Wu, Yupeng Gao, Xiaoxiao Guo, Ziad Al-Halah, Steven Rennie, Kristen Grauman, Rogerio Feris

We provide a detailed analysis of the characteristics of the Fashion IQ data, and present a transformer-based user simulator and interactive image retriever that can seamlessly integrate visual attributes with image features, user feedback, and dialog history, leading to improved performance over the state of the art in dialog-based image retrieval.

Image Retrieval Retrieval

LaSO: Label-Set Operations networks for multi-label few-shot learning

2 code implementations CVPR 2019 Amit Alfassy, Leonid Karlinsky, Amit Aides, Joseph Shtok, Sivan Harary, Rogerio Feris, Raja Giryes, Alex M. Bronstein

We conduct numerous experiments showing promising results for the label-set manipulation capabilities of the proposed approach, both directly (using the classification and retrieval metrics), and in the context of performing data augmentation for multi-label few-shot learning.

Data Augmentation Few-Shot Learning +2

Depthwise Convolution is All You Need for Learning Multiple Visual Domains

1 code implementation3 Feb 2019 Yunhui Guo, Yandong Li, Rogerio Feris, Liqiang Wang, Tajana Rosing

A model aware of the relationships between different domains can also be trained to work on new domains with fewer resources.

Continual Learning

SpotTune: Transfer Learning through Adaptive Fine-tuning

3 code implementations CVPR 2019 Yunhui Guo, Honghui Shi, Abhishek Kumar, Kristen Grauman, Tajana Rosing, Rogerio Feris

Transfer learning, which allows a source task to affect the inductive bias of the target task, is widely used in computer vision.

Inductive Bias Transfer Learning

Co-regularized Alignment for Unsupervised Domain Adaptation

no code implementations NeurIPS 2018 Abhishek Kumar, Prasanna Sattigeri, Kahini Wadhawan, Leonid Karlinsky, Rogerio Feris, William T. Freeman, Gregory Wornell

Deep neural networks, trained with large amounts of labeled data, can fail to generalize well when tested on examples from a target domain whose distribution differs from that of the training data, referred to as the source domain.

Unsupervised Domain Adaptation

Big-Little Net: An Efficient Multi-Scale Feature Representation for Visual and Speech Recognition

3 code implementations ICLR 2019 Chun-Fu Chen, Quanfu Fan, Neil Mallinar, Tom Sercu, Rogerio Feris

The proposed approach demonstrates improvement of model efficiency and performance on both object recognition and speech recognition tasks, using popular architectures including ResNet and ResNeXt.

Object Recognition speech-recognition +1

RepMet: Representative-based metric learning for classification and one-shot object detection

1 code implementation12 Jun 2018 Leonid Karlinsky, Joseph Shtok, Sivan Harary, Eli Schwartz, Amit Aides, Rogerio Feris, Raja Giryes, Alex M. Bronstein

Distance metric learning (DML) has been successfully applied to object classification, both in the standard regime of rich training data and in the few-shot scenario, where each category is represented by only a few examples.

Classification Few-Shot Object Detection +4

Collaborative Human-AI (CHAI): Evidence-Based Interpretable Melanoma Classification in Dermoscopic Images

1 code implementation30 May 2018 Noel C. F. Codella, Chung-Ching Lin, Allan Halpern, Michael Hind, Rogerio Feris, John R. Smith

Quantitative relevance of results, according to non-expert similarity, as well as localization of image regions, is also significantly improved.

General Classification

Segmentation of both Diseased and Healthy Skin from Clinical Photographs in a Primary Care Setting

no code implementations16 Apr 2018 Noel C. F. Codella, Daren Anderson, Tyler Philips, Anthony Porto, Kevin Massey, Jane Snowdon, Rogerio Feris, John Smith

The stark performance improvement with fine-tuning compared to direct transfer and direct training emphasizes both the need for adequate representative data of diseased skin, and the utility of other publicly available data sources for this task.

Learning to Separate Object Sounds by Watching Unlabeled Video

2 code implementations ECCV 2018 Ruohan Gao, Rogerio Feris, Kristen Grauman

Our work is the first to learn audio source separation from large-scale "in the wild" videos containing multiple audio sources per video.

Audio Denoising Audio Source Separation +2

Revisiting RCNN: On Awakening the Classification Power of Faster RCNN

3 code implementations ECCV 2018 Bowen Cheng, Yunchao Wei, Honghui Shi, Rogerio Feris, JinJun Xiong, Thomas Huang

Recent region-based object detectors are usually built with separate classification and localization branches on top of shared feature extraction networks.

Classification General Classification +1

Improving Object Detection from Scratch via Gated Feature Reuse

2 code implementations4 Dec 2017 Zhiqiang Shen, Honghui Shi, Jiahui Yu, Hai Phan, Rogerio Feris, Liangliang Cao, Ding Liu, Xinchao Wang, Thomas Huang, Marios Savvides

In this paper, we present a simple and parameter-efficient drop-in module for one-stage object detectors like SSD when learning from scratch (i.e., without pre-trained models).

object-detection Object Detection

BlockDrop: Dynamic Inference Paths in Residual Networks

1 code implementation CVPR 2018 Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, Rogerio Feris

Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications.

S3Pool: Pooling with Stochastic Spatial Sampling

4 code implementations CVPR 2017 Shuangfei Zhai, Hui Wu, Abhishek Kumar, Yu Cheng, Yongxi Lu, Zhongfei Zhang, Rogerio Feris

We view the pooling operation in CNNs as a two-step procedure: first, a pooling window (e.g., 2×2) slides over the feature map with stride one, which leaves the spatial resolution intact; second, downsampling is performed by selecting one pixel from each non-overlapping pooling window in an often uniform and deterministic (e.g., top-left) manner.

Data Augmentation Image Classification
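The two-step procedure above can be sketched in numpy: a stride-1 max pool that preserves resolution, followed by stochastic selection of one row and column index inside each non-overlapping grid window (rather than the deterministic top-left pixel). The window sizes and padding scheme here are simplifying assumptions for a single-channel map.

```python
import numpy as np

rng = np.random.default_rng(0)

def s3pool(x, grid=2):
    """Sketch of S3Pool for a 2D feature map.
    Step 1: 2x2 max pooling at stride 1 (resolution preserved).
    Step 2: stochastic spatial sampling, one random index per
    non-overlapping grid x grid window."""
    h, w = x.shape
    padded = np.pad(x, ((0, 1), (0, 1)), constant_values=-np.inf)
    pooled = np.maximum.reduce([padded[:h, :w],   padded[1:h + 1, :w],
                                padded[:h, 1:w + 1], padded[1:h + 1, 1:w + 1]])
    rows = [g * grid + rng.integers(grid) for g in range(h // grid)]
    cols = [g * grid + rng.integers(grid) for g in range(w // grid)]
    return pooled[np.ix_(rows, cols)]

x = rng.normal(size=(4, 4))
y = s3pool(x)
assert y.shape == (2, 2)
```

Because the sampled indices change every forward pass, the stochastic second step acts as an implicit data augmentation at test-train time, which is the connection to the Data Augmentation tag above.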

Generative Adversarial Networks as Variational Training of Energy Based Models

1 code implementation6 Nov 2016 Shuangfei Zhai, Yu Cheng, Rogerio Feris, Zhongfei Zhang

We propose VGAN, which works by minimizing a variational lower bound of the negative log likelihood (NLL) of an energy based model (EBM), where the model density $p(\mathbf{x})$ is approximated by a variational distribution $q(\mathbf{x})$ that is easy to sample from.
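The bound referred to in the abstract can be written out as follows; this is a sketch of the standard energy-based-model decomposition under the abstract's notation, with $E(\mathbf{x})$ the energy and $H(q)$ the entropy of the variational distribution:

```latex
-\log p(\mathbf{x}) = E(\mathbf{x}) + \log Z,
\qquad Z = \int e^{-E(\mathbf{x})}\, d\mathbf{x}.
```

By Jensen's inequality, for any variational distribution $q(\mathbf{x})$,

```latex
\log Z
  = \log \mathbb{E}_{q}\!\left[\frac{e^{-E(\mathbf{x})}}{q(\mathbf{x})}\right]
  \ge -\,\mathbb{E}_{q}\!\left[E(\mathbf{x})\right] + H(q),
```

so the NLL admits the variational lower bound

```latex
\mathbb{E}_{p_{\text{data}}}\!\left[E(\mathbf{x})\right]
  - \mathbb{E}_{q}\!\left[E(\mathbf{x})\right] + H(q)
  \;\le\;
\mathbb{E}_{p_{\text{data}}}\!\left[E(\mathbf{x})\right] + \log Z,
```

which is minimized over the energy while the bound is tightened over $q$, the distribution that is easy to sample from.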

Efficient Maximum Appearance Search for Large-Scale Object Detection

no code implementations CVPR 2013 Qiang Chen, Zheng Song, Rogerio Feris, Ankur Datta, Liangliang Cao, Zhongyang Huang, Shuicheng Yan

In recent years, efficiency of large-scale object detection has arisen as an important topic due to the exponential growth in the size of benchmark object detection datasets.

object-detection Object Detection
