Search Results for author: Rui Hou

Found 28 papers, 14 papers with code

Jack of All Tasks, Master of Many: Designing General-purpose Coarse-to-Fine Vision-Language Model

no code implementations19 Dec 2023 Shraman Pramanick, Guangxing Han, Rui Hou, Sayan Nag, Ser-Nam Lim, Nicolas Ballas, Qifan Wang, Rama Chellappa, Amjad Almahairi

In this work, we introduce VistaLLM, a powerful visual system that addresses coarse- and fine-grained VL tasks over single and multiple input images using a unified framework.

Attribute, Language Modelling +1

Synergistic Anchored Contrastive Pre-training for Few-Shot Relation Extraction

1 code implementation19 Dec 2023 Da Luo, Yanglei Gan, Rui Hou, Run Lin, Qiao Liu, Yuxiang Cai, Wannian Gao

Specifically, our framework involves a symmetrical contrastive objective that encompasses both sentence-anchored and label-anchored contrastive losses.

Contrastive Learning, Relation +2
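A minimal sketch of the symmetrical objective described above, combining a sentence-anchored and a label-anchored contrastive term. Embedding shapes, the temperature, and the treatment of same-label in-batch negatives are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def symmetric_anchored_loss(sent_emb, label_emb, labels, tau=0.07):
    """sent_emb: (B, d) sentence embeddings; label_emb: (C, d) relation-label
    embeddings; labels: (B,) gold label indices."""
    s = F.normalize(sent_emb, dim=-1)
    lab = F.normalize(label_emb, dim=-1)
    # Sentence-anchored term: pull each sentence toward its gold label.
    loss_sent = F.cross_entropy(s @ lab.T / tau, labels)
    # Label-anchored term: use the gold label embedding as the anchor and
    # contrast it against all sentences in the batch (a simplification:
    # other same-label sentences in the batch are treated as negatives).
    anchors = lab[labels]                                 # (B, d)
    loss_label = F.cross_entropy(anchors @ s.T / tau,
                                 torch.arange(s.size(0)))
    return 0.5 * (loss_sent + loss_label)
```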

RoAST: Robustifying Language Models via Adversarial Perturbation with Selective Training

1 code implementation7 Dec 2023 Jaehyung Kim, Yuning Mao, Rui Hou, Hanchao Yu, Davis Liang, Pascale Fung, Qifan Wang, Fuli Feng, Lifu Huang, Madian Khabsa

Under a unified evaluation of fine-tuned LMs by incorporating four representative perspectives of model robustness, we demonstrate the effectiveness of RoAST compared to state-of-the-art fine-tuning methods on six different types of LMs, which indicates its usefulness in practice.

Adversarial Robustness
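The title points to two ingredients: adversarial perturbation of inputs and selective parameter updates. Below is a hedged sketch of how one such training step could look, assuming a model that maps input embeddings directly to logits; the perturbation scale and the top-k gradient-masking rule are assumptions, not the paper's procedure.

```python
import torch

def roast_style_step(model, embeds, labels, loss_fn, eps=1e-3, keep_frac=0.3):
    # model: maps input embeddings (B, T, d) to logits (an assumption).
    embeds = embeds.detach().requires_grad_(True)
    loss = loss_fn(model(embeds), labels)
    grad = torch.autograd.grad(loss, embeds)[0]
    # Adversarial perturbation: nudge embeddings in the loss-increasing direction.
    adv = embeds + eps * grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    loss_fn(model(adv), labels).backward()
    # Selective training: keep only the largest-magnitude gradient entries
    # in each parameter tensor; an optimizer.step() would follow.
    for p in model.parameters():
        if p.grad is None:
            continue
        flat = p.grad.abs().flatten()
        k = max(1, int(keep_frac * flat.numel()))
        thresh = flat.topk(k).values.min()
        p.grad.mul_((p.grad.abs() >= thresh).to(p.grad.dtype))
```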

MART: Improving LLM Safety with Multi-round Automatic Red-Teaming

no code implementations13 Nov 2023 Suyu Ge, Chunting Zhou, Rui Hou, Madian Khabsa, Yi-Chia Wang, Qifan Wang, Jiawei Han, Yuning Mao

Specifically, an adversarial LLM and a target LLM interplay with each other in an iterative manner, where the adversarial LLM aims to generate challenging prompts that elicit unsafe responses from the target LLM, while the target LLM is fine-tuned with safety aligned data on these adversarial prompts.

Instruction Following, Response Generation
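A schematic of the multi-round interplay the abstract describes. All names here (StubLLM, is_unsafe, finetune, make_safe_response) are placeholders standing in for real model APIs, a safety classifier, and an SFT step; the control flow, not the stubs, is the point.

```python
class StubLLM:                              # stand-in for a real model API
    def generate(self, prompt):
        return "UNSAFE: " + prompt

def is_unsafe(response):                    # stand-in for a safety classifier
    return response.startswith("UNSAFE")

def make_safe_response(prompt):             # stand-in for safety-aligned data
    return "I can't help with that."

def finetune(model, pairs):                 # stand-in for an SFT step
    return model

def mart_loop(adversarial_llm, target_llm, seed_prompts, rounds=4):
    prompts = list(seed_prompts)
    for _ in range(rounds):
        # The adversarial LLM rewrites prompts to elicit unsafe responses.
        attacks = [adversarial_llm.generate(p) for p in prompts]
        successes = [a for a in attacks if is_unsafe(target_llm.generate(a))]
        # The target LLM is fine-tuned with safety-aligned data on the
        # adversarial prompts that succeeded this round.
        target_llm = finetune(
            target_llm, [(a, make_safe_response(a)) for a in successes])
        # The adversary trains on its successful attacks for the next round.
        adversarial_llm = finetune(adversarial_llm, successes)
        prompts = successes or list(seed_prompts)
    return target_llm

target = mart_loop(StubLLM(), StubLLM(), ["how do I ...?"])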

Effective Long-Context Scaling of Foundation Models

1 code implementation27 Sep 2023 Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, Hao Ma

We also examine the impact of various design choices in the pretraining process, including the data mix and the training curriculum of sequence lengths. Our ablation experiments suggest that having abundant long texts in the pretraining dataset is not the key to achieving strong performance, and we empirically verify that long-context continual pretraining is more efficient and similarly effective compared to pretraining from scratch with long sequences.

Continual Pretraining, Language Modelling
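The snippet argues for continual pretraining on longer sequences rather than pretraining from scratch. A minimal sketch of the data side of that recipe: packing tokenized documents into fixed-length long-context training examples. The 32,768-token length and EOS id are illustrative assumptions.

```python
import itertools

def pack_sequences(token_streams, seq_len=32768, eos_id=2):
    """Concatenate tokenized documents (with EOS separators) and cut the
    stream into fixed-length examples for long-context continual pretraining."""
    stream = itertools.chain.from_iterable(
        doc + [eos_id] for doc in token_streams)
    buf = []
    for tok in stream:
        buf.append(tok)
        if len(buf) == seq_len:
            yield buf
            buf = []

# Usage: batches = list(pack_sequences(corpus_tokens, seq_len=32768))
```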

Aspect-oriented Opinion Alignment Network for Aspect-Based Sentiment Classification

1 code implementation22 Aug 2023 Xueyi Liu, Rui Hou, Yanglei Gan, Da Luo, Changlin Li, Xiaojun Shi, Qiao Liu

In addition, we design a multi-perspective attention mechanism that aligns relevant opinion information with respect to the given aspect.

Management, Sentiment Analysis +1
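A hedged sketch of aspect-conditioned attention: score each context token against the aspect representation and pool opinion information toward it. The "multi-perspective" part is approximated here with multiple attention heads; dimensions are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class AspectOpinionAttention(nn.Module):
    def __init__(self, dim=256, perspectives=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, perspectives, batch_first=True)

    def forward(self, context, aspect):
        """context: (B, T, d) token states; aspect: (B, d) aspect vector.
        Returns a (B, d) aspect-aware opinion summary plus the alignments."""
        query = aspect.unsqueeze(1)                 # the aspect is the query
        out, weights = self.attn(query, context, context)
        return out.squeeze(1), weights
```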

A Study on Knowledge Distillation from Weak Teacher for Scaling Up Pre-trained Language Models

1 code implementation26 May 2023 Hayeon Lee, Rui Hou, Jongpil Kim, Davis Liang, Sung Ju Hwang, Alexander Min

Distillation from Weak Teacher (DWT) is a method of transferring knowledge from a smaller, weaker teacher model to a larger student model to improve the student's performance.

Knowledge Distillation
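A minimal sketch of the distillation objective the description implies: the larger student matches the smaller, weaker teacher's softened logits alongside the ordinary task loss. The mixing weight and temperature are illustrative assumptions.

```python
import torch.nn.functional as F

def dwt_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    # Standard task loss on gold labels.
    task = F.cross_entropy(student_logits, labels)
    # KL between temperature-softened student and teacher distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    distill = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean") * T * T
    return (1 - alpha) * task + alpha * distill
```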

Learning Easily Updated General Purpose Text Representations with Adaptable Task-Specific Prefixes

no code implementations22 May 2023 Kuan-Hao Huang, Liang Tan, Rui Hou, Sinong Wang, Amjad Almahairi, Ruty Rinott

Fine-tuning a large pre-trained language model for each downstream task incurs a computational burden at inference time, because a separate forward pass is required for every task.

Language Modelling
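The title suggests a frozen, shared backbone with small adaptable task-specific prefixes, so the expensive text representation can be reused across tasks. A minimal prefix-style sketch under that assumption; prefix length and hidden size are illustrative.

```python
import torch
import torch.nn as nn

class TaskPrefix(nn.Module):
    def __init__(self, num_tasks, prefix_len=16, dim=768):
        super().__init__()
        # One small trainable prefix per task; the backbone stays frozen.
        self.prefix = nn.Parameter(
            torch.randn(num_tasks, prefix_len, dim) * 0.02)

    def forward(self, token_embeds, task_id):
        """Prepend the task's prefix to the shared token embeddings."""
        B = token_embeds.size(0)
        pre = self.prefix[task_id].unsqueeze(0).expand(B, -1, -1)
        return torch.cat([pre, token_embeds], dim=1)
```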

Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization

1 code implementation6 May 2023 Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, Jimmy Ba, Amjad Almahairi

In this work, we introduce Residual Prompt Tuning - a simple and efficient method that significantly improves the performance and stability of prompt tuning.
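The residual reparameterization in miniature: the soft-prompt embeddings are passed through a shallow MLP with a skip connection before being fed to the frozen model. Prompt length and hidden sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResidualPrompt(nn.Module):
    def __init__(self, prompt_len=10, dim=768, hidden=256):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self):
        # Residual connection: output = P + MLP(P), which is what
        # stabilizes prompt tuning relative to tuning P directly.
        return self.prompt + self.mlp(self.prompt)
```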

MMViT: Multiscale Multiview Vision Transformers

no code implementations28 Apr 2023 Yuchen Liu, Natasha Ong, Kaiyan Peng, Bo Xiong, Qifan Wang, Rui Hou, Madian Khabsa, Kaiyue Yang, David Liu, Donald S. Williamson, Hanchao Yu

Our model encodes different views of the input signal and builds several channel-resolution feature stages to process the multiple views of the input at different resolutions in parallel.

Image Classification
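A hedged sketch of the parallel multiview idea: the same input is encoded at several channel widths and resolutions, each view processed by its own branch in parallel. The patchify-style branches and stage configuration are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MultiViewStage(nn.Module):
    def __init__(self, in_ch=3, widths=(32, 64), strides=(4, 8)):
        super().__init__()
        # One branch per (channel-width, resolution) view of the input.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, w, kernel_size=s, stride=s)
            for w, s in zip(widths, strides))

    def forward(self, x):
        # Each branch yields a token sequence for one view; downstream
        # stages would process and fuse these views.
        return [b(x).flatten(2).transpose(1, 2) for b in self.branches]
```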

SVT: Supertoken Video Transformer for Efficient Video Understanding

no code implementations1 Apr 2023 Chenbin Pan, Rui Hou, Hanchao Yu, Qifan Wang, Senem Velipasalar, Madian Khabsa

Whether they process videos at a fixed resolution from start to end or incorporate pooling and down-scaling strategies, existing video transformers process the entire video content throughout the network without specially handling its large portions of redundant information.

Video Understanding

Progressive Prompts: Continual Learning for Language Models

2 code implementations29 Jan 2023 Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, Amjad Almahairi

We introduce Progressive Prompts - a simple and efficient approach for continual learning in language models.

Continual Learning
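Progressive Prompts in miniature: for each new task a fresh prompt is learned and concatenated with the frozen prompts of all earlier tasks, which is how the approach counters forgetting. Prompt length and dimension are illustrative.

```python
import torch
import torch.nn as nn

class ProgressivePrompts(nn.Module):
    def __init__(self, prompt_len=10, dim=768):
        super().__init__()
        self.prompt_len, self.dim = prompt_len, dim
        self.prompts = nn.ParameterList()

    def start_task(self):
        # Freeze every previously learned prompt, then add a new one.
        for p in self.prompts:
            p.requires_grad_(False)
        self.prompts.append(
            nn.Parameter(torch.randn(self.prompt_len, self.dim) * 0.02))

    def forward(self):
        # The effective prompt is the concatenation over all tasks so far.
        return torch.cat(list(self.prompts), dim=0)
```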

Query Rewriting for Effective Misinformation Discovery

no code implementations14 Oct 2022 Ashkan Kazemi, Artem Abzaliev, Naihao Deng, Rui Hou, Scott A. Hale, Verónica Pérez-Rosas, Rada Mihalcea

We propose a novel system to help fact-checkers formulate search queries for known misinformation claims and effectively search across multiple social media platforms.

Misinformation, reinforcement-learning +2

IDPG: An Instance-Dependent Prompt Generation Method

no code implementations NAACL 2022 Zhuofeng Wu, Sinong Wang, Jiatao Gu, Rui Hou, Yuxiao Dong, V. G. Vinod Vydiswaran, Hao Ma

Prompt tuning is a new, efficient NLP transfer learning paradigm that adds a task-specific prompt to each input instance during the model training stage.

Language Modelling, Natural Language Understanding +2
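The instance-dependent idea in brief: instead of one shared soft prompt, a lightweight generator maps each input's representation to its own prompt. The two-layer bottleneck generator below is an assumption about the architecture, for illustration only.

```python
import torch
import torch.nn as nn

class InstancePromptGenerator(nn.Module):
    def __init__(self, dim=768, bottleneck=64, prompt_len=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, bottleneck), nn.Tanh(),
            nn.Linear(bottleneck, prompt_len * dim))
        self.prompt_len, self.dim = prompt_len, dim

    def forward(self, instance_repr):
        """instance_repr: (B, d) pooled input representation -> (B, L, d)
        prompt that is prepended to that instance's token embeddings."""
        return self.net(instance_repr).view(-1, self.prompt_len, self.dim)
```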

UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning

1 code implementation ACL 2022 Yuning Mao, Lambert Mathias, Rui Hou, Amjad Almahairi, Hao Ma, Jiawei Han, Wen-tau Yih, Madian Khabsa

Recent parameter-efficient language model tuning (PELT) methods manage to match the performance of fine-tuning with much fewer trainable parameters and perform especially well when training data is limited.

Language Modelling, Model Selection
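UniPELT's core mechanism, as I understand it from the paper, is to keep several PELT submodules active in each layer and let small learned gates decide how much each contributes. The gate parameterization below is a simplified assumption.

```python
import torch
import torch.nn as nn

class GatedPELTLayer(nn.Module):
    def __init__(self, dim=768, submodules=None):
        super().__init__()
        # e.g. {"adapter": ..., "lora": ...}: any mix of parameter-efficient
        # modules mapping (B, T, d) -> (B, T, d).
        self.submodules = nn.ModuleDict(submodules or {})
        self.gates = nn.ModuleDict(
            {name: nn.Linear(dim, 1) for name in self.submodules})

    def forward(self, hidden):
        out = hidden
        for name, mod in self.submodules.items():
            # Per-example gate in [0, 1] scales this submodule's update.
            g = torch.sigmoid(self.gates[name](hidden.mean(dim=1)))  # (B, 1)
            out = out + g.unsqueeze(1) * mod(hidden)
        return out

layer = GatedPELTLayer(submodules={"adapter": nn.Sequential(
    nn.Linear(768, 64), nn.ReLU(), nn.Linear(64, 768))})
```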

Classification of Multiple Diseases on Body CT Scans using Weakly Supervised Deep Learning

1 code implementation3 Aug 2020 Fakrul Islam Tushar, Vincent M. D'Anniballe, Rui Hou, Maciej A. Mazurowski, Wanyi Fu, Ehsan Samei, Geoffrey D. Rubin, Joseph Y. Lo

Purpose: To design multi-disease classifiers for body CT scans for three different organ systems using automatically extracted labels from radiology text reports. Materials & Methods: This retrospective study included a total of 12,092 patients (mean age 57 ± 18; 6,172 women) for model development and testing (from 2012-2017).

Computed Tomography (CT), General Classification
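The pipeline described: labels are extracted automatically from radiology reports (weak supervision), then a scan-level multi-label classifier is trained. The keyword-matching extractor, disease list, and tiny 3D backbone below are placeholders, not the paper's implementation.

```python
import torch
import torch.nn as nn

DISEASES = ["atelectasis", "emphysema", "mass"]   # illustrative subset

def labels_from_report(report_text):
    # Weak labels: a disease is positive if its name appears in the report.
    text = report_text.lower()
    return torch.tensor([float(d in text) for d in DISEASES])

backbone = nn.Sequential(             # stand-in for a 3D CNN over CT volumes
    nn.Conv3d(1, 8, 3, stride=2), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
    nn.Flatten(), nn.Linear(8, len(DISEASES)))

def training_step(volume, report_text):
    """volume: (1, 1, D, H, W) CT volume; returns the multi-label loss."""
    logits = backbone(volume)
    target = labels_from_report(report_text).unsqueeze(0)
    return nn.functional.binary_cross_entropy_with_logits(logits, target)
```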

Semantically-Guided Representation Learning for Self-Supervised Monocular Depth

1 code implementation ICLR 2020 Vitor Guizilini, Rui Hou, Jie Li, Rares Ambrus, Adrien Gaidon

Instead of using semantic labels and proxy losses in a multi-task approach, we propose a new architecture leveraging fixed pretrained semantic segmentation networks to guide self-supervised representation learning via pixel-adaptive convolutions.

Depth Prediction, Monocular Depth Estimation +3
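A hedged sketch of semantic guidance via pixel-adaptive weighting: features from the frozen segmentation network decide how depth features are mixed within each pixel's neighborhood, so they do not blur across semantic boundaries. This is a simplified stand-in for pixel-adaptive convolutions, not the paper's exact operator.

```python
import torch
import torch.nn.functional as F

def pixel_adaptive_mix(depth_feat, sem_feat):
    """depth_feat, sem_feat: (B, C, H, W). Re-weight each pixel's 3x3
    neighborhood by semantic-feature similarity."""
    B, C, H, W = depth_feat.shape
    d_patches = F.unfold(depth_feat, 3, padding=1).view(B, C, 9, H * W)
    s_patches = F.unfold(sem_feat, 3, padding=1).view(B, C, 9, H * W)
    center = sem_feat.view(B, C, 1, H * W)
    # Affinity between each pixel and its neighbors in semantic space.
    aff = (-((s_patches - center) ** 2).mean(1, keepdim=True)).softmax(dim=2)
    mixed = (d_patches * aff).sum(dim=2)              # (B, C, H*W)
    return mixed.view(B, C, H, W)
```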

Real-Time Panoptic Segmentation from Dense Detections

no code implementations CVPR 2020 Rui Hou, Jie Li, Arjun Bhargava, Allan Raventos, Vitor Guizilini, Chao Fang, Jerome Lynch, Adrien Gaidon

Panoptic segmentation is a complex full scene parsing task requiring simultaneous instance and semantic segmentation at high resolution.

Clustering, object-detection +4

Towards Automatic Detection of Misinformation in Online Medical Videos

no code implementations4 Sep 2019 Rui Hou, Verónica Pérez-Rosas, Stacy Loeb, Rada Mihalcea

Recent years have witnessed a significant increase in the online sharing of medical information, with videos representing a large fraction of such online sources.

Misinformation

An End-to-end 3D Convolutional Neural Network for Action Detection and Segmentation in Videos

no code implementations30 Nov 2017 Rui Hou, Chen Chen, Mubarak Shah

A video is first divided into equal-length clips; then, for each clip, a set of tube proposals is generated based on 3D CNN features.

Action Detection, Action Segmentation +4
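The first steps of the pipeline described above: split a video into equal-length clips and run a 3D CNN per clip to produce the features that feed the tube-proposal head. The clip length and the tiny backbone are illustrative stand-ins.

```python
import torch
import torch.nn as nn

CLIP_LEN = 8  # frames per clip (an assumption for illustration)

def split_into_clips(video):
    """video: (T, C, H, W) -> list of (CLIP_LEN, C, H, W) clips."""
    T = video.shape[0] - video.shape[0] % CLIP_LEN
    return list(video[:T].split(CLIP_LEN, dim=0))

backbone = nn.Sequential(             # stand-in for the paper's 3D ConvNet
    nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d((1, 4, 4)))

def clip_features(clip):
    # 3D CNNs expect (B, C, T, H, W); the flattened per-clip feature
    # would feed a tube-proposal scoring head.
    x = clip.permute(1, 0, 2, 3).unsqueeze(0)
    return backbone(x).flatten(1)     # (1, 16*4*4) per clip
```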

Tube Convolutional Neural Network (T-CNN) for Action Detection in Videos

1 code implementation ICCV 2017 Rui Hou, Chen Chen, Mubarak Shah

A video is first divided into equal-length clips, and for each clip a set of tube proposals is then generated based on 3D Convolutional Network (ConvNet) features.

Action Detection, Image Classification +2
