Search Results for author: Ruochen Xu

Found 30 papers, 16 papers with code

In-Context Demonstration Selection with Cross Entropy Difference

1 code implementation · 24 May 2023 · Dan Iter, Reid Pryzant, Ruochen Xu, Shuohang Wang, Yang Liu, Yichong Xu, Chenguang Zhu

Our method is based on the observation that the effectiveness of an in-context demonstration negatively correlates with the perplexity of the test example under a language model finetuned on that demonstration (see the sketch below).

Language Modelling · Text Generation
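
The selection rule the abstract describes lends itself to a short sketch: score each candidate demonstration by the cross-entropy difference (CED) it induces on the test example, then keep the best one. The helpers `finetune` and `avg_nll` below are hypothetical stand-ins, not the paper's API.

```python
# Hypothetical sketch of CED-based demonstration selection.
# `finetune(lm, demo)` and `avg_nll(lm, text)` are assumed helpers:
# one-demonstration finetuning and average negative log-likelihood scoring.

def select_demonstration(base_lm, candidates, test_example, finetune, avg_nll):
    """Pick the demonstration whose finetuned LM most lowers the
    test example's perplexity relative to the base LM."""
    best_demo, best_ced = None, float("inf")
    for demo in candidates:
        tuned_lm = finetune(base_lm, demo)   # LM finetuned on this demo only
        # Cross-entropy difference: negative values mean the demo
        # makes the test example more likely (lower perplexity).
        ced = avg_nll(tuned_lm, test_example) - avg_nll(base_lm, test_example)
        if ced < best_ced:
            best_demo, best_ced = demo, ced
    return best_demo
```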

G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment

2 code implementations · 29 Mar 2023 · Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, Chenguang Zhu

In this work, we present G-Eval, a framework that uses large language models with chain-of-thought (CoT) reasoning and a form-filling paradigm to assess the quality of NLG outputs (see the sketch below).

Dialogue Generation · NLG Evaluation +1
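
A minimal sketch of a G-Eval-style scoring call is below; `llm` is an assumed prompt-to-text completion function, and the evaluation steps and form wording are illustrative, not the paper's exact prompts (the paper additionally weights candidate scores by their token probabilities).

```python
# Illustrative G-Eval-style form-filling evaluator; prompt text is an assumption.

def g_eval_score(llm, source, summary, criterion="coherence"):
    prompt = (
        f"You will rate a summary for {criterion} on a scale of 1 to 5.\n"
        "Evaluation steps:\n"
        "1. Read the source document and the summary.\n"
        f"2. Judge the summary against the source for {criterion}.\n"
        "3. Fill in the form below with a single integer.\n\n"
        f"Source: {source}\n"
        f"Summary: {summary}\n\n"
        f"{criterion.capitalize()} score (1-5):"
    )
    reply = llm(prompt)
    digits = [ch for ch in reply if ch.isdigit()]   # parse the filled form
    return int(digits[0]) if digits else None
```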

Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners

1 code implementation · 22 May 2022 · Zhenhailong Wang, Manling Li, Ruochen Xu, Luowei Zhou, Jie Lei, Xudong Lin, Shuohang Wang, Ziyi Yang, Chenguang Zhu, Derek Hoiem, Shih-Fu Chang, Mohit Bansal, Heng Ji

The goal of this work is to build flexible video-language models that can generalize to various video-to-text tasks from few examples, such as domain-specific captioning, question answering, and future event prediction.

Attribute · Automatic Speech Recognition +6

Fusing Context Into Knowledge Graph for Commonsense Question Answering

2 code implementations · Findings (ACL) 2021 · Yichong Xu, Chenguang Zhu, Ruochen Xu, Yang Liu, Michael Zeng, Xuedong Huang

However, while a KG contains rich structural information, it lacks the context needed for a more precise understanding of the concepts.

Ranked #4 on Common Sense Reasoning on CommonsenseQA (using extra training data)

Common Sense Reasoning · Knowledge Graphs +3

CLIP-Event: Connecting Text and Images with Event Structures

1 code implementation · CVPR 2022 · Manling Li, Ruochen Xu, Shuohang Wang, Luowei Zhou, Xudong Lin, Chenguang Zhu, Michael Zeng, Heng Ji, Shih-Fu Chang

Vision-language (V+L) pretraining models have achieved great success in supporting multimedia applications by understanding the alignments between images and text.

Contrastive Learning · Event Extraction +2

Learning Visual Representation from Modality-Shared Contrastive Language-Image Pre-training

1 code implementation · 26 Jul 2022 · Haoxuan You, Luowei Zhou, Bin Xiao, Noel Codella, Yu Cheng, Ruochen Xu, Shih-Fu Chang, Lu Yuan

Large-scale multi-modal contrastive pre-training has demonstrated great utility for learning transferable features for a range of downstream tasks by mapping multiple modalities into a shared embedding space (see the loss sketch below).
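
The shared-embedding objective behind this family of models is the symmetric contrastive (InfoNCE) loss; a NumPy sketch is below. This is the standard CLIP-style formulation, not the paper's modality-shared variant specifically.

```python
import numpy as np

def symmetric_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """img_emb, txt_emb: (N, D) L2-normalized embeddings of N matched pairs."""
    logits = img_emb @ txt_emb.T / temperature        # (N, N) similarities
    n = logits.shape[0]

    def cross_entropy_diagonal(mat):
        mat = mat - mat.max(axis=1, keepdims=True)    # numerical stability
        log_probs = mat - np.log(np.exp(mat).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Matched pairs sit on the diagonal; average both retrieval directions.
    return 0.5 * (cross_entropy_diagonal(logits)
                  + cross_entropy_diagonal(logits.T))
```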

UniSumm and SummZoo: Unified Model and Diverse Benchmark for Few-Shot Summarization

1 code implementation · 17 Nov 2022 · Yulong Chen, Yang Liu, Ruochen Xu, Ziyi Yang, Chenguang Zhu, Michael Zeng, Yue Zhang

The high annotation costs and diverse demands of various summarization tasks motivate the development of few-shot summarization.

Unsupervised Cross-lingual Transfer of Word Embedding Spaces

1 code implementation · EMNLP 2018 · Ruochen Xu, Yiming Yang, Naoki Otani, Yuexin Wu

Supervised methods for this problem rely on cross-lingual supervision, using either parallel corpora or bilingual lexicons as labeled training data, which may not be available for many low-resource languages.

Bilingual Lexicon Induction · Cross-Lingual Transfer +4

Cross-lingual Distillation for Text Classification

1 code implementation · ACL 2017 · Ruochen Xu, Yiming Yang

Using soft probabilistic predictions for documents in a label-rich language as (induced) supervisory labels in a parallel corpus, we successfully train classifiers for new languages in which labeled training data are not available (see the loss sketch below).

General Classification · Model Compression +2
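
The training signal described above reduces to a soft-label distillation loss: the source-language teacher's class probabilities supervise the student on the aligned target-language documents. A NumPy sketch under that reading:

```python
import numpy as np

def soft_label_distillation_loss(teacher_probs, student_logits):
    """teacher_probs: (N, C) soft labels from the source-language classifier.
    student_logits: (N, C) student scores on the parallel target documents."""
    z = student_logits - student_logits.max(axis=1, keepdims=True)
    log_student = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Cross-entropy against soft targets (KL divergence up to a constant).
    return -(teacher_probs * log_student).sum(axis=1).mean()
```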

Predicting Performance for Natural Language Processing Tasks

1 code implementation · ACL 2020 · Mengzhou Xia, Antonios Anastasopoulos, Ruochen Xu, Yiming Yang, Graham Neubig

Given the complexity of combinations of tasks, languages, and domains in natural language processing (NLP) research, it is computationally prohibitive to exhaustively test newly proposed models on each possible experimental setting.

ERBench: An Entity-Relationship based Automatically Verifiable Hallucination Benchmark for Large Language Models

1 code implementation · 8 Mar 2024 · Jio Oh, Soyeon Kim, Junseok Seo, Jindong Wang, Ruochen Xu, Xing Xie, Steven Euijong Whang

Our key idea is to construct questions using the database schema, records, and functional dependencies such that they can be automatically verified (see the sketch below).

Hallucination · Prompt Engineering
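
The construction the abstract describes can be sketched in a few lines: a functional dependency (e.g., title → director) turns any record into a question whose gold answer is already in the database, so verification is mechanical. The movie schema and containment check below are illustrative assumptions, not ERBench's actual data or matcher.

```python
# Hypothetical sketch of building an automatically verifiable question
# from a record and a functional dependency; the schema is an example.

def make_verifiable_question(record, determinant, dependent, entity="movie"):
    question = f"What is the {dependent} of the {entity} '{record[determinant]}'?"
    return question, record[dependent]            # the record holds the answer

def verify(model_reply, answer):
    return answer.lower() in model_reply.lower()  # naive containment check

q, a = make_verifiable_question(
    {"title": "Inception", "director": "Christopher Nolan"},
    determinant="title", dependent="director")
# q: "What is the director of the movie 'Inception'?"   a: "Christopher Nolan"
```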

Leveraging Multilingual Training for Limited Resource Event Extraction

no code implementations · COLING 2016 · Andrew Hsi, Yiming Yang, Jaime Carbonell, Ruochen Xu

Event extraction has become one of the most important topics in information extraction, but to date, there is very limited work on leveraging cross-lingual training to boost performance.

Dependency Parsing · Event Argument Extraction +4

The ARIEL-CMU Systems for LoReHLT18

no code implementations · 24 Feb 2019 · Aditi Chaudhary, Siddharth Dalmia, Junjie Hu, Xinjian Li, Austin Matthews, Aldrian Obaja Muis, Naoki Otani, Shruti Rijhwani, Zaid Sheikh, Nidhi Vyas, Xinyi Wang, Jiateng Xie, Ruochen Xu, Chunting Zhou, Peter J. Jansen, Yiming Yang, Lori Levin, Florian Metze, Teruko Mitamura, David R. Mortensen, Graham Neubig, Eduard Hovy, Alan W. Black, Jaime Carbonell, Graham V. Horwood, Shabnam Tafreshi, Mona Diab, Efsun S. Kayi, Noura Farra, Kathleen McKeown

This paper describes the ARIEL-CMU submissions to the Low Resource Human Language Technologies (LoReHLT) 2018 evaluations for the tasks Machine Translation (MT), Entity Discovery and Linking (EDL), and detection of Situation Frames in Text and Speech (SF Text and Speech).

Machine Translation · Translation

Does Knowledge Help General NLU? An Empirical Study

no code implementations · 1 Sep 2021 · Ruochen Xu, Yuwei Fang, Chenguang Zhu, Michael Zeng

It is often observed in knowledge-centric tasks (e.g., commonsense question answering, relation classification) that integrating external knowledge, such as entity representations, into language models can provide useful information that boosts performance.

Common Sense Reasoning · Language Modelling +2

MA-CLIP: Towards Modality-Agnostic Contrastive Language-Image Pre-training

no code implementations · 29 Sep 2021 · Haoxuan You, Luowei Zhou, Bin Xiao, Noel C Codella, Yu Cheng, Ruochen Xu, Shih-Fu Chang, Lu Yuan

Large-scale multimodal contrastive pretraining has demonstrated great utility in supporting high performance on a range of downstream tasks by mapping multiple modalities into a shared embedding space.

LMGQS: A Large-scale Dataset for Query-focused Summarization

no code implementations · 22 May 2023 · Ruochen Xu, Song Wang, Yang Liu, Shuohang Wang, Yichong Xu, Dan Iter, Chenguang Zhu, Michael Zeng

We hypothesize that there is a hidden query for each summary sentence in a generic summarization annotation, and we utilize a large-scale pretrained language model to recover it (see the sketch below).

Language Modelling · Query-focused Summarization +1
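
The recovery step can be sketched as a single prompt to a pretrained LM; `llm` is an assumed completion function and the prompt wording is a guess at the idea, not the dataset's actual template.

```python
# Hypothetical sketch of recovering the hidden query behind one
# summary sentence; `llm` maps a prompt string to a completion string.

def recover_hidden_query(llm, document, summary_sentence):
    prompt = (
        "Given a document and one sentence from its summary, write the "
        "question that this sentence answers.\n\n"
        f"Document: {document}\n"
        f"Summary sentence: {summary_sentence}\n"
        "Question:"
    )
    return llm(prompt).strip()
```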

InheritSumm: A General, Versatile and Compact Summarizer by Distilling from GPT

no code implementations · 22 May 2023 · Yichong Xu, Ruochen Xu, Dan Iter, Yang Liu, Shuohang Wang, Chenguang Zhu, Michael Zeng

While large models such as GPT-3 demonstrate exceptional performance on zero-shot and few-shot summarization tasks, their high serving and fine-tuning costs hinder their use in many applications.

Language Models can be Logical Solvers

no code implementations · 10 Nov 2023 · Jiazhan Feng, Ruochen Xu, Junheng Hao, Hiteshi Sharma, Yelong Shen, Dongyan Zhao, Weizhu Chen

Despite their impressive performance, any parsing error will inevitably cause the execution of the external logical solver to fail, yielding no answer to the logical question.

Decision Making · Language Modelling +1

SciAgent: Tool-augmented Language Models for Scientific Reasoning

no code implementations · 18 Feb 2024 · Yubo Ma, Zhibin Gou, Junheng Hao, Ruochen Xu, Shuohang Wang, Liangming Pan, Yujiu Yang, Yixin Cao, Aixin Sun, Hany Awadalla, Weizhu Chen

To make this task more practical and solvable for LLMs, we introduce a new task setting named tool-augmented scientific reasoning.

DyVal 2: Dynamic Evaluation of Large Language Models by Meta Probing Agents

no code implementations · 21 Feb 2024 · Kaijie Zhu, Jindong Wang, Qinlin Zhao, Ruochen Xu, Xing Xie

Our multifaceted analysis demonstrates a strong correlation among the basic abilities, as well as an implicit Matthew effect with respect to model size, i.e., larger models exhibit stronger correlations among these abilities.

Data Augmentation
