Search Results for author: Chen Xing

Found 25 papers, 13 papers with code

DocQueryNet: Value Retrieval with Arbitrary Queries for Form-like Documents

1 code implementation COLING 2022 Mingfei Gao, Le Xue, Chetan Ramaiah, Chen Xing, Ran Xu, Caiming Xiong

Unlike previous methods that only address a fixed set of field items, our method predicts the target value for an arbitrary query based on its understanding of the layout and semantics of a form.

document understanding Language Modelling +1

FOFO: A Benchmark to Evaluate LLMs' Format-Following Capability

1 code implementation 28 Feb 2024 Congying Xia, Chen Xing, Jiangshu Du, Xinyi Yang, Yihao Feng, Ran Xu, Wenpeng Yin, Caiming Xiong

This paper presents FoFo, a pioneering benchmark for evaluating large language models' (LLMs) ability to follow complex, domain-specific formats, a crucial yet underexamined capability for their application as AI agents.

Lemur: Harmonizing Natural Language and Code for Language Agents

1 code implementation10 Oct 2023 Yiheng Xu, Hongjin Su, Chen Xing, Boyu Mi, Qian Liu, Weijia Shi, Binyuan Hui, Fan Zhou, Yitao Liu, Tianbao Xie, Zhoujun Cheng, Siheng Zhao, Lingpeng Kong, Bailin Wang, Caiming Xiong, Tao Yu

We introduce Lemur and Lemur-Chat, openly accessible language models optimized for both natural language and coding capabilities to serve as the backbone of versatile language agents.

XGen-7B Technical Report

1 code implementation 7 Sep 2023 Erik Nijkamp, Tian Xie, Hiroaki Hayashi, Bo Pang, Congying Xia, Chen Xing, Jesse Vig, Semih Yavuz, Philippe Laban, Ben Krause, Senthil Purushwalkam, Tong Niu, Wojciech Kryściński, Lidiya Murakhovs'ka, Prafulla Kumar Choubey, Alex Fabbri, Ye Liu, Rui Meng, Lifu Tu, Meghana Bhat, Chien-Sheng Wu, Silvio Savarese, Yingbo Zhou, Shafiq Joty, Caiming Xiong

Most open-source LLMs, on the other hand, are limited in their ability to support longer sequence lengths, which is a key requirement for many tasks that require inference over an input context.


Mask-free OVIS: Open-Vocabulary Instance Segmentation without Manual Mask Annotations

no code implementations CVPR 2023 Vibashan VS, Ning Yu, Chen Xing, Can Qin, Mingfei Gao, Juan Carlos Niebles, Vishal M. Patel, Ran Xu

In summary, an OV method learns task-specific information using strong supervision from base annotations and novel category information using weak supervision from image-caption pairs.

Image Captioning Instance Segmentation +2

GlueGen: Plug and Play Multi-modal Encoders for X-to-image Generation

1 code implementation ICCV 2023 Can Qin, Ning Yu, Chen Xing, Shu Zhang, Zeyuan Chen, Stefano Ermon, Yun Fu, Caiming Xiong, Ran Xu

Empirical results show that GlueNet can be trained efficiently and enables various capabilities beyond previous state-of-the-art models: 1) multilingual language models such as XLM-Roberta can be aligned with existing T2I models, allowing for the generation of high-quality images from captions beyond English; 2) GlueNet can align multi-modal encoders such as AudioCLIP with the Stable Diffusion model, enabling sound-to-image generation; 3) it can also upgrade the current text encoder of the latent diffusion model for challenging case generation.

Image Generation

Model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning

no code implementations 23 Oct 2022 Xiangyu Peng, Chen Xing, Prafulla Kumar Choubey, Chien-Sheng Wu, Caiming Xiong

In this way, SESoM inherits the superior generalization of model-ensemble approaches while simultaneously capturing the sample-specific competence of each source prompt.

Transfer Learning
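
The entry above describes weighting each source prompt's prediction on a per-sample basis. A minimal PyTorch sketch of that general idea, assuming a simple scorer over a pooled input representation (module names, shapes, and the scoring scheme are illustrative, not the authors' code):

```python
# Minimal sketch (not the SESoM implementation): a sample-specific ensemble
# that weights the predictions of several source-prompt models per input.
import torch
import torch.nn as nn

class SampleSpecificEnsemble(nn.Module):
    def __init__(self, num_sources: int, hidden_dim: int):
        super().__init__()
        # Scores how relevant each source model is for a given input.
        self.scorer = nn.Linear(hidden_dim, num_sources)

    def forward(self, input_repr: torch.Tensor, source_logits: torch.Tensor):
        # input_repr:    (batch, hidden_dim) pooled encoding of the target-task input
        # source_logits: (batch, num_sources, num_classes) predictions of each
        #                source-prompt model on the same input
        weights = torch.softmax(self.scorer(input_repr), dim=-1)        # (batch, num_sources)
        combined = (weights.unsqueeze(-1) * source_logits).sum(dim=1)   # weighted ensemble
        return combined  # (batch, num_classes) sample-specific ensemble logits

# Toy usage with random tensors
model = SampleSpecificEnsemble(num_sources=4, hidden_dim=16)
logits = model(torch.randn(2, 16), torch.randn(2, 4, 10))
```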

Value Retrieval with Arbitrary Queries for Form-like Documents

1 code implementation 15 Dec 2021 Mingfei Gao, Le Xue, Chetan Ramaiah, Chen Xing, Ran Xu, Caiming Xiong

Unlike previous methods that only address a fixed set of field items, our method predicts the target value for an arbitrary query based on its understanding of the layout and semantics of a form.

document understanding Language Modelling +1

Open Vocabulary Object Detection with Pseudo Bounding-Box Labels

1 code implementation 18 Nov 2021 Mingfei Gao, Chen Xing, Juan Carlos Niebles, Junnan Li, Ran Xu, Wenhao Liu, Caiming Xiong

To enlarge the set of base classes, we propose a method to automatically generate pseudo bounding-box annotations of diverse objects from large-scale image-caption pairs.

Object object-detection +1

Improving Gender Fairness of Pre-Trained Language Models without Catastrophic Forgetting

no code implementations 11 Oct 2021 Zahra Fatemi, Chen Xing, Wenhao Liu, Caiming Xiong

In this work, we empirically show that catastrophic forgetting occurs in such methods by evaluating them with general NLP tasks in GLUE.

coreference-resolution Fairness

Taking Notes on the Fly Helps Language Pre-Training

no code implementations ICLR 2021 Qiyu Wu, Chen Xing, Yatao Li, Guolin Ke, Di He, Tie-Yan Liu

In this paper, we focus on improving the efficiency of language pre-training methods through providing better data utilization.

Sentence

Learning from Mistakes: Using Mis-predictions as Harm Alerts in Language Pre-Training

no code implementations 16 Dec 2020 Chen Xing, Wenhao Liu, Caiming Xiong

According to recent studies and our empirical observations, one possible reason is that some easy-to-fit patterns in the training data, such as frequently co-occurring word combinations, dominate and harm pre-training, making it hard for the model to fit more complex information.

Sentence

Taking Notes on the Fly Helps BERT Pre-training

no code implementations 4 Aug 2020 Qiyu Wu, Chen Xing, Yatao Li, Guolin Ke, Di He, Tie-Yan Liu

In this paper, we focus on improving the efficiency of language pre-training methods through providing better data utilization.

Sentence

Distance-Based Learning from Errors for Confidence Calibration

no code implementations ICLR 2020 Chen Xing, Sercan Arik, Zizhao Zhang, Tomas Pfister

To circumvent this by inferring the distance for every test sample, we propose to train a confidence model jointly with the classification model.

Classification General Classification
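
The entry above mentions training a confidence model jointly with the classification model. A minimal PyTorch sketch under assumed names and a simplified objective: the paper supervises confidence with distances learned from errors, whereas the stand-in loss here merely asks the confidence head to predict whether the classifier is correct.

```python
# Minimal sketch (an assumption, not the paper's method): a classifier with an
# auxiliary confidence head, trained jointly on a toy batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassifierWithConfidence(nn.Module):
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.cls_head = nn.Linear(64, num_classes)   # class logits
        self.conf_head = nn.Linear(64, 1)             # scalar confidence score

    def forward(self, x):
        h = self.backbone(x)
        return self.cls_head(h), torch.sigmoid(self.conf_head(h))

model = ClassifierWithConfidence(in_dim=32, num_classes=5)
x, y = torch.randn(8, 32), torch.randint(0, 5, (8,))
logits, conf = model(x)

# Joint objective: standard classification loss plus a calibration-style term
# (here: predict correctness; the paper's actual target is distance-based).
cls_loss = F.cross_entropy(logits, y)
correct = (logits.argmax(dim=-1) == y).float().unsqueeze(-1)
conf_loss = F.binary_cross_entropy(conf, correct)
loss = cls_loss + conf_loss
```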

A Walk with SGD: How SGD Explores Regions of Deep Network Loss?

no code implementations ICLR 2019 Chen Xing, Devansh Arpit, Christos Tsirigotis, Yoshua Bengio

The non-convex nature of the loss landscape of deep neural networks (DNN) lends them the intuition that over the course of training, stochastic optimization algorithms explore different regions of the loss surface by entering and escaping many local minima due to the noise induced by mini-batches.

Stochastic Optimization

Adaptive Cross-Modal Few-Shot Learning

1 code implementation NeurIPS 2019 Chen Xing, Negar Rostamzadeh, Boris N. Oreshkin, Pedro O. Pinheiro

Through a series of experiments, we show that by this adaptive combination of the two modalities, our model outperforms current uni-modality few-shot learning methods and modality-alignment methods by a large margin on all benchmarks and few-shot scenarios tested.

Few-Shot Image Classification Few-Shot Learning +1

A Walk with SGD

no code implementations 24 Feb 2018 Chen Xing, Devansh Arpit, Christos Tsirigotis, Yoshua Bengio

Based on this and other metrics, we deduce that for most of the training update steps, SGD moves in valley like regions of the loss surface by jumping from one valley wall to another at a height above the valley floor.

A Sequential Matching Framework for Multi-turn Response Selection in Retrieval-based Chatbots

no code implementations CL 2019 Yu Wu, Wei Wu, Chen Xing, Can Xu, Zhoujun Li, Ming Zhou

The task requires matching a response candidate with a conversation context, whose challenges include how to recognize important parts of the context, and how to model the relationships among utterances in the context.

Retrieval

Hierarchical Recurrent Attention Network for Response Generation

1 code implementation 25 Jan 2017 Chen Xing, Wei Wu, Yu Wu, Ming Zhou, YaLou Huang, Wei-Ying Ma

With the word level attention, hidden vectors of a word level encoder are synthesized as utterance vectors and fed to an utterance level encoder to construct hidden representations of the context.

Response Generation
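
The entry above describes a word-level encoder whose attention-weighted hidden states form utterance vectors, which a second, utterance-level encoder then consumes to represent the conversation context. A minimal PyTorch sketch of that hierarchy, with assumed module names and dimensions (not the released implementation):

```python
# Minimal sketch (assumed shapes and modules): word-level GRU + attention
# produces one vector per utterance; an utterance-level GRU encodes the context.
import torch
import torch.nn as nn

class HierarchicalContextEncoder(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 64, hid_dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.word_attn = nn.Linear(hid_dim, 1)            # word-level attention scores
        self.utt_rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)

    def forward(self, context: torch.Tensor):
        # context: (batch, num_utterances, num_words) token ids
        b, u, w = context.shape
        words = self.embed(context.view(b * u, w))               # (b*u, w, emb)
        word_h, _ = self.word_rnn(words)                         # (b*u, w, hid)
        scores = torch.softmax(self.word_attn(word_h), dim=1)    # attention over words
        utt_vecs = (scores * word_h).sum(dim=1).view(b, u, -1)   # utterance vectors
        ctx_h, _ = self.utt_rnn(utt_vecs)                        # utterance-level encoding
        return ctx_h  # (batch, num_utterances, hid) context hidden representations

# Toy usage with random token ids
enc = HierarchicalContextEncoder(vocab_size=1000)
out = enc(torch.randint(0, 1000, (2, 3, 7)))
```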

Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-based Chatbots

3 code implementations ACL 2017 Yu Wu, Wei Wu, Chen Xing, Ming Zhou, Zhoujun Li

Existing work either concatenates the utterances in a context or ultimately matches a response with a highly abstract context vector, which may lose relationships among utterances or important contextual information.

Conversational Response Selection Retrieval

Detecting Context Dependent Messages in a Conversational Environment

no code implementations COLING 2016 Chaozhuo Li, Yu Wu, Wei Wu, Chen Xing, Zhoujun Li, Ming Zhou

While automatic response generation for building chatbot systems has drawn a lot of attention recently, there is limited understanding on when we need to consider the linguistic context of an input text in the generation process.

Chatbot Response Generation

Topic Aware Neural Response Generation

1 code implementation 21 Jun 2016 Chen Xing, Wei Wu, Yu Wu, Jie Liu, YaLou Huang, Ming Zhou, Wei-Ying Ma

We consider incorporating topic information into the sequence-to-sequence framework to generate informative and interesting responses for chatbots.

Response Generation
