Search Results for author: Bei Chen

Found 35 papers, 15 papers with code

Multi-task Learning for Paraphrase Generation With Keyword and Part-of-Speech Reconstruction

no code implementations Findings (ACL) 2022 Xuhang Xie, Xuesong Lu, Bei Chen

The rationale is to simultaneously capture the possible keywords of a source sentence and the relations between them, in order to facilitate the rewriting.

Multi-Task Learning Paraphrase Generation

"What Do You Mean by That?" A Parser-Independent Interactive Approach for Enhancing Text-to-SQL

no code implementations EMNLP 2020 Yuntao Li, Bei Chen, Qian Liu, Yan Gao, Jian-Guang Lou, Yan Zhang, Dongmei Zhang

In Natural Language Interfaces to Databases systems, the text-to-SQL technique allows users to query databases by using natural language questions.


When Language Model Meets Private Library

1 code implementation 31 Oct 2022 Daoguang Zan, Bei Chen, Zeqi Lin, Bei Guan, Yongji Wang, Jian-Guang Lou

In this paper, we investigate how to equip pre-trained language models with the ability of code generation for private libraries.

Code Generation Language Modelling +1

CodeT: Code Generation with Generated Tests

1 code implementation 21 Jul 2022 Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, Weizhu Chen

A natural way to evaluate the quality and correctness of a code solution is to run it against a set of test cases, but the manual creation of such test cases is often costly and time-consuming.

Code Generation
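The idea described above, scoring sampled code solutions by running them against model-generated test cases, can be sketched in a few lines. The candidate solutions and tests below are hypothetical stand-ins for model samples, not outputs of CodeT itself:

```python
# Minimal sketch: rank candidate solutions by how many generated tests they pass.

def run_tests(candidate, tests):
    """Count how many generated test cases the candidate passes."""
    namespace = {}
    exec(candidate, namespace)  # define the candidate function
    passed = 0
    for test in tests:
        try:
            exec(test, dict(namespace))  # each test is a bare assert statement
            passed += 1
        except Exception:
            pass
    return passed

# Two hypothetical sampled solutions for "return the square of x".
candidates = [
    "def square(x):\n    return x * x",  # correct sample
    "def square(x):\n    return x + x",  # buggy sample
]
# Hypothetical model-generated test cases.
tests = ["assert square(2) == 4", "assert square(3) == 9", "assert square(0) == 0"]

scores = [run_tests(c, tests) for c in candidates]
best = candidates[scores.index(max(scores))]
```

Note that the buggy sample still passes two of the three tests (x + x equals x * x at 0 and 2), which is why scoring over many tests, rather than a single one, matters.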

On the Advance of Making Language Models Better Reasoners

no code implementations 6 Jun 2022 Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, Weizhu Chen

We conduct extensive experiments using the latest language model code-davinci-002 and demonstrate that DiVeRSe achieves new state-of-the-art performance on six of eight reasoning benchmarks (e.g., GSM8K from 74.4% to 83.2%), outperforming the 540B-parameter PaLM model.

Ranked #1 on Arithmetic Reasoning on GSM8K (using extra training data)

Arithmetic Reasoning Few-Shot Learning +1
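DiVeRSe selects a final answer by combining multiple sampled reasoning paths with a voting mechanism. A minimal weighted-voting sketch, with hypothetical sampled answers and verifier scores in place of real model outputs:

```python
# Minimal sketch of weighted voting over diverse reasoning paths.
from collections import defaultdict

def weighted_vote(samples):
    """samples: list of (answer, verifier_score) pairs from diverse prompts."""
    totals = defaultdict(float)
    for answer, score in samples:
        totals[answer] += score
    return max(totals, key=totals.get)

# Hypothetical GSM8K-style samples: three paths answer 42, one answers 40.
samples = [("42", 0.9), ("42", 0.7), ("40", 0.95), ("42", 0.6)]
final = weighted_vote(samples)
```

Even though the single highest-scoring path answers 40, the agreement among the three paths answering 42 outweighs it, which is the point of aggregating over diverse samples.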

Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models

no code implementations 7 Mar 2022 Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen, Qiang Fu, Weizhu Chen, Nanning Zheng, Jian-Guang Lou

This motivates us to propose input-tuning, which fine-tunes both the continuous prompts and the input representations, leading to a more effective way to adapt unfamiliar inputs to frozen PLMs.

Language Modelling Natural Language Understanding +1

Reasoning Like Program Executors

1 code implementation 27 Jan 2022 Xinyu Pi, Qian Liu, Bei Chen, Morteza Ziyadi, Zeqi Lin, Qiang Fu, Yan Gao, Jian-Guang Lou, Weizhu Chen

Reasoning over natural language is a long-standing goal for the research community.

Ranked #2 on Question Answering on DROP Test (using extra training data)

Logical Reasoning Question Answering

LEMON: Language-Based Environment Manipulation via Execution-Guided Pre-training

no code implementations 20 Jan 2022 Qi Shi, Qian Liu, Bei Chen, Yu Zhang, Ting Liu, Jian-Guang Lou

In this work, we propose LEMON, a general framework for language-based environment manipulation tasks.

Language Modelling

TAPEX: Table Pre-training via Learning a Neural SQL Executor

1 code implementation ICLR 2022 Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou

TAPEX addresses the data scarcity challenge by guiding the language model to mimic a SQL executor on a diverse, large-scale, and high-quality synthetic corpus.

Ranked #1 on Semantic Parsing on WikiSQL (Denotation accuracy (test) metric)

Language Modelling Semantic Parsing +1
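TAPEX-style pre-training needs (table, SQL query, execution result) triples as synthetic supervision. A minimal sketch of how such pairs could be produced with sqlite3; the table schema, flattening format, and query here are illustrative, not taken from the TAPEX corpus:

```python
# Minimal sketch: build a synthetic (input, answer) pair where the input is a
# SQL query plus a flattened table and the answer is the execution result.
import sqlite3

def make_example(rows, sql):
    """Execute sql over a throwaway in-memory table; return (input_text, answer)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (city TEXT, population INTEGER)")
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
    answer = [r[0] for r in conn.execute(sql)]
    # Flatten the table so it can be concatenated with the query as model input.
    flat = " ; ".join(f"{c} : {p}" for c, p in rows)
    return f"{sql} | t : {flat}", answer

rows = [("london", 8900000), ("oslo", 700000)]
inp, ans = make_example(rows, "SELECT city FROM t WHERE population > 1000000")
```

The model is then trained to map `inp` to `ans`, i.e., to behave like the SQL executor, before fine-tuning on downstream table tasks.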

AutoAI-TS: AutoAI for Time Series Forecasting

no code implementations 24 Feb 2021 Syed Yousaf Shah, Dhaval Patel, Long Vu, Xuan-Hong Dang, Bei Chen, Peter Kirchner, Horst Samulowitz, David Wood, Gregory Bramble, Wesley M. Gifford, Giridhar Ganapavarapu, Roman Vaculin, Petros Zerfos

We present AutoAI for Time Series Forecasting (AutoAI-TS), which provides users with a zero-configuration (zero-conf) system to efficiently train, optimize, and choose the best forecasting model among various classes of models for a given dataset.

BIG-bench Machine Learning Model Selection +1

Revisiting Iterative Back-Translation from the Perspective of Compositional Generalization

no code implementations 8 Dec 2020 Yinuo Guo, Hualei Zhu, Zeqi Lin, Bei Chen, Jian-Guang Lou, Dongmei Zhang

Human intelligence exhibits compositional generalization (i.e., the capacity to understand and produce unseen combinations of seen components), but current neural seq2seq models lack such ability.


"What Do You Mean by That?" A Parser-Independent Interactive Approach for Enhancing Text-to-SQL

1 code implementation 9 Nov 2020 Yuntao Li, Bei Chen, Qian Liu, Yan Gao, Jian-Guang Lou, Yan Zhang, Dongmei Zhang

In Natural Language Interfaces to Databases systems, the text-to-SQL technique allows users to query databases by using natural language questions.


Discovering Traveling Companions using Autoencoders

no code implementations 23 Jul 2020 Xiaochang Li, Bei Chen, Xuesong Lu

The ability to discover moving objects that travel together (i.e., traveling companions) from their trajectories is desired by many applications, such as intelligent transportation systems and location-based services.

Representation Learning

Compositional Generalization by Learning Analytical Expressions

1 code implementation NeurIPS 2020 Qian Liu, Shengnan An, Jian-Guang Lou, Bei Chen, Zeqi Lin, Yan Gao, Bin Zhou, Nanning Zheng, Dongmei Zhang

Compositional generalization is a basic and essential intellectual capability of human beings, which allows us to readily recombine known parts.

Hierarchical Reinforcement Learning

You Impress Me: Dialogue Generation via Mutual Persona Perception

1 code implementation ACL 2020 Qian Liu, Yihong Chen, Bei Chen, Jian-Guang Lou, Zixuan Chen, Bin Zhou, Dongmei Zhang

Despite continuing efforts to improve the engagingness and consistency of chit-chat dialogue systems, the majority of current work simply focuses on mimicking human-like responses, leaving the modeling of understanding between interlocutors understudied.

Dialogue Generation

How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context

1 code implementation 3 Feb 2020 Qian Liu, Bei Chen, Jiaqi Guo, Jian-Guang Lou, Bin Zhou, Dongmei Zhang

Semantic parsing in context has recently received considerable attention; it is challenging because of complex contextual phenomena.

Semantic Parsing

Depth-First Proof-Number Search with Heuristic Edge Cost and Application to Chemical Synthesis Planning

no code implementations NeurIPS 2019 Akihiro Kishimoto, Beat Buesser, Bei Chen, Adi Botea

Search techniques, such as Monte Carlo Tree Search (MCTS) and Proof-Number Search (PNS), are effective in playing and solving games.

A Split-and-Recombine Approach for Follow-up Query Analysis

1 code implementation IJCNLP 2019 Qian Liu, Bei Chen, Haoyan Liu, Lei Fang, Jian-Guang Lou, Bin Zhou, Dongmei Zhang

To leverage the advances in context-independent semantic parsing, we propose to perform follow-up query analysis, aiming to restate context-dependent natural language queries with contextual information.

Natural Language Queries Semantic Parsing

LambdaOpt: Learn to Regularize Recommender Models in Finer Levels

1 code implementation 28 May 2019 Yihong Chen, Bei Chen, Xiangnan He, Chen Gao, Yong Li, Jian-Guang Lou, Yue Wang

We show how to employ LambdaOpt on matrix factorization, a classical model that is representative of a large family of recommender models.

Hyperparameter Optimization Recommendation Systems

FANDA: A Novel Approach to Perform Follow-up Query Analysis

1 code implementation 24 Jan 2019 Qian Liu, Bei Chen, Jian-Guang Lou, Ge Jin, Dongmei Zhang

Natural Language Interfaces to Databases (NLIDB) allow users to search databases using natural language instead of SQL-like query languages.

Castor: Contextual IoT Time Series Data and Model Management at Scale

1 code implementation 20 Nov 2018 Bei Chen, Bradley Eck, Francesco Fusco, Robert Gormally, Mark Purcell, Mathieu Sinn, Seshu Tirupathi

The main features of Castor are: (1) an efficient pipeline for ingesting IoT time series data in real time; (2) a scalable, hybrid data management service for both time series and contextual data; (3) a versatile semantic model for contextual information which can be easily adopted to different application domains; (4) an abstract framework for developing and storing predictive models in R or Python; (5) deployment services which automatically train and/or score predictive models upon user-defined conditions.

Computation Other Statistics

Learning-to-Ask: Knowledge Acquisition via 20 Questions

no code implementations 22 Jun 2018 Yihong Chen, Bei Chen, Xuguang Duan, Jian-Guang Lou, Yue Wang, Wenwu Zhu, Yong Cao

Almost all knowledge-empowered applications rely upon accurate knowledge, which has to be either collected manually at high cost or extracted automatically with non-negligible errors.

Max-Margin Nonparametric Latent Feature Models for Link Prediction

no code implementations 24 Feb 2016 Jun Zhu, Jiaming Song, Bei Chen

Our approach attempts to unite the ideas of max-margin learning and Bayesian nonparametrics to discover discriminative latent features for link prediction.

Link Prediction Variational Inference

Jointly Modeling Topics and Intents with Global Order Structure

no code implementations 7 Dec 2015 Bei Chen, Jun Zhu, Nan Yang, Tian Tian, Ming Zhou, Bo Zhang

Modeling document structure is of great importance for discourse analysis and related applications.

Discriminative Nonparametric Latent Feature Relational Models with Data Augmentation

no code implementations 7 Dec 2015 Bei Chen, Ning Chen, Jun Zhu, Jiaming Song, Bo Zhang

We present a discriminative nonparametric latent feature relational model (LFRM) for link prediction to automatically infer the dimensionality of latent features.

Bayesian Inference Data Augmentation +1

(Blue) Taxi Destination and Trip Time Prediction from Partial Trajectories

no code implementations 17 Sep 2015 Hoang Thanh Lam, Ernesto Diaz-Aviles, Alessandra Pascale, Yiannis Gkoufas, Bei Chen

Real-time estimation of destination and travel time for taxis is of great importance for existing electronic dispatch systems.

Ensemble Learning

Mixing Properties of Conditional Markov Chains with Unbounded Feature Functions

no code implementations NeurIPS 2012 Mathieu Sinn, Bei Chen

Conditional Markov Chains (also known as Linear-Chain Conditional Random Fields in the literature) are a versatile class of discriminative models for the distribution of a sequence of hidden states conditional on a sequence of observable variables.
