Search Results for author: Bei Chen

Found 45 papers, 23 papers with code

CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding Benchmark

1 code implementation • 22 Jan 2024 Ge Zhang, Xinrun Du, Bei Chen, Yiming Liang, Tongxu Luo, Tianyu Zheng, Kang Zhu, Yuyang Cheng, Chunpu Xu, Shuyue Guo, Haoran Zhang, Xingwei Qu, Junjie Wang, Ruibin Yuan, Yizhi Li, Zekun Wang, Yudong Liu, Yu-Hsuan Tsai, Fengji Zhang, Chenghua Lin, Wenhao Huang, Wenhu Chen, Jie Fu

We introduce CMMMU, a new Chinese Massive Multi-discipline Multimodal Understanding benchmark designed to evaluate LMMs on tasks demanding college-level subject knowledge and deliberate reasoning in a Chinese context.

Yi: Open Foundation Models by 01.AI

1 code implementation • 7 Mar 2024 01.AI: Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng Zhu, Jianqun Chen, Jing Chang, Kaidong Yu, Peng Liu, Qiang Liu, Shawn Yue, Senbin Yang, Shiming Yang, Tao Yu, Wen Xie, Wenhao Huang, Xiaohui Hu, Xiaoyi Ren, Xinyao Niu, Pengcheng Nie, Yuchi Xu, Yudong Liu, Yue Wang, Yuxuan Cai, Zhenyu Gu, Zhiyuan Liu, Zonghong Dai

The Yi model family is based on 6B and 34B pretrained language models, which we extend to chat models, 200K long-context models, depth-upscaled models, and vision-language models.

Attribute Chatbot +2

CodeT: Code Generation with Generated Tests

1 code implementation • 21 Jul 2022 Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, Weizhu Chen

A natural way to evaluate the quality and correctness of a code solution is to run it against a set of test cases, but the manual creation of such test cases is often costly and time-consuming.

Ranked #1 on Code Generation on APPS (Introductory Pass@1 metric)

Code Generation
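The test-based selection idea behind CodeT can be illustrated with a hedged Python sketch of dual-execution agreement: candidates that pass the same subset of generated tests are grouped, and the group maximizing (group size) × (tests passed) wins. The function names and scoring rule here are illustrative assumptions, not the paper's implementation.

```python
def run_test(candidate, test):
    """Return True if `candidate` (a callable) satisfies `test` (a callable assertion)."""
    try:
        test(candidate)
        return True
    except Exception:
        return False

def select_by_agreement(candidates, tests):
    """Group candidates by the exact set of tests they pass, then pick a
    representative of the group with the highest agreement score."""
    groups = {}  # frozenset of passed test indices -> list of candidates
    for cand in candidates:
        passed = frozenset(i for i, t in enumerate(tests) if run_test(cand, t))
        groups.setdefault(passed, []).append(cand)
    # Score each consensus group by its size times the number of tests it passes
    best_group = max(groups.items(), key=lambda kv: len(kv[1]) * len(kv[0]))
    return best_group[1][0]
```

Grouping by agreement rewards solutions that both pass tests and agree with each other, which is more robust than counting passed tests per candidate alone.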

RepoCoder: Repository-Level Code Completion Through Iterative Retrieval and Generation

1 code implementation • 22 Mar 2023 Fengji Zhang, Bei Chen, Yue Zhang, Jacky Keung, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, Weizhu Chen

The task of repository-level code completion is to continue writing unfinished code based on the broader context of the repository.

Code Completion Language Modelling +1
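The iterative retrieval-and-generation loop can be sketched roughly as follows; the token-overlap similarity, function names, and pluggable `generate` stub are illustrative assumptions, not RepoCoder's actual implementation. The key point is that each draft completion becomes the next retrieval query, so later rounds can find repository snippets similar to the code being written rather than only to the original prompt.

```python
def jaccard(a, b):
    """Token-overlap similarity between two code fragments."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def retrieve(query, repo_snippets, k=2):
    """Return the k repository snippets most similar to the query."""
    return sorted(repo_snippets, key=lambda s: jaccard(query, s), reverse=True)[:k]

def repo_complete(prompt, repo_snippets, generate, iterations=2):
    """Alternate retrieval and generation: the current draft completion is
    appended to the retrieval query on each round."""
    completion = ""
    for _ in range(iterations):
        context = retrieve(prompt + " " + completion, repo_snippets)
        completion = generate(prompt, context)
    return completion
```

In practice `generate` would be a call to a code language model conditioned on the retrieved context; here it is left as a caller-supplied function.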

How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context

1 code implementation • 3 Feb 2020 Qian Liu, Bei Chen, Jiaqi Guo, Jian-Guang Lou, Bin Zhou, Dongmei Zhang

Semantic parsing in context has recently received considerable attention; it is challenging because of complex contextual phenomena.

Semantic Parsing

Compositional Generalization by Learning Analytical Expressions

1 code implementation NeurIPS 2020 Qian Liu, Shengnan An, Jian-Guang Lou, Bei Chen, Zeqi Lin, Yan Gao, Bin Zhou, Nanning Zheng, Dongmei Zhang

Compositional generalization is a basic and essential intellectual capability of human beings, which allows us to readily recombine known parts.

Hierarchical Reinforcement Learning

"What Do You Mean by That?" A Parser-Independent Interactive Approach for Enhancing Text-to-SQL

1 code implementation • 9 Nov 2020 Yuntao Li, Bei Chen, Qian Liu, Yan Gao, Jian-Guang Lou, Yan Zhang, Dongmei Zhang

In Natural Language Interfaces to Databases systems, the text-to-SQL technique allows users to query databases by using natural language questions.

Text-To-SQL

Reasoning Like Program Executors

1 code implementation • 27 Jan 2022 Xinyu Pi, Qian Liu, Bei Chen, Morteza Ziyadi, Zeqi Lin, Qiang Fu, Yan Gao, Jian-Guang Lou, Weizhu Chen

Reasoning over natural language is a long-standing goal for the research community.

Ranked #2 on Question Answering on DROP Test (using extra training data)

Logical Reasoning Math +1

Does Deep Learning Learn to Abstract? A Systematic Probing Framework

1 code implementation • 23 Feb 2023 Shengnan An, Zeqi Lin, Bei Chen, Qiang Fu, Nanning Zheng, Jian-Guang Lou

Abstraction is a desirable capability for deep learning models, i.e., the ability to induce abstract concepts from concrete instances and flexibly apply them beyond the learning context.

You Impress Me: Dialogue Generation via Mutual Persona Perception

1 code implementation ACL 2020 Qian Liu, Yihong Chen, Bei Chen, Jian-Guang Lou, Zixuan Chen, Bin Zhou, Dongmei Zhang

Despite continuing efforts to improve the engagingness and consistency of chit-chat dialogue systems, most current work simply focuses on mimicking human-like responses, leaving the modeling of understanding between interlocutors understudied.

Ranked #2 on Dialogue Generation on Persona-Chat (using extra training data)

Dialogue Generation

TAPEX: Table Pre-training via Learning a Neural SQL Executor

1 code implementation ICLR 2022 Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou

TAPEX addresses the data scarcity challenge by guiding the language model to mimic a SQL executor on a diverse, large-scale, high-quality synthetic corpus.

Ranked #1 on Semantic Parsing on WikiSQL (Denotation accuracy (test) metric)

Language Modelling Semantic Parsing +1
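The executor-mimicking idea can be illustrated with a minimal sketch that synthesizes (SQL, table, denotation) training triples by actually executing queries with Python's built-in sqlite3. The fixed schema and the serialization format below are assumptions for illustration, not TAPEX's actual corpus construction.

```python
import sqlite3

def make_execution_example(rows, sql):
    """Execute `sql` over an in-memory table and return an input/target
    training pair, in the spirit of TAPEX's synthetic executor corpus.
    `rows` is a list of (name, score) tuples; the schema is fixed here."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (name TEXT, score INTEGER)")
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
    answer = conn.execute(sql).fetchall()
    conn.close()
    # Flatten the table and pair it with the query; the executed result
    # (the denotation) becomes the generation target
    table_str = " ; ".join(f"{n} : {s}" for n, s in rows)
    return {"input": f"{sql} | {table_str}", "target": str(answer)}
```

Because the "labels" come from a real SQL engine, such a corpus can be generated at scale with no human annotation, which is what makes the pre-training signal cheap.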

When Language Model Meets Private Library

1 code implementation • 31 Oct 2022 Daoguang Zan, Bei Chen, Zeqi Lin, Bei Guan, Yongji Wang, Jian-Guang Lou

In this paper, we investigate how to equip pre-trained language models with the ability of code generation for private libraries.

Code Generation Language Modelling +1

SoTaNa: The Open-Source Software Development Assistant

1 code implementation • 25 Aug 2023 Ensheng Shi, Fengji Zhang, Yanlin Wang, Bei Chen, Lun Du, Hongyu Zhang, Shi Han, Dongmei Zhang, Hongbin Sun

To meet the demands of this dynamic field, there is a growing need for an effective software development assistant.

Code Summarization

LambdaOpt: Learn to Regularize Recommender Models in Finer Levels

1 code implementation • 28 May 2019 Yihong Chen, Bei Chen, Xiangnan He, Chen Gao, Yong Li, Jian-Guang Lou, Yue Wang

We show how to employ LambdaOpt on matrix factorization, a classical model that is representative of a large family of recommender models.

Hyperparameter Optimization Recommendation Systems

FANDA: A Novel Approach to Perform Follow-up Query Analysis

1 code implementation • 24 Jan 2019 Qian Liu, Bei Chen, Jian-Guang Lou, Ge Jin, Dongmei Zhang

Natural Language Interfaces to Databases (NLIDB) allow users to search databases using natural language instead of SQL-like query languages.

A Split-and-Recombine Approach for Follow-up Query Analysis

1 code implementation IJCNLP 2019 Qian Liu, Bei Chen, Haoyan Liu, Lei Fang, Jian-Guang Lou, Bin Zhou, Dongmei Zhang

To leverage the advances in context-independent semantic parsing, we propose to perform follow-up query analysis, aiming to restate context-dependent natural language queries with contextual information.

Natural Language Queries Semantic Parsing

Castor: Contextual IoT Time Series Data and Model Management at Scale

1 code implementation • 20 Nov 2018 Bei Chen, Bradley Eck, Francesco Fusco, Robert Gormally, Mark Purcell, Mathieu Sinn, Seshu Tirupathi

The main features of Castor are: (1) an efficient pipeline for ingesting IoT time series data in real time; (2) a scalable, hybrid data management service for both time series and contextual data; (3) a versatile semantic model for contextual information which can be easily adopted to different application domains; (4) an abstract framework for developing and storing predictive models in R or Python; (5) deployment services which automatically train and/or score predictive models upon user-defined conditions.

Computation Other Statistics

Question Answering as Programming for Solving Time-Sensitive Questions

1 code implementation • 23 May 2023 Xinyu Zhu, Cheng Yang, Bei Chen, Siheng Li, Jian-Guang Lou, Yujiu Yang

Question answering plays a pivotal role in human daily life because it involves our acquisition of knowledge about the world.

Natural Language Understanding Question Answering

LEMON: Language-Based Environment Manipulation via Execution-Guided Pre-training

2 code implementations • 20 Jan 2022 Qi Shi, Qian Liu, Bei Chen, Yu Zhang, Ting Liu, Jian-Guang Lou

In this work, we propose LEMON, a general framework for language-based environment manipulation tasks.

Language Modelling

Max-Margin Nonparametric Latent Feature Models for Link Prediction

no code implementations • 24 Feb 2016 Jun Zhu, Jiaming Song, Bei Chen

Our approach attempts to unite the ideas of max-margin learning and Bayesian nonparametrics to discover discriminative latent features for link prediction.

Link Prediction Variational Inference

Discriminative Nonparametric Latent Feature Relational Models with Data Augmentation

no code implementations • 7 Dec 2015 Bei Chen, Ning Chen, Jun Zhu, Jiaming Song, Bo Zhang

We present a discriminative nonparametric latent feature relational model (LFRM) for link prediction to automatically infer the dimensionality of latent features.

Bayesian Inference Data Augmentation +1

Jointly Modeling Topics and Intents with Global Order Structure

no code implementations • 7 Dec 2015 Bei Chen, Jun Zhu, Nan Yang, Tian Tian, Ming Zhou, Bo Zhang

Modeling document structure is of great importance for discourse analysis and related applications.

(Blue) Taxi Destination and Trip Time Prediction from Partial Trajectories

no code implementations • 17 Sep 2015 Hoang Thanh Lam, Ernesto Diaz-Aviles, Alessandra Pascale, Yiannis Gkoufas, Bei Chen

Real-time estimation of destination and travel time for taxis is of great importance for existing electronic dispatch systems.

Ensemble Learning

Learning-to-Ask: Knowledge Acquisition via 20 Questions

no code implementations • 22 Jun 2018 Yihong Chen, Bei Chen, Xuguang Duan, Jian-Guang Lou, Yue Wang, Wenwu Zhu, Yong Cao

Almost all knowledge-empowered applications rely upon accurate knowledge, which has to be either collected manually at high cost or extracted automatically with non-negligible errors.

Mixing Properties of Conditional Markov Chains with Unbounded Feature Functions

no code implementations NeurIPS 2012 Mathieu Sinn, Bei Chen

Conditional Markov Chains (also known as Linear-Chain Conditional Random Fields in the literature) are a versatile class of discriminative models for the distribution of a sequence of hidden states conditional on a sequence of observable variables.

Depth-First Proof-Number Search with Heuristic Edge Cost and Application to Chemical Synthesis Planning

no code implementations NeurIPS 2019 Akihiro Kishimoto, Beat Buesser, Bei Chen, Adi Botea

Search techniques, such as Monte Carlo Tree Search (MCTS) and Proof-Number Search (PNS), are effective in playing and solving games.

Discovering Traveling Companions using Autoencoders

no code implementations • 23 Jul 2020 Xiaochang Li, Bei Chen, Xuesong Lu

The ability to discover moving objects that travel together, i.e., traveling companions, from their trajectories is desired by many applications such as intelligent transportation systems and location-based services.

Representation Learning

"What Do You Mean by That?" A Parser-Independent Interactive Approach for Enhancing Text-to-SQL

no code implementations EMNLP 2020 Yuntao Li, Bei Chen, Qian Liu, Yan Gao, Jian-Guang Lou, Yan Zhang, Dongmei Zhang

In Natural Language Interfaces to Databases systems, the text-to-SQL technique allows users to query databases by using natural language questions.

Text-To-SQL

Revisiting Iterative Back-Translation from the Perspective of Compositional Generalization

no code implementations • 8 Dec 2020 Yinuo Guo, Hualei Zhu, Zeqi Lin, Bei Chen, Jian-Guang Lou, Dongmei Zhang

Human intelligence exhibits compositional generalization (i.e., the capacity to understand and produce unseen combinations of seen components), but current neural seq2seq models lack this ability.

Translation

AutoAI-TS: AutoAI for Time Series Forecasting

no code implementations • 24 Feb 2021 Syed Yousaf Shah, Dhaval Patel, Long Vu, Xuan-Hong Dang, Bei Chen, Peter Kirchner, Horst Samulowitz, David Wood, Gregory Bramble, Wesley M. Gifford, Giridhar Ganapavarapu, Roman Vaculin, Petros Zerfos

We present AutoAI for Time Series Forecasting (AutoAI-TS), which provides users with a zero-configuration (zero-conf) system to efficiently train, optimize, and choose the best forecasting model among various classes of models for a given dataset.

Benchmarking BIG-bench Machine Learning +3

Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models

no code implementations • 7 Mar 2022 Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen, Qiang Fu, Weizhu Chen, Nanning Zheng, Jian-Guang Lou

This motivates us to propose input-tuning, which fine-tunes both the continuous prompts and the input representations, leading to a more effective way to adapt unfamiliar inputs to frozen PLMs.

Language Modelling Natural Language Understanding +1

Multi-task Learning for Paraphrase Generation With Keyword and Part-of-Speech Reconstruction

no code implementations Findings (ACL) 2022 Xuhang Xie, Xuesong Lu, Bei Chen

The rationale is to simultaneously capture the possible keywords of a source sentence and the relations between them, so as to facilitate the rewriting.

Multi-Task Learning Paraphrase Generation +1

Large Language Models Meet NL2Code: A Survey

no code implementations • 19 Dec 2022 Daoguang Zan, Bei Chen, Fengji Zhang, Dianjie Lu, Bingchao Wu, Bei Guan, Yongji Wang, Jian-Guang Lou

The task of generating code from a natural language description, or NL2Code, is considered a pressing and significant challenge in code intelligence.

How Do In-Context Examples Affect Compositional Generalization?

no code implementations • 8 May 2023 Shengnan An, Zeqi Lin, Qiang Fu, Bei Chen, Nanning Zheng, Jian-Guang Lou, Dongmei Zhang

Compositional generalization (understanding unseen combinations of seen primitives) is an essential reasoning capability in human intelligence.

In-Context Learning

Skill-Based Few-Shot Selection for In-Context Learning

no code implementations • 23 May 2023 Shengnan An, Bo Zhou, Zeqi Lin, Qiang Fu, Bei Chen, Nanning Zheng, Weizhu Chen, Jian-Guang Lou

Few-shot selection -- selecting appropriate examples for each test instance separately -- is important for in-context learning.

In-Context Learning Semantic Parsing +1
