Search Results for author: Yasheng Wang

Found 68 papers, 29 papers with code

ACEBench: Who Wins the Match Point in Tool Learning?

no code implementations22 Jan 2025 Chen Chen, Xinlong Hao, Weiwen Liu, Xu Huang, Xingshan Zeng, Shuai Yu, Dexun Li, Shuai Wang, Weinan Gan, Yuefeng Huang, Wulong Liu, Xinzhi Wang, Defu Lian, Baoqun Yin, Yasheng Wang, Wu Liu

Normal evaluates function calls in basic scenarios; Special evaluates function calls in scenarios with vague or incomplete instructions; Agent introduces multi-agent interactions to simulate function calling evaluation in real-world multi-turn interactions.

Decision Making
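The "Normal" category above amounts to checking a model's emitted function call against a gold call. A minimal sketch of that idea, assuming a simple dict representation of tool calls (the names and matching rule are illustrative, not ACEBench's actual implementation):

```python
# Hypothetical "Normal"-style scoring: exact match of tool name and
# arguments against a gold call. Structure is illustrative only.
def score_call(predicted: dict, gold: dict) -> bool:
    """Return True when the predicted call matches the gold call exactly."""
    return (predicted.get("name") == gold["name"]
            and predicted.get("arguments") == gold["arguments"])

pred = {"name": "get_weather", "arguments": {"city": "Paris", "unit": "C"}}
gold = {"name": "get_weather", "arguments": {"city": "Paris", "unit": "C"}}
print(score_call(pred, gold))  # True
```

The "Special" and "Agent" categories would relax or extend this check (e.g., tolerating underspecified arguments, or scoring across multi-turn trajectories).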

A Survey on Multi-Turn Interaction Capabilities of Large Language Models

no code implementations17 Jan 2025 Chen Zhang, Xinyi Dai, Yaxiong Wu, Qu Yang, Yasheng Wang, Ruiming Tang, Yong liu

Multi-turn interaction in the dialogue system research refers to a system's ability to maintain context across multiple dialogue turns, enabling it to generate coherent and contextually relevant responses.

Conversational Search

NILE: Internal Consistency Alignment in Large Language Models

no code implementations21 Dec 2024 Minda Hu, Qiyuan Zhang, YuFei Wang, Bowei He, Hongru Wang, Jingyan Zhou, Liangyou Li, Yasheng Wang, Chen Ma, Irwin King

However, existing IFT datasets often contain knowledge that is inconsistent with LLMs' internal knowledge learned from the pre-training phase, which can greatly affect the efficacy of IFT.

GUI Agents with Foundation Models: A Comprehensive Survey

no code implementations7 Nov 2024 Shuai Wang, Weiwen Liu, Jingxuan Chen, Weinan Gan, Xingshan Zeng, Shuai Yu, Xinlong Hao, Kun Shao, Yasheng Wang, Ruiming Tang

This survey consolidates recent research on (M)LLM-based GUI agents, highlighting key innovations in data, frameworks, and applications.

Survey

SPA-Bench: A Comprehensive Benchmark for SmartPhone Agent Evaluation

1 code implementation19 Oct 2024 Jingxuan Chen, Derek Yuen, Bin Xie, Yuhao Yang, Gongwei Chen, Zhihao Wu, Li Yixing, Xurui Zhou, Weiwen Liu, Shuai Wang, Kaiwen Zhou, Rui Shao, Liqiang Nie, Yasheng Wang, Jianye Hao, Jun Wang, Kun Shao

Smartphone agents are increasingly important for helping users control devices efficiently, with (Multimodal) Large Language Model (MLLM)-based approaches emerging as key contenders.

AI Agent Benchmarking +2

Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance

1 code implementation16 Oct 2024 Yaxi Lu, Shenzhi Yang, Cheng Qian, Guirong Chen, Qinyu Luo, Yesai Wu, Huadong Wang, Xin Cong, Zhong Zhang, Yankai Lin, Weiwen Liu, Yasheng Wang, Zhiyuan Liu, Fangming Liu, Maosong Sun

The labeled data is used to train a reward model that simulates human judgment and serves as an automatic evaluator of the proactiveness of LLM agents.

Learning Evolving Tools for Large Language Models

1 code implementation9 Oct 2024 Guoxin Chen, Zhong Zhang, Xin Cong, Fangda Guo, Yesai Wu, Yankai Lin, Wenzheng Feng, Yasheng Wang

Tool learning enables large language models (LLMs) to interact with external tools and APIs, greatly expanding the application scope of LLMs.

RevisEval: Improving LLM-as-a-Judge via Response-Adapted References

no code implementations7 Oct 2024 Qiyuan Zhang, YuFei Wang, Tiezheng Yu, Yuxin Jiang, Chuhan Wu, Liangyou Li, Yasheng Wang, Xin Jiang, Lifeng Shang, Ruiming Tang, Fuyuan Lyu, Chen Ma

With significant efforts in recent studies, LLM-as-a-Judge has become a cost-effective alternative to human evaluation for assessing the text generation quality in a wide range of tasks.

Instruction Following Text Generation

Flat-LoRA: Low-Rank Adaption over a Flat Loss Landscape

no code implementations22 Sep 2024 Tao Li, Zhengbao He, YuJun Li, Yasheng Wang, Lifeng Shang, Xiaolin Huang

Fine-tuning large-scale pre-trained models is prohibitively expensive in terms of computational and memory costs.

Image Classification parameter-efficient fine-tuning

RethinkMCTS: Refining Erroneous Thoughts in Monte Carlo Tree Search for Code Generation

no code implementations15 Sep 2024 Qingyao Li, Wei Xia, Kounianhua Du, Xinyi Dai, Ruiming Tang, Yasheng Wang, Yong Yu, Weinan Zhang

More importantly, we construct verbal feedback from fine-grained code execution feedback to refine erroneous thoughts during the search.

Code Generation HumanEval

ToolACE: Winning the Points of LLM Function Calling

no code implementations2 Sep 2024 Weiwen Liu, Xu Huang, Xingshan Zeng, Xinlong Hao, Shuai Yu, Dexun Li, Shuai Wang, Weinan Gan, Zhengying Liu, Yuanqing Yu, Zezhong Wang, Yuxian Wang, Wu Ning, Yutai Hou, Bin Wang, Chuhan Wu, Xinzhi Wang, Yong liu, Yasheng Wang, Duyu Tang, Dandan Tu, Lifeng Shang, Xin Jiang, Ruiming Tang, Defu Lian, Qun Liu, Enhong Chen

Function calling significantly extends the application boundary of large language models, where high-quality and diverse training data is critical for unlocking this capability.

Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization

1 code implementation14 Aug 2024 Yuxin Jiang, Bo Huang, YuFei Wang, Xingshan Zeng, Liangyou Li, Yasheng Wang, Xin Jiang, Lifeng Shang, Ruiming Tang, Wei Wang

Firstly, we increase the consistency and informativeness of the pairwise preference signals through targeted modifications, synthesizing a pseudo-winning response by improving the losing response with the winning response as a reference.

Informativeness Instruction Following +1

Entropy Law: The Story Behind Data Compression and LLM Performance

3 code implementations9 Jul 2024 Mingjia Yin, Chuhan Wu, YuFei Wang, Hao Wang, Wei Guo, Yasheng Wang, Yong liu, Ruiming Tang, Defu Lian, Enhong Chen

Inspired by the information compression nature of LLMs, we uncover an "entropy law" that connects LLM performance with data compression ratio and first-epoch training loss, which reflect the information redundancy of a dataset and the mastery of inherent knowledge encoded in this dataset, respectively.

Data Compression
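The compression ratio referenced above can be approximated with any off-the-shelf compressor. A rough sketch using zlib as a redundancy proxy (this is not the paper's actual measurement pipeline; a lower ratio means a more compressible, more redundant dataset):

```python
# Illustrative only: compression ratio as a proxy for a dataset's
# information redundancy, per the "entropy law" intuition.
import zlib

def compression_ratio(texts: list[str]) -> float:
    """Compressed size / raw size of the concatenated corpus."""
    raw = "\n".join(texts).encode("utf-8")
    return len(zlib.compress(raw, level=9)) / len(raw)

redundant = ["the cat sat on the mat"] * 50               # highly repetitive
diverse = [f"sample {i}: unrelated fact #{i * 7}" for i in range(50)]

print(compression_ratio(redundant) < compression_ratio(diverse))  # True
```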

CoIR: A Comprehensive Benchmark for Code Information Retrieval Models

1 code implementation3 Jul 2024 Xiangyang Li, Kuicai Dong, Yi Quan Lee, Wei Xia, Yichun Yin, Hao Zhang, Yong liu, Yasheng Wang, Ruiming Tang

Despite the substantial success of Information Retrieval (IR) in various NLP tasks, most IR systems predominantly handle queries and corpora in natural language, neglecting the domain of code retrieval.

Benchmarking Code Search +2

SINKT: A Structure-Aware Inductive Knowledge Tracing Model with Large Language Model

1 code implementation1 Jul 2024 Lingyue Fu, Hao Guan, Kounianhua Du, Jianghao Lin, Wei Xia, Weinan Zhang, Ruiming Tang, Yasheng Wang, Yong Yu

Knowledge Tracing (KT) aims to determine whether students will respond correctly to the next question, which is a crucial task in intelligent tutoring systems (ITS).

Knowledge Tracing Language Modeling +3
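The knowledge-tracing task described above can be illustrated with a trivial baseline (not SINKT's model): predict whether a student answers the next question on a skill correctly from their running accuracy on that skill.

```python
# Minimal illustration of the KT task itself, assuming a per-skill
# running-accuracy baseline (purely a sketch, not the paper's method).
from collections import defaultdict

def trace(history):
    """history: list of (skill, correct) pairs; returns a prediction
    for each interaction made *before* observing its outcome."""
    stats = defaultdict(lambda: [0, 0])  # skill -> [correct, attempts]
    preds = []
    for skill, correct in history:
        c, n = stats[skill]
        preds.append(c / n if n else 0.5)  # uninformed prior: 0.5
        stats[skill] = [c + int(correct), n + 1]
    return preds

print(trace([("algebra", True), ("algebra", True), ("algebra", False)]))
# [0.5, 1.0, 1.0]
```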

Chain-of-Probe: Examing the Necessity and Accuracy of CoT Step-by-Step

no code implementations23 Jun 2024 Zezhong Wang, Xingshan Zeng, Weiwen Liu, YuFei Wang, Liangyou Li, Yasheng Wang, Lifeng Shang, Xin Jiang, Qun Liu, Kam-Fai Wong

To address these questions, we propose a method, namely Chain-of-Probe (CoP), to probe changes in the mind during the model's reasoning.


Dynamic Stochastic Decoding Strategy for Open-Domain Dialogue Generation

no code implementations12 Jun 2024 Yiwei Li, Fei Mi, Yitong Li, Yasheng Wang, Bin Sun, Shaoxiong Feng, Kan Li

In DDS, both sequence-level and token-level adaptive search can be achieved to adjust the decoding process in a unified framework.

Dialogue Generation Diversity +1

Evaluating the External and Parametric Knowledge Fusion of Large Language Models

no code implementations29 May 2024 Hao Zhang, Yuyang Zhang, Xiaoguang Li, Wenxuan Shi, Haonan Xu, Huanshuo Liu, Yasheng Wang, Lifeng Shang, Qun Liu, Yong liu, Ruiming Tang

Integrating external knowledge into large language models (LLMs) presents a promising solution to overcome the limitations imposed by their antiquated and static parametric memory.

CELA: Cost-Efficient Language Model Alignment for CTR Prediction

1 code implementation17 May 2024 Xingmei Wang, Weiwen Liu, Xiaolong Chen, Qi Liu, Xu Huang, Yichao Wang, Xiangyang Li, Yasheng Wang, Zhenhua Dong, Defu Lian, Ruiming Tang

This model-agnostic framework can be equipped with plug-and-play textual features, with item-level alignment enhancing the utilization of external information while maintaining training and inference efficiency.

Click-Through Rate Prediction Collaborative Filtering +3

CodeGRAG: Bridging the Gap between Natural Language and Programming Language via Graphical Retrieval Augmented Generation

no code implementations3 May 2024 Kounianhua Du, Jizheng Chen, Renting Rui, Huacan Chai, Lingyue Fu, Wei Xia, Yasheng Wang, Ruiming Tang, Yong Yu, Weinan Zhang

Despite the general intelligence shown by large language models, their proficiency in code generation can still be improved, owing to the syntactic gap and mismatched vocabulary between natural language and different programming languages.

Code Generation Language Modelling +3

WESE: Weak Exploration to Strong Exploitation for LLM Agents

no code implementations11 Apr 2024 Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Defu Lian, Yasheng Wang, Ruiming Tang, Enhong Chen

Concretely, WESE involves decoupling the exploration and exploitation process, employing a cost-effective weak agent to perform exploration tasks for global knowledge.

Decision Making Prompt Engineering

UniRetriever: Multi-task Candidates Selection for Various Context-Adaptive Conversational Retrieval

no code implementations26 Feb 2024 Hongru Wang, Boyang Xue, Baohang Zhou, Rui Wang, Fei Mi, Weichao Wang, Yasheng Wang, Kam-Fai Wong

Conversational retrieval refers to an information retrieval system that operates in an iterative and interactive manner, requiring the retrieval of various external resources, such as persona, knowledge, and even response, to effectively engage with the user and successfully complete the dialogue.

Information Retrieval Retrieval

Evaluating Robustness of Generative Search Engine on Adversarial Factual Questions

no code implementations25 Feb 2024 Xuming Hu, Xiaochuan Li, Junzhe Chen, Yinghui Li, Yangning Li, Xiaoguang Li, Yasheng Wang, Qun Liu, Lijie Wen, Philip S. Yu, Zhijiang Guo

To this end, we propose evaluating the robustness of generative search engines in the realistic and high-risk setting, where adversaries have only black-box system access and seek to deceive the model into returning incorrect responses.

Retrieval

Understanding the planning of LLM agents: A survey

no code implementations5 Feb 2024 Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Hao Wang, Defu Lian, Yasheng Wang, Ruiming Tang, Enhong Chen

As Large Language Models (LLMs) have shown significant intelligence, efforts to leverage them as the planning modules of autonomous agents have attracted increasing attention.

Survey

Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios

1 code implementation30 Jan 2024 Shijue Huang, Wanjun Zhong, Jianqiao Lu, Qi Zhu, Jiahui Gao, Weiwen Liu, Yutai Hou, Xingshan Zeng, Yasheng Wang, Lifeng Shang, Xin Jiang, Ruifeng Xu, Qun Liu

The recent trend of using Large Language Models (LLMs) as tool agents in real-world applications underscores the necessity for comprehensive evaluations of their capabilities, particularly in complex scenarios involving planning, creating, and using tools.

Benchmarking

YODA: Teacher-Student Progressive Learning for Language Models

no code implementations28 Jan 2024 Jianqiao Lu, Wanjun Zhong, YuFei Wang, Zhijiang Guo, Qi Zhu, Wenyong Huang, Yanlin Wang, Fei Mi, Baojun Wang, Yasheng Wang, Lifeng Shang, Xin Jiang, Qun Liu

With the teacher's guidance, the student learns to iteratively refine its answer with feedback, and forms a robust and comprehensive understanding of the posed questions.

GSM8K Math

Large Language Models as Source Planner for Personalized Knowledge-grounded Dialogue

no code implementations13 Oct 2023 Hongru Wang, Minda Hu, Yang Deng, Rui Wang, Fei Mi, Weichao Wang, Yasheng Wang, Wai-Chung Kwan, Irwin King, Kam-Fai Wong

Open-domain dialogue systems usually require different sources of knowledge to generate more informative and evidential responses.

Response Generation

Improving Factual Consistency for Knowledge-Grounded Dialogue Systems via Knowledge Enhancement and Alignment

1 code implementation12 Oct 2023 Boyang Xue, Weichao Wang, Hongru Wang, Fei Mi, Rui Wang, Yasheng Wang, Lifeng Shang, Xin Jiang, Qun Liu, Kam-Fai Wong

Inspired by previous work which identified that feed-forward networks (FFNs) within Transformers are responsible for factual knowledge expressions, we investigate two methods to efficiently improve the factual expression capability of FFNs by knowledge enhancement and alignment respectively.

DecompEval: Evaluating Generated Texts as Unsupervised Decomposed Question Answering

1 code implementation13 Jul 2023 Pei Ke, Fei Huang, Fei Mi, Yasheng Wang, Qun Liu, Xiaoyan Zhu, Minlie Huang

Existing evaluation metrics for natural language generation (NLG) tasks face the challenges on generalization ability and interpretability.

Dialogue Generation nlg evaluation +3

Learning Summary-Worthy Visual Representation for Abstractive Summarization in Video

no code implementations8 May 2023 Zenan Xu, Xiaojun Meng, Yasheng Wang, Qinliang Su, Zexuan Qiu, Xin Jiang, Qun Liu

Multimodal abstractive summarization for videos (MAS) requires generating a concise textual summary to describe the highlights of a video according to multimodal resources, in our case, the video content and its transcript.

Abstractive Text Summarization Language Modeling +1

MultiCoder: Multi-Programming-Lingual Pre-Training for Low-Resource Code Completion

no code implementations19 Dec 2022 Zi Gong, Yinpeng Guo, Pingyi Zhou, Cuiyun Gao, Yasheng Wang, Zenglin Xu

On the other hand, few studies have explored the effects of multi-programming-lingual (MultiPL) pre-training on code completion, especially its impact on low-resource programming languages.

Code Completion

Momentum Contrastive Pre-training for Question Answering

no code implementations12 Dec 2022 Minda Hu, Muzhi Li, Yasheng Wang, Irwin King

In order to address this problem, we propose a novel Momentum Contrastive pRe-training fOr queStion anSwering (MCROSS) method for extractive QA.

Benchmarking Contrastive Learning +3

Constructing Highly Inductive Contexts for Dialogue Safety through Controllable Reverse Generation

1 code implementation4 Dec 2022 Zhexin Zhang, Jiale Cheng, Hao Sun, Jiawen Deng, Fei Mi, Yasheng Wang, Lifeng Shang, Minlie Huang

In order to detect such toxic generations, existing methods rely on templates, real-world data extraction, crowdsourcing workers, or automatic generation to construct adversarial contexts that are likely to induce toxic generations.

Response Generation

Lexicon-injected Semantic Parsing for Task-Oriented Dialog

no code implementations26 Nov 2022 Xiaojun Meng, Wenlin Dai, Yasheng Wang, Baojun Wang, Zhiyong Wu, Xin Jiang, Qun Liu

Then we present a novel lexicon-injected semantic parser, which collects the slot labels of the tree representation as a lexicon and injects lexical features into the span representation of the parser.

Semantic Parsing

Sparse Structure Search for Delta Tuning

1 code implementation NeurIPS 2022 Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, Maosong Sun

Generally, DT methods exquisitely design delta modules (DT modules) which could be applied to arbitrary fine-grained positions inside PTMs.

PanGu-Coder: Program Synthesis with Function-Level Language Modeling

1 code implementation22 Jul 2022 Fenia Christopoulou, Gerasimos Lampouras, Milan Gritta, Guchun Zhang, Yinpeng Guo, Zhongqi Li, Qi Zhang, Meng Xiao, Bo Shen, Lin Li, Hao Yu, Li Yan, Pingyi Zhou, Xin Wang, Yuchi Ma, Ignacio Iacobacci, Yasheng Wang, Guangtai Liang, Jiansheng Wei, Xin Jiang, Qianxiang Wang, Qun Liu

We present PanGu-Coder, a pretrained decoder-only language model adopting the PanGu-Alpha architecture for text-to-code generation, i.e., the synthesis of programming language solutions given a natural language problem description.

Code Generation Decoder +4

Sparse Structure Search for Parameter-Efficient Tuning

no code implementations15 Jun 2022 Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, Maosong Sun

The searched structures preserve more than 99% of fine-tuning performance with 0.01% trainable parameters.

CODE-MVP: Learning to Represent Source Code from Multiple Views with Contrastive Pre-Training

no code implementations Findings (NAACL) 2022 Xin Wang, Yasheng Wang, Yao Wan, Jiawei Wang, Pingyi Zhou, Li Li, Hao Wu, Jin Liu

Specifically, we first extract multiple code views using compiler tools, and learn the complementary information among them under a contrastive learning framework.

Contrastive Learning Defect Detection +2

Compilable Neural Code Generation with Compiler Feedback

no code implementations Findings (ACL) 2022 Xin Wang, Yasheng Wang, Yao Wan, Fei Mi, Yitong Li, Pingyi Zhou, Jin Liu, Hao Wu, Xin Jiang, Qun Liu

Automatically generating compilable programs with (or without) natural language descriptions has always been a touchstone problem for computational linguistics and automated software engineering.

Code Completion Code Generation +5

Towards Identifying Social Bias in Dialog Systems: Frame, Datasets, and Benchmarks

no code implementations16 Feb 2022 Jingyan Zhou, Jiawen Deng, Fei Mi, Yitong Li, Yasheng Wang, Minlie Huang, Xin Jiang, Qun Liu, Helen Meng

The research of open-domain dialog systems has greatly prospered thanks to neural models trained on large-scale corpora; however, such corpora often introduce various safety problems (e.g., offensive language, biases, and toxic behaviors) that significantly hinder the deployment of dialog systems in practice.

Bias Detection Open-Domain Dialog

Pan More Gold from the Sand: Refining Open-domain Dialogue Training with Noisy Self-Retrieval Generation

no code implementations COLING 2022 Yihe Wang, Yitong Li, Yasheng Wang, Fei Mi, Pingyi Zhou, Xin Wang, Jin Liu, Xin Jiang, Qun Liu

Experiments on publicly available datasets demonstrate that our method helps models generate better responses, even when such training data would usually be dismissed as low quality.

Dialogue Generation Retrieval

JABER and SABER: Junior and Senior Arabic BERt

1 code implementation8 Dec 2021 Abbas Ghaddar, Yimeng Wu, Ahmad Rashid, Khalil Bibi, Mehdi Rezagholizadeh, Chao Xing, Yasheng Wang, Duan Xinyu, Zhefeng Wang, Baoxing Huai, Xin Jiang, Qun Liu, Philippe Langlais

Language-specific pre-trained models have proven to be more accurate than multilingual ones in a monolingual evaluation setting, and Arabic is no exception.

Language Modeling Language Modelling +1

UniMS: A Unified Framework for Multimodal Summarization with Knowledge Distillation

no code implementations13 Sep 2021 Zhengkun Zhang, Xiaojun Meng, Yasheng Wang, Xin Jiang, Qun Liu, Zhenglu Yang

Specifically, we adopt knowledge distillation from a vision-language pretrained model to improve image selection, which avoids any requirement on the existence and quality of image captions.

Abstractive Text Summarization Decoder +3

CINS: Comprehensive Instruction for Few-shot Learning in Task-oriented Dialog Systems

no code implementations10 Sep 2021 Fei Mi, Yitong Li, Yasheng Wang, Xin Jiang, Qun Liu

As labeling cost for different modules in task-oriented dialog (ToD) systems is high, a major challenge in practice is to learn different tasks with the least amount of labeled data.

dialog state tracking Few-Shot Learning +3

SynCoBERT: Syntax-Guided Multi-Modal Contrastive Pre-Training for Code Representation

no code implementations10 Aug 2021 Xin Wang, Yasheng Wang, Fei Mi, Pingyi Zhou, Yao Wan, Xiao Liu, Li Li, Hao Wu, Jin Liu, Xin Jiang

Code representation learning, which aims to encode the semantics of source code into distributed vectors, plays an important role in recent deep-learning-based models for code intelligence.

Clone Detection Code Search +5

Sub-Character Tokenization for Chinese Pretrained Language Models

2 code implementations1 Jun 2021 Chenglei Si, Zhengyan Zhang, Yingfa Chen, Fanchao Qi, Xiaozhi Wang, Zhiyuan Liu, Yasheng Wang, Qun Liu, Maosong Sun

Pronunciation-based SubChar tokenizers can encode Chinese homophones into the same transliteration sequences and produce the same tokenization output, hence being robust to homophone typos.

Chinese Word Segmentation Computational Efficiency +2
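The homophone-robustness property above can be sketched in a few lines: if characters are first transliterated to pinyin, homophones yield identical token sequences. The tiny hand-made pinyin table below is purely illustrative, not the paper's tokenizer.

```python
# Hedged sketch of pronunciation-based sub-character tokenization.
# PINYIN is a toy transliteration table (assumption, not the real vocab).
PINYIN = {"他": "ta1", "她": "ta1", "它": "ta1", "在": "zai4", "再": "zai4"}

def transliterate(text: str) -> list[str]:
    """Map each character to its pinyin; pass unknown characters through."""
    return [PINYIN.get(ch, ch) for ch in text]

print(transliterate("他在"))  # ['ta1', 'zai4']
print(transliterate("她再"))  # ['ta1', 'zai4'] -- homophone typo, same tokens
```

Because both strings transliterate identically, a model built on these tokens cannot be perturbed by swapping a character for its homophone.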

Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger

2 code implementations ACL 2021 Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu, Yasheng Wang, Maosong Sun

As far as we know, almost all existing textual backdoor attack methods insert additional contents into normal samples as triggers, which causes the trigger-embedded samples to be detected and the backdoor attacks to be blocked without much effort.

Backdoor Attack

Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks

1 code implementation ICML Workshop AML 2021 Zhengyan Zhang, Guangxuan Xiao, Yongwei Li, Tian Lv, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Xin Jiang, Maosong Sun

In this work, we demonstrate the universal vulnerability of PTMs, where fine-tuned PTMs can be easily controlled by backdoor attacks in arbitrary downstream tasks.

Backdoor Attack

Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning

1 code implementation31 Dec 2020 Chenglei Si, Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, Maosong Sun

In this work, we propose a simple and effective method to cover a much larger proportion of the attack search space, called Adversarial and Mixup Data Augmentation (AMDA).

Adversarial Robustness Text Augmentation +2

Unified Mandarin TTS Front-end Based on Distilled BERT Model

1 code implementation31 Dec 2020 Yang Zhang, Liqun Deng, Yasheng Wang

The front-end module in a typical Mandarin text-to-speech system (TTS) is composed of a long pipeline of text processing components, which requires extensive efforts to build and is prone to large accumulative model size and cascade errors.

Knowledge Distillation Language Modeling +3

Multi-channel Reverse Dictionary Model

1 code implementation18 Dec 2019 Lei Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, Maosong Sun

A reverse dictionary takes the description of a target word as input and outputs the target word together with other words that match the description.

model Reverse Dictionary +1

Improving Sequence Modeling Ability of Recurrent Neural Networks via Sememes

1 code implementation20 Oct 2019 Yujia Qin, Fanchao Qi, Sicong Ouyang, Zhiyuan Liu, Cheng Yang, Yasheng Wang, Qun Liu, Maosong Sun

Sememes, the minimum semantic units of human languages, have been successfully utilized in various natural language processing applications.

Adversarial Attack Language Modeling +3

NEZHA: Neural Contextualized Representation for Chinese Language Understanding

10 code implementations31 Aug 2019 Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen, Qun Liu

Pre-trained language models have achieved great success in various natural language understanding (NLU) tasks due to their capacity to capture deep contextualized information in text by pre-training on large-scale corpora.

named-entity-recognition Named Entity Recognition +6

GPT-based Generation for Classical Chinese Poetry

3 code implementations29 Jun 2019 Yi Liao, Yasheng Wang, Qun Liu, Xin Jiang

We present a simple yet effective method for generating high quality classical Chinese poetry with Generative Pre-trained Language Model (GPT).

Language Modeling Language Modelling
