Search Results for author: Liangchen Luo

Found 13 papers, 4 papers with code

An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation

1 code implementation • EMNLP 2018 • Liangchen Luo, Jingjing Xu, Junyang Lin, Qi Zeng, Xu Sun

Unlike conventional text generation tasks, the mapping between inputs and responses in conversation is more complicated, demanding an understanding of utterance-level semantic dependency: the relation between the overall meanings of inputs and outputs.

Dialogue Generation

Learning Personalized End-to-End Goal-Oriented Dialog

no code implementations • 12 Nov 2018 • Liangchen Luo, Wenhao Huang, Qi Zeng, Zaiqing Nie, Xu Sun

Most existing work on dialog systems considers only conversation content while neglecting the personality of the user the bot is interacting with, which leaves several issues unresolved.

Goal-Oriented Dialog

Text Assisted Insight Ranking Using Context-Aware Memory Network

no code implementations • 13 Nov 2018 • Qi Zeng, Liangchen Luo, Wenhao Huang, Yang Tang

Extracting valuable facts or informative summaries from multi-dimensional tables, i.e., insight mining, is an important task in data analysis and business intelligence.

Adaptive Gradient Methods with Dynamic Bound of Learning Rate

5 code implementations • ICLR 2019 • Liangchen Luo, Yuanhao Xiong, Yan Liu, Xu Sun

Recent work has put forward algorithms such as AMSGrad to tackle this issue, but they failed to achieve considerable improvement over existing methods.
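For context, the method this paper proposes (AdaBound) keeps Adam's adaptivity early in training while clipping each parameter's step size into bounds that tighten over time toward a constant final learning rate. Below is a minimal NumPy sketch of one update step; the bound schedules follow the published formulation, while the hyperparameter defaults and function shape are assumptions for illustration:

```python
import numpy as np

def adabound_step(param, grad, m, v, t,
                  alpha=1e-3, beta1=0.9, beta2=0.999,
                  final_lr=0.1, gamma=1e-3, eps=1e-8):
    """One AdaBound update: the Adam step size is clipped into a dynamic
    bound [lower(t), upper(t)] that converges to final_lr as t grows."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    # Bias-corrected base step size, as in Adam.
    alpha_t = alpha * np.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)
    # Dynamic bounds: start wide (Adam-like), shrink toward final_lr (SGD-like).
    lower = final_lr * (1 - 1 / (gamma * t + 1))
    upper = final_lr * (1 + 1 / (gamma * t))
    step = np.clip(alpha_t / (np.sqrt(v) + eps), lower, upper)
    return param - step * m, m, v
```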

MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning

2 code implementations • 17 Nov 2019 • Guangxiang Zhao, Xu Sun, Jingjing Xu, Zhiyuan Zhang, Liangchen Luo

In this work, we explore parallel multi-scale representation learning on sequence data, striving to capture both long-range and short-range language structures.

Machine Translation, Representation Learning, +1
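As a rough illustration of the parallel multi-scale idea, the sketch below runs self-attention (long-range structure), a depthwise convolution (short-range structure), and a pointwise transform side by side and sums their outputs. The fusion rule, layer sizes, and class name are assumptions for exposition, not the paper's exact architecture:

```python
import torch.nn as nn

class ParallelMultiScaleBlock(nn.Module):
    """Sketch: attention captures global dependencies, a depthwise conv
    captures a local window, and a pointwise layer transforms each token."""
    def __init__(self, d_model=512, n_heads=8, kernel_size=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                              padding=kernel_size // 2, groups=d_model)
        self.pointwise = nn.Linear(d_model, d_model)

    def forward(self, x):                      # x: (batch, seq, d_model)
        long_range, _ = self.attn(x, x, x)     # global, long-range features
        local = self.conv(x.transpose(1, 2)).transpose(1, 2)  # local features
        token = self.pointwise(x)              # per-token transform
        return long_range + local + token      # simple additive fusion
```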

Large-Scale Generative Data-Free Distillation

no code implementations • 10 Dec 2020 • Liangchen Luo, Mark Sandler, Zi Lin, Andrey Zhmoginov, Andrew Howard

Knowledge distillation is one of the most popular and effective techniques for knowledge transfer, model compression and semi-supervised learning.

Knowledge Distillation, Model Compression, +1
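The paper builds on the standard distillation objective; in the data-free setting it studies, real training inputs are replaced by generator-produced ones, but the student is still trained to match the teacher's softened outputs. A minimal sketch of that generic loss follows (the temperature default is a common choice, not a value from the paper):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Standard knowledge-distillation loss: KL divergence between the
    teacher's and student's temperature-softened class distributions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_soft_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t**2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * t * t
```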

RewriteLM: An Instruction-Tuned Large Language Model for Text Rewriting

1 code implementation • 25 May 2023 • Lei Shu, Liangchen Luo, Jayakumar Hoskere, Yun Zhu, Yinxiao Liu, Simon Tong, Jindong Chen, Lei Meng

In this work, we develop new strategies for instruction tuning and reinforcement learning to better align LLMs for cross-sentence rewriting tasks with diverse wording and structures expressed in natural language, including: 1) generating rewriting instruction data from Wiki edits and public corpora via instruction generation and chain-of-thought prompting; and 2) collecting comparison data for reward-model training via a new ranking function.

Language Modelling, Large Language Model, +3

Critique Ability of Large Language Models

no code implementations • 7 Oct 2023 • Liangchen Luo, Zi Lin, Yinxiao Liu, Lei Shu, Yun Zhu, Jingbo Shang, Lei Meng

This study explores the ability of large language models (LLMs) to deliver accurate critiques across various tasks.

Code Completion, Decision Making, +3

Fusion-Eval: Integrating Evaluators with LLMs

no code implementations • 15 Nov 2023 • Lei Shu, Nevan Wichers, Liangchen Luo, Yun Zhu, Yinxiao Liu, Jindong Chen, Lei Meng

Evaluating natural language systems poses significant challenges, particularly in the realms of natural language understanding and high-level reasoning.

Natural Language Understanding

Multi-step Problem Solving Through a Verifier: An Empirical Analysis on Model-induced Process Supervision

no code implementations • 5 Feb 2024 • Zihan Wang, Yunxuan Li, Yuexin Wu, Liangchen Luo, Le Hou, Hongkun Yu, Jingbo Shang

Process supervision, using a trained verifier to evaluate the intermediate steps generated by a reasoner, has demonstrated significant improvements in multi-step problem solving.

GSM8K, Math
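In practice, a verifier of this kind is typically used for best-of-n reranking: sample several multi-step solutions, score each intermediate step with the verifier, and keep the candidate with the strongest aggregate score. The sketch below assumes hypothetical `sample_solution` and `score_step` stand-ins for the reasoner LLM and the trained verifier, and uses a min-over-steps aggregation, which is one common choice rather than the paper's exact recipe:

```python
from typing import Callable, List

def best_of_n(problem: str,
              sample_solution: Callable[[str], List[str]],
              score_step: Callable[[str, List[str]], float],
              n: int = 16) -> List[str]:
    """Verifier-guided reranking: sample n multi-step solutions and return
    the one whose weakest step, as judged by the verifier, scores highest."""
    candidates = [sample_solution(problem) for _ in range(n)]

    def solution_score(steps: List[str]) -> float:
        # Score each prefix of the solution; min() penalizes any single
        # weak intermediate step.
        return min(score_step(problem, steps[: i + 1])
                   for i in range(len(steps)))

    return max(candidates, key=solution_score)
```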
