Search Results for author: Tianyi Tang

Found 23 papers, 17 papers with code

Towards Coarse-to-Fine Evaluation of Inference Efficiency for Large Language Models

1 code implementation • 17 Apr 2024 • Yushuo Chen, Tianyi Tang, Erge Xiang, Linjiang Li, Wayne Xin Zhao, Jing Wang, Yunpeng Chai, Ji-Rong Wen

In the real world, large language models (LLMs) can serve as assistants that help users accomplish their jobs and also support the development of advanced applications.

Language-Specific Neurons: The Key to Multilingual Capabilities in Large Language Models

no code implementations • 26 Feb 2024 • Tianyi Tang, Wenyang Luo, Haoyang Huang, Dongdong Zhang, Xiaolei Wang, Xin Zhao, Furu Wei, Ji-Rong Wen

Large language models (LLMs) demonstrate remarkable multilingual capabilities without being pre-trained on specially curated multilingual parallel corpora.

BAMBOO: A Comprehensive Benchmark for Evaluating Long Text Modeling Capacities of Large Language Models

1 code implementation • 23 Sep 2023 • Zican Dong, Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen

Recently, multiple studies have committed to extending the context length and enhancing the long text modeling capabilities of LLMs.

Code Completion • Hallucination • +2

Towards Effective Ancient Chinese Translation: Dataset, Model, and Evaluation

1 code implementation • 1 Aug 2023 • Geyang Guo, Jiarong Yang, Fengyuan LU, Jiaxin Qin, Tianyi Tang, Wayne Xin Zhao

From an evaluation perspective, we build a benchmark to judge ancient Chinese translation quality in different scenarios and evaluate the ancient Chinese translation capacities of various existing models.

Language Modelling • Translation

Zero-shot Visual Question Answering with Language Model Feedback

1 code implementation • 26 May 2023 • Yifan Du, Junyi Li, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen

In this paper, we propose a novel language model guided captioning approach, LAMOC, for knowledge-based visual question answering (VQA).

Language Modelling • Question Answering • +1

Not All Metrics Are Guilty: Improving NLG Evaluation by Diversifying References

2 code implementations • 24 May 2023 • Tianyi Tang, Hongyuan Lu, Yuchen Eleanor Jiang, Haoyang Huang, Dongdong Zhang, Wayne Xin Zhao, Tom Kocmi, Furu Wei

Most research on natural language generation (NLG) relies on evaluation benchmarks with a limited number of references per sample, which may result in poor correlation with human judgements.

Machine Translation • NLG Evaluation • +3

The Web Can Be Your Oyster for Improving Large Language Models

1 code implementation • 18 May 2023 • Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jingyuan Wang, Jian-Yun Nie, Ji-Rong Wen

To further improve the capacity of LLMs for knowledge-intensive tasks, we consider augmenting LLMs with the large-scale web using a search engine.

Retrieval • World Knowledge

A Survey of Large Language Models

5 code implementations • 31 Mar 2023 • Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, YiFan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, Ji-Rong Wen

To distinguish models by parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size.

Language Modelling

A Survey on Long Text Modeling with Transformers

no code implementations • 28 Feb 2023 • Zican Dong, Tianyi Tang, Lunyi Li, Wayne Xin Zhao

In this paper, we provide an overview of recent advances in long text modeling based on Transformer models.

TextBox 2.0: A Text Generation Library with Pre-trained Language Models

1 code implementation • 26 Dec 2022 • Tianyi Tang, Junyi Li, Zhipeng Chen, Yiwen Hu, Zhuohao Yu, Wenxun Dai, Zican Dong, Xiaoxue Cheng, Yuhao Wang, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen

To facilitate research on text generation, this paper presents a comprehensive and unified library, TextBox 2.0, focusing on the use of pre-trained language models (PLMs).

Abstractive Text Summarization • Data-to-Text Generation • +7

MVP: Multi-task Supervised Pre-training for Natural Language Generation

2 code implementations • 24 Jun 2022 • Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen

Motivated by the success of supervised pre-training, we propose Multi-task superVised Pre-training (MVP) for natural language generation.

Text Generation

Learning to Transfer Prompts for Text Generation

1 code implementation • NAACL 2022 • Junyi Li, Tianyi Tang, Jian-Yun Nie, Ji-Rong Wen, Wayne Xin Zhao

First, PTG learns a set of source prompts for various source generation tasks and then transfers these prompts as target prompts to perform target generation tasks.

Text Generation

Context-Tuning: Learning Contextualized Prompts for Natural Language Generation

1 code implementation • COLING 2022 • Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen

Secondly, we use continuous inverse prompting to improve the process of natural language generation by modeling an inverse generation process from output to input, making the generated text more relevant to the inputs.

Text Generation

Pretrained Language Models for Text Generation: A Survey

no code implementations • 14 Jan 2022 • Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen

We begin by introducing three key aspects of applying PLMs to text generation: 1) how to encode the input into representations that preserve input semantics and can be fused into PLMs; 2) how to design an effective PLM to serve as the generation model; and 3) how to effectively optimize PLMs given the reference text and ensure that the generated texts satisfy special text properties.

Text Generation

Pretrained Language Models for Text Generation: A Survey

no code implementations • 21 May 2021 • Junyi Li, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen

In this paper, we present an overview of the major advances achieved in the topic of PLMs for text generation.

Text Generation

Session-based Social and Dependency-aware Software Recommendation

no code implementations • 10 Mar 2021 • Dengcheng Yan, Tianyi Tang, Wenxin Xie, Yiwen Zhang, Qiang He

With the increasing complexity of modern software, social collaborative coding and the reuse of open-source software packages have become more and more popular, greatly enhancing development efficiency and software quality.

Graph Attention • Recommendation Systems

TextBox: A Unified, Modularized, and Extensible Framework for Text Generation

1 code implementation • ACL 2021 • Junyi Li, Tianyi Tang, Gaole He, Jinhao Jiang, Xiaoxuan Hu, Puzhao Xie, Zhipeng Chen, Zhuohao Yu, Wayne Xin Zhao, Ji-Rong Wen

In this paper, we release an open-source library, called TextBox, to provide a unified, modularized, and extensible text generation framework.

Text Generation
