Search Results for author: Jiangtao Feng

Found 24 papers, 16 papers with code

BjTT: A Large-scale Multimodal Dataset for Traffic Prediction

2 code implementations • 8 Mar 2024 • Chengyang Zhang, Yong Zhang, Qitan Shao, Jiangtao Feng, Bo Li, Yisheng Lv, Xinglin Piao, BaoCai Yin

The key challenge of the text-to-traffic generation (TTG) task is associating text with the spatial structure of the road network and with traffic data in order to generate traffic situations.

Traffic Prediction

Linear Attention via Orthogonal Memory

no code implementations • 18 Dec 2023 • Jun Zhang, Shuyang Jiang, Jiangtao Feng, Lin Zheng, Lingpeng Kong

Given that orthogonal memory compresses global information, we further dissect the context to amplify fine-grained local information.

Causal Language Modeling Computational Efficiency +1
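The snippet hints at a two-path design: a length-independent global memory plus a local path for fine-grained detail. A minimal sketch, assuming a fixed orthonormal basis as the global memory and a small causal window for the local path (both constructions are assumptions, not the paper's exact method):

```python
import torch
import torch.nn.functional as F

def orthogonal_memory_attention(q, k, v, basis, window=4):
    """q, k, v: (T, d); basis: (m, d) with orthonormal rows (global memory)."""
    # Global path: project keys/values onto the orthogonal basis, giving an
    # m-slot summary of the whole context, independent of sequence length T.
    coef = k @ basis.T                      # (T, m) contribution of each token
    mem = coef.T @ v                        # (m, d) compressed global memory
    global_out = (q @ basis.T) @ mem        # (T, d) read memory with queries
    # Local path: ordinary attention restricted to a small causal window,
    # recovering the fine-grained local information the memory compresses away.
    T = q.size(0)
    local_out = torch.zeros_like(v)
    for t in range(T):
        lo = max(0, t - window)
        w = F.softmax(q[t] @ k[lo:t + 1].T / q.size(-1) ** 0.5, dim=-1)
        local_out[t] = w @ v[lo:t + 1]
    return global_out + local_out

T, d, m = 16, 32, 8
basis, _ = torch.linalg.qr(torch.randn(d, m))   # orthonormal columns
out = orthogonal_memory_attention(torch.randn(T, d), torch.randn(T, d),
                                  torch.randn(T, d), basis.T)
```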

Attentive Multi-Layer Perceptron for Non-autoregressive Generation

1 code implementation • 14 Oct 2023 • Shuyang Jiang, Jun Zhang, Jiangtao Feng, Lin Zheng, Lingpeng Kong

Furthermore, we marry AMLP with popular NAR models, deriving a highly efficient NAR-AMLP architecture with linear time and space complexity.

Machine Translation Speech Synthesis +1
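The snippet does not spell out the architecture; below is a hypothetical illustration of how an attention-like MLP can mix tokens in linear time and space through a fixed number of learned slots (the slot mechanism here is my assumption, not the paper's AMLP module):

```python
import torch
import torch.nn as nn

class AttentiveMLPSketch(nn.Module):
    """Token mixing through a fixed-size bottleneck of m learned slots,
    so cost is O(T * m) rather than the O(T^2) of softmax attention."""
    def __init__(self, d, m=32):
        super().__init__()
        self.write = nn.Linear(d, m)   # scores for writing tokens into slots
        self.read = nn.Linear(d, m)    # scores for reading slots back out
        self.proj = nn.Linear(d, d)

    def forward(self, x):              # x: (B, T, d)
        # distribute each token's content over m slots (softmax over tokens)
        slots = torch.softmax(self.write(x), dim=1).transpose(1, 2) @ x  # (B, m, d)
        # every position reads back an attentive mixture of the slots
        out = torch.softmax(self.read(x), dim=-1) @ slots                # (B, T, d)
        return self.proj(out)
```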

Optimizing Non-Autoregressive Transformers with Contrastive Learning

no code implementations • 23 May 2023 • Chenxin An, Jiangtao Feng, Fei Huang, Xipeng Qiu, Lingpeng Kong

In this paper, we propose to ease the difficulty of modality learning via sampling from the model distribution instead of the data distribution.

Contrastive Learning Machine Translation +2
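A hedged sketch of the idea in the snippet: training candidates are sampled from the model's own distribution rather than taken from the data, then contrasted by an external quality signal. The pairwise ranking loss below is a generic stand-in, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def sample_candidates(logits, num_samples=4):
    """logits: (T, V) from a NAR decoder. Draw candidates position-wise
    from the model distribution instead of using the gold target."""
    probs = torch.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples, replacement=True).T  # (S, T)

def contrastive_loss(scores, quality, margin=0.1):
    """scores: (S,) model log-probs of the candidates; quality: (S,) an
    external metric (e.g. BLEU vs. the reference). Push the model to rank
    its own samples the way the metric does."""
    s = scores[torch.argsort(quality, descending=True)]
    diff = s.unsqueeze(1) - s.unsqueeze(0)          # pairwise score gaps
    mask = torch.triu(torch.ones_like(diff), diagonal=1).bool()
    # every higher-quality candidate should out-score every lower one
    return F.relu(margin - diff[mask]).mean()
```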

OpenICL: An Open-Source Framework for In-context Learning

3 code implementations • 6 Mar 2023 • Zhenyu Wu, Yaoxiang Wang, Jiacheng Ye, Jiangtao Feng, Jingjing Xu, Yu Qiao, Zhiyong Wu

However, implementing ICL is involved, owing to the diverse retrieval and inference methods it spans, as well as the varying pre-processing requirements of different models, datasets, and tasks.

In-Context Learning Language Modelling +4
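Since OpenICL is a library, a quick-start sketch is useful. This follows the usage example in the OpenICL paper (retriever selects in-context examples, inferencer scores label verbalizations by perplexity); the exact class names and signatures may differ in later releases:

```python
# Adapted from the OpenICL paper's quick-start; treat names as indicative.
from openicl import DatasetReader, PromptTemplate, TopkRetriever, PPLInferencer

# Load a dataset and declare which columns are input/output.
data = DatasetReader('gpt3mix/sst2', input_columns=['text'], output_column='label')

# One template per label; '</E>' marks where retrieved examples are spliced in.
template = PromptTemplate(
    {0: '</E>Positive Movie Review: </text>',
     1: '</E>Negative Movie Review: </text>'},
    {'text': '</text>'}, ice_token='</E>')

retriever = TopkRetriever(data, ice_num=8)        # k-NN in-context retrieval
inferencer = PPLInferencer(model_name='gpt2-xl')  # perplexity-based inference
predictions = inferencer.inference(retriever, ice_template=template)
```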

Compositional Exemplars for In-context Learning

1 code implementation • 11 Feb 2023 • Jiacheng Ye, Zhiyong Wu, Jiangtao Feng, Tao Yu, Lingpeng Kong

The performance of ICL is largely determined by the quality of the selected in-context examples.

Code Generation Contrastive Learning +6
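The paper selects exemplars jointly (via a determinantal point process); as a simple illustrative stand-in, a greedy selector that trades relevance to the query against redundancy among the chosen exemplars captures the same intuition:

```python
import numpy as np

def select_exemplars(query_emb, pool_embs, k=8, diversity=0.5):
    """Greedy relevance-vs-diversity selection (illustrative, not CEIL)."""
    pool = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    chosen = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(pool)):
            if i in chosen:
                continue
            rel = pool[i] @ q                                   # query relevance
            red = max((pool[i] @ pool[j] for j in chosen), default=0.0)
            score = rel - diversity * red                       # penalize redundancy
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen
```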

In-Context Learning with Many Demonstration Examples

1 code implementation • 9 Feb 2023 • Mukai Li, Shansan Gong, Jiangtao Feng, Yiheng Xu, Jun Zhang, Zhiyong Wu, Lingpeng Kong

Based on EVALM, we efficiently scale up the number of demonstration examples in both instruction tuning and in-context learning to explore the limits of the benefit from more annotated data.

16k 8k +2

ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback

2 code implementations • 22 Oct 2022 • Jiacheng Ye, Jiahui Gao, Jiangtao Feng, Zhiyong Wu, Tao Yu, Lingpeng Kong

To improve the quality of dataset synthesis, we propose a progressive zero-shot dataset generation framework, ProGen, which leverages the feedback from the task-specific model to guide the generation of new training data via in-context examples.

Informativeness text-classification +2
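The progressive feedback loop described in the snippet can be sketched at a high level. All function names below are placeholders, not the authors' API: generate synthetic data with a PLM, train a small task model on everything so far, then feed the most influential synthetic examples back into the prompt for the next round:

```python
def progen_sketch(plm_generate, train_task_model, score_influence,
                  rounds=3, per_round=1000):
    """Hedged sketch of a ProGen-style loop with placeholder callables."""
    dataset, prompt_examples = [], None
    for _ in range(rounds):
        # zero-shot (or in-context) generation of new training examples
        batch = plm_generate(in_context=prompt_examples, n=per_round)
        dataset.extend(batch)
        model = train_task_model(dataset)
        # feedback: keep the examples the task model found most informative
        ranked = sorted(batch, key=lambda ex: score_influence(model, ex),
                        reverse=True)
        prompt_examples = ranked[:8]
    return dataset, model
```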

DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models

1 code implementation • 17 Oct 2022 • Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, Lingpeng Kong

Bringing together theoretical analysis and empirical evidence, we demonstrate the great potential of diffusion models in complex conditional language generation tasks.

Text Generation
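A minimal sketch of the forward step such a model builds on, assuming the standard Gaussian diffusion schedule. DiffuSeq's distinctive move, corrupting only the target half of the concatenated source/target embedding so the clean source acts as the condition, is shown via the mask; everything else is a generic assumption:

```python
import torch

def forward_diffuse(z0, t, alphas_cumprod, target_mask):
    """z0: (T, d) embeddings of [source; target]; target_mask: (T,) bool.
    Only positions where target_mask is True receive noise."""
    a_bar = alphas_cumprod[t]                      # cumulative noise level
    noise = torch.randn_like(z0)
    zt = a_bar.sqrt() * z0 + (1 - a_bar).sqrt() * noise
    # source tokens stay clean and condition the denoiser ("partial noising")
    return torch.where(target_mask.unsqueeze(-1), zt, z0)
```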

CAB: Comprehensive Attention Benchmarking on Long Sequence Modeling

1 code implementation • 14 Oct 2022 • Jun Zhang, Shuyang Jiang, Jiangtao Feng, Lin Zheng, Lingpeng Kong

In this paper, we propose Comprehensive Attention Benchmark (CAB) under a fine-grained attention taxonomy with four distinguishable attention patterns, namely, noncausal self, causal self, noncausal cross, and causal cross attentions.

Benchmarking Long-range modeling
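The taxonomy is easy to pin down in code: the four patterns differ only in whether queries attend to their own sequence (self) or another one (cross), and whether a causal mask is applied. An illustrative sketch with vanilla scaled dot-product attention (keys and values tied for brevity; this is not CAB's benchmark code):

```python
import torch
import torch.nn.functional as F

def attention(q, kv, causal):
    """q: (Tq, d) queries; kv: (Tk, d) keys/values; causal masks j > i."""
    scores = q @ kv.T / q.size(-1) ** 0.5
    if causal:
        scores = scores.masked_fill(
            torch.ones_like(scores).triu(1).bool(), float('-inf'))
    return F.softmax(scores, dim=-1) @ kv

x, mem = torch.randn(6, 16), torch.randn(9, 16)
noncausal_self  = attention(x, x, causal=False)
causal_self     = attention(x, x, causal=True)
noncausal_cross = attention(x, mem, causal=False)
# causal cross assumes aligned query/key positions (e.g. streaming decoding)
causal_cross    = attention(x, mem[:6], causal=True)
```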

PARAGEN: A Parallel Generation Toolkit

1 code implementation • 7 Oct 2022 • Jiangtao Feng, Yi Zhou, Jun Zhang, Xian Qian, Liwei Wu, Zhexi Zhang, Yanming Liu, Mingxuan Wang, Lei LI, Hao Zhou

PARAGEN is a PyTorch-based NLP toolkit for further development of parallel generation models.

Model Selection

CoNT: Contrastive Neural Text Generation

2 code implementations • 29 May 2022 • Chenxin An, Jiangtao Feng, Kai Lv, Lingpeng Kong, Xipeng Qiu, Xuanjing Huang

We validate CoNT on five generation tasks with ten benchmarks, including machine translation, summarization, code comment generation, data-to-text generation and commonsense generation.

Code Comment Generation Comment Generation +4

ZeroGen: Efficient Zero-shot Learning via Dataset Generation

3 code implementations • 16 Feb 2022 • Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, Lingpeng Kong

There is a growing interest in dataset generation recently due to the superior generative capacity of large pre-trained language models (PLMs).

Knowledge Distillation Natural Language Inference +5

Alleviate Exposure Bias in Sequence Prediction with Recurrent Neural Networks

no code implementations • 22 Mar 2021 • Liping Yuan, Jiangtao Feng, Xiaoqing Zheng, Xuanjing Huang

The key idea is that at each time step, the network takes as input a "bundle" of similar words predicted at the previous step instead of a single ground-truth word.
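A sketch of one way to form such a bundle, assuming it is a probability-weighted mixture of the top-k embeddings from the previous step (the paper may construct it differently):

```python
import torch

def bundle_input(prev_logits, embedding_weight, k=5):
    """prev_logits: (V,); embedding_weight: (V, d) embedding table.
    Returns one input vector mixing the k most likely previous words."""
    probs, idx = torch.topk(torch.softmax(prev_logits, dim=-1), k)
    probs = probs / probs.sum()              # renormalize over the bundle
    return probs @ embedding_weight[idx]     # (d,) weighted embedding mix
```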

Pre-training Multilingual Neural Machine Translation by Leveraging Alignment Information

1 code implementation • EMNLP 2020 • Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, Lei LI

We investigate the following question for machine translation (MT): can we develop a single universal MT model to serve as the common seed and obtain derivative and improved models on arbitrary language pairs?

Ranked #3 on Machine Translation on WMT2014 English-French (using extra training data)

Machine Translation Translation

Cross-Lingual Vision-Language Navigation

2 code implementations • 24 Oct 2019 • An Yan, Xin Eric Wang, Jiangtao Feng, Lei LI, William Yang Wang

Commanding a robot to navigate with natural language instructions is a long-term goal for grounded language understanding and robotics.

Domain Adaptation Navigate +2

Neural Phrase-to-Phrase Machine Translation

no code implementations • 6 Nov 2018 • Jiangtao Feng, Lingpeng Kong, Po-Sen Huang, Chong Wang, Da Huang, Jiayuan Mao, Kan Qiao, Dengyong Zhou

We also design an efficient dynamic programming algorithm for decoding segments, which allows the model to be trained faster than the existing neural phrase-based machine translation method of Huang et al. (2018).

Machine Translation Translation
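The segment-level dynamic program can be illustrated independently of the neural scorer. A toy sketch, with score(i, j) a placeholder for the model's score of emitting positions i..j as one phrase (the paper's actual recursion and scorer differ in detail):

```python
def best_segmentation(n, score, max_len=6):
    """Best way to split positions 0..n into phrases of length <= max_len."""
    best = [float('-inf')] * (n + 1)
    best[0], back = 0.0, [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(max(0, j - max_len), j):
            s = best[i] + score(i, j)        # extend by one phrase (i, j]
            if s > best[j]:
                best[j], back[j] = s, i
    # recover the phrase boundaries by walking the backpointers
    cuts, j = [], n
    while j > 0:
        cuts.append((back[j], j))
        j = back[j]
    return best[n], cuts[::-1]
```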
