Search Results for author: Yue Zhang

Found 344 papers, 158 papers with code

PromptGen: Automatically Generate Prompts using Generative Models

no code implementations Findings (NAACL) 2022 Yue Zhang, Hongliang Fei, Dingcheng Li, Ping Li

Recently, prompt learning has received significant attention, where downstream tasks are reformulated as mask-filling tasks with the help of a textual prompt.

Knowledge Probing

Entity Enhanced BERT Pre-training for Chinese NER

no code implementations EMNLP 2020 Chen Jia, Yuefeng Shi, Qinrong Yang, Yue Zhang

We then integrate the entity information into BERT using Char-Entity-Transformer, which augments the self-attention using a combination of character and entity representations.
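
The Char-Entity-Transformer itself is only named here; as a rough illustration of the general idea (a minimal sketch with toy dimensions and a simple additive fusion of our own choosing, not the authors' implementation), self-attention keys and values can be built from character states mixed with entity embeddings:

    import torch
    import torch.nn as nn

    class CharEntityAttention(nn.Module):
        # Toy single-head self-attention whose keys/values mix character and
        # entity representations; dimensions and fusion are illustrative only.
        def __init__(self, d_model, n_entities):
            super().__init__()
            self.entity_emb = nn.Embedding(n_entities, d_model)  # index 0 = "no entity"
            self.q = nn.Linear(d_model, d_model)
            self.k = nn.Linear(d_model, d_model)
            self.v = nn.Linear(d_model, d_model)

        def forward(self, char_hidden, entity_ids):
            # char_hidden: (batch, seq, d_model); entity_ids: (batch, seq)
            mixed = char_hidden + self.entity_emb(entity_ids)      # character + entity
            q, k, v = self.q(char_hidden), self.k(mixed), self.v(mixed)
            attn = torch.softmax(q @ k.transpose(-2, -1) / char_hidden.size(-1) ** 0.5, dim=-1)
            return attn @ v

    x = torch.randn(2, 8, 64)                  # pretend BERT character states
    ents = torch.randint(0, 5, (2, 8))         # pretend entity ids from a lexicon
    out = CharEntityAttention(64, 5)(x, ents)  # (2, 8, 64)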

NER

Natural Language Processing Meets Quantum Physics: A Survey and Categorization

no code implementations EMNLP 2021 Sixuan Wu, Jian Li, Peng Zhang, Yue Zhang

Recent research has investigated quantum NLP, designing algorithms that process natural language in quantum computers, and also quantum-inspired algorithms that improve NLP performance on classical computers.

Speeding up Transformer Decoding via an Attention Refinement Network

1 code implementation COLING 2022 Kaixin Wu, Yue Zhang, Bojie Hu, Tong Zhang

Extensive experiments on ten WMT machine translation tasks show that the proposed model is on average 1.35x faster (with almost no decrease in BLEU) than the state-of-the-art inference implementation.

Machine Translation NMT +1

Investigating Rich Feature Sources for Conceptual Representation Encoding

no code implementations COLING (CogALex) 2020 Lu Cao, Yulong Chen, Dandan Huang, Yue Zhang

Functional Magnetic Resonance Imaging (fMRI) provides a means to investigate human conceptual representation in cognitive and neuroscience studies, where researchers predict the fMRI activations with elicited stimuli inputs.

Cross-Lingual Dependency Parsing via Self-Training

no code implementations CCL 2020 Meishan Zhang, Yue Zhang

Recent advances of multilingual word representations weaken the input divergences across languages, making cross-lingual transfer similar to the monolingual cross-domain and semi-supervised settings.

Cross-Lingual POS Tagging Cross-Lingual Transfer +2

新型冠状病毒肺炎相关的推特主题与情感研究(Exploring COVID-19-related Twitter Topic Dynamics across Countries)

no code implementations CCL 2020 Shuailong Liang, Derek F. Wong, Yue Zhang

Based on 500,000 tweets posted in different countries and regions, crawled from Twitter between January 22, 2020 and April 30, 2020, we study COVID-19-related topics and people's opinions. We find both similarities and differences in the common concerns and views of Twitter users across countries, and sentiment also varies across topics. We find that most tweets carry strong emotions, among which tweets expressing love and support are common. Overall, people's sentiment gradually became more positive over time.

Contrastive Data and Learning for Natural Language Processing

no code implementations NAACL (ACL) 2022 Rui Zhang, Yangfeng Ji, Yue Zhang, Rebecca J. Passonneau

We then survey the benefits and best practices of contrastive learning for various downstream NLP applications, including Text Classification, Question Answering, Summarization, Text Generation, Interpretability and Explainability, Commonsense Knowledge and Reasoning, and Vision-and-Language. This tutorial intends to help researchers in the NLP and computational linguistics community understand this emerging topic and to promote future research on using contrastive learning for NLP applications.
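
For readers new to the topic, the core objective behind most of these methods is an InfoNCE-style loss over paired examples; the snippet below is a generic textbook illustration, not code from the tutorial:

    import torch
    import torch.nn.functional as F

    def info_nce(anchors, positives, temperature=0.05):
        # anchors, positives: (batch, dim) embeddings of paired views.
        a = F.normalize(anchors, dim=-1)
        p = F.normalize(positives, dim=-1)
        logits = a @ p.T / temperature            # scaled cosine similarities
        labels = torch.arange(a.size(0))          # i-th anchor matches i-th positive
        return F.cross_entropy(logits, labels)

    loss = info_nce(torch.randn(16, 128), torch.randn(16, 128))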

Contrastive Learning Question Answering +4

DialogSum Challenge: Summarizing Real-Life Scenario Dialogues

no code implementations INLG (ACL) 2021 Yulong Chen, Yang Liu, Yue Zhang

We propose a shared task on summarizing real-life scenario dialogues, DialogSum Challenge, to encourage researchers to address challenges in dialogue summarization, which has been less studied by the summarization community.

Common Sense Reasoning Representation Learning

Prompt-Driven Neural Machine Translation

1 code implementation Findings (ACL) 2022 Yafu Li, Yongjing Yin, Jing Li, Yue Zhang

Neural machine translation (NMT) has obtained significant performance improvements in recent years.

Machine Translation NMT +1

Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models

1 code implementation 3 Sep 2023 Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi

While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge.

Reinforcement Learning-assisted Evolutionary Algorithm: A Survey and Research Opportunities

no code implementations 25 Aug 2023 Yanjie Song, Yutong Wu, Yangyang Guo, Ran Yan, P. N. Suganthan, Yue Zhang, Witold Pedrycz, Yingwu Chen, Swagatam Das, Rammohan Mallipeddi, Oladayo Solomon Ajani

This paper presents a comprehensive survey on integrating reinforcement learning into the evolutionary algorithm, referred to as reinforcement learning-assisted evolutionary algorithm (RL-EA).

Evolutionary Algorithms reinforcement-learning +1

An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning

1 code implementation 17 Aug 2023 Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, Yue Zhang

Moreover, we find that ALPACA can maintain more knowledge and capacity compared with LLAMA during the continual fine-tuning, which implies that general instruction tuning can help mitigate the forgetting phenomenon of LLMs in the further fine-tuning process.

Reading Comprehension

DPMix: Mixture of Depth and Point Cloud Video Experts for 4D Action Segmentation

no code implementations 31 Jul 2023 Yue Zhang, Hehe Fan, Yi Yang, Mohan Kankanhalli

The proposed method, named Mixture of Depth and Point cloud video experts (DPMix), achieved the first place in the 4D Action Segmentation Track of the HOI4D Challenge 2023.

Action Segmentation Human-Object Interaction Detection +1

Multi-representations Space Separation based Graph-level Anomaly-aware Detection

no code implementations 22 Jul 2023 Fu Lin, Haonan Gong, Mingkang Li, Zitong Wang, Yue Zhang, Xuexiong Luo

Previous works have observed that abnormal graphs mainly exhibit node-level and graph-level anomalies, but these methods treat the two anomaly forms equally when evaluating abnormal graphs, which is contrary to the fact that different types of abnormal graph data differ in their degree of node-level and graph-level anomaly.

Zero-shot Query Reformulation for Conversational Search

1 code implementation 18 Jul 2023 Dayu Yang, Yue Zhang, Hui Fang

Nevertheless, existing zero-shot methods face three primary limitations: they are not universally applicable to all retrievers, their effectiveness lacks sufficient explainability, and they struggle to resolve common conversational ambiguities caused by omission.

Conversational Search Information Retrieval +2

An Exploration Study of Mixed-initiative Query Reformulation in Conversational Passage Retrieval

no code implementations 17 Jul 2023 Dayu Yang, Yue Zhang, Hui Fang

In this work, we aim to reproduce multi-stage retrieval pipelines and explore one of the potential benefits of involving mixed-initiative interaction in conversational passage retrieval scenarios: reformulating raw queries.

Passage Retrieval Retrieval

ConTrack: Contextual Transformer for Device Tracking in X-ray

no code implementations 14 Jul 2023 Marc Demoustier, Yue Zhang, Venkatesh Narasimha Murthy, Florin C. Ghesu, Dorin Comaniciu

Tracking the catheter tip poses distinct challenges: the tip can be occluded by contrast agent during angiography or by interventional devices, and it is in continuous movement due to cardiac and respiratory motion.

TVPR: Text-to-Video Person Retrieval and a New Benchmark

no code implementations 14 Jul 2023 Fan Ni, Xu Zhang, Jianhui Wu, Guan-Nan Dong, Aichun Zhu, Hui Liu, Yue Zhang

To the best of our knowledge, TVPRN is the first successful attempt to use video for text-based person retrieval task and has achieved state-of-the-art performance on TVPReid dataset.

Person Retrieval Retrieval +3

Revisiting Cross-Lingual Summarization: A Corpus-based Study and A New Benchmark with Improved Annotation

1 code implementation 8 Jul 2023 Yulong Chen, Huajian Zhang, Yijie Zhou, Xuefeng Bai, Yueguan Wang, Ming Zhong, Jianhao Yan, Yafu Li, Judy Li, Michael Zhu, Yue Zhang

Additionally, based on the same intuition, we propose a 2-Step method, which takes both the conversation and the summary as input to simulate the human annotation process.

A Survey on Evaluation of Large Language Models

1 code implementation 6 Jul 2023 Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie

Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications.

Ethics

Distributed Marker Representation for Ambiguous Discourse Markers and Entangled Relations

no code implementations 19 Jun 2023 Dongyu Ru, Lin Qiu, Xipeng Qiu, Yue Zhang, Zheng Zhang

Discourse analysis is an important task because it models intrinsic semantic structures between sentences in a document.

Opinion Tree Parsing for Aspect-based Sentiment Analysis

1 code implementation 15 Jun 2023 Xiaoyi Bao, Xiaotong Jiang, Zhongqing Wang, Yue Zhang, Guodong Zhou

To address these challenges, we propose an opinion tree parsing model, aiming to parse all the sentiment elements from an opinion tree, which is much faster, and can explicitly reveal a more comprehensive and complete aspect-level sentiment structure.

Sentiment Analysis

PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization

1 code implementation 8 Jun 2023 Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang

To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences.

Language Modelling Large Language Model

PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts

1 code implementation 7 Jun 2023 Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang, Xing Xie

The increasing reliance on Large Language Models (LLMs) across academia and industry necessitates a comprehensive understanding of their robustness to prompts.

Machine Translation Natural Language Inference +2

An AMR-based Link Prediction Approach for Document-level Event Argument Extraction

1 code implementation 30 May 2023 Yuqing Yang, Qipeng Guo, Xiangkun Hu, Yue Zhang, Xipeng Qiu, Zheng Zhang

Motivated by the fact that all event structures can be inferred from AMR, this work reformulates EAE as a link prediction problem on AMR graphs.

Event Argument Extraction Link Prediction +1

RFiD: Towards Rational Fusion-in-Decoder for Open-Domain Question Answering

1 code implementation 26 May 2023 Cunxiang Wang, Haofei Yu, Yue Zhang

Open-Domain Question Answering (ODQA) systems necessitate a reader model capable of generating answers by simultaneously referring to multiple passages.

Natural Questions Open-Domain Question Answering +1

Exploiting Abstract Meaning Representation for Open-Domain Question Answering

no code implementations 26 May 2023 Cunxiang Wang, Zhikun Xu, Qipeng Guo, Xiangkun Hu, Xuefeng Bai, Zheng Zhang, Yue Zhang

The Open-Domain Question Answering (ODQA) task involves retrieving and subsequently generating answers from fine-grained relevant passages within a database.

Natural Questions Open-Domain Question Answering +1

NaSGEC: a Multi-Domain Chinese Grammatical Error Correction Dataset from Native Speaker Texts

1 code implementation 25 May 2023 Yue Zhang, Bo Zhang, Haochen Jiang, Zhenghua Li, Chen Li, Fei Huang, Min Zhang

We introduce NaSGEC, a new dataset to facilitate research on Chinese grammatical error correction (CGEC) for native speaker texts from multiple domains.

Grammatical Error Correction

Out-of-Distribution Generalization in Text Classification: Past, Present, and Future

no code implementations 23 May 2023 Linyi Yang, Yaoxiao Song, Xuan Ren, Chenyang Lyu, Yidong Wang, Lingqiao Liu, Jindong Wang, Jennifer Foster, Yue Zhang

Machine learning (ML) systems in natural language processing (NLP) face significant challenges in generalizing to out-of-distribution (OOD) data, where the test distribution differs from the training data distribution.

Out-of-Distribution Generalization text-classification +1

EASE: An Easily-Customized Annotation System Powered by Efficiency Enhancement Mechanisms

no code implementations 23 May 2023 Naihao Deng, YiKai Liu, Mingye Chen, Winston Wu, Siyang Liu, Yulong Chen, Yue Zhang, Rada Mihalcea

Our results show that our system can meet the diverse needs of NLP researchers and significantly accelerate the annotation process.

Active Learning

Non-Autoregressive Document-Level Machine Translation (NA-DMT): Exploring Effective Approaches, Challenges, and Opportunities

1 code implementation 22 May 2023 Guangsheng Bao, Zhiyang Teng, Yue Zhang

Non-autoregressive translation (NAT) models have been extensively investigated within the context of sentence-level machine translation (MT) tasks, demonstrating comparable quality and superior translation speed when contrasted with autoregressive translation (AT) models.

Document Level Machine Translation Machine Translation +2

Multi-Task Instruction Tuning of LLaMa for Specific Scenarios: A Preliminary Study on Writing Assistance

no code implementations 22 May 2023 Yue Zhang, Leyang Cui, Deng Cai, Xinting Huang, Tao Fang, Wei Bi

ChatGPT and GPT-4 have attracted substantial interest from both academic and industrial circles, owing to their remarkable few-shot (or even zero-shot) ability to handle various tasks.

Instruction Following

Deepfake Text Detection in the Wild

1 code implementation 22 May 2023 Yafu Li, Qintong Li, Leyang Cui, Wei Bi, Longyue Wang, Linyi Yang, Shuming Shi, Yue Zhang

In practical scenarios, the detector faces texts from various domains or LLMs without knowing their sources.

Face Swapping Story Generation +1

Evaluating Open-QA Evaluation

1 code implementation 21 May 2023 Cunxiang Wang, Sirui Cheng, Qipeng Guo, Zhikun Xu, Bowen Ding, Yidong Wang, Xiangkun Hu, Zheng Zhang, Yue Zhang

This study focuses on the evaluation of the Open Question Answering (Open-QA) task, which can directly estimate the factuality of large language models (LLMs).

Open-Ended Question Answering

LogiCoT: Logical Chain-of-Thought Instruction-Tuning Data Collection with GPT-4

1 code implementation 20 May 2023 Hanmeng Liu, Zhiyang Teng, Leyang Cui, Chaoli Zhang, Qiji Zhou, Yue Zhang

LogiCoT serves as an instruction set for teaching models logical reasoning and for eliciting general reasoning skills.

Logical Reasoning Text Generation

Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion

no code implementations 20 May 2023 Yun Luo, Xiaotian Lin, Zhen Yang, Fandong Meng, Jie Zhou, Yue Zhang

Adapting the decision boundary to new representations is seldom considered. In this paper, we propose a Supervised Contrastive learning framework with an adaptive classification criterion for Continual Learning (SCCL), in which a contrastive loss is used to directly learn representations for different tasks and a limited number of data samples are saved as the classification criterion.

Classification Continual Learning +1

ALT: An Automatic System for Long Tail Scenario Modeling

no code implementations 19 May 2023 Ya-Lin Zhang, Jun Zhou, Yankun Ren, Yue Zhang, Xinxing Yang, Meng Li, Qitao Shi, Longfei Li

In this paper, we consider the problem of long tail scenario modeling with budget limitation, i.e., insufficient human resources for the model training stage and limited time and computing resources for the model inference stage.

Meta-Learning Neural Architecture Search +1

Chain-of-Symbol Prompting Elicits Planning in Large Language Models

1 code implementation 17 May 2023 Hanxu Hu, Hongyuan Lu, Huajian Zhang, Wai Lam, Yue Zhang

To this end, we propose a novel method called CoS (Chain-of-Symbol Prompting) that represents the complex environments with condensed symbolic spatial representations during the chained intermediate thinking steps.

Measuring Consistency in Text-based Financial Forecasting Models

1 code implementation 15 May 2023 Linyi Yang, Yingpeng Ma, Yue Zhang

Using FinTrust, we show that the consistency of state-of-the-art NLP models for financial forecasting is poor.

Learning to Generalize for Cross-domain QA

1 code implementation 14 May 2023 Yingjie Niu, Linyi Yang, Ruihai Dong, Yue Zhang

Our method has been theoretically and empirically shown to be effective in enhancing the generalization ability of both generative and discriminative models.

Data Augmentation Domain Generalization +1

Temporal Consistent Automatic Video Colorization via Semantic Correspondence

1 code implementation 13 May 2023 Yu Zhang, Siqi Chen, Mingdao Wang, Xianlin Zhang, Chuang Zhu, Yue Zhang, Xueming Li

Extensive experiments demonstrate that our method outperforms other methods in maintaining temporal consistency both qualitatively and quantitatively.

Colorization Image Colorization +1

Instance Smoothed Contrastive Learning for Unsupervised Sentence Embedding

1 code implementation 12 May 2023 Hongliang He, Junlei Zhang, Zhenzhong Lan, Yue Zhang

Contrastive learning-based methods, such as unsup-SimCSE, have achieved state-of-the-art (SOTA) performances in learning unsupervised sentence embeddings.

Contrastive Learning Semantic Similarity +5

Investigating Forgetting in Pre-Trained Representations Through Continual Learning

no code implementations 10 May 2023 Yun Luo, Zhen Yang, Xuefeng Bai, Fandong Meng, Jie Zhou, Yue Zhang

Intuitively, the representation forgetting can influence the general knowledge stored in pre-trained language models (LMs), but the concrete effect is still unclear.

Continual Learning General Knowledge

Token-Level Fitting Issues of Seq2seq Models

no code implementations 8 May 2023 Guangsheng Bao, Zhiyang Teng, Yue Zhang

Sequence-to-sequence (seq2seq) models have been widely used for natural language processing, computer vision, and other deep learning tasks.

Language Modelling

Target-Side Augmentation for Document-Level Machine Translation

1 code implementation 8 May 2023 Guangsheng Bao, Zhiyang Teng, Yue Zhang

Document-level machine translation faces the challenge of data sparsity due to its long input length and a small amount of training data, increasing the risk of learning spurious patterns.

Data Augmentation Document Level Machine Translation +2

A Curriculum View of Robust Loss Functions

no code implementations 3 May 2023 Zebin Ou, Yue Zhang

Robust loss functions are designed to combat the adverse impacts of label noise, whose robustness is typically supported by theoretical bounds agnostic to the training dynamics.
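
As one well-known concrete instance of such a loss (taken from the broader literature, not from this paper), the generalized cross entropy loss interpolates between cross entropy and MAE:

    import torch
    import torch.nn.functional as F

    def generalized_cross_entropy(logits, targets, q=0.7):
        # GCE loss (Zhang & Sabuncu, 2018): (1 - p_y^q) / q, a standard robust loss
        # that approaches cross entropy as q -> 0 and MAE at q = 1.
        p_y = F.softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
        return ((1.0 - p_y.clamp_min(1e-8) ** q) / q).mean()

    loss = generalized_cross_entropy(torch.randn(8, 10), torch.randint(0, 10, (8,)))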

Tailored Multi-Organ Segmentation with Model Adaptation and Ensemble

no code implementations 14 Apr 2023 Jiahua Dong, Guohua Cheng, Yue Zhang, Chengtao Peng, Yu Song, Ruofeng Tong, Lanfen Lin, Yen-Wei Chen

Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis.

Organ Segmentation

SPColor: Semantic Prior Guided Exemplar-based Image Colorization

no code implementations 13 Apr 2023 Siqi Chen, Xueming Li, Xianlin Zhang, Mingdao Wang, Yu Zhang, Yue Zhang

Previous methods search for correspondences across the entire reference image, and this type of global matching is prone to mismatches.

Colorization Image Colorization +1

Unified Multi-Modal Image Synthesis for Missing Modality Imputation

no code implementations 11 Apr 2023 Yue Zhang, Chengtao Peng, Qiuli Wang, Dan Song, Kaiyan Li, S. Kevin Zhou

Besides, we propose a Dynamic Feature Unification Module to integrate information from a varying number of available modalities, which enables the network to be robust to random missing modalities.
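
The Dynamic Feature Unification Module is not described in detail here; one simple way to picture fusing a varying set of available modalities is masked averaging, sketched below under our own assumptions about shapes and fusion:

    import torch

    def unify_features(features, available):
        # features: (batch, n_modalities, dim); available: (batch, n_modalities) bool mask
        mask = available.unsqueeze(-1).float()
        summed = (features * mask).sum(dim=1)
        count = mask.sum(dim=1).clamp(min=1.0)     # avoid division by zero
        return summed / count                      # mean over available modalities only

    feats = torch.randn(4, 3, 256)                 # e.g. features from three MR sequences
    avail = torch.tensor([[1, 1, 0]] * 4, dtype=torch.bool)  # third modality missing
    fused = unify_features(feats, avail)           # (4, 256)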

Anatomy Image Generation +1

GEMINI: Controlling the Sentence-level Writing Style for Abstractive Text Summarization

no code implementations 7 Apr 2023 Guangsheng Bao, Zebin Ou, Yue Zhang

Human experts write summaries using different techniques, including rewriting a sentence in the document or fusing multiple sentences to generate a summary sentence.

Abstractive Text Summarization Sentence ReWriting

Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4

1 code implementation 7 Apr 2023 Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, Yue Zhang

With the release of Generative Pretrained Transformer 4 (GPT-4), highlighted as "advanced" at reasoning tasks, we are eager to examine GPT-4's performance on various logical reasoning tasks.

Logical Reasoning Natural Language Inference +2

Is ChatGPT a Highly Fluent Grammatical Error Correction System? A Comprehensive Evaluation

no code implementations 4 Apr 2023 Tao Fang, Shu Yang, Kaixin Lan, Derek F. Wong, Jinpeng Hu, Lidia S. Chao, Yue Zhang

To showcase its capabilities in GEC, we design zero-shot chain-of-thought (CoT) and few-shot CoT settings using in-context learning for ChatGPT.
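
The paper's exact prompts are not reproduced here; the two settings can be pictured with hypothetical templates like the following (the wording is ours):

    def zero_shot_cot_prompt(sentence):
        # Zero-shot chain-of-thought: ask for step-by-step reasoning before the correction.
        return (
            "Correct the grammatical errors in the following sentence. "
            "Let's think step by step, then give the corrected sentence.\n"
            f"Sentence: {sentence}"
        )

    def few_shot_cot_prompt(examples, sentence):
        # examples: list of (erroneous, reasoning, corrected) demonstration triples
        demos = "\n\n".join(
            f"Sentence: {e}\nReasoning: {r}\nCorrection: {c}" for e, r, c in examples
        )
        return f"{demos}\n\nSentence: {sentence}\nReasoning:"

    print(zero_shot_cot_prompt("She go to school yesterday."))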

Grammatical Error Correction Language Modelling

Exemplar-based Video Colorization with Long-term Spatiotemporal Dependency

no code implementations 27 Mar 2023 Siqi Chen, Xueming Li, Xianlin Zhang, Mingdao Wang, Yu Zhang, Jiatong Han, Yue Zhang

Exemplar-based video colorization is an essential technique for applications like old movie restoration.

Colorization

RepoCoder: Repository-Level Code Completion Through Iterative Retrieval and Generation

no code implementations 22 Mar 2023 Fengji Zhang, Bei Chen, Yue Zhang, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, Weizhu Chen

It streamlines the repository-level code completion process by incorporating a similarity-based retriever and a pre-trained code language model, which allows for the effective utilization of repository-level information for code completion and grants the ability to generate code at various levels of granularity.
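
Viewed as a loop, the iterative retrieval-and-generation idea can be sketched as follows; the Jaccard retriever, the chunking, and the generate callable are placeholders of our own, not RepoCoder's actual components:

    def jaccard(a, b):
        # Token-overlap similarity between two code snippets.
        sa, sb = set(a.split()), set(b.split())
        return len(sa & sb) / max(len(sa | sb), 1)

    def retrieve(query, repo_chunks, k=3):
        # Similarity-based retrieval over code chunks from the same repository.
        return sorted(repo_chunks, key=lambda c: jaccard(query, c), reverse=True)[:k]

    def repo_level_complete(prompt, repo_chunks, generate, iterations=2):
        # generate: any callable wrapping a code language model.
        query = prompt
        completion = ""
        for _ in range(iterations):
            context = "\n".join(retrieve(query, repo_chunks))
            completion = generate(context + "\n" + prompt)
            query = prompt + "\n" + completion   # refine the next retrieval with the draft
        return completion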

Code Completion Language Modelling +1

Lung Nodule Segmentation and Uncertain Region Prediction with an Uncertainty-Aware Attention Mechanism

no code implementations 15 Mar 2023 Han Yang, Qiuli Wang, Yue Zhang, Zhulin An, Chen Liu, Xiaohong Zhang, S. Kevin Zhou

Radiologists possess diverse training and clinical experiences, leading to variations in the segmentation annotations of lung nodules and resulting in segmentation uncertainty. Conventional methods typically select a single annotation as the learning target or attempt to learn a latent space comprising multiple annotations.

Lung Nodule Segmentation

On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective

1 code implementation 22 Feb 2023 Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, Binxin Jiao, Yue Zhang, Xing Xie

In this paper, we conduct a thorough evaluation of the robustness of ChatGPT from the adversarial and out-of-distribution (OOD) perspective.

Adversarial Robustness Chatbot +1

VLN-Trans: Translator for the Vision and Language Navigation Agent

1 code implementation 18 Feb 2023 Yue Zhang, Parisa Kordjamshidi

The mentioned landmarks are not recognizable by the navigation agent due to the different vision abilities of the instructor and the modeled agent.

Vision and Language Navigation

GLUECons: A Generic Benchmark for Learning Under Constraints

1 code implementation 16 Feb 2023 Hossein Rajaby Faghihi, Aliakbar Nafar, Chen Zheng, Roshanak Mirzaee, Yue Zhang, Andrzej Uszok, Alexander Wan, Tanawan Premsri, Dan Roth, Parisa Kordjamshidi

Recent research has shown that integrating domain knowledge into deep learning architectures is effective -- it helps reduce the amount of required data, improves the accuracy of the models' decisions, and improves the interpretability of models.

Improving (Dis)agreement Detection with Inductive Social Relation Information From Comment-Reply Interactions

1 code implementation 8 Feb 2023 Yun Luo, Zihan Liu, Stan Z. Li, Yue Zhang

(Dis)agreement detection aims to identify the authors' attitudes or positions (agree, disagree, neutral) towards a specific text.

Knowledge Graph Embedding Language Modelling

Uniform tensor clustering by jointly exploring sample affinities of various orders

no code implementations 3 Feb 2023 Hongmin Cai, Fei Qi, Junyu Li, Yu Hu, Yue Zhang, Yiu-ming Cheung, Bin Hu

Conventional clustering methods based on pairwise affinity usually suffer from the concentration effect when processing high-dimensional features with low sample sizes, resulting in inaccurate encoding of sample proximity and suboptimal clustering performance.

Clustering

Learning 6-DoF Fine-grained Grasp Detection Based on Part Affordance Grounding

no code implementations 27 Jan 2023 Yaoxian Song, Penglei Sun, Yi Ren, Yu Zheng, Yue Zhang

To evaluate the effectiveness, we perform multi-level difficulty part language grounding grasping experiments and deploy our proposed model on a real robot.

Representation Learning Robotic Grasping

DC-MBR: Distributional Cooling for Minimum Bayesian Risk Decoding

no code implementations 8 Dec 2022 Jianhao Yan, Jin Xu, Fandong Meng, Jie Zhou, Yue Zhang

In this work, we show that the issue arises from the inconsistency of label smoothing between the token-level and sequence-level distributions.

Machine Translation NMT

UniSumm and SummZoo: Unified Model and Diverse Benchmark for Few-Shot Summarization

1 code implementation 17 Nov 2022 Yulong Chen, Yang Liu, Ruochen Xu, ZiYi Yang, Chenguang Zhu, Michael Zeng, Yue Zhang

The high annotation costs and diverse demands of various summarization tasks motivate the development of few-shot summarization.

GLUE-X: Evaluating Natural Language Understanding Models from an Out-of-distribution Generalization Perspective

1 code implementation 15 Nov 2022 Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang

Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase.

Natural Language Understanding Out-of-Distribution Generalization

CSynGEC: Incorporating Constituent-based Syntax for Grammatical Error Correction with a Tailored GEC-Oriented Parser

no code implementations 15 Nov 2022 Yue Zhang, Zhenghua Li

Recently, Zhang et al. (2022) propose a syntax-aware grammatical error correction (GEC) approach, named SynGEC, showing that incorporating tailored dependency-based syntax of the input sentence is quite beneficial to GEC.

Grammatical Error Correction

RLET: A Reinforcement Learning Based Approach for Explainable QA with Entailment Trees

1 code implementation 31 Oct 2022 Tengxiao Liu, Qipeng Guo, Xiangkun Hu, Yue Zhang, Xipeng Qiu, Zheng Zhang

RLET iteratively performs single-step reasoning with sentence selection and deduction generation modules, and the training signal is accumulated across the tree with an elaborately designed aligned reward function that is consistent with the evaluation.

reinforcement-learning Reinforcement Learning (RL)

Using Context-to-Vector with Graph Retrofitting to Improve Word Embeddings

1 code implementation ACL 2022 Jiangbin Zheng, Yile Wang, Ge Wang, Jun Xia, Yufei Huang, Guojiang Zhao, Yue Zhang, Stan Z. Li

Although contextualized embeddings generated from large-scale pre-trained models perform well in many tasks, traditional static embeddings (e.g., Skip-gram, Word2Vec) still play an important role in low-resource and lightweight settings due to their low computational cost, ease of deployment, and stability.

Word Embeddings

Cross-domain Generalization for AMR Parsing

1 code implementation 22 Oct 2022 Xuefeng Bai, Seng Yang, Leyang Cui, Linfeng Song, Yue Zhang

Based on our observation, we investigate two approaches to reduce the domain distribution divergence of text and AMR features, respectively.

AMR Parsing Domain Generalization

Multi-Granularity Optimization for Non-Autoregressive Translation

1 code implementation 20 Oct 2022 Yafu Li, Leyang Cui, Yongjing Yin, Yue Zhang

Despite low latency, non-autoregressive machine translation (NAT) suffers severe performance deterioration due to the naive independence assumption.

Machine Translation Translation

Prompting through Prototype: A Prototype-based Prompt Learning on Pretrained Vision-Language Models

no code implementations 19 Oct 2022 Yue Zhang, Hongliang Fei, Dingcheng Li, Tan Yu, Ping Li

In particular, we focus on few-shot image recognition tasks on pretrained vision-language models (PVLMs) and develop a method of prompting through prototype (PTP), where we define $K$ image prototypes and $K$ prompt prototypes.
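
At a high level, an image can be softly assigned to the image prototypes and the matching prompt prototypes combined accordingly; the sketch below uses our own assumed shapes, cosine similarity, and softmax weighting rather than the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def prototype_prompt(image_feat, image_protos, prompt_protos, temperature=0.1):
        # image_feat: (dim,); image_protos: (K, dim); prompt_protos: (K, prompt_len, dim)
        sims = F.cosine_similarity(image_feat.unsqueeze(0), image_protos, dim=-1)
        weights = torch.softmax(sims / temperature, dim=0)          # soft assignment over K
        return (weights[:, None, None] * prompt_protos).sum(dim=0)  # weighted prompt (prompt_len, dim)

    K, dim, plen = 4, 512, 8
    prompt = prototype_prompt(torch.randn(dim), torch.randn(K, dim), torch.randn(K, plen, dim))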

Few-Shot Learning

Denoising Enhanced Distantly Supervised Ultrafine Entity Typing

no code implementations 18 Oct 2022 Yue Zhang, Hongliang Fei, Ping Li

Specifically, we build a noise model to estimate the unknown labeling noise distribution over input contexts and noisy type labels.

Denoising Entity Typing

LOViS: Learning Orientation and Visual Signals for Vision and Language Navigation

1 code implementation COLING 2022 Yue Zhang, Parisa Kordjamshidi

Understanding spatial and visual information is essential for a navigation agent who follows natural language instructions.

Vision and Language Navigation

Semantic-based Pre-training for Dialogue Understanding

1 code implementation COLING 2022 Xuefeng Bai, Linfeng Song, Yue Zhang

However, these models are typically trained on surface dialogue text, thus are proven to be weak in understanding the main semantic meaning of a dialogue context.

Dialogue Understanding

Can Offline Reinforcement Learning Help Natural Language Understanding?

no code implementations 15 Sep 2022 Ziqi Zhang, Yile Wang, Yue Zhang, Donglin Wang

Experimental results show that our RL pre-trained models can give close performance compared with the models using the LM training objective, showing that there exist common useful features across these two modalities.

Language Modelling Natural Language Understanding +3

Pre-Training a Graph Recurrent Network for Language Representation

1 code implementation 8 Sep 2022 Yile Wang, Linyi Yang, Zhiyang Teng, Ming Zhou, Yue Zhang

Transformer-based pre-trained models have made great advances in recent years, becoming one of the most important backbones in natural language processing.

Language Modelling text-classification +1

Recent Advances in Text-to-SQL: A Survey of What We Have and What We Expect

1 code implementation COLING 2022 Naihao Deng, Yulong Chen, Yue Zhang

Text-to-SQL has attracted attention from both the natural language processing and database communities because of its ability to convert the semantics in natural language into SQL queries and its practical application in building natural language interfaces to database systems.

Text-To-SQL

Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings

no code implementations 20 Aug 2022 Yile Wang, Yue Zhang

We quantify how much the contextualized embeddings of each word sense vary across contexts in typical pre-trained models.
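
One way to quantify such sense-wise variance, given contextual vectors already grouped by word sense (the grouping and the averaging choice here are our own), is:

    import numpy as np

    def sense_variance(vectors_by_sense):
        # vectors_by_sense: dict mapping a sense id to an array of shape (n_contexts, dim).
        # Returns the mean per-dimension variance of contextual embeddings for each sense.
        return {
            sense: float(np.var(vecs, axis=0).mean())
            for sense, vecs in vectors_by_sense.items()
            if len(vecs) > 1
        }

    rng = np.random.default_rng(0)
    toy = {"bank%river": rng.normal(size=(10, 768)), "bank%money": rng.normal(size=(12, 768))}
    print(sense_variance(toy))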

Word Embeddings Word Sense Disambiguation

Mere Contrastive Learning for Cross-Domain Sentiment Analysis

1 code implementation COLING 2022 Yun Luo, Fang Guo, Zihan Liu, Yue Zhang

Cross-domain sentiment analysis aims to predict the sentiment of texts in the target domain using the model trained on the source domain to cope with the scarcity of labeled data.

Contrastive Learning Sentiment Analysis

Open Information Extraction from 2007 to 2022 -- A Survey

no code implementations 18 Aug 2022 Pai Liu, Wenyang Gao, Wenjie Dong, Songfang Huang, Yue Zhang

Open information extraction is an important NLP task that targets extracting structured information from unstructured text without limitations on the relation type or the domain of the text.

Open Information Extraction

DialogSum Challenge: Results of the Dialogue Summarization Shared Task

1 code implementation 8 Aug 2022 Yulong Chen, Naihao Deng, Yang Liu, Yue Zhang

We report the results of DialogSum Challenge, the shared task on summarizing real-life scenario dialogues at INLG 2022.

Modeling mandatory and discretionary lane changes using dynamic interaction networks

no code implementations 26 Jul 2022 Yue Zhang, Yajie Zou, Yuanchang Xie, Lei Chen

A quantitative understanding of dynamic lane-changing (LC) interaction patterns is indispensable for improving the decision-making of autonomous vehicles, especially in mixed traffic with human-driven vehicles.

Autonomous Vehicles Decision Making

A General Contextualized Rewriting Framework for Text Summarization

1 code implementation 13 Jul 2022 Guangsheng Bao, Yue Zhang

The rewriting method for text summarization combines extractive and abstractive approaches, improving the conciseness and readability of extractive summaries using an abstractive model.

reinforcement-learning Reinforcement Learning (RL) +2

A Graph Enhanced BERT Model for Event Prediction

no code implementations Findings (ACL) 2022 Li Du, Xiao Ding, Yue Zhang, Kai Xiong, Ting Liu, Bing Qin

To this end, we incorporate an additional structured variable into BERT to learn to predict the event connections in the training process.

The Cross-lingual Conversation Summarization Challenge

2 code implementations 1 May 2022 Yulong Chen, Ming Zhong, Xuefeng Bai, Naihao Deng, Jing Li, Xianchao Zhu, Yue Zhang

We propose the shared task of cross-lingual conversation summarization, the ConvSumX Challenge, opening new avenues for researchers to investigate solutions that integrate conversation summarization and machine translation.

Abstractive Dialogue Summarization Cross-Lingual Abstractive Summarization +3

MuCGEC: a Multi-Reference Multi-Source Evaluation Dataset for Chinese Grammatical Error Correction

1 code implementation NAACL 2022 Yue Zhang, Zhenghua Li, Zuyi Bao, Jiacheng Li, Bo Zhang, Chen Li, Fei Huang, Min Zhang

This paper presents MuCGEC, a multi-reference multi-source evaluation dataset for Chinese Grammatical Error Correction (CGEC), consisting of 7,063 sentences collected from three Chinese-as-a-Second-Language (CSL) learner sources.

Grammatical Error Correction

On Effectively Learning of Knowledge in Continual Pre-training

no code implementations 17 Apr 2022 Cunxiang Wang, Fuli Luo, Yanyang Li, Runxin Xu, Fei Huang, Yue Zhang

Pre-trained language models (PLMs) like BERT have made significant progress in various downstream NLP tasks.

Self-Supervised Learning

Towards Fine-grained Causal Reasoning and QA

1 code implementation 15 Apr 2022 Linyi Yang, Zhen Wang, Yuxiang Wu, Jie Yang, Yue Zhang

Understanding causality is key to the success of NLP applications, especially in high-stakes domains.

Question Answering

Challenges for Open-domain Targeted Sentiment Analysis

no code implementations 14 Apr 2022 Yun Luo, Hongjie Cai, Linyi Yang, Yanxia Qin, Rui Xia, Yue Zhang

Since previous studies on open-domain targeted sentiment analysis are limited in dataset domain variety and sentence level, we propose a novel dataset consisting of 6,013 human-labeled instances to extend the data domains to topics of interest and the document level.

Sentiment Analysis

A Rationale-Centric Framework for Human-in-the-loop Machine Learning

1 code implementation ACL 2022 Jinghui Lu, Linyi Yang, Brian Mac Namee, Yue Zhang

We present a novel rationale-centric framework with human-in-the-loop -- Rationales-centric Double-robustness Learning (RDL) -- to boost model out-of-distribution performance in few-shot learning scenarios.

BIG-bench Machine Learning Few-Shot Learning

Graph Pre-training for AMR Parsing and Generation

2 code implementations ACL 2022 Xuefeng Bai, Yulong Chen, Yue Zhang

To our knowledge, we are the first to consider pre-training on semantic graphs.

 Ranked #1 on AMR-to-Text Generation on Bio (BLEU metric, using extra training data)

AMR Parsing AMR-to-Text Generation +1

Towards Robust Online Dialogue Response Generation

no code implementations 7 Mar 2022 Leyang Cui, Fandong Meng, Yijin Liu, Jie Zhou, Yue Zhang

Although pre-trained sequence-to-sequence models have achieved great success in dialogue response generation, chatbots still suffer from generating inconsistent responses in real-world practice, especially in multi-turn settings.

Chatbot Re-Ranking +1

Do Prompts Solve NLP Tasks Using Natural Language?

no code implementations 2 Mar 2022 Sen Yang, Yunchen Zhang, Leyang Cui, Yue Zhang

Thanks to the advanced improvement of large pre-trained language models, prompt-based fine-tuning is shown to be effective on a variety of downstream tasks.

Revisiting QMIX: Discriminative Credit Assignment by Gradient Entropy Regularization

no code implementations 9 Feb 2022 Jian Zhao, Yue Zhang, Xunhan Hu, Weixun Wang, Wengang Zhou, Jianye Hao, Jiangcheng Zhu, Houqiang Li

In cooperative multi-agent systems, agents jointly take actions and receive a team reward instead of individual rewards.

NumHTML: Numeric-Oriented Hierarchical Transformer Model for Multi-task Financial Forecasting

no code implementations 5 Jan 2022 Linyi Yang, Jiazheng Li, Ruihai Dong, Yue Zhang, Barry Smyth

Financial forecasting has been an important and active area of machine learning research because of the challenges it presents and the potential rewards that even minor improvements in prediction accuracy or forecasting may entail.

NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation

2 code implementations 6 Dec 2021 Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Shrivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Tanya Goyal, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang

Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on.

Data Augmentation

Nonautoregressive Encoder-Decoder Neural Framework for End-to-End Aspect-Based Sentiment Triplet Extraction

no code implementations IEEE 2021 Hao Fei, Yafeng Ren, Yue Zhang, Donghong Ji

Aspect-based sentiment triplet extraction (ASTE) aims at recognizing the joint triplets from texts, i.e., aspect terms, opinion expressions, and correlated sentiment polarities.

Aspect Sentiment Triplet Extraction

WHU-NERCMS at TRECVID 2021: Instance Search Task

no code implementations 30 Oct 2021 Yanrui Niu, Jingyao Yang, Ankang Lu, Baojin Huang, Yue Zhang, Ji Huang, Shishi Wen, Dongshu Xu, Chao Liang, Zhongyuan Wang, Jun Chen

In this paper, we briefly introduce the experimental methods and results of WHU-NERCMS at TRECVID 2021.

Action Detection Face Detection +5

Confidence-Aware Active Feedback for Interactive Instance Search

1 code implementation 23 Oct 2021 Yue Zhang, Chao Liang, Longxiang Jiang

To address this issue, we propose a confidence-aware active feedback method (CAAF) that is specifically designed for online RF in interactive INS tasks.

Active Learning Instance Search +1

Entity Relation Extraction as Dependency Parsing in Visually Rich Documents

no code implementations EMNLP 2021 Yue Zhang, Bo Zhang, Rui Wang, Junjie Cao, Chen Li, Zuyi Bao

Previous works on key information extraction from visually rich documents (VRDs) mainly focus on labeling the text within each bounding box (i.e., semantic entity), while the relations in-between are largely unexplored.

Dependency Parsing Entity Linking +2

NAIL: A Challenging Benchmark for Naïve Logical Reasoning

no code implementations 29 Sep 2021 Xinbo Zhang, Changzhi Sun, Yue Zhang, Lei LI, Hao Zhou

Logical reasoning over natural text is an important capability towards human level intelligence.

Logical Reasoning

Investigating Non-local Features for Neural Constituency Parsing

1 code implementation ACL 2022 Leyang Cui, Sen Yang, Yue Zhang

In addition, our method achieves state-of-the-art BERT-based performance on PTB (95.92 F1) and strong performance on CTB (92.31 F1).

Constituency Parsing

Knowledge Enhanced Fine-Tuning for Better Handling Unseen Entities in Dialogue Generation

1 code implementation EMNLP 2021 Leyang Cui, Yu Wu, Shujie Liu, Yue Zhang

To deal with this problem, instead of introducing knowledge base as the input, we force the model to learn a better semantic representation by predicting the information in the knowledge base, only based on the input context.

Dialogue Generation Retrieval

Exploring Generalization Ability of Pretrained Language Models on Arithmetic and Logical Reasoning

no code implementations 15 Aug 2021 Cunxiang Wang, Boyuan Zheng, Yuchen Niu, Yue Zhang

To quantitatively and intuitively explore the generalization ability of pre-trained language models (PLMs), we have designed several tasks of arithmetic and logical reasoning.

Logical Reasoning

End-to-End AMR Coreference Resolution

1 code implementation ACL 2021 Qiankun Fu, Linfeng Song, Wenyu Du, Yue Zhang

Although parsing to Abstract Meaning Representation (AMR) has become very popular and AMR has been shown effective on many sentence-level downstream tasks, little work has studied how to generate AMRs that can represent multi-sentence information.

coreference-resolution Text Summarization

Understanding the merging behavior patterns and evolutionary mechanism at freeway on-ramps

no code implementations 31 Jul 2021 Yue Zhang, Yajie Zou, Lingtao Wu, Wanbing Han

This study develops a primitive-based framework to identify the driving patterns during merging processes and reveal the evolutionary mechanism at freeway on-ramps in congested traffic flow.

Autonomous Driving Decision Making +2

Supervised Off-Policy Ranking

1 code implementation 3 Jul 2021 Yue Jin, Yue Zhang, Tao Qin, Xudong Zhang, Jian Yuan, Houqiang Li, Tie-Yan Liu

Inspired by the two observations, in this work, we study a new problem, supervised off-policy ranking (SOPR), which aims to rank a set of target policies based on supervised learning by leveraging off-policy data and policies with known performance.
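
Concretely, one could featurize each policy by its behavior on a shared set of probe states, fit a ranker on the policies whose performance is known, and score the targets; the sketch below uses a generic regressor and made-up features, not the paper's method:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def policy_features(policy, probe_states):
        # Represent a policy by the actions it takes on shared probe states.
        return np.concatenate([np.atleast_1d(policy(s)) for s in probe_states])

    def rank_policies(known, targets, probe_states):
        # known: list of (policy, performance) pairs; targets: list of policies to rank.
        X = np.stack([policy_features(p, probe_states) for p, _ in known])
        y = np.array([perf for _, perf in known])
        model = GradientBoostingRegressor().fit(X, y)
        scores = model.predict(np.stack([policy_features(p, probe_states) for p in targets]))
        return sorted(zip(targets, scores), key=lambda t: t[1], reverse=True)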

Off-policy evaluation

Exploring the Efficacy of Automatically Generated Counterfactuals for Sentiment Analysis

1 code implementation ACL 2021 Linyi Yang, Jiazheng Li, Pádraig Cunningham, Yue Zhang, Barry Smyth, Ruihai Dong

While state-of-the-art NLP models have achieved excellent performance on a wide range of tasks in recent years, important questions are being raised about their robustness and their underlying sensitivity to systematic biases that may exist in their training and test data.

Data Augmentation Sentiment Analysis

Non-Point Visible Light Transmitter Localization based on Monocular Camera

no code implementations 29 Jun 2021 Hongxiu Zhao, Xun Zhang, Faouzi Bader, Yue Zhang

Many visible light positioning (VLP) localization algorithms do not consider the shapes of the transmitters, which makes them impractical and lowers localization accuracy.

Template-Based Named Entity Recognition Using BART

1 code implementation Findings (ACL) 2021 Leyang Cui, Yu Wu, Jian Liu, Sen Yang, Yue Zhang

To address the issue, we propose a template-based method for NER, treating NER as a language model ranking problem in a sequence-to-sequence framework, where original sentences and statement templates filled with candidate named entity spans are regarded as the source sequence and the target sequence, respectively.
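
The ranking idea can be sketched with any off-the-shelf seq2seq model by scoring a filled template with the model's token-level loss; the snippet below uses a generic BART checkpoint and a simplified template of our own, not the paper's exact setup:

    import torch
    from transformers import BartForConditionalGeneration, BartTokenizer

    tok = BartTokenizer.from_pretrained("facebook/bart-base")      # downloads a public checkpoint
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base").eval()

    def template_score(sentence, span, label):
        # Higher score = the statement template is more plausible for this candidate span.
        target = f"{span} is a {label} entity."                    # simplified statement template
        enc = tok(sentence, return_tensors="pt")
        dec = tok(target, return_tensors="pt")
        with torch.no_grad():
            loss = model(input_ids=enc.input_ids, labels=dec.input_ids).loss
        return -loss.item()                                        # negative average NLL of the template

    print(template_score("Yue Zhang works at Westlake University.", "Westlake University", "organization"))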

Few-shot NER Language Modelling +2

Can Generative Pre-trained Language Models Serve as Knowledge Bases for Closed-book QA?

1 code implementation ACL 2021 Cunxiang Wang, Pai Liu, Yue Zhang

Recent work has investigated the interesting question of using pre-trained language models (PLMs) as knowledge bases for answering open questions.

Question Answering

A Unified Span-Based Approach for Opinion Mining with Syntactic Constituents

1 code implementation NAACL 2021 Qingrong Xia, Bo Zhang, Rui Wang, Zhenghua Li, Yue Zhang, Fei Huang, Luo Si, Min Zhang

Fine-grained opinion mining (OM) has attracted increasing attention in the natural language processing (NLP) community; it aims to find the opinion structures of "Who expressed what opinions towards what" in a sentence.

Multi-Task Learning Opinion Mining

G-Transformer for Document-level Machine Translation

1 code implementation ACL 2021 Guangsheng Bao, Yue Zhang, Zhiyang Teng, Boxing Chen, Weihua Luo

However, studies show that when the translation unit is further enlarged to a whole document, supervised training of the Transformer can fail.

Document Level Machine Translation Inductive Bias +2

On Compositional Generalization of Neural Machine Translation

1 code implementation ACL 2021 Yafu Li, Yongjing Yin, Yulong Chen, Yue Zhang

Modern neural machine translation (NMT) models have achieved competitive performance in standard benchmarks such as WMT.

Domain Generalization Machine Translation +2

V2V Spatiotemporal Interactive Pattern Recognition and Risk Analysis in Lane Changes

no code implementations 22 May 2021 Yue Zhang, Yajie Zou, Lingtao Wu

This study explores the spatiotemporal evolution law and risk formation mechanism of the LC interactive patterns and the findings are useful for comprehensively understanding the latent interactive patterns, improving the rationality and safety of autonomous vehicle's decision-making.

Autonomous Vehicles Clustering +2

Semantic Representation for Dialogue Modeling

1 code implementation ACL 2021 Xuefeng Bai, Yulong Chen, Linfeng Song, Yue Zhang

Although neural models have achieved competitive results in dialogue systems, they have shown limited ability in representing core semantics, such as ignoring important entities.

Dialog Relation Extraction Dialogue Understanding +1

Lexicon Enhanced Chinese Sequence Labeling Using BERT Adapter

1 code implementation ACL 2021 Wei Liu, Xiyan Fu, Yue Zhang, Wenming Xiao

Lexicon information and pre-trained models, such as BERT, have been combined to explore Chinese sequence labelling tasks due to their respective strengths.

named-entity-recognition Named Entity Recognition +2

Towards Navigation by Reasoning over Spatial Configurations

no code implementations ACL (splurobonlp) 2021 Yue Zhang, Quan Guo, Parisa Kordjamshidi

Additionally, the experimental results demonstrate that explicit modeling of spatial semantic elements in the instructions can improve the grounding and spatial reasoning of the model.

Structural Adapters in Pretrained Language Models for AMR-to-text Generation

1 code implementation EMNLP 2021 Leonardo F. R. Ribeiro, Yue Zhang, Iryna Gurevych

Pretrained language models (PLM) have recently advanced graph-to-text generation, where the input graph is linearized into a sequence and fed into the PLM to obtain its representation.

AMR-to-Text Generation Data-to-Text Generation

Constrained Text Generation with Global Guidance -- Case Study on CommonGen

no code implementations 12 Mar 2021 Yixian Liu, Liwen Zhang, Wenjuan Han, Yue Zhang, Kewei Tu

We focus on CommonGen, the task of generating text based on a set of concepts, as a representative task of constrained text generation.

Common Sense Reasoning reinforcement-learning +2