Search Results for author: Xintong Li

Found 20 papers, 7 papers with code

Generating Discourse Connectives with Pre-trained Language Models: Conditioning on Discourse Relations Helps Reconstruct the PDTB

1 code implementation • SIGDIAL (ACL) 2022 • Symon Stevens-Guille, Aleksandre Maskharashvili, Xintong Li, Michael White

Our results suggest that including discourse relation information in the input of the model significantly improves the consistency with which it produces a correctly realized discourse relation in the output.

Relation Text Generation
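
As a rough illustration of the conditioning idea in the paper above, here is a minimal sketch (an assumed format, not the authors' actual code) of prepending a discourse relation label as a control token to the input of a pretrained seq2seq model:

```python
# Minimal sketch (assumption, not the authors' code): condition a pretrained
# seq2seq model on a discourse relation by prepending it as a control token.

RELATIONS = ["Contrast", "Cause", "Conjunction"]  # hypothetical label set

def add_relation(source_text: str, relation: str) -> str:
    """Prefix the input with the discourse relation the output should realize."""
    assert relation in RELATIONS, f"unknown relation: {relation}"
    return f"<relation:{relation}> {source_text}"

print(add_relation("He was tired. He kept working.", "Contrast"))
# -> "<relation:Contrast> He was tired. He kept working."
```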

On the Relationship between Neural Machine Translation and Word Alignment

no code implementations • Xintong Li, Lemao Liu, Guanlin Li, Max Meng, Shuming Shi

We find that although NMT models struggle to capture word alignment for CFT words, these words do not significantly sacrifice translation quality, which helps explain why NMT is more successful for translation yet worse for word alignment than statistical machine translation.

Machine Translation NMT +2

Leveraging Large Pretrained Models for WebNLG 2020

1 code implementation • ACL (WebNLG, INLG) 2020 • Xintong Li, Aleksandre Maskharashvili, Symon Jory Stevens-Guille, Michael White

In this paper, we report experiments on finetuning large pretrained models to realize resource description framework (RDF) triples in natural language.
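
For illustration, a minimal sketch of one common way to feed RDF triples to a pretrained seq2seq model: linearizing them into a flat string. The delimiter tokens here are assumptions, not necessarily the paper's format:

```python
# Minimal sketch (assumed delimiters, not the WebNLG pipeline itself):
# linearize RDF triples into one string a seq2seq model can be finetuned on.

def linearize(triples):
    """Turn (subject, predicate, object) triples into a single input string."""
    return " ".join(f"<S> {s} <P> {p} <O> {o}" for s, p, o in triples)

triples = [("Alan_Bean", "birthPlace", "Wheeler,_Texas")]
print(linearize(triples))
# -> "<S> Alan_Bean <P> birthPlace <O> Wheeler,_Texas"
```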

Neural NLG for Methodius: From RST Meaning Representations to Texts

1 code implementation • INLG (ACL) 2020 • Symon Stevens-Guille, Aleksandre Maskharashvili, Amy Isard, Xintong Li, Michael White

While classic NLG systems typically made use of hierarchically structured content plans that included discourse relations as central components, more recent neural approaches have mostly mapped simple, flat inputs to texts without representing discourse relations explicitly.

Sentence

Neural Methodius Revisited: Do Discourse Relations Help with Pre-Trained Models Too?

1 code implementation • INLG (ACL) 2021 • Aleksandre Maskharashvili, Symon Stevens-Guille, Xintong Li, Michael White

Recent developments in natural language generation (NLG) have bolstered arguments in favor of re-introducing explicit coding of discourse relations in the input to neural models.

Relation Text Generation

Self-Training for Compositional Neural NLG in Task-Oriented Dialogue

2 code implementations • INLG (ACL) 2021 • Xintong Li, Symon Stevens-Guille, Aleksandre Maskharashvili, Michael White

Neural approaches to natural language generation in task-oriented dialogue have typically required large amounts of annotated training data to achieve satisfactory performance, especially when generating from compositional inputs.

Text Generation

Building Adaptive Acceptability Classifiers for Neural NLG

no code implementations • EMNLP 2021 • Soumya Batra, Shashank Jain, Peyman Heidari, Ankit Arun, Catharine Youngs, Xintong Li, Pinar Donmez, Shawn Mei, Shiunzu Kuo, Vikas Bhardwaj, Anuj Kumar, Michael White

We propose a novel framework to train models to classify acceptability of responses generated by natural language generation (NLG) models, improving upon existing sentence transformation and model-based approaches.

Sentence Synthetic Data Generation +1

AutoWS-Bench-101: Benchmarking Automated Weak Supervision with 100 Labels

no code implementations • 30 Aug 2022 • Nicholas Roberts, Xintong Li, Tzu-Heng Huang, Dyah Adila, Spencer Schoenberg, Cheng-Yu Liu, Lauren Pick, Haotian Ma, Aws Albarghouthi, Frederic Sala

While it has been used successfully in many domains, weak supervision's application scope is limited by the difficulty of constructing labeling functions for domains with complex or high-dimensional features.

Benchmarking
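
For readers unfamiliar with weak supervision, here is a minimal illustrative labeling function in the Snorkel style (not taken from AutoWS-Bench-101). Such rules are easy to write for text but hard to construct for high-dimensional inputs like images, which is exactly the difficulty the benchmark probes:

```python
# Minimal sketch of a weak-supervision labeling function (illustrative only):
# a noisy heuristic that votes SPAM / NOT_SPAM or abstains on each example.

SPAM, NOT_SPAM, ABSTAIN = 1, 0, -1

def lf_contains_link(text: str) -> int:
    """Vote SPAM if the message contains a URL, otherwise abstain."""
    return SPAM if "http" in text.lower() else ABSTAIN

print(lf_contains_link("Click http://example.com now!"))  # -> 1 (SPAM)
print(lf_contains_link("See you tomorrow."))              # -> -1 (ABSTAIN)
```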

A$^3$T: Alignment-Aware Acoustic and Text Pretraining for Speech Synthesis and Editing

2 code implementations • 18 Mar 2022 • He Bai, Renjie Zheng, Junkun Chen, Xintong Li, Mingbo Ma, Liang Huang

Recently, speech representation learning has improved many speech-related tasks such as speech recognition, speech classification, and speech-to-text translation.

Representation Learning Speaker Verification +5

A State-of-the-art Survey of U-Net in Microscopic Image Analysis: from Simple Usage to Structure Mortification

no code implementations • 14 Feb 2022 • Jian Wu, Wanli Liu, Chen Li, Tao Jiang, Islam Mohammad Shariful, Hongzan Sun, Xiaoqi Li, Xintong Li, Xinyu Huang, Marcin Grzegorzek

Image analysis technology is used to overcome the shortcomings of traditional manual methods in disease diagnosis, wastewater treatment, and environmental change monitoring, and convolutional neural networks (CNNs) play an important role in microscopic image analysis.

Image Segmentation Segmentation +1

What Can Machine Vision Do for Lymphatic Histopathology Image Analysis: A Comprehensive Review

no code implementations • 21 Jan 2022 • Xiaoqi Li, HaoYuan Chen, Chen Li, Md Mamunur Rahaman, Xintong Li, Jian Wu, Xiaoyan Li, Hongzan Sun, Marcin Grzegorzek

In the past ten years, the computing power of machine vision (MV) has been continuously improved, and image analysis algorithms have developed rapidly.

Particle-hole asymmetric superconducting coherence peaks in overdoped cuprates

no code implementations • 10 Mar 2021 • Changwei Zou, Zhenqi Hao, Xiangyu Luo, Shusen Ye, Qiang Gao, Xintong Li, Miao Xu, Peng Cai, Chengtian Lin, Xingjiang Zhou, Dung-Hai Lee, Yayu Wang

To elucidate the superconductor-to-metal transition at the end of the superconducting dome, the overdoped regime has recently stepped onto the center stage of cuprate research.

Superconductivity

Regularized Context Gates on Transformer for Machine Translation

no code implementations • ACL 2020 • Xintong Li, Lemao Liu, Rui Wang, Guoping Huang, Max Meng

This paper first provides a method to identify source and target contexts and then introduces a gate mechanism to control the source and target contributions in the Transformer.

Machine Translation NMT +1
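
For context, here is a minimal sketch of a context gate in the style of Tu et al., the kind of mechanism this paper regularizes. The dimensions and module names are assumptions, not the paper's code:

```python
import torch
import torch.nn as nn

# Minimal sketch of a context gate (assumed form, not the paper's code):
# a learned gate that interpolates between source and target context vectors.
class ContextGate(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, source_ctx, target_ctx):
        # z in (0, 1) decides, per dimension, how much source vs. target
        # context flows into the next-word prediction.
        z = torch.sigmoid(self.gate(torch.cat([source_ctx, target_ctx], dim=-1)))
        return z * source_ctx + (1.0 - z) * target_ctx

gate = ContextGate(d_model=8)
out = gate(torch.randn(2, 8), torch.randn(2, 8))
print(out.shape)  # torch.Size([2, 8])
```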

On the Word Alignment from Neural Machine Translation

no code implementations • ACL 2019 • Xintong Li, Guanlin Li, Lemao Liu, Max Meng, Shuming Shi

Prior research suggests that neural machine translation (NMT) captures word alignment through its attention mechanism; however, this paper finds that attention may almost fail to capture word alignment for some NMT models.

Machine Translation NMT +2
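
As background, the standard recipe this paper questions reads a word alignment off the attention matrix by taking, for each target word, the source position with the highest attention weight. A minimal sketch:

```python
import torch

# Minimal sketch of the common attention-based alignment extraction recipe
# (illustrative; attention weights here are random, not from a trained model).
attn = torch.softmax(torch.randn(4, 6), dim=-1)  # [target_len, source_len]
alignment = attn.argmax(dim=-1)                   # aligned source index per target word
print(alignment)  # e.g. tensor([2, 0, 5, 3])
```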

Target Foresight Based Attention for Neural Machine Translation

no code implementations • NAACL 2018 • Xintong Li, Lemao Liu, Zhaopeng Tu, Shuming Shi, Max Meng

In neural machine translation, an attention model is used to identify the aligned source words for a target word (the target foresight word) in order to select translation context, but it does not make use of any information about this target foresight word at all.

Language Modelling Machine Translation +1
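
A hedged sketch of the general idea: an additive attention whose score additionally conditions on an embedding of the (approximated) target foresight word, which standard attention ignores. This is an illustration under assumed shapes and names, not the paper's implementation:

```python
import torch
import torch.nn as nn

# Minimal sketch (assumption, not the paper's code): additive attention whose
# score also sees an embedding of the target foresight word.
class ForesightAttention(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.w_src = nn.Linear(d, d)
        self.w_dec = nn.Linear(d, d)
        self.w_fore = nn.Linear(d, d)
        self.v = nn.Linear(d, 1)

    def forward(self, src_states, dec_state, foresight_emb):
        # Score each source position using the decoder state AND the
        # foresight word embedding, then normalize over source positions.
        s = torch.tanh(self.w_src(src_states)
                       + self.w_dec(dec_state).unsqueeze(1)
                       + self.w_fore(foresight_emb).unsqueeze(1))
        return torch.softmax(self.v(s).squeeze(-1), dim=-1)

attn = ForesightAttention(d=8)
weights = attn(torch.randn(2, 5, 8), torch.randn(2, 8), torch.randn(2, 8))
print(weights.shape)  # torch.Size([2, 5])
```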
