1 code implementation • SIGDIAL (ACL) 2022 • Symon Stevens-Guille, Aleksandre Maskharashvili, Xintong Li, Michael White
Our results suggest that including discourse relation information in the input of the model significantly improves the consistency with which it produces a correctly realized discourse relation in the output.
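To make the idea concrete, here is a minimal sketch of discourse-relation-marked input for a neural realizer; the tokens and linearization scheme are illustrative assumptions, not the paper's exact format.

```python
# Hypothetical sketch: mark the discourse relation explicitly in the
# linearized input of a seq2seq realizer, so the model is told which
# relation to express. Token names are illustrative assumptions.
def linearize(relation, arg1_facts, arg2_facts):
    """Build a flat input string with the relation marked up front."""
    arg1 = " ".join(f"{k}={v}" for k, v in arg1_facts)
    arg2 = " ".join(f"{k}={v}" for k, v in arg2_facts)
    return f"<rel:{relation}> <arg1> {arg1} <arg2> {arg2}"

src = linearize("CONTRAST",
                [("name", "Aromi"), ("food", "Italian")],
                [("rating", "low")])
print(src)  # <rel:CONTRAST> <arg1> name=Aromi food=Italian <arg2> rating=low
```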
no code implementations • Xintong Li, Lemao Liu, Guanlin Li, Max Meng, Shuming Shi
We find that although NMT models struggle to capture word alignment for CFT words, these words do not significantly degrade translation quality, which explains why NMT is more successful at translation yet worse at word alignment compared to statistical machine translation.
1 code implementation • ACL (WebNLG, INLG) 2020 • Xintong Li, Aleksandre Maskharashvili, Symon Jory Stevens-Guille, Michael White
In this paper, we report experiments on finetuning large pretrained models to realize resource description framework (RDF) triples to natural language.
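A hedged sketch of the general recipe with Hugging Face Transformers; the checkpoint and the triple linearization below are assumptions, not necessarily the paper's setup.

```python
# Realizing RDF triples with a pretrained seq2seq model (sketch).
# In practice the model is first finetuned on RDF-text pairs.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

triples = [("John_Doe", "birthPlace", "London"),
           ("John_Doe", "occupation", "Engineer")]
# Illustrative linearization: subject/predicate/object marker tokens.
src = " ".join(f"<S> {s} <P> {p} <O> {o}" for s, p, o in triples)

inputs = tokenizer(src, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```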
1 code implementation • INLG (ACL) 2020 • Symon Stevens-Guille, Aleksandre Maskharashvili, Amy Isard, Xintong Li, Michael White
While classic NLG systems typically made use of hierarchically structured content plans that included discourse relations as central components, more recent neural approaches have mostly mapped simple, flat inputs to texts without representing discourse relations explicitly.
1 code implementation • INLG (ACL) 2021 • Aleksandre Maskharashvili, Symon Stevens-Guille, Xintong Li, Michael White
Recent developments in natural language generation (NLG) have bolstered arguments in favor of re-introducing explicit coding of discourse relations in the input to neural models.
2 code implementations • INLG (ACL) 2021 • Xintong Li, Symon Stevens-Guille, Aleksandre Maskharashvili, Michael White
Neural approaches to natural language generation in task-oriented dialogue have typically required large amounts of annotated training data to achieve satisfactory performance, especially when generating from compositional inputs.
no code implementations • EMNLP 2021 • Soumya Batra, Shashank Jain, Peyman Heidari, Ankit Arun, Catharine Youngs, Xintong Li, Pinar Donmez, Shawn Mei, Shiunzu Kuo, Vikas Bhardwaj, Anuj Kumar, Michael White
We propose a novel framework to train models to classify acceptability of responses generated by natural language generation (NLG) models, improving upon existing sentence transformation and model-based approaches.
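As a rough illustration of the task (not the paper's framework), an acceptability classifier can be set up as a sentence-pair model over the NLG input and the generated response; the checkpoint and pairing scheme here are assumptions.

```python
# Sketch: binary acceptability classification of (input MR, response)
# pairs. The classification head is untrained here; it would be
# finetuned on labeled acceptable/unacceptable pairs first.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
clf = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)  # 0 = unacceptable, 1 = acceptable

mr = "inform(name=Aromi, food=Italian)"
response = "Aromi serves Italian food."
batch = tok(mr, response, return_tensors="pt")  # sentence-pair encoding
print(clf(**batch).logits.softmax(-1))
```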
no code implementations • 30 Aug 2022 • Nicholas Roberts, Xintong Li, Tzu-Heng Huang, Dyah Adila, Spencer Schoenberg, Cheng-Yu Liu, Lauren Pick, Haotian Ma, Aws Albarghouthi, Frederic Sala
While it has been used successfully in many domains, weak supervision's application scope is limited by the difficulty of constructing labeling functions for domains with complex or high-dimensional features.
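For readers unfamiliar with weak supervision, here is a toy Snorkel-style example of labeling functions with a simple vote-based combiner; it also shows why such functions are easy to write for text yet hard for complex or high-dimensional features (the rules and labels are toy assumptions).

```python
# Toy labeling functions for sentiment over raw text.
ABSTAIN, NEG, POS = -1, 0, 1

def lf_great(text):
    return POS if "great" in text.lower() else ABSTAIN

def lf_terrible(text):
    return NEG if "terrible" in text.lower() else ABSTAIN

def majority_vote(text, lfs):
    votes = [v for v in (lf(text) for lf in lfs) if v != ABSTAIN]
    return max(set(votes), key=votes.count) if votes else ABSTAIN

print(majority_vote("A great little film.", [lf_great, lf_terrible]))  # 1
# For images or dense embeddings there is no analogous keyword test,
# which is the difficulty this line of work targets.
```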
2 code implementations • NAACL (ACL) 2022 • Hui Zhang, Tian Yuan, Junkun Chen, Xintong Li, Renjie Zheng, Yuxin Huang, Xiaojie Chen, Enlei Gong, Zeyu Chen, Xiaoguang Hu, Dianhai Yu, Yanjun Ma, Liang Huang
PaddleSpeech is an open-source all-in-one speech toolkit.
Automatic Speech Recognition (ASR) • Environmental Sound Classification • +9
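For reference, a minimal usage example based on the project's documented Python API (module paths and defaults may vary across PaddleSpeech versions):

```python
# Transcribe a 16 kHz mono WAV file with PaddleSpeech's ASR executor.
from paddlespeech.cli.asr.infer import ASRExecutor

asr = ASRExecutor()
text = asr(audio_file="input_16k.wav")  # placeholder path
print(text)
```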
2 code implementations • 18 Mar 2022 • He Bai, Renjie Zheng, Junkun Chen, Xintong Li, Mingbo Ma, Liang Huang
Recently, speech representation learning has improved many speech-related tasks such as speech recognition, speech classification, and speech-to-text translation.
no code implementations • 14 Feb 2022 • Jian Wu, Wanli Liu, Chen Li, Tao Jiang, Islam Mohammad Shariful, Hongzan Sun, Xiaoqi Li, Xintong Li, Xinyu Huang, Marcin Grzegorzek
Image analysis technology is used to overcome the shortcomings of traditional manual methods in the analysis of disease, wastewater treatment, and environmental change monitoring, and convolutional neural networks (CNNs) play an important role in microscopic image analysis.
no code implementations • 21 Jan 2022 • Xiaoqi Li, HaoYuan Chen, Chen Li, Md Mamunur Rahaman, Xintong Li, Jian Wu, Xiaoyan Li, Hongzan Sun, Marcin Grzegorzek
In the past ten years, the computing power of machine vision (MV) has continuously improved, and image analysis algorithms have developed rapidly.
no code implementations • 13 Apr 2021 • Xintong Li, Weiming Hu, Chen Li, Tao Jiang, Hongzan Sun, Xiaoyan Li, Xinyu Huang, Marcin Grzegorzek
Finally, the application prospects of the analytical methods in this field are discussed.
no code implementations • 10 Mar 2021 • Changwei Zou, Zhenqi Hao, Xiangyu Luo, Shusen Ye, Qiang Gao, Xintong Li, Miao Xu, Peng Cai, Chengtian Lin, Xingjiang Zhou, Dung-Hai Lee, Yayu Wang
To elucidate the superconductor-to-metal transition at the end of the superconducting dome, the overdoped regime has recently stepped onto the center stage of cuprate research.
Superconductivity
no code implementations • 21 Feb 2021 • Chen Li, Xintong Li, Md Rahaman, Xiaoyan Li, Hongzan Sun, Hong Zhang, Yong Zhang, Xiaoqi Li, Jian Wu, YuDong Yao, Marcin Grzegorzek
This paper reviews the methods of WSI analysis based on machine learning.
no code implementations • 7 Jan 2020 • Philip A. Collender, Zhiyue Tom Hu, Charles Li, Qu Cheng, Xintong Li, Yue You, Song Liang, Changhong Yang, Justin V. Remais
Approximate string-matching methods to account for complex variation in highly discriminatory text fields, such as personal names, can enhance probabilistic record linkage.
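A stdlib-only sketch of the idea: score candidate name pairs with an approximate similarity and link those above a threshold (the similarity measure and threshold are illustrative assumptions, not the paper's method).

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Similarity in [0, 1]; tolerant of typos and small edits."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

pairs = [("Catherine Smith", "Katherine Smyth"),
         ("Jon Doe", "Jane Roe")]
for a, b in pairs:
    sim = name_similarity(a, b)
    print(f"{a!r} ~ {b!r}: {sim:.2f}",
          "link" if sim >= 0.85 else "no link")
```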
no code implementations • ACL 2020 • Xintong Li, Lemao Liu, Rui Wang, Guoping Huang, Max Meng
This paper first provides a method to identify source and target contexts and then introduces a gate mechanism to control the source and target contributions in the Transformer.
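A minimal PyTorch sketch of such a gate (module and variable names are mine, not the paper's): a sigmoid gate interpolates between the source context and the target context at each position.

```python
import torch
import torch.nn as nn

class ContextGate(nn.Module):
    """Sigmoid gate that mixes source context s and target context t."""
    def __init__(self, d_model):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, s, t):
        g = torch.sigmoid(self.gate(torch.cat([s, t], dim=-1)))
        return g * s + (1.0 - g) * t

gate = ContextGate(512)
s = torch.randn(2, 10, 512)  # source-side context
t = torch.randn(2, 10, 512)  # target-side context
fused = gate(s, t)           # shape (2, 10, 512)
```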
no code implementations • ACL 2019 • Xintong Li, Guanlin Li, Lemao Liu, Max Meng, Shuming Shi
Prior research suggests that neural machine translation (NMT) captures word alignment through its attention mechanism; however, this paper finds that attention may almost fail to capture word alignment for some NMT models.
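The usual diagnostic behind such analyses (shown here as a sketch, not the paper's exact procedure) extracts an alignment by linking each target word to its most-attended source word and scores it with Alignment Error Rate:

```python
import numpy as np

def alignment_from_attention(attn):
    """attn: (tgt_len, src_len) weights for one sentence pair."""
    return {(t, int(np.argmax(row))) for t, row in enumerate(attn)}

def aer(hyp, sure, possible):
    """Alignment Error Rate; `possible` is assumed to contain `sure`."""
    return 1.0 - (len(hyp & sure) + len(hyp & possible)) / (len(hyp) + len(sure))

attn = np.array([[0.8, 0.1, 0.1],
                 [0.2, 0.7, 0.1],
                 [0.1, 0.2, 0.7]])
hyp = alignment_from_attention(attn)
print(aer(hyp, sure={(0, 0), (1, 1)}, possible={(0, 0), (1, 1), (2, 2)}))  # 0.0
```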
no code implementations • NAACL 2019 • Guanlin Li, Lemao Liu, Xintong Li, Conghui Zhu, Tiejun Zhao, Shuming Shi
Multilayer architectures are currently the gold standard for large-scale neural machine translation.
no code implementations • NAACL 2018 • Xintong Li, Lemao Liu, Zhaopeng Tu, Shuming Shi, Max Meng
In neural machine translation, an attention model is used to identify the aligned source words for a target word (the target foresight word) in order to select translation context, but it makes no use of any information about this target foresight word.
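One way to picture the proposed direction (a sketch under my own naming, not the paper's exact model): additive attention whose score additionally conditions on an embedding of the target foresight word.

```python
import torch
import torch.nn as nn

class ForesightAttention(nn.Module):
    """Additive attention that also sees the foresight word embedding."""
    def __init__(self, d):
        super().__init__()
        self.w_h = nn.Linear(d, d, bias=False)  # encoder states
        self.w_s = nn.Linear(d, d, bias=False)  # decoder state
        self.w_y = nn.Linear(d, d, bias=False)  # foresight word embedding
        self.v = nn.Linear(d, 1, bias=False)

    def forward(self, enc, dec, y_hat):
        # enc: (B, S, d); dec: (B, d); y_hat: (B, d)
        scores = self.v(torch.tanh(
            self.w_h(enc) + (self.w_s(dec) + self.w_y(y_hat)).unsqueeze(1)))
        weights = scores.squeeze(-1).softmax(-1)                 # (B, S)
        return torch.bmm(weights.unsqueeze(1), enc).squeeze(1)   # context (B, d)
```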