1 code implementation • Findings of the Association for Computational Linguistics 2020 • Xiangci Li, Hairong Liu, Liang Huang
Existing natural language processing systems are vulnerable to noisy inputs resulting from misspellings.
no code implementations • ACL 2020 • Baigong Zheng, Kaibo Liu, Renjie Zheng, Mingbo Ma, Hairong Liu, Liang Huang
Adaptive policies are better than fixed policies for simultaneous translation, since they can flexibly balance the trade-off between translation quality and latency based on the current context.
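As an illustration of the distinction drawn here, the sketch below contrasts a fixed wait-k schedule with an adaptive READ/WRITE decision driven by model confidence. It is a minimal sketch, not the paper's implementation; the function names and the confidence threshold are assumptions introduced for illustration.

```python
# Minimal sketch (not the paper's algorithm): a fixed policy follows a
# predetermined READ/WRITE schedule, while an adaptive policy decides from
# the current context, e.g. the model's confidence in its next target token.

def fixed_wait_k_action(k, num_written, num_read):
    """Wait-k schedule: keep reading until k source tokens lead the target."""
    return "READ" if num_read < num_written + k else "WRITE"

def adaptive_action(confidence, threshold=0.6, source_finished=False):
    """WRITE when confident enough (or the source has ended), else READ more
    source context; raising the threshold favors quality over latency."""
    return "WRITE" if source_finished or confidence >= threshold else "READ"
```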
no code implementations • Findings of the Association for Computational Linguistics 2020 • Mingbo Ma, Baigong Zheng, Kaibo Liu, Renjie Zheng, Hairong Liu, Kainan Peng, Kenneth Church, Liang Huang
Text-to-speech synthesis (TTS) has witnessed rapid progress in recent years, with neural methods becoming capable of producing highly natural audio.
no code implementations • 3 Nov 2019 • Hairong Liu, Mingbo Ma, Liang Huang
Research in the machine translation community focuses on translation in the text space.
no code implementations • WS 2019 • Renjie Zheng, Hairong Liu, Mingbo Ma, Baigong Zheng, Liang Huang
To make matters worse, parallel corpora for social media text are extremely limited.
3 code implementations • ACL 2019 • Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, Haifeng Wang
Simultaneous translation, which translates sentences before they are finished, is useful in many scenarios but is notoriously difficult due to word-order differences.
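A minimal sketch of a wait-k style prefix-to-prefix decoding loop, the kind of fixed-latency policy studied in this line of work: translation begins after the first k source tokens and then emits one target token per newly read source token. The streaming source, the model object, and its predict_next method are hypothetical stand-ins, not the paper's API.

```python
# Minimal sketch of wait-k simultaneous decoding: start translating after the
# first k source tokens instead of waiting for the whole sentence, then emit
# one target token for every additional source token that arrives.

def wait_k_decode(source_stream, model, k=3, max_len=100, eos="</s>"):
    source_prefix, target = [], []
    for tok in source_stream:                 # source tokens arrive one by one
        source_prefix.append(tok)
        if len(source_prefix) >= k:           # after the initial k-token wait,
            target.append(model.predict_next(source_prefix, target))  # write one
    while len(target) < max_len:              # source finished: flush the tail
        tok = model.predict_next(source_prefix, target)
        if tok == eos:
            break
        target.append(tok)
    return target
```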
no code implementations • ACL 2019 • Hairong Liu, Mingbo Ma, Liang Huang, Hao Xiong, Zhongjun He
Neural machine translation (NMT) is notoriously sensitive to noise, yet noise is almost inevitable in practice.
no code implementations • 24 Jul 2017 • Eric Battenberg, Jitong Chen, Rewon Child, Adam Coates, Yashesh Gaur, Yi Li, Hairong Liu, Sanjeev Satheesh, David Seetapun, Anuroop Sriram, Zhenyao Zhu
In this work, we perform an empirical comparison among the CTC, RNN-Transducer, and attention-based Seq2Seq models for end-to-end speech recognition.
no code implementations • 11 May 2017 • Eric Battenberg, Rewon Child, Adam Coates, Christopher Fougner, Yashesh Gaur, Jiaji Huang, Heewoo Jun, Ajay Kannan, Markus Kliegl, Atul Kumar, Hairong Liu, Vinay Rao, Sanjeev Satheesh, David Seetapun, Anuroop Sriram, Zhenyao Zhu
Replacing hand-engineered pipelines with end-to-end deep learning systems has enabled strong results in applications like speech and object recognition.
no code implementations • ICML 2017 • Hairong Liu, Zhenyao Zhu, Xiangang Li, Sanjeev Satheesh
Existing sequence-labeling methods suffer from two major drawbacks: 1) the set of basic units is fixed, such as the set of words, characters, or phonemes in speech recognition, and 2) the decomposition of target sequences into these units is fixed.
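To make the second drawback concrete: once units longer than single characters are allowed, a target string admits several valid decompositions, so committing to one fixed decomposition discards the alternatives. The enumeration below is illustrative only (it is not Gram-CTC); the toy unit inventory is an assumption.

```python
# Illustrative only: enumerate every segmentation of a target string into
# units drawn from a small inventory, showing that the decomposition is not
# unique once multi-character units are admitted.

def decompositions(word, units):
    if not word:
        return [[]]
    results = []
    for u in units:
        if word.startswith(u):
            results += [[u] + rest for rest in decompositions(word[len(u):], units)]
    return results

print(decompositions("cat", ["c", "a", "t", "ca", "at"]))
# [['c', 'a', 't'], ['c', 'at'], ['ca', 't']]
```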
no code implementations • 10 Dec 2016 • Jiaji Huang, Rewon Child, Vinay Rao, Hairong Liu, Sanjeev Satheesh, Adam Coates
For speech recognition, confidence scores and other likelihood-based active learning methods have been shown to be effective.
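A minimal sketch of the confidence-based selection idea referenced here: rank unlabeled utterances by the recognizer's confidence in its own best hypothesis and send the least confident ones for transcription. The confidence_fn argument is a hypothetical stand-in (e.g. mean token log-probability), not an API from this paper.

```python
# Minimal sketch of likelihood-based active learning: label the utterances
# the model is least confident about, since those are the most informative.

def select_for_labeling(utterances, confidence_fn, budget):
    """confidence_fn maps an utterance to a scalar confidence score,
    e.g. the mean log-probability of the 1-best hypothesis (assumed here)."""
    return sorted(utterances, key=confidence_fn)[:budget]  # least confident first
```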
no code implementations • NeurIPS 2010 • Hairong Liu, Longin J. Latecki, Shuicheng Yan
In this paper, we regard clustering as ensembles of k-ary affinity relations, where clusters correspond to subsets of objects with maximal average affinity.
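A concrete pairwise (k = 2) reading of this objective, illustrative only and not the paper's optimization procedure: score a candidate cluster by the average affinity over all pairs it contains, so tight clusters score high. The toy affinity table is an assumption.

```python
import itertools

# Illustrative only (pairwise case, k = 2): a candidate cluster is scored by
# the average affinity over all object pairs it contains; the paper's
# formulation generalizes this to k-ary affinities and maximizes the average.

def average_affinity(subset, affinity):
    """affinity[(i, j)] is a symmetric pairwise affinity, keyed with i < j."""
    pairs = list(itertools.combinations(sorted(subset), 2))
    return sum(affinity[p] for p in pairs) / len(pairs) if pairs else 0.0

affinity = {(0, 1): 0.9, (0, 2): 0.8, (1, 2): 0.85,
            (0, 3): 0.1, (1, 3): 0.2, (2, 3): 0.15}
print(average_affinity({0, 1, 2}, affinity))  # ~0.85 -> tight cluster
print(average_affinity({0, 1, 3}, affinity))  # ~0.40 -> poor cluster
```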