no code implementations • MTSummit 2021 • Minghan Wang, Jiaxin Guo, Yimeng Chen, Chang Su, Min Zhang, Shimin Tao, Hao Yang
The liability of large-scale pretrained networks to overfit easily on the limited labelled training data of multimodal translation (MMT) is a critical issue in MMT.
no code implementations • IWSLT (ACL) 2022 • Minghan Wang, Jiaxin Guo, Yinglu Li, Xiaosong Qiao, Yuxia Wang, Zongyao Li, Chang Su, Yimeng Chen, Min Zhang, Shimin Tao, Hao Yang, Ying Qin
The cascade system is composed of a chunking-based streaming ASR model and the SimulMT model used in the T2T track.
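The chunk-by-chunk flow of such a cascade can be sketched as follows, assuming hypothetical `asr_step` and `mt_step` callables and a simple wait-k read/write policy; this is an illustration of the general pipeline shape, not the exact HW-TSC system:

```python
# A minimal sketch of a cascaded simultaneous pipeline (hypothetical helpers).

def cascade_simulst(audio_chunks, asr_step, mt_step, wait_k=3):
    """Feed fixed-size audio chunks to a streaming ASR model and pass the
    growing transcript to a wait-k SimulMT model.

    asr_step: fn(chunk, asr_state) -> (new_source_tokens, asr_state)
    mt_step:  fn(source_tokens, target_tokens) -> next target token or None
    """
    asr_state, source, target = None, [], []
    for chunk in audio_chunks:
        new_tokens, asr_state = asr_step(chunk, asr_state)
        source.extend(new_tokens)
        # wait-k policy: only write once the source is k tokens ahead.
        while len(source) - len(target) >= wait_k:
            token = mt_step(source, target)
            if token is None:  # the model chooses to wait for more context
                break
            target.append(token)
    return target
```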
no code implementations • Findings (ACL) 2022 • Yuxia Wang, Minghan Wang, Yimeng Chen, Shimin Tao, Jiaxin Guo, Chang Su, Min Zhang, Hao Yang
Natural Language Inference (NLI) datasets contain examples with highly ambiguous labels due to the subjectivity of the task.
no code implementations • IWSLT (ACL) 2022 • Jiaxin Guo, Yinglu Li, Minghan Wang, Xiaosong Qiao, Yuxia Wang, Hengchao Shang, Chang Su, Yimeng Chen, Min Zhang, Shimin Tao, Hao Yang, Ying Qin
This paper presents HW-TSC's pipeline and results for the Offline Speech-to-Speech Translation task at IWSLT 2022.
no code implementations • WMT (EMNLP) 2021 • Yimeng Chen, Chang Su, Yingtao Zhang, Yuxia Wang, Xiang Geng, Hao Yang, Shimin Tao, Jiaxin Guo, Minghan Wang, Min Zhang, Yujia Liu, ShuJian Huang
This paper presents our work in WMT 2021 Quality Estimation (QE) Shared Task.
no code implementations • IWSLT (ACL) 2022 • Minghan Wang, Jiaxin Guo, Xiaosong Qiao, Yuxia Wang, Daimeng Wei, Chang Su, Yimeng Chen, Min Zhang, Shimin Tao, Hao Yang, Ying Qin
For the machine translation part, we pretrained three translation models on the WMT21 dataset and fine-tuned them on in-domain corpora (a generic fine-tuning sketch follows below).
Automatic Speech Recognition (ASR)
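A generic sketch of the fine-tuning stage, assuming a pretrained model that returns its training loss and an in-domain dataloader; the hyperparameters are illustrative, not the paper's:

```python
# Sketch of in-domain fine-tuning for a pretrained NMT model.
import torch

def finetune(model, in_domain_loader, epochs=3, lr=5e-5):
    # A smaller learning rate than in pretraining helps avoid catastrophic
    # forgetting of the general-domain (WMT21) knowledge.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for src, tgt in in_domain_loader:
            loss = model(src, tgt)  # assumes the model returns its loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```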
no code implementations • SMM4H (COLING) 2022 • Xi Liu, Han Zhou, Chang Su
For task 1a, the system achieved an F1 score of 0.68; for task 1b, an Overlapping F1 score of 0.65 and a Strict F1 score of 0.49.
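The two task-1b metrics differ only in their span-matching criterion: Strict F1 requires exact boundaries, while Overlapping F1 credits any overlap with a gold span. A minimal sketch of both (simplified; entity types ignored):

```python
# Strict vs. overlapping span F1 for extracted (start, end) spans.

def span_f1(pred, gold, strict=True):
    """pred, gold: lists of (start, end) character spans."""
    def match(p, g):
        return p == g if strict else (p[0] < g[1] and g[0] < p[1])
    tp = sum(any(match(p, g) for g in gold) for p in pred)
    precision = tp / len(pred) if pred else 0.0
    recall = (sum(any(match(p, g) for p in pred) for g in gold) / len(gold)
              if gold else 0.0)
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```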
no code implementations • 18 Sep 2023 • Yuang Li, Yinglu Li, Min Zhang, Chang Su, Mengyao Piao, Xiaosong Qiao, Jiawei Yu, Miaomiao Ma, Yanqing Zhao, Hao Yang
End-to-end automatic speech recognition (ASR) systems often struggle to recognize rare named entities, such as personal names, organizations, or technical terms that are not frequently encountered in the training data (a generic contextual-biasing sketch follows below).
Automatic Speech Recognition (ASR)
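One common remedy in the literature is shallow-fusion contextual biasing, which boosts candidate tokens drawn from an entity list at each decoding step. A generic sketch, not necessarily the method this paper proposes:

```python
# Shallow-fusion contextual biasing at a single decoding step.
import math

def bias_step(logprobs, bias_ids, bonus=2.0):
    """logprobs: dict of token_id -> log-prob for one decoding step.
    bias_ids: set of token ids appearing in the contextual entity list."""
    boosted = {t: lp + (bonus if t in bias_ids else 0.0)
               for t, lp in logprobs.items()}
    # Renormalize so the boosted scores remain a valid distribution.
    z = math.log(sum(math.exp(lp) for lp in boosted.values()))
    return {t: lp - z for t, lp in boosted.items()}
```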
1 code implementation • 15 Jun 2023 • Zhongkai Hao, Jiachen Yao, Chang Su, Hang Su, Ziao Wang, Fanzhi Lu, Zeyu Xia, Yichi Zhang, Songming Liu, Lu Lu, Jun Zhu
In addition to providing a standardized means of assessing performance, PINNacle also offers an in-depth analysis to guide future research, particularly in areas such as domain decomposition methods and loss reweighting for handling multi-scale problems and complex geometry.
no code implementations • 5 Jun 2023 • Jiachen Yao, Chang Su, Zhongkai Hao, Songming Liu, Hang Su, Jun Zhu
Physics-informed Neural Networks (PINNs) have recently achieved remarkable progress in solving Partial Differential Equations (PDEs) in various fields by minimizing a weighted sum of PDE loss and boundary loss.
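The weighted objective can be illustrated on a toy 1D Poisson problem u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0. A minimal PyTorch sketch, assuming `net` is a small MLP mapping x to u(x); `w_pde` and `w_bc` are the loss weights in question:

```python
# Weighted PDE + boundary loss for a PINN (toy 1D Poisson problem).
import torch

def pinn_loss(net, f, w_pde=1.0, w_bc=1.0, n=128):
    x = torch.rand(n, 1, requires_grad=True)        # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    pde_loss = ((d2u - f(x)) ** 2).mean()           # interior PDE residual
    xb = torch.tensor([[0.0], [1.0]])
    bc_loss = (net(xb) ** 2).mean()                 # boundary condition u = 0
    return w_pde * pde_loss + w_bc * bc_loss
```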
no code implementations • 19 Feb 2023 • Xinyue Hu, Lin Gu, Kazuma Kobayashi, Qiyuan An, Qingyu Chen, Zhiyong Lu, Chang Su, Tatsuya Harada, Yingying Zhu
Medical visual question answering (VQA) aims to answer clinically relevant questions regarding input medical images.
no code implementations • 11 Oct 2022 • Matthew Brendel, Chang Su, Zilong Bai, Hao Zhang, Olivier Elemento, Fei Wang
Single-cell RNA-sequencing (scRNA-seq) has become a routinely used technique to quantify the gene expression profile of thousands of single cells simultaneously.
no code implementations • 13 Aug 2022 • Xinyue Hu, Lin Gu, Liangchen Liu, Ruijiang Li, Chang Su, Tatsuya Harada, Yingying Zhu
Existing video domain adaptation (DA) methods need to store all temporal combinations of video frames or to pair the source and target videos, which is memory-intensive and does not scale to long videos.
no code implementations • 28 May 2022 • Chang Su, Wenbo Lyu, Yueting Liu
By comparing historical patterns of currency development, this paper points out the inevitability of digital currency and examines its relationship with the digital economy.
1 code implementation • ACL 2022 • WeiJie Chen, Yongzhu Chang, Rongsheng Zhang, Jiashu Pu, Guandan Chen, Le Zhang, Yadong Xi, Yijiang Chen, Chang Su
In this paper, we probe simile knowledge from PLMs to solve the SI and SG tasks in the unified framework of simile triple completion for the first time.
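The underlying probing idea, letting a masked LM complete a simile triple (topic, shared property, vehicle), can be illustrated with an off-the-shelf model; the prompt below is a made-up example, not one of the paper's templates:

```python
# Probing simile knowledge from a PLM via masked-token prediction.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
# Complete the triple (her smile, ?, sunshine) by predicting the property.
for cand in fill("Her smile is as [MASK] as sunshine."):
    print(cand["token_str"], round(cand["score"], 3))
```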
no code implementations • EAMT 2022 • Minghan Wang, Jiaxin Guo, Yuxia Wang, Daimeng Wei, Hengchao Shang, Chang Su, Yimeng Chen, Yinglu Li, Min Zhang, Shimin Tao, Hao Yang
In this paper, we aim to close the gap by preserving the original objective of AR and NAR under a unified framework.
no code implementations • 22 Dec 2021 • Zhengzhe Yu, Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Zongyao Li, Zhanglin Wu, Yuxia Wang, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang
Deep encoders have been proven to be effective in improving neural machine translation (NMT) systems, but it reaches the upper bound of translation quality when the number of encoder layers exceeds 18.
no code implementations • 22 Dec 2021 • Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Yuxia Wang, Zongyao Li, Zhengzhe Yu, Zhanglin Wu, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang
An effective training strategy for improving AT models is Self-Distillation Mixup (SDM) training, which pre-trains a model on the raw data, generates distilled data with the pre-trained model itself, and finally re-trains a model on the combination of raw and distilled data.
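The three SDM stages fit in a few lines. A minimal sketch, assuming hypothetical `train` and `translate` helpers supplied by the caller; not the paper's exact implementation:

```python
# Self-Distillation Mixup (SDM) training, high-level flow.

def sdm_training(raw_pairs, train, translate):
    """raw_pairs: list of (src, tgt) sentence pairs.
    train: fn(pairs) -> model; translate: fn(model, src) -> hypothesis."""
    # Step 1: pre-train a model on the raw parallel data.
    teacher = train(raw_pairs)
    # Step 2: the pre-trained model distills its own training data.
    distilled_pairs = [(src, translate(teacher, src)) for src, _ in raw_pairs]
    # Step 3: re-train from scratch on raw + distilled data (the "mixup").
    return train(raw_pairs + distilled_pairs)
```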
no code implementations • 9 Aug 2021 • Minghan Wang, Yuxia Wang, Chang Su, Jiaxin Guo, Yingtao Zhang, Yujia Liu, Min Zhang, Shimin Tao, Xingshan Zeng, Liangyou Li, Hao Yang, Ying Qin
This paper describes our work in participation of the IWSLT-2021 offline speech translation task.
Automatic Speech Recognition (ASR)
no code implementations • 21 Mar 2021 • Jianlan Luo, Oleg Sushkov, Rugile Pevceviciute, Wenzhao Lian, Chang Su, Mel Vecerik, Ning Ye, Stefan Schaal, Jon Scholz
In this paper we define criteria for industry-oriented DRL, and perform a thorough comparison according to these criteria of one family of learning approaches, DRL from demonstration, against a professional industrial integrator on the recently established NIST assembly benchmark.
1 code implementation • 21 Dec 2020 • Xinyi Zhong, Chang Su, Zhou Fan
When the dimension of data is comparable to or larger than the number of data samples, Principal Components Analysis (PCA) may exhibit problematic high-dimensional noise.
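A quick simulation shows the effect: even for pure noise with identity covariance, the largest sample eigenvalue spreads out to the Marchenko-Pastur edge when p is comparable to n, so leading PCs can masquerade as signal. A self-contained NumPy sketch:

```python
# High-dimensional PCA noise: sample eigenvalues vs. the true value 1.
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 400                      # aspect ratio gamma = p / n = 0.8
X = rng.standard_normal((n, p))      # i.i.d. noise, true covariance = I
eigvals = np.linalg.eigvalsh(X.T @ X / n)   # sample covariance spectrum
gamma = p / n
print("largest sample eigenvalue:", round(float(eigvals[-1]), 2))
print("Marchenko-Pastur edge (1 + sqrt(gamma))^2:",
      round((1 + gamma ** 0.5) ** 2, 2))
```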
no code implementations • 13 Nov 2019 • Jie Xu, Benjamin S. Glicksberg, Chang Su, Peter Walker, Jiang Bian, Fei Wang
With the rapid development of computer software and hardware technologies, more and more healthcare data are becoming readily available from clinical institutions, patients, insurance companies and pharmaceutical industries, among others.
no code implementations • 2 Nov 2019 • Sendong Zhao, Chang Su, Andrea Sboner, Fei Wang
GRAPHENE consists of three main modules: 1) graph-augmented document representation learning; 2) query expansion and representation learning; and 3) learning to rank biomedical articles.
2 code implementations • 26 Jun 2018 • Xi Sheryl Zhang, Dandi Chen, Yongjun Zhu, Chao Che, Chang Su, Sendong Zhao, Xu Min, Fei Wang
This paper presents details of our winning solutions to the task IV of NIPS 2017 Competition Track entitled Classifying Clinically Actionable Genetic Mutations.
no code implementations • NeurIPS 2010 • Chang Su, Sargur Srihari
The probability of random correspondence for several latent fingerprints is evaluated for varying numbers of minutiae.