1 code implementation • Findings (ACL) 2022 • Zhen Wang, Yating Yang, Zhou Xi, Bo Ma, Lei Wang, Rui Dong, Azmat Anwar
We also propose a stable semi-supervised method named stair learning (SL) that distills knowledge in order from stronger models to weaker ones.
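A minimal sketch of the ordered, strong-to-weak distillation idea, assuming PyTorch-style classifiers; the function names, model/batch interfaces, and hyperparameters below are hypothetical illustrations, not the paper's code.

```python
# Sketch: each weaker model is trained against the soft labels of the
# next-stronger model, walking "down the stairs" one pair at a time.
import torch
import torch.nn.functional as F

def distill_step(teacher, student, batch, optimizer, temperature=2.0):
    """One distillation step: the student matches the teacher's soft targets."""
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(batch)
    s_logits = student(batch)
    loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def stair_learning(models, batches):
    """Distill pairwise down an ordering from strongest to weakest model."""
    for teacher, student in zip(models, models[1:]):
        optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
        for batch in batches:
            distill_step(teacher, student, batch, optimizer)
```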
no code implementations • 13 Feb 2023 • Danilo Ribeiro, Shen Wang, Xiaofei Ma, Henry Zhu, Rui Dong, Deguang Kong, Juliette Burger, Anjelica Ramos, William Wang, Zhiheng Huang, George Karypis, Bing Xiang, Dan Roth
We introduce STREET, a unified multi-task and multi-domain natural language reasoning and explanation benchmark.
no code implementations • 17 Dec 2022 • Jifan Chen, Yuhao Zhang, Lan Liu, Rui Dong, Xinchi Chen, Patrick Ng, William Yang Wang, Zhiheng Huang
There has been great progress in unifying various table-to-text tasks using a single encoder-decoder model trained via multi-task learning (Xie et al., 2022).
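A minimal sketch of the table-linearization-plus-task-prefix pattern such unified encoder-decoder models typically consume; the prefix wording and separators here are illustrative assumptions, not the exact format of Xie et al. (2022).

```python
# Sketch: flatten a table into a token sequence and prepend a task tag
# so a single seq2seq model can route among table-to-text tasks.
def linearize_table(header, rows):
    """Serialize a table row by row for the encoder."""
    parts = ["col : " + " | ".join(header)]
    for i, row in enumerate(rows, start=1):
        parts.append(f"row {i} : " + " | ".join(row))
    return " ".join(parts)

def build_input(task, question, header, rows):
    """Prepend a task tag so one model handles multiple tasks."""
    return f"{task} : {question} {linearize_table(header, rows)}"

example = build_input(
    "table question answering",
    "Which city has the largest population?",
    ["city", "population"],
    [["Tokyo", "37M"], ["Delhi", "32M"]],
)
print(example)
```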
1 code implementation • Findings (NAACL) 2022 • Danilo Ribeiro, Shen Wang, Xiaofei Ma, Rui Dong, Xiaokai Wei, Henry Zhu, Xinchi Chen, Zhiheng Huang, Peng Xu, Andrew Arnold, Dan Roth
Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises.
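A minimal sketch of the iterative step-by-step generation loop, assuming learned `select` and `fuse` components passed in as callables; both are hypothetical stand-ins, not the paper's actual modules.

```python
# Sketch: repeatedly pick premises and fuse them into an intermediate
# conclusion, growing the pool, until the hypothesis is derived.
def explain(premises, hypothesis, select, fuse, max_steps=10):
    """select(pool, hypothesis) picks which statements to combine;
    fuse(chosen) generates one conclusion from them. Both are stand-ins
    for trained models."""
    pool = list(premises)
    steps = []
    for _ in range(max_steps):
        chosen = select(pool, hypothesis)
        conclusion = fuse(chosen)
        steps.append((chosen, conclusion))  # one proof step
        pool.append(conclusion)
        if conclusion == hypothesis:
            break
    return steps
```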
no code implementations • EACL 2021 • Rui Dong, David Smith
Starting from the Table Parsing (TAPAS) model developed for question answering (Herzig et al., 2020), we find that modeling table structure improves a language model pre-trained on unstructured text.
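A minimal sketch of the structural-embedding idea behind TAPAS-style table encoders, assuming PyTorch; the class name and dimensions are illustrative, not the model's actual implementation.

```python
# Sketch: sum learned row- and column-index embeddings into each token
# embedding so the transformer "sees" the 2-D table layout.
import torch
import torch.nn as nn

class TableEmbedding(nn.Module):
    def __init__(self, vocab=30522, dim=768, max_rows=64, max_cols=32):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.row = nn.Embedding(max_rows, dim)
        self.col = nn.Embedding(max_cols, dim)

    def forward(self, token_ids, row_ids, col_ids):
        # row_ids/col_ids are 0 for tokens outside the table (e.g. the query)
        return self.tok(token_ids) + self.row(row_ids) + self.col(col_ids)
```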
no code implementations • ACL 2020 • Yirong Pan, Xiao Li, Yating Yang, Rui Dong
Neural machine translation (NMT) has achieved impressive performance recently by using large-scale parallel corpora.
no code implementations • 29 Mar 2020 • Rui Dong, Changyang She, Wibowo Hardjawana, Yonghui Li, Branka Vucetic
To accommodate diverse Quality-of-Service (QoS) requirements in 5th-generation (5G) cellular networks, base stations need real-time optimization of radio resources under time-varying network conditions.
no code implementations • 22 Feb 2020 • Changyang She, Rui Dong, Zhouyou Gu, Zhanwei Hou, Yonghui Li, Wibowo Hardjawana, Chenyang Yang, Lingyang Song, Branka Vucetic
In this article, we first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC, and discuss several open problems with these methods.
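A generic sketch of the reinforcement-learning loop discussed for URLLC resource allocation, using a tabular Q-learning stand-in for the deep RL methods; the environment interface and state/action spaces are hypothetical.

```python
# Sketch: an agent picks a resource-allocation action, observes
# latency/reliability feedback as reward, and updates its Q estimate.
import random

def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    Q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            if random.random() < eps:                       # explore
                action = random.choice(actions)
            else:                                            # exploit
                action = max(actions, key=lambda a: Q.get((state, a), 0.0))
            next_state, reward, done = env.step(action)
            best_next = max(Q.get((next_state, a), 0.0) for a in actions)
            td = reward + gamma * best_next - Q.get((state, action), 0.0)
            Q[(state, action)] = Q.get((state, action), 0.0) + alpha * td
            state = next_state
    return Q
```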
no code implementations • 2 Jan 2020 • Yirong Pan, Xiao Li, Yating Yang, Rui Dong
Experimental results show that our morphologically motivated word segmentation method is better suited to the NMT model, achieving significant improvements on Turkish-English and Uyghur-Chinese machine translation tasks by reducing data sparsity and language complexity.
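A minimal sketch of feeding morpheme-level units to NMT: words are split into stem and suffix pieces with a joining marker so surface forms can be restored after decoding. The `analyze` function is a hypothetical stand-in for a real morphological analyzer for Turkish or Uyghur.

```python
# Sketch: segment each word into morphemes and mark non-final pieces,
# much like BPE's "@@" convention, so detokenization is reversible.
def analyze(word):
    """Placeholder: return the word's morphemes as a list, stem first."""
    raise NotImplementedError

def segment_sentence(sentence, marker="@@"):
    out = []
    for word in sentence.split():
        morphemes = analyze(word)
        out.extend(m + marker for m in morphemes[:-1])
        out.append(morphemes[-1])
    return " ".join(out)
```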
no code implementations • 30 Jun 2019 • Rui Dong, Changyang She, Wibowo Hardjawana, Yonghui Li, Branka Vucetic
We propose a deep learning (DL) architecture, where a digital twin of the real network environment is used to train the DL algorithm off-line at a central server.
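A minimal sketch of the digital-twin training pattern: a simulator mirroring the real network generates experience for off-line training at a central server, and the trained policy is then deployed. The twin and policy interfaces below are hypothetical.

```python
# Sketch: train a policy entirely against the digital twin, off-line,
# so no exploration happens on the live network.
def train_offline(twin, policy, episodes=1000):
    for _ in range(episodes):
        state = twin.reset()
        done = False
        while not done:
            action = policy.act(state)
            next_state, reward, done = twin.step(action)
            policy.update(state, action, reward, next_state)
            state = next_state
    return policy  # deploy the trained policy to base stations
```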
no code implementations • WS 2019 • Rui Dong, David Smith, Shiran Dudy, Steven Bedrick
Language models have broad adoption in predictive typing tasks.
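A tiny self-contained illustration of predictive typing: a bigram count model suggests the k most likely next words. Real systems use far stronger language models; this only shows the interface.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word-to-next-word transitions over a list of sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def suggest(counts, prev_word, k=3):
    """Return the k most frequent continuations of prev_word."""
    return [w for w, _ in counts[prev_word.lower()].most_common(k)]

model = train_bigrams(["the cat sat", "the cat ran", "the dog sat"])
print(suggest(model, "the"))  # -> ['cat', 'dog']
```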
no code implementations • ACL 2018 • Rui Dong, David Smith
We propose a novel approach to OCR post-correction that exploits repeated texts in large corpora both as a source of noisy target outputs for unsupervised training and as a source of evidence when decoding.
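A minimal sketch of the repeated-text intuition: given several OCR outputs of the same passage, a per-position majority vote yields a noisy consensus target. This assumes the variants are already aligned to equal length, which a real system would do with edit-distance alignment; it is an illustration, not the paper's decoder.

```python
from collections import Counter

def consensus(aligned_variants):
    """aligned_variants: equal-length OCR strings of the same passage."""
    return "".join(
        Counter(chars).most_common(1)[0][0]   # majority character per column
        for chars in zip(*aligned_variants)
    )

print(consensus(["tbe cat", "the cat", "the eat"]))  # -> "the cat"
```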
no code implementations • RANLP 2017 • Chenggang Mi, Yating Yang, Rui Dong, Xi Zhou, Lei Wang, Xiao Li, Tonghai Jiang
To alleviate data sparsity in spoken Uyghur machine translation, we propose a morphological segmentation approach based on a log-linear model.
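A minimal sketch of log-linear scoring over candidate segmentations: each candidate receives a weighted feature score and the argmax is chosen. The features and weights here are illustrative assumptions, not the paper's.

```python
def features(segmentation):
    """Toy feature map over a '+'-delimited segmentation."""
    morphemes = segmentation.split("+")
    return {
        "num_morphemes": len(morphemes),
        "avg_len": sum(map(len, morphemes)) / len(morphemes),
    }

def score(segmentation, weights):
    """Log-linear score: weighted sum of features."""
    return sum(weights.get(k, 0.0) * v for k, v in features(segmentation).items())

def best_segmentation(candidates, weights):
    return max(candidates, key=lambda s: score(s, weights))

weights = {"num_morphemes": 0.5, "avg_len": -0.1}
print(best_segmentation(["okul+lar+da", "okullar+da", "okullarda"], weights))
```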