no code implementations • COLING 2016 • Lianhui Qin, Zhisong Zhang, Hai Zhao
For the task of implicit discourse relation recognition, traditional models utilizing manual features can suffer from the data sparsity problem.
no code implementations • ACL 2017 • Lianhui Qin, Zhisong Zhang, Hai Zhao, Zhiting Hu, Eric P. Xing
Implicit discourse relation classification is highly challenging due to the lack of connectives as strong linguistic cues, which motivates the use of annotated implicit connectives to improve the recognition.
1 code implementation • ACL 2017 • Deng Cai, Hai Zhao, Zhisong Zhang, Yuan Xin, Yongjian Wu, Feiyue Huang
Neural models with minimal feature engineering have achieved competitive performance against traditional methods for the task of Chinese word segmentation.
no code implementations • CoNLL 2017 • Hao Wang, Hai Zhao, Zhisong Zhang
This paper describes the system for our participation in the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.
1 code implementation • EMNLP 2018 • Zhisong Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita, Hai Zhao
In Neural Machine Translation (NMT), the decoder can capture the features of the entire prediction history with neural connections and representations.
2 code implementations • NAACL 2019 • Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, Nanyun Peng
Different languages might have different word orders.
1 code implementation • ACL 2019 • Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, Graham Neubig
Cross-lingual transfer, where a high-resource transfer language is used to improve the accuracy of a low-resource task language, is now an invaluable tool for improving performance of natural language processing (NLP) on low-resource languages.
1 code implementation • ACL 2019 • Junxian He, Zhisong Zhang, Taylor Berg-Kirkpatrick, Graham Neubig
The parameters of the source model and the target model are softly shared through a regularized log-likelihood objective.
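A minimal sketch of the soft-sharing idea described above (hypothetical function and parameter names, not the paper's implementation): the target model is trained on its own negative log likelihood plus an L2 penalty that pulls its parameters toward the source model's, rather than hard-tying the two.

```python
import numpy as np

def soft_shared_loss(nll_target, theta_target, theta_source, lam=0.1):
    """Regularized objective: target-model NLL plus an L2 penalty
    that softly ties target parameters to the source parameters."""
    penalty = lam * np.sum((theta_target - theta_source) ** 2)
    return nll_target + penalty

# toy usage: identical parameters incur no penalty,
# diverging parameters are increasingly penalized
theta_src = np.array([1.0, -2.0, 0.5])
print(soft_shared_loss(3.0, theta_src, theta_src))        # no penalty
print(soft_shared_loss(3.0, theta_src + 1.0, theta_src))  # penalized
```

With `lam=0` this reduces to independent training of the target model; as `lam` grows, it approaches hard parameter sharing.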
1 code implementation • ACL 2019 • Zhisong Zhang, Xuezhe Ma, Eduard Hovy
In this paper, we investigate the aspect of structured output modeling for the state-of-the-art graph-based neural dependency parser (Dozat and Manning, 2017).
no code implementations • WS 2019 • Junpei Zhou, Zhisong Zhang, Zecong Hu
In the WMT-2019 QE task, our system ranked second on the En-De NMT dataset and third on the En-Ru NMT dataset.
1 code implementation • CoNLL 2019 • Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, Kai-Wei Chang, Nanyun Peng
We conduct experiments on cross-lingual dependency parsing where we train a dependency parser on a source language and transfer it to a wide range of target languages.
no code implementations • 11 May 2020 • Lane Schwartz, Francis Tyers, Lori Levin, Christo Kirov, Patrick Littell, Chi-kiu Lo, Emily Prud'hommeaux, Hyunji Hayley Park, Kenneth Steimel, Rebecca Knowles, Jeffrey Micher, Lonny Strunk, Han Liu, Coleman Haley, Katherine J. Zhang, Robbie Jimmerson, Vasilisa Andriyanets, Aldrian Obaja Muis, Naoki Otani, Jong Hyuk Park, Zhisong Zhang
In the literature, languages like Finnish or Turkish are held up as extreme examples of complexity that challenge common modelling assumptions.
no code implementations • ACL 2020 • Zhisong Zhang, Xiang Kong, Zhengzhong Liu, Xuezhe Ma, Eduard Hovy
It remains a challenge to detect implicit arguments, calling for future work on document-level modeling for this task.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Zhisong Zhang, Xiang Kong, Lori Levin, Eduard Hovy
Recently, pre-training contextualized encoders with language model (LM) objectives has been shown to be an effective semi-supervised method for structured prediction.
1 code implementation • EMNLP 2020 • Xiang Kong, Zhisong Zhang, Eduard Hovy
In this work, we introduce a novel local autoregressive translation (LAT) mechanism into non-autoregressive translation (NAT) models so as to capture local dependencies among target outputs.
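The decoding schedule this suggests can be sketched as follows (an illustrative toy, with hypothetical names, not the paper's model): target positions are grouped into short local windows; within a window, tokens are produced left-to-right so local dependencies are captured, while different windows are independent of each other and could be generated in parallel.

```python
def local_autoregressive_decode(predict_token, length, window=3):
    """Toy LAT-style schedule: autoregressive within a window,
    non-autoregressive (independent) across windows."""
    output = [None] * length
    for start in range(0, length, window):  # windows: parallel in principle
        prefix = []  # local history, restricted to this window
        for pos in range(start, min(start + window, length)):
            tok = predict_token(pos, prefix)  # local left-to-right step
            output[pos] = tok
            prefix.append(tok)
    return output

# usage with a dummy predictor that ignores its inputs' content
toks = local_autoregressive_decode(lambda pos, prefix: f"t{pos}", 5, window=2)
print(toks)  # ['t0', 't1', 't2', 't3', 't4']
```

The window size trades off between fully non-autoregressive decoding (window of 1) and fully autoregressive decoding (window equal to the output length).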
Ranked #2 on Machine Translation on WMT2016 English-Romanian
1 code implementation • 12 Dec 2021 • Zhisong Zhang, Yizhe Zhang, Bill Dolan
Nevertheless, due to the incompatibility between absolute positional encoding and insertion-based generation schemes, it needs to refresh the encoding of every token in the generated partial hypothesis at each step, which could be costly.
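The incompatibility noted above can be illustrated with a toy example (not the paper's code): under absolute positional encoding, a single insertion shifts the position index of every subsequent token in the partial hypothesis, so their positional encodings all become stale and must be recomputed at each step.

```python
def absolute_positions(tokens):
    # absolute positional ids: token i gets position i
    return list(range(len(tokens)))

hyp = ["the", "sat", "on"]
before = absolute_positions(hyp)  # [0, 1, 2]
hyp.insert(1, "cat")              # one insertion-based generation step
after = absolute_positions(hyp)   # [0, 1, 2, 3]
# every token at or after the insertion point now has a new position,
# so its absolute positional encoding must be refreshed
print(before, after)
```

This is the per-step cost the paper targets: relative schemes, by contrast, leave most pairwise offsets unchanged after an insertion.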
1 code implementation • 18 Oct 2022 • Zhisong Zhang, Emma Strubell, Eduard Hovy
In this work, we provide a survey of active learning (AL) for its applications in natural language processing (NLP).
1 code implementation • 22 May 2023 • Zhisong Zhang, Emma Strubell, Eduard Hovy
To address this challenge, we adopt an error estimator to adaptively decide the partial selection ratio according to the current model's capability.
1 code implementation • 22 Dec 2023 • Weiwen Xu, Deng Cai, Zhisong Zhang, Wai Lam, Shuming Shi
As humans, we consistently engage in interactions with our peers and receive feedback in the form of natural language.
no code implementations • 10 Feb 2024 • Chufan Shi, Haoran Yang, Deng Cai, Zhisong Zhang, Yifan Wang, Yujiu Yang, Wai Lam
Decoding methods play an indispensable role in converting language models from next-token predictors into practical task solvers.
1 code implementation • 16 Apr 2024 • Pengyu Cheng, Tianhao Hu, Han Xu, Zhisong Zhang, Yong Dai, Lei Han, Nan Du
Hence, we are curious about whether LLMs' reasoning ability can be further enhanced by Self-Play in this Adversarial language Game (SPAG).
1 code implementation • EMNLP 2021 • Zhisong Zhang, Emma Strubell, Eduard Hovy
Although recent developments in neural architectures and pre-trained representations have greatly increased state-of-the-art model performance on fully-supervised semantic role labeling (SRL), the task remains challenging for languages where supervised SRL training data are not abundant.
1 code implementation • ACL (spnlp) 2021 • Zhisong Zhang, Emma Strubell, Eduard Hovy
In this work, we empirically compare span extraction methods for the task of semantic role labeling (SRL).