no code implementations • EMNLP 2021 • Tao Ji, Yong Jiang, Tao Wang, Zhongqiang Huang, Fei Huang, Yuanbin Wu, Xiaoling Wang
Transition systems usually contain various dynamic structures (e.g., stacks, buffers).
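The snippet below is a minimal, hypothetical sketch of the kind of dynamic structures meant here: a parser state holding a stack and a buffer, with a few transitions that mutate them (illustrative only, not the paper's actual system).

```python
# Minimal sketch of the dynamic structures a transition-based parser
# maintains; names and transitions are illustrative, not the paper's.

class ParserState:
    def __init__(self, words):
        self.stack = []            # partially processed items
        self.buffer = list(words)  # remaining input, consumed left to right
        self.arcs = []             # (head, dependent) pairs built so far

    def shift(self):
        # Move the next buffer item onto the stack.
        self.stack.append(self.buffer.pop(0))

    def left_arc(self):
        # Attach the second-from-top stack item to the top item and pop it.
        dep = self.stack.pop(-2)
        self.arcs.append((self.stack[-1], dep))

    def right_arc(self):
        # Attach the top stack item to the one below it and pop it.
        dep = self.stack.pop()
        self.arcs.append((self.stack[-1], dep))

state = ParserState(["I", "saw", "her"])
state.shift(); state.shift(); state.left_arc()
print(state.stack, state.arcs)  # ['saw'] [('saw', 'I')]
```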
no code implementations • EMNLP 2021 • Tao Ji, Yong Jiang, Tao Wang, Zhongqiang Huang, Fei Huang, Yuanbin Wu, Xiaoling Wang
Adapting word order from one language to another is a key problem in cross-lingual structured prediction.
no code implementations • 19 Oct 2022 • Xuming Hu, Yong Jiang, Aiwei Liu, Zhongqiang Huang, Pengjun Xie, Fei Huang, Lijie Wen, Philip S. Yu
To alleviate the excessive reliance on dependency order among entities in existing augmentation paradigms, we develop an entity-to-text (rather than text-to-entity) data augmentation method, EnTDA, which decouples the dependencies between entities by adding, deleting, replacing, and swapping entities, and uses the augmented data to bootstrap the generalization ability of the NER model.
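As a rough illustration of the four entity operations named above, here is a hedged Python sketch; the entity pool and function names are hypothetical, and the actual EnTDA method additionally generates new text conditioned on the augmented entity list.

```python
import random

def swap_two(entities):
    # Swap two randomly chosen entities in the list.
    entities = list(entities)
    if len(entities) > 1:
        i, j = random.sample(range(len(entities)), 2)
        entities[i], entities[j] = entities[j], entities[i]
    return entities

def augment_entities(entities, pool):
    # Apply one of the four operations described in the abstract.
    ops = {
        "add":     lambda e: e + [random.choice(pool)],
        "delete":  lambda e: e[:-1] if len(e) > 1 else e,
        "replace": lambda e: e[:-1] + [random.choice(pool)] if e else e,
        "swap":    swap_two,
    }
    name = random.choice(list(ops))
    return name, ops[name](list(entities))

name, augmented = augment_entities(["Barack Obama", "Hawaii"],
                                   pool=["Chicago", "Joe Biden"])
print(name, augmented)
```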
1 code implementation • 18 Oct 2022 • Chen Wang, Yuchen Liu, Boxing Chen, Jiajun Zhang, Wei Luo, Zhongqiang Huang, Chengqing Zong
Existing zero-shot methods fail to align the two modalities of speech and text into a shared semantic space, resulting in much worse performance compared to the supervised ST methods.
1 code implementation • NAACL 2022 • Xinyu Wang, Min Gui, Yong Jiang, Zixia Jia, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu
As text representations play the most important role in MNER, in this paper we propose Image-text Alignments (ITA) to align image features into the textual space, so that the attention mechanism in transformer-based pretrained textual embeddings can be better utilized.
Ranked #1 on Multi-modal Named Entity Recognition on Twitter-17
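A hedged sketch of the general idea (not ITA's exact procedure): verbalize the image and concatenate the result with the sentence, so a purely textual encoder's self-attention can relate the two. Here `image_to_text` is a hypothetical stand-in for any captioning or object-tagging model.

```python
def image_to_text(image):
    # Hypothetical stand-in for a captioning / object-tagging model that
    # renders the image as text; a fixed placeholder here.
    return "man playing guitar on stage"

def build_aligned_input(sentence, image):
    # Concatenate the sentence with the image's textual rendering so a
    # pretrained textual encoder can attend across both.
    return f"{sentence} [SEP] {image_to_text(image)}"

print(build_aligned_input("He rocked the crowd last night .", image=None))
```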
1 code implementation • EMNLP 2021 • Xinyin Ma, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Weiming Lu
Entity retrieval, which aims at disambiguating mentions to canonical entities from massive KBs, is essential for many tasks in natural language processing.
Ranked #1 on Entity Retrieval on ZESHEL
no code implementations • ACL 2021 • Zechuan Hu, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu
In this paper, we propose a novel unified framework for zero-shot sequence labeling with minimum risk training and design a new decomposable risk function that models the relations between the predicted labels from the source models and the true labels.
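For reference, a generic minimum risk training objective takes the form sketched below; this is standard MRT, not necessarily the paper's exact decomposable risk function. Decomposing the risk over positions is what keeps the expectation tractable with dynamic programming.

```latex
% Generic MRT objective (sketch): expected risk of the model's label
% distribution against the source models' predictions \hat{y}^{(1..K)}.
\mathcal{R}(\theta)
  = \sum_{x \in \mathcal{D}} \sum_{y} p_\theta(y \mid x)\,
    r\bigl(y, \hat{y}^{(1)}, \dots, \hat{y}^{(K)}\bigr),
\qquad
r = \sum_{t} r_t\bigl(y_t, \hat{y}^{(1)}_t, \dots, \hat{y}^{(K)}_t\bigr)
```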
no code implementations • ACL 2021 • Zechuan Hu, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu
In structured prediction problems, cross-lingual transfer learning is an efficient way to train quality models for low-resource languages, and further improvement can be obtained by learning from multiple source languages.
no code implementations • AAAI 2021 • Ke Wang, Guandan Chen, Zhongqiang Huang, Xiaojun Wan, Fei Huang
Despite the near-human performance already achieved on formal texts such as news articles, neural machine translation still has difficulty dealing with "user-generated" texts that exhibit diverse linguistic phenomena but lack large-scale, high-quality parallel corpora.
3 code implementations • ACL 2021 • Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu
We find empirically that the contextual representations computed on the retrieval-based input view, constructed through the concatenation of a sentence and its external contexts, can achieve significantly improved performance compared to the original input view based only on the sentence.
Ranked #1 on Named Entity Recognition on CMeEE
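A minimal sketch of such a retrieval-based input view, assuming a hypothetical `retrieve` function (in practice a search engine or dense retriever); the token representations used for labeling would then be taken only from the original sentence's positions.

```python
def retrieve(sentence, k=3):
    # Hypothetical retriever returning k related snippets; in practice a
    # search engine or a dense retriever over a large corpus.
    return [f"related snippet {i} for: {sentence}" for i in range(k)]

def build_input_view(sentence, k=3):
    # Concatenate the sentence with its external contexts, as described.
    return sentence + " [SEP] " + " [SEP] ".join(retrieve(sentence, k))

print(build_input_view("Jobs founded Apple in 1976 ."))
```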
no code implementations • Findings of the Association for Computational Linguistics 2020 • Zechuan Hu, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu
The neural linear-chain CRF model is one of the most widely used approaches to sequence labeling.
1 code implementation • ACL 2021 • Xinyu Wang, Yong Jiang, Zhaohui Yan, Zixia Jia, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu
The objective function of knowledge distillation is typically the cross-entropy between the teacher's and the student's output distributions.
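Concretely, the standard token-level form of that objective looks like the sketch below; the temperature T is a common extra knob not mentioned in the snippet above, and this is not the paper's structural variant.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=1.0):
    # Cross-entropy between the teacher's and the student's output
    # distributions, softened by temperature T (a common variant).
    teacher_probs = F.softmax(teacher_logits / T, dim=-1)
    student_log_probs = F.log_softmax(student_logits / T, dim=-1)
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()

loss = kd_loss(torch.randn(4, 10), torch.randn(4, 10))
print(loss.item())
```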
2 code implementations • ACL 2021 • Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu
Pretrained contextualized embeddings are powerful word representations for structured prediction tasks.
Ranked #1 on Part-Of-Speech Tagging on ARK
no code implementations • Findings of the Association for Computational Linguistics 2020 • Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu
Recent work proposes a family of contextual embeddings that significantly improves the accuracy of sequence labelers over non-contextual embeddings.
Ranked #2 on Chunking on CoNLL 2003 (German)
1 code implementation • EMNLP 2020 • Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu
The linear-chain Conditional Random Field (CRF) model is one of the most widely used neural sequence labeling approaches.
Ranked #3 on Chunking on CoNLL 2003 (German)
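As background, such models maximize the sequence-level log-likelihood, whose normalizer log Z(x) is computed by the forward algorithm; a minimal sketch follows (illustrative, not the paper's code).

```python
import torch

def crf_log_partition(emissions, transitions):
    # emissions: (seq_len, num_tags) per-position tag scores
    # transitions: (num_tags, num_tags) score for tag i followed by tag j
    alpha = emissions[0]
    for t in range(1, emissions.size(0)):
        # alpha[j] = logsumexp_i(alpha[i] + transitions[i, j]) + emissions[t, j]
        alpha = torch.logsumexp(alpha.unsqueeze(1) + transitions, dim=0) \
                + emissions[t]
    return torch.logsumexp(alpha, dim=0)  # log Z(x)

log_z = crf_log_partition(torch.randn(5, 3), torch.randn(3, 3))
print(log_z.item())
```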
no code implementations • WS 2019 • Lingjun Zhao, Rabih Zbib, Zhuolin Jiang, Damianos Karakos, Zhongqiang Huang
We propose a weakly supervised neural model for Ad-hoc Cross-lingual Information Retrieval (CLIR) from low-resource languages.
no code implementations • 23 Feb 2018 • Matthew Wiesner, Chunxi Liu, Lucas Ondel, Craig Harman, Vimal Manohar, Jan Trmal, Zhongqiang Huang, Najim Dehak, Sanjeev Khudanpur
Automatic speech recognition (ASR) systems often need to be developed for extremely low-resource languages to serve end uses such as audio content categorization and search.
no code implementations • 19 Dec 2016 • Min Jiang, Zhongqiang Huang, Liming Qiu, Wenzhen Huang, Gary G. Yen
This approach uses transfer learning as a tool to reuse past experience and speed up the evolutionary process; at the same time, any population-based multiobjective algorithm can benefit from this integration without extensive modifications.
no code implementations • IJCNLP 2015 • Hendra Setiawan, Zhongqiang Huang, Jacob Devlin, Thomas Lamar, Rabih Zbib, Richard Schwartz, John Makhoul
We present a three-pronged approach to improving Statistical Machine Translation (SMT), building on recent success in the application of neural networks to SMT.