no code implementations • CL (ACL) 2021 • Junjie Cao, Zi Lin, Weiwei Sun, Xiaojun Wan
In this work, we present a phenomenon-oriented comparative analysis of the two dominant approaches in English Resource Semantic (ERS) parsing: classic, knowledge-intensive and neural, data-intensive models.
no code implementations • Findings (ACL) 2022 • Zi Lin, Jeremiah Zhe Liu, Jingbo Shang
Recent work in task-independent graph semantic parsing has shifted from grammar-based symbolic approaches to neural models, showing strong performance on different types of meaning representations.
1 code implementation • 26 Jan 2023 • Zi Lin, Jeremiah Liu, Jingbo Shang
Pre-trained seq2seq models excel at graph semantic parsing with rich annotated data, but generalize worse to out-of-distribution (OOD) and long-tail examples.
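To make the OOD issue concrete, one common baseline (not the method this paper proposes) is to flag low-confidence parses by thresholding the decoder's length-normalized log-likelihood. The sketch below is illustrative only; the per-token log-probabilities and the threshold value are hypothetical.

```python
import numpy as np

def sequence_confidence(token_logprobs):
    """Length-normalized log-likelihood of a decoded sequence."""
    return float(np.mean(token_logprobs))

def flag_ood(token_logprobs, threshold=-1.0):
    """Flag a parse as potentially OOD/long-tail when the model's average
    per-token log-probability falls below a tuned threshold."""
    return sequence_confidence(token_logprobs) < threshold

# Hypothetical per-token log-probs: a confident decode vs. a long-tail one.
print(flag_ood([-0.1, -0.2, -0.05]))  # False: high confidence
print(flag_ood([-2.3, -1.8, -2.9]))   # True: low confidence, needs review
```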
2 code implementations • 1 May 2022 • Jeremiah Zhe Liu, Shreyas Padhy, Jie Ren, Zi Lin, Yeming Wen, Ghassen Jerfel, Zack Nado, Jasper Snoek, Dustin Tran, Balaji Lakshminarayanan
The most popular approaches to estimate predictive uncertainty in deep learning are methods that combine predictions from multiple neural networks, such as Bayesian neural networks (BNNs) and deep ensembles.
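As a minimal illustration of the ensemble baseline referenced here (the approach this paper improves on, not the single-model method it proposes), a deep ensemble averages its members' predictive distributions and reads uncertainty off the entropy of that average. The random "logits" below are stand-ins for real ensemble members.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_predict(member_logits):
    """member_logits: (n_members, n_classes). Returns the averaged
    predictive distribution and its entropy (total uncertainty)."""
    probs = softmax(member_logits).mean(axis=0)
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return probs, entropy

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 3))  # 5 hypothetical members, 3 classes
probs, uncertainty = ensemble_predict(logits)
print(probs, uncertainty)
```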
1 code implementation • ACL (WOAH) 2021 • Ian D. Kivlichan, Zi Lin, Jeremiah Liu, Lucy Vasserman
Content moderation is often performed by a collaboration between humans and machine learning models.
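A minimal sketch of one such collaboration, under the assumption that the model auto-handles confident predictions and defers uncertain ones to human reviewers; the thresholds and action names are hypothetical, not taken from the paper.

```python
def moderate(comment, toxicity_score, low=0.2, high=0.8):
    """Route a comment based on model confidence."""
    if toxicity_score >= high:
        return "remove"            # model is confident the comment is toxic
    if toxicity_score <= low:
        return "approve"           # model is confident the comment is benign
    return "send_to_human_review"  # uncertain region: escalate to a human

print(moderate("spam spam spam", 0.95))   # remove
print(moderate("nice post!", 0.03))       # approve
print(moderate("borderline joke", 0.55))  # send_to_human_review
```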
no code implementations • 10 Dec 2020 • Liangchen Luo, Mark Sandler, Zi Lin, Andrey Zhmoginov, Andrew Howard
Knowledge distillation is one of the most popular and effective techniques for knowledge transfer, model compression and semi-supervised learning.
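For readers unfamiliar with the technique, here is a sketch of standard Hinton-style distillation (the classic formulation, not necessarily the exact variant this paper studies): the student matches the teacher's temperature-softened output distribution alongside the usual hard-label loss.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-label KL against the teacher with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                        # rescale gradients for the temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student = torch.randn(4, 10)           # hypothetical student logits
teacher = torch.randn(4, 10)           # hypothetical teacher logits
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student, teacher, labels))
```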
1 code implementation • Findings (EMNLP) 2020 • Zi Lin, Jeremiah Zhe Liu, Zi Yang, Nan Hua, Dan Roth
Traditional (unstructured) pruning methods for a Transformer model focus on regularizing the individual weights by penalizing them toward zero.
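To make the contrast concrete, below is a sketch of the traditional unstructured approach the sentence describes: an L1 penalty pushes individual weights toward zero during training, after which small-magnitude weights are zeroed out. The toy model and threshold are illustrative only.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)  # toy stand-in for a Transformer sub-layer

def l1_penalty(model, lam=1e-3):
    """Regularize individual weights toward zero (unstructured pruning)."""
    return lam * sum(p.abs().sum() for p in model.parameters())

def magnitude_prune(model, threshold=1e-2):
    """Zero out weights whose magnitude fell below the threshold."""
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.abs() >= threshold).float())

loss = model(torch.randn(2, 16)).pow(2).mean() + l1_penalty(model)
loss.backward()                             # the penalty shrinks weights
magnitude_prune(model)                      # then drop the near-zero ones
print((model.weight == 0).float().mean())   # fraction of pruned weights
```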
3 code implementations • NeurIPS 2020 • Jeremiah Zhe Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax-Weiss, Balaji Lakshminarayanan
Bayesian neural networks (BNN) and deep ensembles are principled approaches to estimate the predictive uncertainty of a deep learning model.
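As one cheap BNN approximation of the kind referenced here (a baseline, not the single-model method this paper develops), Monte Carlo dropout keeps dropout active at test time and reads uncertainty off the spread across stochastic forward passes; the toy network below is illustrative.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(),
                    nn.Dropout(p=0.2), nn.Linear(32, 3))

def mc_dropout_predict(net, x, n_samples=20):
    """Average over stochastic passes; std across samples ~ uncertainty."""
    net.train()  # keeps dropout active even though we are 'evaluating'
    with torch.no_grad():
        probs = torch.stack([net(x).softmax(dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.randn(1, 8)
mean, std = mc_dropout_predict(net, x)
print(mean, std)
```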
1 code implementation • NeurIPS 2019 • Zhiqing Sun, Zhuohan Li, Haoqing Wang, Zi Lin, Di He, Zhi-Hong Deng
Non-autoregressive sequence models were proposed to speed up inference, but these models assume that the decoding process of each token is conditionally independent of the others.
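Concretely, the conditional-independence assumption means every position is filled by a single parallel argmax, with no position seeing its neighbors' choices; a toy sketch with hypothetical logits:

```python
import torch

vocab_size, tgt_len = 100, 6
logits = torch.randn(tgt_len, vocab_size)  # one set of logits per position,
                                           # produced in a single forward pass

# Non-autoregressive decoding: all tokens chosen independently, in parallel.
tokens = logits.argmax(dim=-1)
print(tokens)  # position i never conditions on positions j != i
```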
1 code implementation • IJCNLP 2019 • Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Li-Wei Wang, Tie-Yan Liu
Due to the unparallelizable nature of the autoregressive factorization, AutoRegressive Translation (ART) models have to generate tokens sequentially during decoding and thus suffer from high inference latency.
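For contrast with the parallel decoding above, here is a sketch of the sequential loop this sentence describes: each step must wait for the previous token, so decoding a length-T output costs T dependent forward passes. The `ToyLM` model is a hypothetical stand-in, not the paper's architecture.

```python
import torch

def greedy_autoregressive_decode(model, bos_id, eos_id, max_len=50):
    """Each step consumes all previously generated tokens: O(T) serial passes."""
    tokens = [bos_id]
    for _ in range(max_len):
        logits = model(torch.tensor([tokens]))  # re-encode the whole prefix
        next_id = int(logits[0, -1].argmax())   # pick the next token
        tokens.append(next_id)
        if next_id == eos_id:
            break
    return tokens

# Toy stand-in: maps a (1, T) prefix to (1, T, vocab) logits.
class ToyLM(torch.nn.Module):
    def __init__(self, vocab=16, dim=8):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.out = torch.nn.Linear(dim, vocab)

    def forward(self, x):
        return self.out(self.emb(x))

print(greedy_autoregressive_decode(ToyLM(), bos_id=1, eos_id=2, max_len=5))
```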
no code implementations • WS 2019 • Zi Lin, Nianwen Xue
Parsing accuracy varies a great deal across different meaning representations.
no code implementations • 4 Jul 2019 • Junjie Cao, Zi Lin, Weiwei Sun, Xiaojun Wan
We present a phenomenon-oriented comparative analysis of the two dominant approaches in task-independent semantic parsing: classic, knowledge-intensive and neural, data-intensive models.
no code implementations • 26 Nov 2018 • Zi Lin, Yang Liu
Prior work has paid little attention to creating unambiguous morpheme embeddings that are independent of the corpus, even though such information plays an important role in expressing the exact meanings of words in parataxis languages like Chinese.
1 code implementation • EMNLP 2018 • Zi Lin, Yuguang Duan, Yuan-Yuan Zhao, Weiwei Sun, Xiaojun Wan
This paper studies semantic parsing for interlanguage (L2), taking semantic role labeling (SRL) as a case task and learner Chinese as a case language.