Search Results for author: Zi Lin

Found 14 papers, 8 papers with code

Comparing Knowledge-Intensive and Data-Intensive Models for English Resource Semantic Parsing

no code implementations · CL (ACL) 2021 · Junjie Cao, Zi Lin, Weiwei Sun, Xiaojun Wan

In this work, we present a phenomenon-oriented comparative analysis of the two dominant approaches in English Resource Semantic (ERS) parsing: classic, knowledge-intensive and neural, data-intensive models.

Semantic Parsing

Towards Collaborative Neural-Symbolic Graph Semantic Parsing via Uncertainty

no code implementations · Findings (ACL) 2022 · Zi Lin, Jeremiah Zhe Liu, Jingbo Shang

Recent work in task-independent graph semantic parsing has shifted from grammar-based symbolic approaches to neural models, showing strong performance on different types of meaning representations.

Semantic Parsing
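
A minimal sketch of the collaborative idea described in the snippet above: keep the neural parse when the model is confident, and defer to a grammar-based symbolic parser when token-level uncertainty is high. The entropy criterion, the threshold `tau`, and the `choose_parse` interface are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def choose_parse(token_logits, neural_graph, symbolic_graph, tau=1.0):
    """token_logits: (seq_len, vocab) logits from the seq2seq graph parser."""
    probs = F.softmax(token_logits, dim=-1)
    # Per-token predictive entropy; high values signal low confidence.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    # Defer to the grammar-based symbolic parse when any token is too uncertain.
    return symbolic_graph if entropy.max().item() > tau else neural_graph
```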

Neural-Symbolic Inference for Robust Autoregressive Graph Parsing via Compositional Uncertainty Quantification

1 code implementation · 26 Jan 2023 · Zi Lin, Jeremiah Liu, Jingbo Shang

Pre-trained seq2seq models excel at graph semantic parsing with rich annotated data, but generalize worse to out-of-distribution (OOD) and long-tail examples.

Semantic Parsing

A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness

2 code implementations · 1 May 2022 · Jeremiah Zhe Liu, Shreyas Padhy, Jie Ren, Zi Lin, Yeming Wen, Ghassen Jerfel, Zack Nado, Jasper Snoek, Dustin Tran, Balaji Lakshminarayanan

The most popular approaches to estimating predictive uncertainty in deep learning combine predictions from multiple neural networks, as in Bayesian neural networks (BNNs) and deep ensembles.

Data Augmentation · Probabilistic Deep Learning
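
For context, a minimal sketch of the deep-ensemble baseline that a single-model approach like this aims to match: average the softmax outputs of several independently trained networks and score uncertainty by the entropy of the average. Model list, shapes, and the clamp constant are illustrative, not from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_predict(models, x):
    """Average softmax outputs of M independently trained classifiers."""
    probs = torch.stack([F.softmax(m(x), dim=-1) for m in models])  # (M, B, C)
    mean = probs.mean(dim=0)                                        # predictive dist.
    # Entropy of the averaged distribution serves as the uncertainty score.
    uncertainty = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, uncertainty
```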

Large-Scale Generative Data-Free Distillation

no code implementations · 10 Dec 2020 · Liangchen Luo, Mark Sandler, Zi Lin, Andrey Zhmoginov, Andrew Howard

Knowledge distillation is one of the most popular and effective techniques for knowledge transfer, model compression and semi-supervised learning.

Knowledge Distillation · Model Compression · +1
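
Data-free distillation still rests on the standard soft-label objective; here is a minimal sketch of that loss (KL divergence at temperature T), with the caveat that in the data-free setting the inputs would come from a trained generator rather than a real dataset. The temperature value and weighting are illustrative.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between temperature-softened teacher and student
    # distributions; the T*T factor keeps gradient magnitudes comparable
    # across temperature settings.
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```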

Pruning Redundant Mappings in Transformer Models via Spectral-Normalized Identity Prior

1 code implementation · Findings (EMNLP) 2020 · Zi Lin, Jeremiah Zhe Liu, Zi Yang, Nan Hua, Dan Roth

Traditional (unstructured) pruning methods for a Transformer model focus on regularizing the individual weights by penalizing them toward zero.
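
A hedged sketch of the traditional unstructured approach the paper contrasts with: an L1 term penalizes individual weights toward zero during training, after which the smallest-magnitude weights are masked out. The `lam` and `sparsity` settings are illustrative, not the paper's.

```python
import torch

def l1_penalty(model, lam=1e-5):
    # Added to the training loss to push individual weights toward zero.
    return lam * sum(p.abs().sum() for p in model.parameters())

@torch.no_grad()
def magnitude_prune(model, sparsity=0.5):
    # Zero out the smallest-magnitude fraction of each weight tensor.
    for p in model.parameters():
        k = int(p.numel() * sparsity)
        if k < 1:
            continue
        threshold = p.abs().flatten().kthvalue(k).values
        p.mul_((p.abs() > threshold).float())
```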

Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness

3 code implementations · NeurIPS 2020 · Jeremiah Zhe Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax-Weiss, Balaji Lakshminarayanan

Bayesian neural networks (BNNs) and deep ensembles are principled approaches to estimating the predictive uncertainty of a deep learning model.
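
One ingredient of the distance-awareness recipe, sketched under assumptions: spectral normalization bounds each hidden layer's Lipschitz constant so that distances in hidden space stay informative about distances in input space. The full method additionally replaces the dense output layer with a random-feature Gaussian process, omitted here; the `DistanceAwareEncoder` name and layer sizes are placeholders.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

class DistanceAwareEncoder(nn.Module):
    """Hidden layers with bounded Lipschitz constants via spectral norm."""
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Linear(d_in, d_hidden)), nn.ReLU(),
            spectral_norm(nn.Linear(d_hidden, d_hidden)), nn.ReLU(),
        )

    def forward(self, x):
        # Hidden-space distances remain informative about input-space
        # distances, which is what enables distance-aware uncertainty.
        return self.net(x)
```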

Fast Structured Decoding for Sequence Models

1 code implementation · NeurIPS 2019 · Zhiqing Sun, Zhuohan Li, Haoqing Wang, Zi Lin, Di He, Zhi-Hong Deng

However, these models assume that the decoding process of each token is conditionally independent of others.

Machine Translation · Translation
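
To make the independence assumption concrete, here is a sketch of plain non-autoregressive decoding: each position takes its own argmax in parallel, with no coupling between predicted tokens; a structured (CRF-based) decoder of the kind this paper proposes is what replaces this step. The function is an illustration, not the paper's code.

```python
import torch

@torch.no_grad()
def independent_decode(position_logits):
    """position_logits: (batch, seq_len, vocab), predicted in a single pass."""
    # One argmax per position, computed in parallel; a structured decoder
    # would instead couple the choices of adjacent positions.
    return position_logits.argmax(dim=-1)  # (batch, seq_len)
```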

Hint-Based Training for Non-Autoregressive Machine Translation

1 code implementation · IJCNLP 2019 · Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Li-Wei Wang, Tie-Yan Liu

Due to the unparallelizable nature of the autoregressive factorization, AutoRegressive Translation (ART) models have to generate tokens sequentially during decoding and thus suffer from high inference latency.

Machine Translation · Translation
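
A hedged sketch of a hint loss in this spirit: the non-autoregressive student is regularized toward intermediate states of the autoregressive teacher, alongside the usual translation loss. Matching hidden states with MSE is an illustrative simplification; the paper's hints also cover attention distributions, and the weighting `alpha` is assumed.

```python
import torch.nn.functional as F

def hint_loss(student_hidden, teacher_hidden, alpha=0.5):
    """Both tensors: (batch, seq_len, d_model) from corresponding layers."""
    # The teacher is frozen; only the student receives gradients.
    return alpha * F.mse_loss(student_hidden, teacher_hidden.detach())
```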

Parsing Meaning Representations: Is Easier Always Better?

no code implementations · WS 2019 · Zi Lin, Nianwen Xue

The parsing accuracy varies a great deal for different meaning representations.

A Comparative Analysis of Knowledge-Intensive and Data-Intensive Semantic Parsers

no code implementations · 4 Jul 2019 · Junjie Cao, Zi Lin, Weiwei Sun, Xiaojun Wan

We present a phenomenon-oriented comparative analysis of the two dominant approaches in task-independent semantic parsing: classic, knowledge-intensive and neural, data-intensive models.

Semantic Parsing

Implanting Rational Knowledge into Distributed Representation at Morpheme Level

no code implementations · 26 Nov 2018 · Zi Lin, Yang Liu

Previous research has paid little attention to creating unambiguous morpheme embeddings independent of the corpus, even though such information plays an important role in expressing the exact meanings of words in parataxis languages like Chinese.

Word Similarity

Semantic Role Labeling for Learner Chinese: the Importance of Syntactic Parsing and L2-L1 Parallel Data

1 code implementation · EMNLP 2018 · Zi Lin, Yuguang Duan, Yuan-Yuan Zhao, Weiwei Sun, Xiaojun Wan

This paper studies semantic parsing for interlanguage (L2), taking semantic role labeling (SRL) as a case task and learner Chinese as a case language.

Semantic Parsing · Semantic Role Labeling
