no code implementations • Findings (ACL) 2022 • Zi Lin, Jeremiah Zhe Liu, Jingbo Shang
Recent work in task-independent graph semantic parsing has shifted from grammar-based symbolic approaches to neural models, showing strong performance on different types of meaning representations.
no code implementations • CL (ACL) 2021 • Junjie Cao, Zi Lin, Weiwei Sun, Xiaojun Wan
In this work, we present a phenomenon-oriented comparative analysis of the two dominant approaches in English Resource Semantic (ERS) parsing: classic, knowledge-intensive and neural, data-intensive models.
no code implementations • 7 May 2024 • Yongqi Tong, Sizhe Wang, Dawei Li, Yifan Wang, Simeng Han, Zi Lin, Chengsong Huang, Jiaxin Huang, Jingbo Shang
Therefore, we present PuzzleBen, a weakly supervised benchmark that comprises 25,147 complex questions, answers, and human-generated rationales across various domains, such as brainteasers, puzzles, riddles, parajumbles, and critical reasoning tasks.
no code implementations • 26 Oct 2023 • Zi Lin, Zihan Wang, Yongqi Tong, Yangkun Wang, Yuxin Guo, Yujia Wang, Jingbo Shang
This benchmark contains the rich, nuanced phenomena that can be tricky for current toxicity detection models to identify, revealing a significant domain difference compared to social media content.
no code implementations • 18 Oct 2023 • Yongqi Tong, Yifan Wang, Dawei Li, Sizhe Wang, Zi Lin, Simeng Han, Jingbo Shang
Chain-of-Thought (CoT) prompting and its variants explore equipping large language models (LLMs) with high-level reasoning abilities by emulating human-like linear cognition and logic.
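As a rough illustration of the prompting pattern referred to above (not this paper's specific method), a CoT prompt prepends a worked, step-by-step exemplar to the question so the model imitates the reasoning; the `ask_llm` callable below is a hypothetical placeholder for whatever LLM API is actually used.

```python
# Minimal sketch of chain-of-thought (CoT) prompting: show the model a worked
# example whose answer spells out intermediate steps, then ask a new question.

COT_EXEMPLAR = (
    "Q: A cafe has 23 apples. It uses 20 and buys 6 more. How many apples now?\n"
    "A: It starts with 23 apples, uses 20, leaving 23 - 20 = 3. "
    "Buying 6 more gives 3 + 6 = 9. The answer is 9.\n"
)

def cot_prompt(question: str) -> str:
    """Prepend a reasoning exemplar so the model continues step by step."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA: Let's think step by step."

def answer(question: str, ask_llm) -> str:
    # ask_llm: Callable[[str], str] supplied by the caller (model-agnostic stand-in).
    return ask_llm(cot_prompt(question))

if __name__ == "__main__":
    print(cot_prompt("If a train travels 60 km in 1.5 hours, what is its speed?"))
```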
no code implementations • 7 Oct 2023 • Liangchen Luo, Zi Lin, Yinxiao Liu, Lei Shu, Yun Zhu, Jingbo Shang, Lei Meng
In the era of large language models (LLMs), this study explores the ability of LLMs to deliver accurate critiques across various tasks.
5 code implementations • 21 Sep 2023 • Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Tianle Li, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zhuohan Li, Zi Lin, Eric P. Xing, Joseph E. Gonzalez, Ion Stoica, Hao Zhang
Studying how people interact with large language models (LLMs) in real-world scenarios is increasingly important due to their widespread use in various applications.
no code implementations • 17 Aug 2023 • Yuguang Duan, Zi Lin, Weiwei Sun
The annotation procedure is guided by the Chinese PropBank specification, which was originally developed to cover first-language phenomena.
11 code implementations • NeurIPS 2023 • Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica
Evaluating large language model (LLM) based chat assistants is challenging due to their broad capabilities and the inadequacy of existing benchmarks in measuring human preferences.
Ranked #3 on Long-Context Understanding on Ada-LEval (TSort)
1 code implementation • 26 Jan 2023 • Zi Lin, Jeremiah Liu, Jingbo Shang
Pre-trained seq2seq models excel at graph semantic parsing with rich annotated data, but generalize worse to out-of-distribution (OOD) and long-tail examples.
2 code implementations • 1 May 2022 • Jeremiah Zhe Liu, Shreyas Padhy, Jie Ren, Zi Lin, Yeming Wen, Ghassen Jerfel, Zack Nado, Jasper Snoek, Dustin Tran, Balaji Lakshminarayanan
The most popular approaches to estimate predictive uncertainty in deep learning are methods that combine predictions from multiple neural networks, such as Bayesian neural networks (BNNs) and deep ensembles.
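For context, a minimal sketch of the deep-ensemble recipe mentioned above (illustrative only, not this paper's proposed method): average the predictive distributions of several independently initialized networks and read uncertainty off the averaged distribution. The random linear "members" below stand in for trained networks.

```python
# Toy deep ensemble: combine predictions from M models and use predictive
# entropy of the averaged distribution as an uncertainty signal.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Stand-in for M trained networks: each member is a random linear classifier.
M, n_features, n_classes = 5, 8, 3
members = [rng.normal(size=(n_features, n_classes)) for _ in range(M)]

def ensemble_predict(x):
    probs = np.stack([softmax(x @ W) for W in members])   # (M, n_classes)
    mean = probs.mean(axis=0)                              # predictive distribution
    entropy = -(mean * np.log(mean + 1e-12)).sum()         # total uncertainty
    return mean, entropy

x = rng.normal(size=n_features)
p, h = ensemble_predict(x)
print("ensemble prediction:", p, "predictive entropy:", h)
```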
1 code implementation • ACL (WOAH) 2021 • Ian D. Kivlichan, Zi Lin, Jeremiah Liu, Lucy Vasserman
Content moderation is often performed by a collaboration between humans and machine learning models.
no code implementations • 10 Dec 2020 • Liangchen Luo, Mark Sandler, Zi Lin, Andrey Zhmoginov, Andrew Howard
Knowledge distillation is one of the most popular and effective techniques for knowledge transfer, model compression and semi-supervised learning.
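As a reference point only, the standard soft-target distillation loss (temperature-scaled teacher probabilities combined with the hard-label cross-entropy) can be sketched as follows; the distillation variant studied in this paper may differ.

```python
# Classic knowledge-distillation objective:
#   alpha * CE(student, hard label) + (1 - alpha) * T^2 * KL(teacher_T || student_T)
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.5):
    p_student_T = softmax(student_logits, T)
    p_teacher_T = softmax(teacher_logits, T)
    soft = (p_teacher_T * (np.log(p_teacher_T + 1e-12)
                           - np.log(p_student_T + 1e-12))).sum()
    hard = -np.log(softmax(student_logits)[true_label] + 1e-12)
    return alpha * hard + (1 - alpha) * (T ** 2) * soft

print(kd_loss([2.0, 0.5, -1.0], [3.0, 1.0, -2.0], true_label=0))
```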
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Zi Lin, Jeremiah Zhe Liu, Zi Yang, Nan Hua, Dan Roth
Traditional (unstructured) pruning methods for a Transformer model focus on regularizing the individual weights by penalizing them toward zero.
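A minimal sketch of that unstructured baseline, assuming a plain L1 penalty followed by magnitude thresholding (function names, the sparsity level, and the penalty weight are illustrative, not taken from the paper):

```python
# Unstructured pruning baseline: shrink individual weights toward zero with an
# L1 penalty, then mask out the smallest-magnitude weights.
import numpy as np

def l1_penalty(weights, lam=1e-4):
    """Regularization term added to the task loss to push weights toward zero."""
    return lam * np.abs(weights).sum()

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(sparsity * weights.size)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

W = np.random.default_rng(0).normal(size=(4, 6))
W_pruned, mask = magnitude_prune(W, sparsity=0.5)
print("kept weights:", int(mask.sum()), "of", W.size, "| L1 penalty:", l1_penalty(W))
```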
4 code implementations • NeurIPS 2020 • Jeremiah Zhe Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax-Weiss, Balaji Lakshminarayanan
Bayesian neural networks (BNN) and deep ensembles are principled approaches to estimate the predictive uncertainty of a deep learning model.
1 code implementation • NeurIPS 2019 • Zhiqing Sun, Zhuohan Li, Haoqing Wang, Zi Lin, Di He, Zhi-Hong Deng
However, these models assume that the decoding process of each token is conditionally independent of others.
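To make that assumption concrete, here is a toy sketch of conditionally independent (non-autoregressive) decoding: every target position is filled by an independent argmax in a single parallel step. The logits are random stand-ins for a real decoder's outputs.

```python
# Non-autoregressive decoding under the conditional-independence assumption:
# all target tokens are emitted in one parallel step, with no dependence on
# previously emitted tokens.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<pad>", "the", "cat", "sat", "down"]
tgt_len = 4

# Stand-in for the decoder output: one logit vector per position, computed
# from the source side only.
logits = rng.normal(size=(tgt_len, len(vocab)))

tokens = [vocab[i] for i in logits.argmax(axis=-1)]  # single parallel step
print("non-autoregressive output:", " ".join(tokens))
```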
1 code implementation • IJCNLP 2019 • Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Li-Wei Wang, Tie-Yan Liu
Due to the unparallelizable nature of the autoregressive factorization, AutoRegressive Translation (ART) models have to generate tokens sequentially during decoding and thus suffer from high inference latency.
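For contrast, a toy sketch of the sequential bottleneck in autoregressive decoding: each new token requires a fresh decoder call conditioned on the growing prefix, so decoding time grows with output length. The `next_token_logits` function is a hypothetical placeholder for an ART decoder step.

```python
# Greedy autoregressive decoding: one model call per emitted token, which is
# why inference latency scales with the output length.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<bos>", "the", "cat", "sat", "<eos>"]

def next_token_logits(prefix):
    # Placeholder for a decoder step conditioned on all previous tokens
    # (the prefix is unused in this toy stand-in).
    return rng.normal(size=len(vocab))

def greedy_decode(max_len=10):
    prefix = ["<bos>"]
    for _ in range(max_len):                 # cannot be parallelized across positions
        token = vocab[int(np.argmax(next_token_logits(prefix)))]
        prefix.append(token)
        if token == "<eos>":
            break
    return prefix[1:]

print("autoregressive output:", " ".join(greedy_decode()))
```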
no code implementations • WS 2019 • Zi Lin, Nianwen Xue
The parsing accuracy varies a great deal for different meaning representations.
no code implementations • 4 Jul 2019 • Junjie Cao, Zi Lin, Weiwei Sun, Xiaojun Wan
We present a phenomenon-oriented comparative analysis of the two dominant approaches in task-independent semantic parsing: classic, knowledge-intensive and neural, data-intensive models.
no code implementations • 26 Nov 2018 • Zi Lin, Yang Liu
Previous work has paid little attention to creating unambiguous morpheme embeddings that are independent of the corpus, even though such information plays an important role in expressing the exact meanings of words in parataxis languages like Chinese.
1 code implementation • EMNLP 2018 • Zi Lin, Yuguang Duan, Yuan-Yuan Zhao, Weiwei Sun, Xiaojun Wan
This paper studies semantic parsing for interlanguage (L2), taking semantic role labeling (SRL) as a case task and learner Chinese as a case language.