no code implementations • EMNLP 2020 • Chengyue Jiang, Yinggong Zhao, Shanbo Chu, Libin Shen, Kewei Tu
On the other hand, symbolic rules such as regular expressions are interpretable, require no training, and often achieve decent accuracy; however, rules cannot benefit from labeled data when it is available and hence underperform neural networks in rich-resource scenarios.
no code implementations • EMNLP 2021 • Chengyue Jiang, Zijian Jin, Kewei Tu
Neural models and symbolic rules such as regular expressions have their respective merits and weaknesses.
1 code implementation • 8 Apr 2024 • Wenyang Hui, Chengyue Jiang, Yan Wang, Kewei Tu
It uses a strong LLM to summarize guidelines from previous tree search experiences to enhance the ability of a weak LLM.
1 code implementation • 2 Apr 2024 • Zhuo Chen, Chengyue Jiang, Kewei Tu
In this paper, we propose a framework of utilizing interpretation methods and gold rationales to enhance models.
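The snippet above only names the idea of using interpretation methods together with gold rationales. One common way to realize it is an auxiliary loss that pushes a model's token attributions toward the annotated rationale; the sketch below (function name, MSE choice, and normalization are illustrative assumptions, not the paper's method) shows such a loss in plain numpy:

```python
import numpy as np

def rationale_alignment_loss(attributions, rationale_mask):
    """Auxiliary loss encouraging a model's per-token attributions to
    match a gold rationale (a binary mask over tokens).

    attributions:   (T,) nonnegative importance score per token
    rationale_mask: (T,) 1.0 for tokens inside the gold rationale, else 0.0
    """
    a = attributions / attributions.sum()    # normalize to a distribution
    r = rationale_mask / rationale_mask.sum()
    return float(np.square(a - r).sum())     # simple MSE between distributions
```

In practice this term would be added, with a small weight, to the task loss during training.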
1 code implementation • 12 Sep 2023 • Weiqi Wu, Chengyue Jiang, Yong Jiang, Pengjun Xie, Kewei Tu
In this paper, we focus on probing whether PLMs store ontological knowledge and have a semantic understanding of the knowledge rather than rote memorization of the surface form.
1 code implementation • 21 Aug 2023 • Tianyu Yu, Chengyue Jiang, Chao Lou, Shen Huang, Xiaobin Wang, Wei Liu, Jiong Cai, Yangning Li, Yinghui Li, Kewei Tu, Hai-Tao Zheng, Ningyu Zhang, Pengjun Xie, Fei Huang, Yong Jiang
However, LLMs are sometimes too footloose for natural language understanding (NLU) tasks, which typically have restricted input and output formats.
1 code implementation • 14 Aug 2023 • Yangning Li, Shirong Ma, Xiaobin Wang, Shen Huang, Chengyue Jiang, Hai-Tao Zheng, Pengjun Xie, Fei Huang, Yong Jiang
EcomInstruct scales up data size and task diversity by constructing atomic tasks from basic E-commerce data types such as product information and user reviews.
no code implementations • 1 Jul 2023 • Jiong Cai, Yong Jiang, Yue Zhang, Chengyue Jiang, Ke Yu, Jianhui Ji, Rong Xiao, Haihong Tang, Tao Wang, Zhongqiang Huang, Pengjun Xie, Fei Huang, Kewei Tu
We also show that pretraining the QE module with auto-generated QE data from user logs can further improve the overall performance.
1 code implementation • 8 Feb 2023 • Chengyue Jiang, Yong Jiang, Weiqi Wu, Yuting Zheng, Pengjun Xie, Kewei Tu
The subject and object noun phrases and the relations in an open KG suffer from severe redundancy and ambiguity and need to be canonicalized.
1 code implementation • 18 Dec 2022 • Chengyue Jiang, Wenyang Hui, Yong Jiang, Xiaobin Wang, Pengjun Xie, Kewei Tu
We also find that MCCE is very effective in fine-grained (130 types) and coarse-grained (9 types) entity typing.
Ranked #2 on Entity Typing on Open Entity
1 code implementation • 3 Dec 2022 • Chengyue Jiang, Yong Jiang, Weiqi Wu, Pengjun Xie, Kewei Tu
We use mean-field variational inference for efficient type inference on very large type sets and unfold it as a neural network module to enable end-to-end training.
Ranked #3 on Entity Typing on Open Entity
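The entry above describes unfolding mean-field variational inference into a neural network module for multi-label type inference. A minimal numpy sketch of that idea (the function name, the sigmoid-based update, and the fixed step count are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def unrolled_mean_field(unary, pairwise, steps=3):
    """Unrolled mean-field updates for multi-label type inference.

    unary:    (T,) unary score for each of T candidate types
    pairwise: (T, T) learned type-type compatibility scores
    Each step refines q, the approximate marginal probability of each
    type, using the current marginals of all the other types.
    """
    q = sigmoid(unary)                      # initialize from unary scores
    for _ in range(steps):                  # fixed number of unrolled steps
        q = sigmoid(unary + pairwise @ q)   # mean-field update
    return q
```

Because the loop has a fixed number of differentiable steps, the whole update can be treated as a network module and trained end-to-end, which is what makes inference over very large type sets tractable.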
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Chengyue Jiang, Zhonglin Nian, Kaihao Guo, Shanbo Chu, Yinggong Zhao, Libin Shen, Kewei Tu
Numeral embeddings represented in this manner can be plugged into existing word embedding learning approaches such as skip-gram for training.
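One way to obtain numeral embeddings that can be plugged into skip-gram, as the snippet describes, is to represent a numeral as a weighted combination of a small set of prototype embeddings. The sketch below is a hypothetical illustration (the RBF weighting, the sign-preserving log squash, and all names are assumptions, not the paper's exact scheme):

```python
import numpy as np

def numeral_embedding(value, prototypes, proto_vecs, sigma=1.0):
    """Embed a numeral as a weighted average of prototype embeddings.

    value:      scalar numeral to embed
    prototypes: (P,) representative numeral values
    proto_vecs: (P, D) trainable embedding per prototype
    Weights come from an RBF kernel over distances in a sign-preserving
    log scale, so numerically close numerals get similar embeddings.
    """
    def squash(x):                       # log-scale transform, keeps the sign
        return np.sign(x) * np.log1p(np.abs(x))
    d = squash(value) - squash(prototypes)
    w = np.exp(-(d ** 2) / (2 * sigma ** 2))
    w = w / w.sum()                      # normalize the kernel weights
    return w @ proto_vecs                # convex combination of prototypes
```

Since the embedding is a differentiable function of the prototype vectors, those vectors can be updated by the same skip-gram objective as ordinary word embeddings.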
1 code implementation • CONLL 2019 • Xinyu Wang, Yixian Liu, Zixia Jia, Chengyue Jiang, Kewei Tu
This paper presents the system used in our submission to the CoNLL 2019 shared task: Cross-Framework Meaning Representation Parsing.