Search Results for author: Zonglin Yang

Found 8 papers, 4 papers with code

Adaptive Reconvergence-driven AIG Rewriting via Strategy Learning

no code implementations • 22 Dec 2023 • Liwei Ni, Zonglin Yang, Jiaxi Zhang, Junfeng Liu, Huawei Li, Biwei Xie, Xinquan Li

Rewriting is a common procedure in logic synthesis aimed at improving the performance, power, and area (PPA) of circuits.

A Survey on Semantic Processing Techniques

no code implementations • 22 Oct 2023 • Rui Mao, Kai He, Xulang Zhang, Guanyi Chen, Jinjie Ni, Zonglin Yang, Erik Cambria

We connect the surveyed tasks with downstream applications, as this may inspire future scholars to fuse these low-level semantic processing tasks with high-level natural language processing tasks.

Named Entity Recognition +1

Large Language Models for Automated Open-domain Scientific Hypotheses Discovery

1 code implementation • 6 Sep 2023 • Zonglin Yang, Xinya Du, Junxian Li, Jie Zheng, Soujanya Poria, Erik Cambria

Hypothetical induction is recognized as the main reasoning type when scientists make observations about the world and try to propose hypotheses to explain those observations.


Finding the Pillars of Strength for Multi-Head Attention

2 code implementations • 22 May 2023 • Jinjie Ni, Rui Mao, Zonglin Yang, Han Lei, Erik Cambria

Specifically, the heads of MHA were originally designed to attend to information from different representation subspaces, whereas prior studies found that some attention heads likely learn similar features and can be pruned without harming performance.

feature selection

Logical Reasoning over Natural Language as Knowledge Representation: A Survey

1 code implementation • 21 Mar 2023 • Zonglin Yang, Xinya Du, Rui Mao, Jinjie Ni, Erik Cambria

This paper provides a comprehensive overview of a new paradigm of logical reasoning, which uses natural language as knowledge representation and pretrained language models as reasoners. It covers the philosophical definition and categorization of logical reasoning, the advantages of the new paradigm, benchmarks and methods, challenges of the new paradigm, possible future directions, and relations to related NLP fields.

Logical Reasoning

Language Models as Inductive Reasoners

1 code implementation • 21 Dec 2022 • Zonglin Yang, Li Dong, Xinya Du, Hao Cheng, Erik Cambria, Xiaodong Liu, Jianfeng Gao, Furu Wei

To this end, we propose a new paradigm (task) for inductive reasoning, which is to induce natural language rules from natural language facts, and create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language.

Philosophy