no code implementations • 15 Apr 2024 • Xiongye Xiao, Gengshuo Liu, Gaurav Gupta, Defu Cao, Shixuan Li, Yaxing Li, Tianqing Fang, Mingxi Cheng, Paul Bogdan
Integrating and processing information from various sources or modalities are critical for obtaining a comprehensive and accurate perception of the real world in autonomous systems and cyber-physical systems.
no code implementations • 12 Mar 2024 • Tianqing Fang, Zeming Chen, Yangqiu Song, Antoine Bosselut
Event commonsense reasoning requires the ability to reason about the relationship between events, as well as infer implicit context underlying that relationship.
1 code implementation • 16 Feb 2024 • Zhaowei Wang, Wei Fan, Qing Zong, Hongming Zhang, Sehyun Choi, Tianqing Fang, Xin Liu, Yangqiu Song, Ginny Y. Wong, Simon See
Abstraction is a crucial ability in human intelligence that can also benefit various NLP tasks.
no code implementations • 15 Feb 2024 • Ying Su, Tianqing Fang, Huiru Xiao, Weiqi Wang, Yangqiu Song, Tong Zhang, Lei Chen
In this paper, we propose to adopt textual entailment to find implicit entailment relations between CSKG nodes, to effectively densify the subgraph connecting nodes within the same conceptual class, which indicates a similar level of plausibility.
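The densification idea above can be sketched in a few lines: score candidate node pairs with an entailment model and add edges where entailment holds. This is an illustrative stand-in, not the paper's implementation; in particular, `entails` below uses a toy token-containment heuristic in place of a real NLI model.

```python
# Hypothetical sketch: densify a commonsense knowledge subgraph by adding
# implicit entailment edges between nodes, as described above.

def entails(premise, hypothesis):
    """Toy stand-in for an NLI model: treat token containment as entailment."""
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return h <= p  # every hypothesis word appears in the premise

def densify(nodes, edges):
    """Add an edge (a, b) for every unconnected node pair where a entails b."""
    new_edges = set(edges)
    for a in nodes:
        for b in nodes:
            if a != b and (a, b) not in new_edges and entails(a, b):
                new_edges.add((a, b))
    return new_edges

nodes = ["person eats hot soup", "person eats soup"]
print(densify(nodes, set()))
# adds the edge ("person eats hot soup", "person eats soup")
```

A real system would replace `entails` with a trained entailment classifier and restrict the pairwise loop to nodes within the same conceptual class, as the abstract suggests.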
1 code implementation • 25 Jan 2024 • Quyet V. Do, Tianqing Fang, Shizhe Diao, Zhaowei Wang, Yangqiu Song
When considering a new knowledge instance, ConstraintChecker employs a rule-based module to produce a list of constraints, then it uses a zero-shot learning module to check whether this knowledge instance satisfies all constraints.
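The two-stage pipeline described above can be sketched as follows. Both the constraint rules and the zero-shot checker are hypothetical stand-ins (the real checker would be an LLM prompted per constraint), used only to illustrate the control flow.

```python
# Illustrative sketch of a constraint-checking pipeline: a rule-based module
# produces constraints for a candidate knowledge triple, and a (stubbed)
# zero-shot module verifies each one before the triple is accepted.

def rule_based_constraints(triple):
    """Produce a list of (description, predicate) constraints for a triple."""
    head, relation, tail = triple
    constraints = [("head and tail differ", lambda h=head, t=tail: h != t)]
    if relation == "xWant":
        # assumed rule: the tail of an xWant assertion reads as an intention
        constraints.append(
            ("tail is an infinitive phrase", lambda t=tail: t.split()[0] == "to")
        )
    return constraints

def zero_shot_check(constraint):
    """Stub for the zero-shot module (e.g. an LLM prompt per constraint)."""
    description, predicate = constraint
    return predicate()  # stand-in: evaluate the rule directly

def accept(triple):
    """Accept the knowledge instance only if all constraints are satisfied."""
    return all(zero_shot_check(c) for c in rule_based_constraints(triple))

print(accept(("PersonX goes jogging", "xWant", "to take a shower")))  # True
print(accept(("PersonX goes jogging", "xWant", "take a shower")))     # False
```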
1 code implementation • 14 Jan 2024 • Weiqi Wang, Tianqing Fang, Chunyang Li, Haochen Shi, Wenxuan Ding, Baixuan Xu, Zhaowei Wang, Jiaxin Bai, Xin Liu, Jiayang Cheng, Chunkit Chan, Yangqiu Song
The sequential process of conceptualization and instantiation is essential to generalizable commonsense reasoning as it allows the application of existing knowledge to unfamiliar scenarios.
1 code implementation • 15 Nov 2023 • Zhaowei Wang, Haochen Shi, Weiqi Wang, Tianqing Fang, Hongming Zhang, Sehyun Choi, Xin Liu, Yangqiu Song
Cognitive research indicates that abstraction ability is essential in human intelligence, yet it remains under-explored in language models.
1 code implementation • 19 Oct 2023 • Cheng Jiayang, Lin Qiu, Tsz Ho Chan, Tianqing Fang, Weiqi Wang, Chunkit Chan, Dongyu Ru, Qipeng Guo, Hongming Zhang, Yangqiu Song, Yue Zhang, Zheng Zhang
Analogy-making between narratives is crucial for human reasoning.
1 code implementation • 17 Oct 2023 • Haochen Shi, Weiqi Wang, Tianqing Fang, Baixuan Xu, Wenxuan Ding, Xin Liu, Yangqiu Song
Zero-shot commonsense Question-Answering (QA) requires models to reason about general situations beyond specific benchmarks.
1 code implementation • 13 Oct 2023 • Sehyun Choi, Tianqing Fang, Zhaowei Wang, Yangqiu Song
Large Language Models (LLMs) have demonstrated remarkable human-level natural language generation capabilities.
no code implementations • 27 Sep 2023 • Xiongye Xiao, Gengshuo Liu, Gaurav Gupta, Defu Cao, Shixuan Li, Yaxing Li, Tianqing Fang, Mingxi Cheng, Paul Bogdan
Integrating and processing information from various sources or modalities are critical for obtaining a comprehensive and accurate perception of the real world.
no code implementations • 23 Jul 2023 • Xingbo Wang, Renfei Huang, Zhihua Jin, Tianqing Fang, Huamin Qu
Specifically, we extract relevant commonsense knowledge in inputs as references to align model behavior with human knowledge.
no code implementations • 24 May 2023 • Tianqing Fang, Zhaowei Wang, Wenxuan Zhou, Hongming Zhang, Yangqiu Song, Muhao Chen
However, knowledge conflicts arise when there is a mismatch between the actual temporal relations of events in the context and the prior knowledge or biases learned by the model.
1 code implementation • 24 May 2023 • Weiqi Wang, Tianqing Fang, Wenxuan Ding, Baixuan Xu, Xin Liu, Yangqiu Song, Antoine Bosselut
The task of zero-shot commonsense question answering evaluates models on their capacity to reason about general scenarios beyond those presented in specific datasets.
1 code implementation • 9 May 2023 • Zhaowei Wang, Quyet V. Do, Hongming Zhang, Jiayao Zhang, Weiqi Wang, Tianqing Fang, Yangqiu Song, Ginny Y. Wong, Simon See
This paper proposes a new task to detect commonsense causation between two events in an event sequence (i.e., context), called contextualized commonsense causal reasoning.
2 code implementations • 8 May 2023 • Weiqi Wang, Tianqing Fang, Baixuan Xu, Chun Yi Louis Bo, Yangqiu Song, Lei Chen
Commonsense reasoning, aiming at endowing machines with a human-like ability to make situational presumptions, is extremely challenging to generalize.
no code implementations • 28 Apr 2023 • Chunkit Chan, Jiayang Cheng, Weiqi Wang, Yuxin Jiang, Tianqing Fang, Xin Liu, Yangqiu Song
This paper aims to quantitatively evaluate the performance of ChatGPT, an interactive large language model, on inter-sentential relations such as temporal relations, causal relations, and discourse relations.
1 code implementation • 20 Apr 2023 • Tianqing Fang, Quyet V. Do, Sehyun Choi, Weiqi Wang, Yangqiu Song
Populating Commonsense Knowledge Bases (CSKB) is an important yet hard task in NLP, as it requires handling knowledge from external sources containing unseen events and entities.
no code implementations • 20 Dec 2022 • Tianqing Fang, Wenxuan Zhou, Fangyu Liu, Hongming Zhang, Yangqiu Song, Muhao Chen
However, data augmentation may introduce noisy data that impairs training.
1 code implementation • 14 Oct 2022 • Ying Su, ZiHao Wang, Tianqing Fang, Hongming Zhang, Yangqiu Song, Tong Zhang
Commonsense reasoning tasks such as commonsense knowledge graph completion and commonsense question answering require powerful representation learning.
1 code implementation • 14 Oct 2022 • Tianqing Fang, Quyet V. Do, Hongming Zhang, Yangqiu Song, Ginny Y. Wong, Simon See
We propose PseudoReasoner, a semi-supervised learning framework for CSKB population that uses a teacher model pre-trained on CSKBs to provide pseudo labels on the unlabeled candidate dataset for a student model to learn from.
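The teacher-student setup above can be sketched generically: the teacher scores unlabeled candidates, and only confident predictions become pseudo labels for the student. This is a minimal sketch of semi-supervised pseudo-labeling, not the paper's actual models; `teacher_score` is a toy stand-in for a model pre-trained on CSKBs.

```python
# Minimal sketch of pseudo-labeling for CSKB population: filter unlabeled
# candidate triples by teacher confidence before training a student on them.

def teacher_score(candidate):
    """Toy stand-in for a pre-trained teacher: plausibility in [0, 1]."""
    head, relation, tail = candidate
    return 0.9 if relation in {"xWant", "xEffect"} else 0.5  # assumption

def pseudo_label(unlabeled, threshold=0.8):
    """Keep only confidently scored candidates as (candidate, label) pairs."""
    labeled = []
    for cand in unlabeled:
        score = teacher_score(cand)
        if score >= threshold:
            labeled.append((cand, 1))   # confident positive
        elif score <= 1.0 - threshold:
            labeled.append((cand, 0))   # confident negative
        # otherwise: discard as too uncertain for the student to learn from
    return labeled

candidates = [
    ("PersonX is hungry", "xWant", "to eat"),
    ("PersonX is hungry", "isFilledBy", "to eat"),
]
print(pseudo_label(candidates))
# only the first candidate survives the confidence filter, labeled positive
```

The filtering step is the key design choice: discarding mid-confidence candidates trades coverage for a cleaner pseudo-labeled training set for the student.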
1 code implementation • 13 Oct 2022 • Zhaowei Wang, Hongming Zhang, Tianqing Fang, Yangqiu Song, Ginny Y. Wong, Simon See
In this paper, we propose a new task of sub-event generation for an unseen process, to evaluate whether models understand the coherence of sub-event actions and objects.
1 code implementation • 3 Jun 2022 • Mutian He, Tianqing Fang, Weiqi Wang, Yangqiu Song
Conceptualization, or viewing entities and situations as instances of abstract concepts in mind and making inferences based on that, is a vital component in human intelligence for commonsense reasoning.
1 code implementation • Findings (NAACL) 2022 • Ziqian Zeng, Weimin Ni, Tianqing Fang, Xiang Li, Xinran Zhao, Yangqiu Song
In this paper, we propose to query a masked language model with cloze style prompts to obtain supervision signals.
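The prompting idea above can be illustrated with a small sketch. The template, label words, and task (sentiment) are all assumptions for illustration, and `mlm_fill` is a keyword-based stand-in so the example stays self-contained; a real system would fill the mask with an actual masked language model.

```python
# Hedged sketch: derive weak supervision signals from cloze-style prompts.
# A masked LM's preferred fill for [MASK] is mapped to a class label.

TEMPLATE = "{text} Overall it was [MASK]."
LABEL_WORDS = {"great": "positive", "terrible": "negative"}  # assumed verbalizer

def mlm_fill(prompt):
    """Stand-in for a masked LM: a keyword heuristic picks the label word."""
    return "great" if "love" in prompt or "good" in prompt else "terrible"

def weak_label(text):
    """Turn the mask prediction into a supervision signal for training."""
    prompt = TEMPLATE.format(text=text)
    return LABEL_WORDS[mlm_fill(prompt)]

print(weak_label("I love this movie."))   # positive
print(weak_label("Waste of two hours."))  # negative
```

The resulting weak labels can then supervise a downstream classifier, with the verbalizer mapping (`LABEL_WORDS`) chosen per task.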
2 code implementations • EMNLP 2021 • Tianqing Fang, Weiqi Wang, Sehyun Choi, Shibo Hao, Hongming Zhang, Yangqiu Song, Bin He
Experimental results show that generalizing commonsense reasoning on unseen assertions is inherently a hard task.
1 code implementation • ACL 2021 • Nedjma Ousidhoum, Xinran Zhao, Tianqing Fang, Yangqiu Song, Dit-yan Yeung
Large pre-trained language models (PTLMs) have been shown to carry biases towards different social groups, leading to the reproduction of stereotypical and toxic content by major NLP systems.
1 code implementation • AKBC 2021 • Tianqing Fang, Haojie Pan, Hongming Zhang, Yangqiu Song, Kun Xu, Dong Yu
To evaluate the inference capability of different methods, we also propose a new evaluation metric based on CODC.
1 code implementation • 5 Apr 2021 • Hongming Zhang, Xin Liu, Haojie Pan, Haowen Ke, Jiefu Ou, Tianqing Fang, Yangqiu Song
After conceptualization with Probase, a selectional-preference-based concept-instance relational knowledge base, our concept graph contains 15 million conceptualized eventualities and 224 million edges between them.
1 code implementation • 1 Jan 2021 • Tianqing Fang, Hongming Zhang, Weiqi Wang, Yangqiu Song, Bin He
On the other hand, generation models have the potential to automatically generate more knowledge.