no code implementations • 10 Apr 2024 • Li Zhou, Taelin Karidi, Nicolas Garneau, Yong Cao, Wanlong Liu, Wenyu Chen, Daniel Hershcovich
Recent studies have highlighted the presence of cultural biases in Large Language Models (LLMs), yet often lack a robust methodology to dissect these phenomena comprehensively.
no code implementations • 3 Jan 2024 • Li Zhou, Wenyu Chen, Yong Cao, Dingyi Zeng, Wanlong Liu, Hong Qu
While Transformer-based pre-trained language models (PLMs) and their variants exhibit strong semantic representation capabilities, how much information gain the additional components of PLMs actually contribute remains an open question in this field.
1 code implementation • Association for Computational Linguistics 2023 • Wanlong Liu, Shaohuan Cheng, Dingyi Zeng, Hong Qu
Document-level event argument extraction poses new challenges of long input and cross-sentence inference compared to its sentence-level counterpart.
Ranked #1 on Event Argument Extraction on WikiEvents (F1 metric)
no code implementations • 8 Oct 2023 • Wanlong Liu, Dingyi Zeng, Li Zhou, Yichen Xiao, Weishan Kong, Malu Zhang, Shaohuan Cheng, Hongyang Zhao, Wenyu Chen
Document-level event argument extraction is a crucial yet challenging task within the field of information extraction.
no code implementations • International Joint Conference on Neural Networks 2022 • Wanlong Liu
However, previous transformer-based methods neglect structural information between entities, while graph-based methods fail to extract structural information effectively because they separate the encoding stage from the structure-reasoning stage.
Ranked #8 on Relation Extraction on DocRED
no code implementations • 15 Oct 2021 • Li Zhou, Wenyu Chen, Dingyi Zeng, Shaohuan Cheng, Wanlong Liu, Malu Zhang, Hong Qu
To address these drawbacks, we present a novel message-passing paradigm built on three properties: multi-step message sources, node-specific message outputs, and multi-space message interactions.
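The abstract does not give implementation details, but the first two properties can be sketched as follows: messages are aggregated from multi-hop (multi-step) sources rather than only 1-hop neighbours, and each node applies its own weights over those sources. All names, the normalisation scheme, and the uniform placeholder weights below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def multi_step_message_passing(A, X, steps=3):
    """Hedged sketch of multi-step message passing.

    A: (n, n) adjacency matrix; X: (n, d) node features.
    Collects features propagated over 0..steps hops, then combines
    them with node-specific weights (uniform here as a placeholder).
    """
    n = A.shape[0]
    # Row-normalise adjacency so each hop averages neighbour features.
    deg = A.sum(axis=1, keepdims=True)
    A_hat = A / np.maximum(deg, 1.0)

    hops = [X]                      # 0-hop: the node's own features
    H = X
    for _ in range(steps):
        H = A_hat @ H               # propagate one more hop
        hops.append(H)

    # Node-specific message output: each node weights the multi-step
    # sources individually; a learned weight matrix would replace this.
    W = np.full((n, steps + 1), 1.0 / (steps + 1))
    out = sum(W[:, k:k + 1] * hops[k] for k in range(steps + 1))
    return out
```

In a trained model the per-node weights `W` would be learned parameters, and multi-space interaction would mix several such aggregations in parallel feature spaces; this sketch only shows the aggregation skeleton.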