1 code implementation • 18 Feb 2025 • Kaiyang Wan, Honglin Mu, Rui Hao, Haoran Luo, Tianle Gu, Xiuying Chen
Like humans, Large Language Models (LLMs) struggle to generate high-quality long-form text that adheres to strict requirements in a single pass.
1 code implementation • 8 Oct 2023 • Haoran Luo, Haihong E, Yuhao Yang, Tianyu Yao, Yikai Guo, Zichen Tang, Wentai Zhang, Kaiyang Wan, Shiyao Peng, Meina Song, Wei Lin, Yifan Zhu, Luu Anh Tuan
However, the construction of NKGs remains coarse-grained: it is typically confined to a single schema and ignores the order and variable arity of entities (the two common schemas are sketched after this entry).
Event-based N-ary Relation Extraction
Hypergraph-based N-ary Relation Extraction
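The following is an illustrative sketch (not taken from the paper, with invented entities and role names) of the two schemas named in the task tags above, showing how argument order and variable arity appear in an n-ary fact:

```python
# Illustrative only: two common schemas for the same n-ary fact,
# showing variable arity and ordered arguments. Entity and role
# names are invented for the example.

# Event-based schema: one event with role-labelled arguments.
event_fact = {
    "trigger": "educated_at",
    "roles": {  # role -> entity; the set of roles differs per fact
        "person": "Marie Curie",
        "institution": "University of Paris",
        "degree": "PhD",  # optional role: arity is variable
    },
}

# Hypergraph-based schema: a main triple plus ordered qualifier
# pairs, where qualifier order may carry meaning.
hyper_relational_fact = (
    ("Marie Curie", "educated_at", "University of Paris"),  # main triple
    [("degree", "PhD"), ("end_year", "1903")],              # qualifiers
)

print(event_fact["roles"]["degree"])  # -> PhD
print(hyper_relational_fact[1][0])    # -> ('degree', 'PhD')
```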
1 code implementation • ACL 2023 • Haoran Luo, Haihong E, Yuhao Yang, Yikai Guo, Mingzhi Sun, Tianyu Yao, Zichen Tang, Kaiyang Wan, Meina Song, Wei Lin
The global-level attention models the graphical structure of the HKG using hypergraph dual-attention layers, while the local-level attention learns the sequential structure inside H-Facts via heterogeneous self-attention layers (a minimal sketch of this two-level design follows below).
Ranked #1 on Link Prediction on WikiPeople
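A minimal PyTorch sketch of the two-level attention idea described above; the class names, pooling step, and shapes are assumptions for illustration, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class LocalFactAttention(nn.Module):
    """Self-attention over the tokens of one hyper-relational fact
    (main triple + qualifiers), modelling its sequential structure."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, fact_tokens: torch.Tensor) -> torch.Tensor:
        # fact_tokens: (num_facts, fact_len, dim)
        out, _ = self.attn(fact_tokens, fact_tokens, fact_tokens)
        return out

class GlobalHypergraphAttention(nn.Module):
    """Dual attention between entity nodes and fact (hyperedge)
    embeddings; a stand-in for hypergraph dual-attention layers."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.node_to_edge = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.edge_to_node = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, nodes: torch.Tensor, edges: torch.Tensor):
        # nodes: (1, num_entities, dim); edges: (1, num_facts, dim)
        edges, _ = self.node_to_edge(edges, nodes, nodes)  # entities -> facts
        nodes, _ = self.edge_to_node(nodes, edges, edges)  # facts -> entities
        return nodes, edges

dim = 64
local = LocalFactAttention(dim)
global_attn = GlobalHypergraphAttention(dim)
facts = torch.randn(8, 5, dim)   # 8 facts, 5 tokens each
nodes = torch.randn(1, 20, dim)  # 20 entity embeddings
fact_emb = local(facts).mean(dim=1).unsqueeze(0)  # pool to (1, 8, dim)
nodes, fact_emb = global_attn(nodes, fact_emb)
```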
1 code implementation • AAAI 2023 • Haoran Luo, Haihong E, Yuhao Yang, Gengxian Zhou, Yikai Guo, Tianyu Yao, Zichen Tang, Xueyuan Lin, Kaiyang Wan
Complex query answering (CQA) is an essential task for multi-hop and logical reasoning on knowledge graphs (KGs); a toy example of such a query is sketched below.
Ranked #1 on Complex Query Answering on WD50K-QE
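To make the task concrete, here is a toy multi-hop existential query answered by plain set projection over a hand-made KG; the entities, relations, and helper function are invented for illustration and have nothing to do with the paper's actual model:

```python
# Toy query: ?v : exists u . works_at(Alice, u) AND located_in(u, v)
kg = {
    ("Alice", "works_at"): {"MILA", "UdeM"},
    ("MILA", "located_in"): {"Montreal"},
    ("UdeM", "located_in"): {"Montreal"},
}

def hop(entities: set, relation: str) -> set:
    """Project a set of entities through one relation (existential hop)."""
    out = set()
    for e in entities:
        out |= kg.get((e, relation), set())
    return out

answers = hop(hop({"Alice"}, "works_at"), "located_in")
print(answers)  # -> {'Montreal'}
```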
1 code implementation • AAAI 2023 • Haoran Luo, Haihong E, Ling Tan, Gengxian Zhou, Tianyu Yao, Kaiyang Wan
To overcome this limitation, we propose a dual-view hyper-relational KG structure (DH-KG) that contains a hyper-relational instance view for entities and a hyper-relational ontology view for concepts abstracted hierarchically from those entities (an illustrative sketch follows below).
Ranked #1 on Link Prediction on DH-KGs on JW44K-6K
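An illustrative sketch of the dual-view structure described above, with both views holding hyper-relational facts and cross-view links tying entities to their abstracting concepts; all entity, concept, and relation names here are invented, not drawn from the JW44K-6K data:

```python
# Instance view: hyper-relational facts over concrete entities.
instance_view = [
    # (head, relation, tail, qualifiers)
    ("Albert_Einstein", "educated_at", "ETH_Zurich", [("degree", "BSc")]),
]

# Ontology view: hyper-relational facts over hierarchical concepts.
ontology_view = [
    ("Physicist", "subclass_of", "Scientist", []),
    ("University", "subclass_of", "Educational_Institution", []),
]

# Cross-view links: each entity maps to the concept that abstracts it.
cross_view_links = {
    "Albert_Einstein": "Physicist",
    "ETH_Zurich": "University",
}
```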