no code implementations • 22 Apr 2024 • Zhengwei Tao, Ting-En Lin, Xiancai Chen, Hangyu Li, Yuchuan Wu, Yongbin Li, Zhi Jin, Fei Huang, Dacheng Tao, Jingren Zhou
Large language models (LLMs) have advanced significantly and are now applied across diverse fields and intelligent-agent applications.
no code implementations • 18 Apr 2024 • Zhengwei Tao, Xiancai Chen, Zhi Jin, Xiaoying Bai, Haiyan Zhao, Yiwei Lou
We conduct extensive experiments on event reasoning tasks across several datasets.
1 code implementation • 16 Apr 2024 • Zhengwei Tao, Zhi Jin, Junqiang Huang, Xiancai Chen, Xiaoying Bai, Haiyan Zhao, Yifan Zhang, Chongyang Tao
Finally, we observe that models trained in this way still struggle to fully comprehend event evolution.
no code implementations • 11 Apr 2024 • Lei Sun, Zhengwei Tao, Youdi Li, Hiroshi Arakawa
However, existing methodologies that integrate LLMs and KGs often guide the task-solving process solely by the LLM's analysis of the question, overlooking the rich cognitive potential of the vast knowledge encapsulated in KGs.
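The LLM-plus-KG pattern described above can be illustrated with a minimal toy sketch (every name, triple, and function here is hypothetical for illustration, not the paper's actual method): retrieve KG triples whose entities appear in the question and inline them as context for the model's prompt.

```python
# Toy sketch of KG-augmented prompting (hypothetical, not the paper's method):
# pull triples whose head entity is mentioned in the question and serialize
# them into a context block that would precede the question in an LLM prompt.

# A tiny knowledge graph as (head, relation, tail) triples.
KG = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Paris", "population", "2.1 million"),
]

def retrieve_triples(question: str, kg=KG):
    """Return triples whose head entity occurs in the question (case-insensitive)."""
    q = question.lower()
    return [t for t in kg if t[0].lower() in q]

def build_prompt(question: str) -> str:
    """Serialize retrieved triples into a fact block followed by the question."""
    facts = "\n".join(f"{h} --{r}--> {t}" for h, r, t in retrieve_triples(question))
    return f"Known facts:\n{facts}\n\nQuestion: {question}"

print(build_prompt("What country is Paris the capital of?"))
```

Real systems would replace the substring match with entity linking and multi-hop graph traversal; this sketch only shows where KG knowledge enters the prompt.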
no code implementations • 31 Oct 2023 • Yongqiang Zhao, Zhenyu Li, Zhi Jin, Feng Zhang, Haiyan Zhao, Chengfeng Dou, Zhengwei Tao, Xinhai Xu, Donghong Liu
The Multi-Modal Large Language Model (MLLM) is an extension of the Large Language Model (LLM) equipped with the capability to receive and reason over multi-modal data.
no code implementations • 24 May 2023 • Zhengwei Tao, Zhi Jin, Xiaoying Bai, Haiyan Zhao, Yanlin Feng, Jia Li, Wenpeng Hu
In this paper, we propose an overarching framework for event semantic processing, encompassing understanding, reasoning, and prediction, along with their fine-grained aspects.
no code implementations • ICLR 2019 • Wenpeng Hu, Zhou Lin, Bing Liu, Chongyang Tao, Zhengwei Tao, Jinwen Ma, Dongyan Zhao, Rui Yan
Several continual learning methods have been proposed to address the problem.
no code implementations • ICLR 2019 • Wenpeng Hu, Zhengwei Tao, Zhanxing Zhu, Bing Liu, Zhou Lin, Jinwen Ma, Dongyan Zhao, Rui Yan
A large amount of parallel data is needed to train a strong neural machine translation (NMT) system.