no code implementations • NAACL 2022 • Nan Hu, Zirui Wu, Yuxuan Lai, Xiao Liu, Yansong Feng
Different from previous fact extraction and verification tasks that only consider evidence of a single format, FEVEROUS brings further challenges by extending the evidence format to both plain text and tables.
1 code implementation • 29 Sep 2024 • Yike Wu, Yi Huang, Nan Hu, Yuncheng Hua, Guilin Qi, Jiaoyan Chen, Jeff Z. Pan
Recent studies have explored the use of Large Language Models (LLMs) with Retrieval Augmented Generation (RAG) for Knowledge Graph Question Answering (KGQA).
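The retrieve-then-prompt loop behind such RAG-for-KGQA pipelines can be sketched in a few lines. Everything below is an illustrative assumption: the toy triple store, the word-overlap retriever, and the prompt layout are stand-ins for the paper's actual components, which would use a learned retriever and an LLM.

```python
# Toy sketch of RAG over a knowledge graph: rank triples by word overlap
# with the question, then serialize the top ones into an LLM prompt.
def retrieve_triples(question, triples, k=2):
    """Rank (head, relation, tail) triples by word overlap with the question."""
    q_words = set(question.lower().replace("?", "").split())
    def overlap(triple):
        t_words = set(" ".join(triple).replace("_", " ").lower().split())
        return len(q_words & t_words)
    return sorted(triples, key=overlap, reverse=True)[:k]

def build_prompt(question, retrieved):
    """Serialize the retrieved triples as context lines ahead of the question."""
    context = "\n".join(f"({h}, {r}, {t})" for h, r, t in retrieved)
    return f"Knowledge:\n{context}\n\nQuestion: {question}\nAnswer:"

kg = [
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("France", "continent", "Europe"),
]
question = "What is the capital of France?"
prompt = build_prompt(question, retrieve_triples(question, kg))
```

The resulting prompt would then be passed to an LLM for answer generation; a real system replaces the overlap scorer with dense retrieval over the KG.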
1 code implementation • 11 Apr 2024 • Yi Sun, Hong Shen, Bingqing Li, Wei Xu, Pengcheng Zhu, Nan Hu, Chunming Zhao
The receiver design for multi-input multi-output (MIMO) ultra-reliable and low-latency communication (URLLC) systems can be a tough task due to the use of short channel codes and few pilot symbols.
no code implementations • 28 Mar 2024 • Rihui Jin, Yu Li, Guilin Qi, Nan Hu, Yuan-Fang Li, Jiaoyan Chen, Jianan Wang, Yongrui Chen, Dehai Min
Table understanding (TU) has achieved promising advancements, but it faces the challenges of the scarcity of manually labeled tables and the presence of complex table structures. To address these challenges, we propose HGT, a framework with a heterogeneous graph (HG)-enhanced large language model (LLM) to tackle few-shot TU tasks. It leverages the LLM by aligning the table semantics with the LLM's parametric knowledge through soft prompts and instruction tuning, and deals with complex tables via a multi-task pre-training scheme involving three novel multi-granularity self-supervised HG pre-training objectives. We empirically demonstrate the effectiveness of HGT, showing that it outperforms the SOTA for few-shot complex TU on several benchmarks.
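The soft-prompt mechanism mentioned above can be illustrated in a few lines: trainable prompt vectors are prepended to the token embeddings before they enter the frozen LLM. The dimensions and random initialization below are assumptions for illustration, not HGT's actual configuration.

```python
import numpy as np

# Illustrative sketch of soft prompting: trainable prompt vectors are
# prepended to token embeddings before the (frozen) LLM consumes them.
# Shapes and random init are assumptions for illustration only.
rng = np.random.default_rng(0)
d_model, n_prompt, n_tokens = 8, 4, 6
soft_prompt = rng.normal(size=(n_prompt, d_model))    # trainable parameters
token_embeds = rng.normal(size=(n_tokens, d_model))   # from the frozen embedding table
llm_input = np.concatenate([soft_prompt, token_embeds], axis=0)
# During tuning, only soft_prompt would receive gradients; the LLM stays frozen.
```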
no code implementations • 20 Feb 2024 • Dehai Min, Nan Hu, Rihui Jin, Nuo Lin, Jiaoyan Chen, Yongrui Chen, Yu Li, Guilin Qi, Yun Li, Nijun Li, Qianren Wang
Table-to-Text generation is a promising solution, as it facilitates the transformation of hybrid data into a uniformly text-formatted corpus.
no code implementations • 18 Feb 2024 • Jiaqi Li, Miaozeng Du, Chuanyi Zhang, Yongrui Chen, Nan Hu, Guilin Qi, Haiyun Jiang, Siyuan Cheng, Bozhong Tian
Multimodal knowledge editing represents a critical advancement in enhancing the capabilities of Multimodal Large Language Models (MLLMs).
no code implementations • 26 Jan 2024 • Nan Hu, Jiaoyan Chen, Yike Wu, Guilin Qi, Sheng Bi, Tongtong Wu, Jeff Z. Pan
Attribution in question answering aims to provide citations that support generated statements, and has attracted wide research attention.
1 code implementation • 20 Sep 2023 • Yike Wu, Nan Hu, Sheng Bi, Guilin Qi, Jie Ren, Anhuan Xie, Wei Song
To this end, we propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements most informative for KGQA.
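The idea of textualizing KG triples can be sketched with a toy verbalizer. The fixed templates below are illustrative assumptions; the paper's approach learns answer-sensitive verbalizations rather than using hand-written patterns.

```python
# A toy triple verbalizer in the spirit of KG-to-Text: turn a
# (head, relation, tail) triple into a natural-language statement.
TEMPLATES = {
    "capital_of": "{h} is the capital of {t}.",
    "born_in": "{h} was born in {t}.",
}

def verbalize(triple):
    """Render one triple via its relation template, else fall back to a flat form."""
    h, r, t = triple
    template = TEMPLATES.get(r, "{h} {r} {t}.")  # fallback: flat serialization
    return template.format(h=h, r=r.replace("_", " "), t=t)
```

For example, `verbalize(("Paris", "capital_of", "France"))` yields "Paris is the capital of France.", which can then be fed to a reader model as evidence text.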
no code implementations • 24 Jul 2023 • Yi Sun, Hong Shen, Wei Xu, Nan Hu, Chunming Zhao
Furthermore, a robust detection network RADMMNet is constructed by unfolding the ADMM iterations and employing both model-driven and data-driven philosophies.
1 code implementation • 23 Jun 2023 • Nan Hu, Daobilige Su, Shuo Wang, Xuechang Wang, Huiyu Zhong, Zimeng Wang, Yongliang Qiao, Yu Tan
To robustly track vegetable plants and solve the challenging problem of associating vegetables with similar color and texture across consecutive images, this paper proposes a novel Multiple Object Tracking and Segmentation (MOTS) method for instance segmentation and tracking of multiple vegetable plants.
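As a baseline sketch of the frame-to-frame association step, the greedy IoU matcher below links detections across consecutive frames. This is a generic tracking-by-detection heuristic, not the paper's MOTS method, which additionally has to disambiguate plants with similar color and texture.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(prev_boxes, curr_boxes, thresh=0.3):
    """Greedily match each previous-frame box to its best unused current-frame box."""
    matches, used = [], set()
    for i, p in enumerate(prev_boxes):
        best_j, best = -1, thresh
        for j, c in enumerate(curr_boxes):
            if j in used:
                continue
            v = iou(p, c)
            if v > best:
                best, best_j = v, j
        if best_j >= 0:
            matches.append((i, best_j))
            used.add(best_j)
    return matches
```

Greedy matching is a simple stand-in; production trackers typically use Hungarian assignment and appearance features on top of geometric overlap.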
no code implementations • 18 Mar 2023 • Nan Hu, Yike Wu, Guilin Qi, Dehai Min, Jiaoyan Chen, Jeff Z. Pan, Zafar Ali
Large-scale pre-trained language models (PLMs) such as BERT have recently achieved great success and become a milestone in natural language processing (NLP).
2 code implementations • 14 Mar 2023 • Yiming Tan, Dehai Min, Yu Li, Wenbo Li, Nan Hu, Yongrui Chen, Guilin Qi
ChatGPT is a powerful large language model (LLM) that covers knowledge resources such as Wikipedia and supports natural language question answering using its own knowledge.
Ranked #1 on Knowledge Base Question Answering on WebQuestionsSP (Accuracy metric)
1 code implementation • Findings (ACL) 2022 • Dongyang Li, Taolin Zhang, Nan Hu, Chengyu Wang, Xiaofeng He
In this paper, we propose HiCLRE, a Hierarchical Contrastive Learning framework for Distantly Supervised Relation Extraction, which reduces noisy sentences by integrating global structural information and local fine-grained interactions.
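At the core of contrastive learning is an InfoNCE-style objective that pulls an anchor representation toward a positive and away from negatives. The sketch below shows only that basic loss; HiCLRE's hierarchical, multi-granularity design is not reproduced here.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss: cross-entropy of the anchor identifying its positive
    among the negatives, with cosine similarity as the logit."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    sims = [cos(anchor, positive)] + [cos(anchor, n) for n in negatives]
    logits = np.array(sims) / tau    # temperature-scaled similarities
    logits -= logits.max()           # numerical stability before softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])         # the positive sits at index 0
```

When the anchor is close to its positive and far from the negatives, the loss approaches zero; flipping positive and negative makes it large.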
1 code implementation • 2 Dec 2021 • Taolin Zhang, Chengyu Wang, Nan Hu, Minghui Qiu, Chengguang Tang, Xiaofeng He, Jun Huang
Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models with relation triples injected from knowledge graphs to improve language understanding abilities.
no code implementations • 7 Sep 2020 • Anupama Goparaju, Alexandre Bone, Nan Hu, Heath B. Henninger, Andrew E. Anderson, Stanley Durrleman, Matthijs Jacxsens, Alan Morris, Ibolya Csecs, Nassir Marrouche, Shireen Y. Elhabian
Statistical shape modeling (SSM) is widely used in biology and medicine as a new generation of morphometric approaches for the quantitative analysis of anatomical shapes.
no code implementations • CVPR 2013 • Nan Hu, Raif M. Rustamov, Leonidas Guibas
In this paper, we consider the weighted graph matching problem with partially disclosed correspondences between a number of anchor nodes.
no code implementations • CVPR 2018 • Nan Hu, Qi-Xing Huang, Boris Thibert, Leonidas Guibas
In this paper we propose an optimization-based framework for multiple object matching.
no code implementations • CVPR 2014 • Nan Hu, Raif M. Rustamov, Leonidas Guibas
We also introduce the pairwise heat kernel distance as a stable second order compatibility term; we justify its plausibility by showing that in a certain limiting case it converges to the classical adjacency matrix-based second order compatibility function.
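The heat kernel underlying this distance comes from the eigendecomposition of the graph Laplacian: k_t(x, y) = Σ_i e^{-λ_i t} φ_i(x) φ_i(y). The sketch below computes k_t on a toy graph; comparing these values across graphs as a second-order compatibility term is the paper's contribution and is not shown.

```python
import numpy as np

def heat_kernel(adjacency, t):
    """Heat kernel exp(-t * L) via eigendecomposition of the combinatorial Laplacian."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    lam, phi = np.linalg.eigh(laplacian)   # real spectrum: L is symmetric
    return phi @ np.diag(np.exp(-lam * t)) @ phi.T

# Toy path graph on three nodes.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
K = heat_kernel(A, t=0.5)   # K[i, j] = k_t(i, j); symmetric, rows sum to 1
```

As t → 0 the kernel approaches the identity, and as t grows heat diffuses along edges, which is what makes k_t a stable, multiscale descriptor of graph structure.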