Search Results for author: Nan Hu

Found 17 papers, 7 papers with code

DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding

1 code implementation • 2 Dec 2021 • Taolin Zhang, Chengyu Wang, Nan Hu, Minghui Qiu, Chengguang Tang, Xiaofeng He, Jun Huang

Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models with relation triples injected from knowledge graphs to improve their language understanding abilities.

Knowledge Graphs • Knowledge Probing +3

Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering

1 code implementation • 20 Sep 2023 • Yike Wu, Nan Hu, Sheng Bi, Guilin Qi, Jie Ren, Anhuan Xie, Wei Song

To this end, we propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements most informative for KGQA.

Graph Question Answering • Language Modelling +2
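To make the Retrieve-Rewrite-Answer idea above concrete, here is a minimal Python sketch of a KG-to-Text enhanced QA pipeline: retrieved triples are verbalized into text and prepended to the question before prompting an LLM. The retrieval and verbalization logic is a toy stand-in (the paper trains an answer-sensitive KG-to-Text model); all function names are illustrative assumptions, not the released code.

```python
# Sketch of a retrieve-rewrite-answer style pipeline (illustrative only).
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def retrieve(question: str, kg: List[Triple]) -> List[Triple]:
    # Toy retrieval: keep triples whose subject is mentioned in the question.
    return [t for t in kg if t[0].lower() in question.lower()]

def rewrite(triples: List[Triple]) -> str:
    # Toy KG-to-Text: template verbalization; the paper instead generates
    # answer-sensitive statements with a trained model.
    return " ".join(f"{s} {r.replace('_', ' ')} {o}." for s, r, o in triples)

def answer_prompt(question: str, context: str) -> str:
    # The textualized knowledge is prepended to the question for the LLM.
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    kg = [("Paris", "capital_of", "France"), ("Paris", "located_in", "Europe")]
    q = "Which country is Paris the capital of?"
    print(answer_prompt(q, rewrite(retrieve(q, kg))))
```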

Can ChatGPT Replace Traditional KBQA Models? An In-depth Analysis of the Question Answering Performance of the GPT LLM Family

2 code implementations • 14 Mar 2023 • Yiming Tan, Dehai Min, Yu Li, Wenbo Li, Nan Hu, Yongrui Chen, Guilin Qi

ChatGPT is a powerful large language model (LLM) that covers knowledge resources such as Wikipedia and supports natural language question answering using its own knowledge.

Knowledge Base Question Answering • Language Modelling +3

HiCLRE: A Hierarchical Contrastive Learning Framework for Distantly Supervised Relation Extraction

1 code implementation • Findings (ACL) 2022 • Dongyang Li, Taolin Zhang, Nan Hu, Chengyu Wang, Xiaofeng He

In this paper, we propose a hierarchical contrastive learning framework for distantly supervised relation extraction (HiCLRE) to reduce noisy sentences, which integrates global structural information and local fine-grained interaction.

Contrastive Learning • Data Augmentation +3
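For reference, the core building block of a contrastive learning framework is an InfoNCE-style loss over positive pairs with in-batch negatives. The sketch below shows only that generic loss; HiCLRE's hierarchical, multi-level variant is more involved, and the code is illustrative rather than the authors' implementation.

```python
# Generic InfoNCE-style contrastive loss (illustrative sketch).
import torch
import torch.nn.functional as F

def info_nce(anchors: torch.Tensor, positives: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    # anchors, positives: (batch, dim); row i of positives is the positive for
    # anchor i, all other rows serve as in-batch negatives.
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature        # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0))       # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```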

An Empirical Study of Pre-trained Language Models in Simple Knowledge Graph Question Answering

1 code implementation • 18 Mar 2023 • Nan Hu, Yike Wu, Guilin Qi, Dehai Min, Jiaoyan Chen, Jeff Z. Pan, Zafar Ali

Large-scale pre-trained language models (PLMs) such as BERT have recently achieved great success and become a milestone in natural language processing (NLP).

Graph Question Answering • Knowledge Distillation +1

Segmentation and Tracking of Vegetable Plants by Exploiting Vegetable Shape Feature for Precision Spray of Agricultural Robots

1 code implementation • 23 Jun 2023 • Nan Hu, Daobilige Su, Shuo Wang, Xuechang Wang, Huiyu Zhong, Zimeng Wang, Yongliang Qiao, Yu Tan

To enable robust tracking of vegetable plants and to solve the challenging problem of associating vegetables with similar color and texture across consecutive images, this paper proposes a novel Multiple Object Tracking and Segmentation (MOTS) method for instance segmentation and tracking of multiple vegetable plants.

Instance Segmentation • Multiple Object Tracking +3
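As background for the tracking component, here is a minimal sketch of frame-to-frame data association: detections are matched to existing tracks with the Hungarian algorithm over an IoU cost. This is a generic baseline for illustration only; the paper's contribution is to exploit vegetable shape features for association, and the function names here are assumptions.

```python
# Generic IoU-based track-to-detection association (illustrative baseline).
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    # Boxes given as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, thresh=0.3):
    # Hungarian matching on a (1 - IoU) cost matrix; keep matches above thresh.
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= thresh]

print(associate([(0, 0, 10, 10)], [(1, 1, 11, 11), (50, 50, 60, 60)]))
```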

Trainable Joint Channel Estimation, Detection and Decoding for MIMO URLLC Systems

1 code implementation • 11 Apr 2024 • Yi Sun, Hong Shen, Bingqing Li, Wei Xu, Pengcheng Zhu, Nan Hu, Chunming Zhao

Receiver design for multi-input multi-output (MIMO) ultra-reliable and low-latency communication (URLLC) systems is a challenging task due to the use of short channel codes and few pilot symbols.

Graph Matching with Anchor Nodes: A Learning Approach

no code implementations • CVPR 2013 • Nan Hu, Raif M. Rustamov, Leonidas Guibas

In this paper, we consider the weighted graph matching problem with partially disclosed correspondences between a number of anchor nodes.

Graph Matching
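For context, weighted graph matching is commonly posed as a quadratic assignment problem, with disclosed anchor correspondences entering as fixed entries of the assignment matrix. The formulation below is one standard way to state the problem, not necessarily the paper's exact objective.

```latex
% X is a (partial) assignment matrix and K an affinity matrix encoding
% node- and edge-level compatibilities; anchors fix some entries of X.
\max_{X \in \{0,1\}^{n \times m}} \; \operatorname{vec}(X)^{\top} K \, \operatorname{vec}(X)
\quad \text{s.t.} \quad X\mathbf{1} \le \mathbf{1}, \;\; X^{\top}\mathbf{1} \le \mathbf{1}, \;\;
X_{ia} = 1 \ \text{for each disclosed anchor correspondence } (i, a).
```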

Stable and Informative Spectral Signatures for Graph Matching

no code implementations • CVPR 2014 • Nan Hu, Raif M. Rustamov, Leonidas Guibas

We also introduce the pairwise heat kernel distance as a stable second-order compatibility term; we justify its plausibility by showing that, in a certain limiting case, it converges to the classical adjacency-matrix-based second-order compatibility function.

Graph Matching • Informativeness
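For reference, the graph heat kernel used above is the standard one built from the Laplacian eigen-decomposition; the small-t expansion below sketches why an adjacency-based term is recovered in the limit (a rough intuition, not the paper's full argument). Here L = D - A is the graph Laplacian with eigenpairs (lambda_l, phi_l).

```latex
k_t(i, j) = \sum_{l} e^{-\lambda_l t}\, \phi_l(i)\, \phi_l(j),
\qquad
H_t = e^{-tL} \approx I - tL = I - t(D - A) \quad \text{for small } t,
```

so for small t the off-diagonal entries of H_t behave like t A_{ij}, consistent with the classical adjacency-matrix-based second-order compatibility.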

Distributable Consistent Multi-Object Matching

no code implementations • CVPR 2018 • Nan Hu, Qi-Xing Huang, Boris Thibert, Leonidas Guibas

In this paper, we propose an optimization-based framework for multi-object matching.

Object

Benchmarking off-the-shelf statistical shape modeling tools in clinical applications

no code implementations7 Sep 2020 Anupama Goparaju, Alexandre Bone, Nan Hu, Heath B. Henninger, Andrew E. Anderson, Stanley Durrleman, Matthijs Jacxsens, Alan Morris, Ibolya Csecs, Nassir Marrouche, Shireen Y. Elhabian

Statistical shape modeling (SSM) is widely used in biology and medicine as a new generation of morphometric approaches for the quantitative analysis of anatomical shapes.

Benchmarking

Dual-Channel Evidence Fusion for Fact Verification over Texts and Tables

no code implementations • NAACL 2022 • Nan Hu, Zirui Wu, Yuxuan Lai, Xiao Liu, Yansong Feng

Different from previous fact extraction and verification tasks that only consider evidence of a single format, FEVEROUS brings further challenges by extending the evidence format to both plain text and tables.

Fact Verification

Robust MIMO Detection With Imperfect CSI: A Neural Network Solution

no code implementations24 Jul 2023 Yi Sun, Hong Shen, Wei Xu, Nan Hu, Chunming Zhao

Furthermore, a robust detection network, RADMMNet, is constructed by unfolding the ADMM iterations and employing both model-driven and data-driven philosophies.
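To illustrate what unfolding iterations into a network means, here is a minimal deep-unfolding sketch in PyTorch: each layer applies one iteration of a classical solver with its step size left trainable. The layer below performs a plain gradient step on a least-squares objective as a stand-in for an ADMM iteration; it is an illustrative assumption, not the RADMMNet architecture.

```python
# Generic deep-unfolding sketch: one solver iteration per network layer.
import torch
import torch.nn as nn

class UnfoldedLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))  # learnable step size

    def forward(self, x, H, y):
        # One gradient step on ||y - Hx||^2 (stand-in for one ADMM iteration).
        grad = H.t() @ (H @ x - y)
        return x - self.step * grad

class UnfoldedDetector(nn.Module):
    def __init__(self, num_layers=5):
        super().__init__()
        self.layers = nn.ModuleList([UnfoldedLayer() for _ in range(num_layers)])

    def forward(self, H, y):
        x = torch.zeros(H.size(1), 1)
        for layer in self.layers:
            x = layer(x, H, y)
        return x

H, y = torch.randn(8, 4), torch.randn(8, 1)
print(UnfoldedDetector()(H, y).shape)
```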

Benchmarking Large Language Models in Complex Question Answering Attribution using Knowledge Graphs

no code implementations26 Jan 2024 Nan Hu, Jiaoyan Chen, Yike Wu, Guilin Qi, Sheng Bi, Tongtong Wu, Jeff Z. Pan

Attribution in question answering aims to provide citations that support generated statements, and has attracted wide research attention.

Benchmarking • Knowledge Graphs +1

MIKE: A New Benchmark for Fine-grained Multimodal Entity Knowledge Editing

no code implementations18 Feb 2024 Jiaqi Li, Miaozeng Du, Chuanyi Zhang, Yongrui Chen, Nan Hu, Guilin Qi, Haiyun Jiang, Siyuan Cheng, Bozhong Tian

Multimodal knowledge editing represents a critical advancement in enhancing the capabilities of Multimodal Large Language Models (MLLMs).

knowledge editing

HGT: Leveraging Heterogeneous Graph-enhanced Large Language Models for Few-shot Complex Table Understanding

no code implementations28 Mar 2024 Rihui Jin, Yu Li, Guilin Qi, Nan Hu, Yuan-Fang Li, Jiaoyan Chen, Jianan Wang, Yongrui Chen, Dehai Min

Table understanding (TU) has achieved promising advancements, but it faces the challenges of the scarcity of manually labeled tables and the presence of complex table structures. To address these challenges, we propose HGT, a framework with a heterogeneous graph (HG)-enhanced large language model (LLM) to tackle few-shot TU tasks. It leverages the LLM by aligning table semantics with the LLM's parametric knowledge through soft prompts and instruction tuning, and it deals with complex tables through a multi-task pre-training scheme involving three novel multi-granularity self-supervised HG pre-training objectives. We empirically demonstrate the effectiveness of HGT, showing that it outperforms the SOTA for few-shot complex TU on several benchmarks.

Language Modelling • Large Language Model
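As a reference for the soft-prompt component mentioned above, the sketch below shows the generic mechanism of prepending trainable prompt embeddings to the token embeddings fed to a (typically frozen) LLM. Class and parameter names are illustrative assumptions, not HGT's implementation, and the heterogeneous-graph encoding is omitted.

```python
# Generic soft-prompt sketch: trainable embeddings prepended to LLM inputs.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, num_tokens=16, dim=768):
        super().__init__()
        # Trainable prompt embeddings, updated while the LLM stays frozen.
        self.prompt = nn.Parameter(torch.randn(num_tokens, dim) * 0.02)

    def forward(self, token_embeddings):  # (batch, seq, dim)
        batch = token_embeddings.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeddings], dim=1)

out = SoftPrompt()(torch.randn(2, 10, 768))
print(out.shape)  # torch.Size([2, 26, 768])
```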
