Search Results for author: Lijie Wen

Found 38 papers, 17 papers with code

FSMR: A Feature Swapping Multi-modal Reasoning Approach with Joint Textual and Visual Clues

no code implementations 29 Mar 2024 Shuang Li, Jiahua Wang, Lijie Wen

Multi-modal reasoning plays a vital role in bridging the gap between textual and visual information, enabling a deeper understanding of the context.

Image-text matching, Language Modelling, +1

ChatCite: LLM Agent with Human Workflow Guidance for Comparative Literature Summary

no code implementations 5 Mar 2024 Yutong Li, Lu Chen, Aiwei Liu, Kai Yu, Lijie Wen

In this work, we first focus on the independent literature summarization step and introduce ChatCite, an LLM agent with human workflow guidance for comparative literature summary.

Retrieval

LLMArena: Assessing Capabilities of Large Language Models in Dynamic Multi-Agent Environments

no code implementations 26 Feb 2024 Junzhe Chen, Xuming Hu, Shuodi Liu, Shiyu Huang, Wei-Wei Tu, Zhaofeng He, Lijie Wen

Recent advancements in large language models (LLMs) have revealed their potential for building autonomous agents with human-level intelligence.

Evaluating Robustness of Generative Search Engine on Adversarial Factual Questions

no code implementations 25 Feb 2024 Xuming Hu, Xiaochuan Li, Junzhe Chen, Yinghui Li, Yangning Li, Xiaoguang Li, Yasheng Wang, Qun Liu, Lijie Wen, Philip S. Yu, Zhijiang Guo

To this end, we propose evaluating the robustness of generative search engines in a realistic, high-risk setting, where adversaries have only black-box system access and seek to deceive the model into returning incorrect responses.

Retrieval

Direct Large Language Model Alignment Through Self-Rewarding Contrastive Prompt Distillation

no code implementations 19 Feb 2024 Aiwei Liu, Haoping Bai, Zhiyun Lu, Xiang Kong, Simon Wang, Jiulong Shan, Meng Cao, Lijie Wen

In this paper, we propose a method to evaluate the response preference by using the output probabilities of response pairs under contrastive prompt pairs, which could achieve better performance on LLaMA2-7B and LLaMA2-13B compared to RLAIF.

Language Modelling, Large Language Model
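
The preference-scoring idea in this entry can be pictured with a minimal sketch, assuming a Hugging Face causal LM; the prompt texts, the gpt2 stand-in model, and the "likelihood gain" rule below are illustrative assumptions, not the paper's exact method: score each candidate response under a positive and a negative system prompt, and prefer the response whose likelihood rises more under the positive prompt.

```python
# Illustrative sketch (not the paper's implementation): judge which of two
# responses is preferred by how much its likelihood increases when the model
# is conditioned on a "helpful" rather than an "unhelpful" system prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # stand-in; the paper reports results on LLaMA2-7B/13B
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

@torch.no_grad()
def response_logprob(system_prompt: str, question: str, response: str) -> float:
    """Sum of log-probabilities of the response tokens given the prompt."""
    prefix = f"{system_prompt}\n\nQuestion: {question}\nAnswer:"
    prefix_len = tok(prefix, return_tensors="pt").input_ids.shape[1]
    ids = tok(prefix + " " + response, return_tensors="pt").input_ids
    logits = model(ids).logits[:, :-1, :]              # predictions for tokens 1..N-1
    logps = torch.log_softmax(logits, dim=-1)
    token_lp = logps.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_lp[:, prefix_len - 1:].sum().item()   # keep only the response tokens (approximate boundary)

def preferred(question: str, resp_a: str, resp_b: str) -> str:
    """Prefer the response whose likelihood gains more under the positive prompt."""
    pos = "You are a helpful, honest assistant."
    neg = "You are an unhelpful, careless assistant."
    gain = lambda r: response_logprob(pos, question, r) - response_logprob(neg, question, r)
    return resp_a if gain(resp_a) > gain(resp_b) else resp_b
```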

MetaTra: Meta-Learning for Generalized Trajectory Prediction in Unseen Domain

no code implementations 13 Feb 2024 Xiaohe Li, Feilong Huang, Zide Fan, Fangli Mou, Yingyan Hou, Chen Qian, Lijie Wen

Trajectory prediction has garnered widespread attention in different fields, such as autonomous driving and robotic navigation.

Autonomous Driving, Domain Generalization, +2

A Survey of Text Watermarking in the Era of Large Language Models

no code implementations 13 Dec 2023 Aiwei Liu, Leyi Pan, Yijian Lu, Jingjing Li, Xuming Hu, Xi Zhang, Lijie Wen, Irwin King, Hui Xiong, Philip S. Yu

Text watermarking algorithms play a crucial role in the copyright protection of textual content, yet their capabilities and application scenarios have been limited historically.

Dialogue Generation

Prompt Me Up: Unleashing the Power of Alignments for Multimodal Entity and Relation Extraction

1 code implementation 25 Oct 2023 Xuming Hu, Junzhe Chen, Aiwei Liu, Shiao Meng, Lijie Wen, Philip S. Yu

Additionally, our method is orthogonal to previous multimodal fusions, and applying it to prior SOTA fusions further improves F1 by 5.47%.

Relation, Relation Extraction

RAPL: A Relation-Aware Prototype Learning Approach for Few-Shot Document-Level Relation Extraction

1 code implementation 24 Oct 2023 Shiao Meng, Xuming Hu, Aiwei Liu, Shu'ang Li, Fukun Ma, Yawen Yang, Lijie Wen

However, existing works often struggle to obtain class prototypes with accurate relational semantics: 1) to build a prototype for a target relation type, they aggregate the representations of all entity pairs holding that relation, even though these entity pairs may also hold other relations, which disturbs the prototype.

Document-level Relation Extraction, Meta-Learning, +1
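
To make the prototype problem described in the RAPL entry above concrete, here is a minimal sketch of the plain prototype construction the authors criticize (the relation-aware refinement proposed in RAPL is not shown): each relation's prototype is the mean of the representations of all support entity pairs labeled with it, so a pair that also expresses other relations pulls the prototype away from the target semantics.

```python
# Minimal sketch of vanilla prototype-based few-shot relation classification.
# This is the baseline behavior discussed in the RAPL abstract, not RAPL itself.
from collections import defaultdict
from typing import Dict, List
import torch

def build_prototypes(pair_reprs: torch.Tensor,
                     pair_relations: List[List[str]]) -> Dict[str, torch.Tensor]:
    """pair_reprs: (num_pairs, dim) entity-pair representations from an encoder.
    pair_relations[i]: all relation labels held by pair i (possibly several).
    Returns one prototype per relation: the mean of its member pairs."""
    buckets: Dict[str, List[torch.Tensor]] = defaultdict(list)
    for vec, relations in zip(pair_reprs, pair_relations):
        for rel in relations:   # a multi-relation pair contributes to every prototype it touches
            buckets[rel].append(vec)
    return {rel: torch.stack(vecs).mean(dim=0) for rel, vecs in buckets.items()}

def nearest_relation(query: torch.Tensor, prototypes: Dict[str, torch.Tensor]) -> str:
    """Assign a query entity pair to the most similar prototype (dot-product similarity)."""
    return max(prototypes, key=lambda rel: torch.dot(query, prototypes[rel]).item())
```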

A Semantic Invariant Robust Watermark for Large Language Models

1 code implementation 10 Oct 2023 Aiwei Liu, Leyi Pan, Xuming Hu, Shiao Meng, Lijie Wen

In this work, we propose a semantic invariant watermarking method for LLMs that provides both attack robustness and security robustness.
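
A rough sketch of what a semantics-conditioned watermark can look like; the sentence encoder, random projection, and bias scale below are illustrative assumptions rather than the paper's trained watermark model. Because the per-token logit bias is a function of a semantic embedding of the preceding text, meaning-preserving paraphrases induce nearly the same bias, which is the intuition behind robustness to paraphrase-style attacks.

```python
# Illustrative only: derive a per-token watermark bias from a semantic embedding
# of the context, so semantically equivalent contexts yield a similar bias pattern.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in semantic encoder

def watermark_bias(context: str, vocab_size: int, scale: float = 2.0, seed: int = 0) -> np.ndarray:
    """Return a (vocab_size,) bias to add to the LM's next-token logits."""
    emb = encoder.encode(context, normalize_embeddings=True)   # semantic vector of the context
    rng = np.random.default_rng(seed)                          # fixed, secret projection matrix
    projection = rng.standard_normal((vocab_size, emb.shape[0]))
    return scale * np.tanh(projection @ emb)                   # bounded per-token bias
```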

Do Large Language Models Know about Facts?

no code implementations 8 Oct 2023 Xuming Hu, Junzhe Chen, Xiaochuan Li, Yufei Guo, Lijie Wen, Philip S. Yu, Zhijiang Guo

Large language models (LLMs) have recently driven striking performance improvements across a range of natural language processing tasks.

Question Answering, Text Generation

An Unforgeable Publicly Verifiable Watermark for Large Language Models

2 code implementations 30 Jul 2023 Aiwei Liu, Leyi Pan, Xuming Hu, Shu'ang Li, Lijie Wen, Irwin King, Philip S. Yu

Experiments demonstrate that our algorithm attains high detection accuracy and computational efficiency through neural networks with a minimized number of parameters.

Computational Efficiency

Exploring the Compositional Generalization in Context Dependent Text-to-SQL Parsing

no code implementations 29 May 2023 Aiwei Liu, Wei Liu, Xuming Hu, Shuang Li, Fukun Ma, Yawen Yang, Lijie Wen

Based on these observations, we propose a method named p-align to improve the compositional generalization of Text-to-SQL models.

SQL Parsing, Text-To-SQL

Read it Twice: Towards Faithfully Interpretable Fact Verification by Revisiting Evidence

1 code implementation 2 May 2023 Xuming Hu, Zhaochen Hong, Zhijiang Guo, Lijie Wen, Philip S. Yu

In light of this, we propose ReRead, a fact verification model that retrieves evidence and verifies claims by: (1) training the evidence retriever to obtain interpretable evidence (i.e., meeting the faithfulness and plausibility criteria); and (2) training the claim verifier to revisit the evidence retrieved by the optimized retriever to improve accuracy.

Claim Verification, Decision Making, +1
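
The retrieve-then-verify recipe in the ReRead entry above can be sketched as a skeleton pipeline; the retriever and verifier below are placeholder callables, not the released models:

```python
# Skeleton of a retrieve-then-verify pipeline in the spirit of the ReRead entry.
# `retriever` and `verifier` are placeholder callables, not the paper's models.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Verdict:
    label: str            # e.g. "SUPPORTED", "REFUTED", "NOT ENOUGH INFO"
    evidence: List[str]   # sentences the verifier actually relied on

def verify_claim(
    claim: str,
    corpus: List[str],
    retriever: Callable[[str, List[str]], List[Tuple[str, float]]],
    verifier: Callable[[str, List[str]], str],
    top_k: int = 5,
) -> Verdict:
    # Stage 1: retrieve the highest-scoring evidence sentences for the claim.
    scored = retriever(claim, corpus)
    evidence = [sent for sent, _ in sorted(scored, key=lambda x: -x[1])[:top_k]]
    # Stage 2: the verifier revisits the claim together with that evidence.
    label = verifier(claim, evidence)
    return Verdict(label=label, evidence=evidence)
```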

Entity-to-Text based Data Augmentation for various Named Entity Recognition Tasks

no code implementations 19 Oct 2022 Xuming Hu, Yong Jiang, Aiwei Liu, Zhongqiang Huang, Pengjun Xie, Fei Huang, Lijie Wen, Philip S. Yu

Data augmentation techniques have been used to alleviate the problem of scarce labeled data in various NER tasks (flat, nested, and discontinuous NER tasks).

Data Augmentation, named-entity-recognition, +3
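
A toy illustration of the entity-to-text direction described in the entry above, where a sentence is synthesized around known entity mentions so the NER labels come for free; the fixed templates below are a stand-in for a learned text generator, not the paper's model.

```python
# Toy entity-to-text augmentation: build labeled sentences from entity mentions.
import random
import string
from typing import Dict, List, Tuple

TEMPLATES = [
    "{person} visited {location} last week.",
    "{person} announced a partnership with {organization}.",
    "The {organization} office in {location} is expanding.",
]

def placeholders(template: str) -> List[str]:
    """Entity-type slots used by a template."""
    return [field for _, field, _, _ in string.Formatter().parse(template) if field]

def generate_example(entities: Dict[str, str]) -> Tuple[str, List[Tuple[str, str]]]:
    """entities maps an entity type (person/location/organization) to a mention.
    Returns a synthetic sentence and its (mention, type) labels for NER training."""
    candidates = [t for t in TEMPLATES if set(placeholders(t)) <= set(entities)]
    template = random.choice(candidates)   # assumes the mentions cover at least one template
    sentence = template.format(**entities)
    labels = [(entities[t], t.upper()) for t in placeholders(template)]
    return sentence, labels

# e.g. generate_example({"person": "Marie Curie", "location": "Paris"})
```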

Scene Graph Modification as Incremental Structure Expanding

no code implementations COLING 2022 Xuming Hu, Zhijiang Guo, Yu Fu, Lijie Wen, Philip S. Yu

A scene graph is a semantic representation that expresses the objects, attributes, and relationships between objects in a scene.
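
To make the representation concrete, here is a tiny hand-made scene graph (object, attribute, and relation names chosen purely for illustration): objects are nodes, attributes attach to objects, and relationships are labeled edges between objects.

```python
# A tiny scene graph: objects with attributes, plus labeled relations between them.
scene_graph = {
    "objects": {
        "man":  {"attributes": ["young"]},
        "bike": {"attributes": ["red"]},
        "road": {"attributes": []},
    },
    "relations": [
        ("man", "riding", "bike"),
        ("bike", "on", "road"),
    ],
}
```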

Semantic Enhanced Text-to-SQL Parsing via Iteratively Learning Schema Linking Graph

1 code implementation 8 Aug 2022 Aiwei Liu, Xuming Hu, Li Lin, Lijie Wen

First, we extract a schema linking graph from PLMs through a probing procedure in an unsupervised manner.

Graph Learning, SQL Parsing, +1
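
One way to picture the probing step from the entry above, as a rough stand-in rather than the paper's actual procedure: embed the question tokens and the schema item names with a pretrained encoder, and add an edge to the schema linking graph whenever their similarity clears a threshold (the encoder and the threshold below are assumptions).

```python
# Illustrative schema-linking graph: link a question token to a schema item
# when their encoder embeddings are similar enough (threshold is arbitrary here).
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the PLM probe

def schema_linking_graph(question_tokens, schema_items, threshold=0.5):
    """Return (token, schema_item, similarity) edges above the threshold."""
    tok_emb = encoder.encode(question_tokens, convert_to_tensor=True, normalize_embeddings=True)
    sch_emb = encoder.encode(schema_items, convert_to_tensor=True, normalize_embeddings=True)
    sims = util.cos_sim(tok_emb, sch_emb)          # (num_tokens, num_items)
    return [
        (question_tokens[i], schema_items[j], sims[i, j].item())
        for i in range(len(question_tokens))
        for j in range(len(schema_items))
        if sims[i, j] >= threshold
    ]

# e.g. schema_linking_graph(["singers", "older", "than", "30"], ["singer.name", "singer.age"])
```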

A Multi-level Supervised Contrastive Learning Framework for Low-Resource Natural Language Inference

no code implementations 31 May 2022 Shu'ang Li, Xuming Hu, Li Lin, Aiwei Liu, Lijie Wen, Philip S. Yu

Natural Language Inference (NLI) is an increasingly essential task in natural language understanding that requires inferring the relationship between a pair of sentences (premise and hypothesis).

Contrastive Learning, Data Augmentation, +5
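
For readers new to the task in the entry above, a minimal NLI instance with the usual three-way label set (the sentences are made up for illustration):

```python
# A toy NLI instance: the model must decide how the hypothesis relates to the premise.
nli_example = {
    "premise": "A man is playing a guitar on stage.",
    "hypothesis": "A musician is performing.",
    "label": "entailment",   # other possible labels: "contradiction", "neutral"
}
```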

HiURE: Hierarchical Exemplar Contrastive Learning for Unsupervised Relation Extraction

1 code implementation NAACL 2022 Xuming Hu, Shuliang Liu, Chenwei Zhang, Shu'ang Li, Lijie Wen, Philip S. Yu

Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution.

Clustering, Contrastive Learning, +3

What Makes the Story Forward? Inferring Commonsense Explanations as Prompts for Future Event Generation

no code implementations 18 Jan 2022 Li Lin, Yixin Cao, Lifu Huang, Shu'ang Li, Xuming Hu, Lijie Wen, Jianmin Wang

To alleviate the knowledge forgetting issue, we design two modules, Im and Gm, for each type of knowledge, which are combined via prompt tuning.

Information Retrieval, Retrieval, +1

Gradient Imitation Reinforcement Learning for Low Resource Relation Extraction

1 code implementation EMNLP 2021 Xuming Hu, Chenwei Zhang, Yawen Yang, Xiaohe Li, Li Lin, Lijie Wen, Philip S. Yu

Low-resource Relation Extraction (LRE) aims to extract relation facts from limited labeled corpora when human annotation is scarce.

Meta-Learning, Pseudo Label, +5

Counterfactual Inference for Text Classification Debiasing

1 code implementation ACL 2021 Chen Qian, Fuli Feng, Lijie Wen, Chunping Ma, Pengjun Xie

At inference time, given a factual input document, Corsair imagines its two counterfactual counterparts to distill and mitigate the two biases captured by the poisonous model.

counterfactual, Counterfactual Inference, +3
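
A schematic of inference-time counterfactual debiasing in the spirit of the entry above; the masking scheme, the placeholder classify function, and the subtraction weights are illustrative assumptions, not Corsair's exact formulation: score the factual document, score counterfactual versions that expose each bias in isolation, and remove the biased portion of the prediction.

```python
# Schematic counterfactual debiasing: subtract the model's predictions on
# bias-exposing counterfactual inputs from its prediction on the real input.
# `classify` is a placeholder returning class logits as a numpy array.
import numpy as np
from typing import Callable, List

def debiased_logits(
    document: str,
    keywords: List[str],
    classify: Callable[[str], np.ndarray],
    alpha: float = 1.0,
    beta: float = 1.0,
) -> np.ndarray:
    kw = set(keywords)
    factual = classify(document)
    # Counterfactual 1: keep only the salient keywords (exposes keyword bias).
    keyword_only = " ".join(keywords)
    # Counterfactual 2: drop the keywords entirely (exposes context/label bias).
    context_only = " ".join(w for w in document.split() if w not in kw)
    return factual - alpha * classify(keyword_only) - beta * classify(context_only)
```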

GAHNE: Graph-Aggregated Heterogeneous Network Embedding

no code implementations 23 Dec 2020 Xiaohe Li, Lijie Wen, Chen Qian, Jianmin Wang

Heterogeneous network embedding aims to embed nodes into low-dimensional vectors which capture rich intrinsic information of heterogeneous networks.

Network Embedding

A Graph Representation of Semi-structured Data for Web Question Answering

no code implementations COLING 2020 Xingyao Zhang, Linjun Shou, Jian Pei, Ming Gong, Lijie Wen, Daxin Jiang

The abundant semi-structured data on the Web, such as HTML-based tables and lists, provide commercial search engines a rich information source for question answering (QA).

Question Answering

Semi-supervised Relation Extraction via Incremental Meta Self-Training

1 code implementation Findings (EMNLP) 2021 Xuming Hu, Chenwei Zhang, Fukun Ma, Chenyao Liu, Lijie Wen, Philip S. Yu

To reduce the human effort of obtaining large-scale annotations, Semi-Supervised Relation Extraction methods aim to leverage unlabeled data in addition to learning from limited samples.

Meta-Learning, Pseudo Label, +2

TraceWalk: Semantic-based Process Graph Embedding for Consistency Checking

no code implementations 16 May 2019 Chen Qian, Lijie Wen, Akhil Kumar

Process consistency checking (PCC), an interdisciplinary task spanning natural language processing (NLP) and business process management (BPM), aims to quantify the degree of (in)consistency between graphical and textual descriptions of a process.

Graph Embedding, Management

An Approach for Process Model Extraction By Multi-Grained Text Classification

1 code implementation 16 May 2019 Chen Qian, Lijie Wen, Akhil Kumar, Leilei Lin, Li Lin, Zan Zong, Shuang Li, Jian-Min Wang

Process model extraction (PME) is a recently emerged interdisciplinary task between natural language processing (NLP) and business process management (BPM), which aims to extract process models from textual descriptions.

General Classification, Management, +5
