Search Results for author: Zhaoxuan Tan

Found 18 papers, 13 papers with code

DELL: Generating Reactions and Explanations for LLM-Based Misinformation Detection

no code implementations • 16 Feb 2024 • Herun Wan, Shangbin Feng, Zhaoxuan Tan, Heng Wang, Yulia Tsvetkov, Minnan Luo

Challenges with factuality and hallucination prevent large language models from being employed off-the-shelf to judge the veracity of news articles, where factual accuracy is paramount.

Misinformation

Towards Safer Large Language Models through Machine Unlearning

no code implementations • 15 Feb 2024 • Zheyuan Liu, Guangyao Dou, Zhaoxuan Tan, Yijun Tian, Meng Jiang

To address this gap, we introduce Selective Knowledge negation Unlearning (SKU), a novel unlearning framework for LLMs, designed to eliminate harmful knowledge while preserving utility on normal prompts.

Machine Unlearning · Negation

Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning

no code implementations • 6 Feb 2024 • Zhaoxuan Tan, Qingkai Zeng, Yijun Tian, Zheyuan Liu, Bing Yin, Meng Jiang

OPPU integrates parametric user knowledge stored in personal PEFT parameters with non-parametric knowledge acquired through retrieval and user profiles.

Retrieval

What Does the Bot Say? Opportunities and Risks of Large Language Models in Social Media Bot Detection

no code implementations • 1 Feb 2024 • Shangbin Feng, Herun Wan, Ningnan Wang, Zhaoxuan Tan, Minnan Luo, Yulia Tsvetkov

Social media bot detection has always been an arms race between advancements in machine learning bot detectors and adversarial bot strategies to evade detection.

User Modeling in the Era of Large Language Models: Current Research and Future Directions

1 code implementation • 11 Dec 2023 • Zhaoxuan Tan, Meng Jiang

Two common types of user data are text and graph, as the data usually contain a large amount of user-generated content (UGC) and online interactions.

Graph Mining

GADY: Unsupervised Anomaly Detection on Dynamic Graphs

no code implementations • 25 Oct 2023 • Shiqi Lou, Qingyue Zhang, Shujie Yang, Yuyang Tian, Zhaoxuan Tan, Minnan Luo

Supplementary experiments further validate the effectiveness of our model design and the necessity of each module.

Unsupervised Anomaly Detection

KGQuiz: Evaluating the Generalization of Encoded Knowledge in Large Language Models

1 code implementation • 15 Oct 2023 • Yuyang Bai, Shangbin Feng, Vidhisha Balachandran, Zhaoxuan Tan, Shiqi Lou, Tianxing He, Yulia Tsvetkov

To gain a better understanding of LLMs' knowledge abilities and their generalization, we evaluate 10 open-source and black-box LLMs on the KGQuiz benchmark across the five knowledge-intensive tasks and knowledge domains.

Multiple-choice · World Knowledge

Knowledge Crosswords: Geometric Reasoning over Structured Knowledge with Large Language Models

1 code implementation • 2 Oct 2023 • Wenxuan Ding, Shangbin Feng, YuHan Liu, Zhaoxuan Tan, Vidhisha Balachandran, Tianxing He, Yulia Tsvetkov

We additionally propose two new approaches, Staged Prompting and Verify-All, to augment LLMs' ability to backtrack and verify structured constraints.

Can Language Models Solve Graph Problems in Natural Language?

2 code implementations • NeurIPS 2023 • Heng Wang, Shangbin Feng, Tianxing He, Zhaoxuan Tan, Xiaochuang Han, Yulia Tsvetkov

We then propose Build-a-Graph Prompting and Algorithmic Prompting, two instruction-based approaches to enhance LLMs in solving natural language graph problems.

In-Context Learning · Knowledge Probing · +2
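To make the "Build-a-Graph Prompting" idea concrete, here is a minimal sketch of how such a prompt can be assembled: the model is first instructed to reconstruct the graph from the edge list before answering. The edge list and prompt wording are illustrative assumptions, not the paper's exact text.

```python
# Hedged sketch of Build-a-Graph-style prompting: ask the model to
# reconstruct the graph in its "mind" before answering the question.
edges = [(0, 1), (1, 2), (2, 3)]  # hypothetical undirected edge list

edge_text = "\n".join(f"Node {u} is connected to node {v}." for u, v in edges)
prompt = (
    "You are given an undirected graph:\n"
    f"{edge_text}\n"
    "First, build the graph by listing each node and its neighbors. "
    "Then answer: is there a path from node 0 to node 3?"
)
print(prompt)
```

The extra "build the graph" instruction is the key design choice: it forces the model to make the graph structure explicit before reasoning over it.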

Detecting Spoilers in Movie Reviews with External Movie Knowledge and User Networks

1 code implementation • 22 Apr 2023 • Heng Wang, Wenqian Zhang, Yuyang Bai, Zhaoxuan Tan, Shangbin Feng, Qinghua Zheng, Minnan Luo

We then propose MVSD, a novel Multi-View Spoiler Detection framework that takes into account the external knowledge about movies and user activities on movie review platforms.

PAR: Political Actor Representation Learning with Social Context and Expert Knowledge

1 code implementation • 15 Oct 2022 • Shangbin Feng, Zhaoxuan Tan, Zilong Chen, Ningnan Wang, Peisheng Yu, Qinghua Zheng, Xiaojun Chang, Minnan Luo

Extensive experiments demonstrate that PAR is better at augmenting political text understanding and successfully advances the state-of-the-art in political perspective detection and roll call vote prediction.

Representation Learning

KALM: Knowledge-Aware Integration of Local, Document, and Global Contexts for Long Document Understanding

1 code implementation • 8 Oct 2022 • Shangbin Feng, Zhaoxuan Tan, Wenqian Zhang, Zhenyu Lei, Yulia Tsvetkov

With the advent of pretrained language models (LMs), increasing research efforts have been focusing on infusing commonsense and domain-specific knowledge to prepare LMs for downstream tasks.

Document Understanding · Knowledge Graphs · +3

AHEAD: A Triple Attention Based Heterogeneous Graph Anomaly Detection Approach

1 code implementation • 17 Aug 2022 • Shujie Yang, Binchi Zhang, Shangbin Feng, Zhaoxuan Tan, Qinghua Zheng, Jun Zhou, Minnan Luo

In light of this problem, we propose AHEAD: a heterogeneity-aware unsupervised graph anomaly detection approach based on the encoder-decoder framework.

Attribute · Graph Anomaly Detection

KRACL: Contrastive Learning with Graph Context Modeling for Sparse Knowledge Graph Completion

1 code implementation • 16 Aug 2022 • Zhaoxuan Tan, Zilong Chen, Shangbin Feng, Qingyue Zhang, Qinghua Zheng, Jundong Li, Minnan Luo

Knowledge Graph Embeddings (KGE) aim to map entities and relations to low-dimensional spaces and have become the de facto standard for knowledge graph completion.

Contrastive Learning · Knowledge Graph Embeddings
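As a concrete illustration of the KGE objective mentioned in the abstract, here is a minimal sketch of the classic TransE scoring function. TransE is a standard KGE baseline, not KRACL's method, and the entities, relation, and dimension below are hypothetical.

```python
import numpy as np

def transe_score(head, relation, tail):
    """TransE models a triple (h, r, t) as h + r ≈ t in a low-dimensional
    space; a smaller L2 distance (higher score) means the triple is more
    plausible."""
    return -np.linalg.norm(head + relation - tail)

rng = np.random.default_rng(0)
dim = 16
# Hypothetical entity embeddings.
paris, france = rng.normal(size=dim), rng.normal(size=dim)
# A relation vector constructed to be consistent with (paris, capital_of, france).
capital_of = france - paris
print(transe_score(paris, capital_of, france))  # ≈ 0.0, i.e. highly plausible
```

Completion then amounts to ranking candidate tails by this score; KGE methods differ mainly in how the scoring function and training objective are defined.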

Legislator Representation Learning with Social Context and Expert Knowledge

1 code implementation • 9 Aug 2021 • Shangbin Feng, Zhaoxuan Tan, Zilong Chen, Peisheng Yu, Qinghua Zheng, Xiaojun Chang, Minnan Luo

Modeling the ideological perspectives of political actors is an essential task in computational political science with applications in many downstream tasks.

Representation Learning · Stance Detection
