Search Results for author: Shangbin Feng

Found 26 papers, 20 papers with code

Stumbling Blocks: Stress Testing the Robustness of Machine-Generated Text Detectors Under Attacks

1 code implementation • 18 Feb 2024 • Yichen Wang, Shangbin Feng, Abe Bohan Hou, Xiao Pu, Chao Shen, Xiaoming Liu, Yulia Tsvetkov, Tianxing He

Our experiments reveal that almost none of the existing detectors remain robust under all the attacks, and all detectors exhibit different loopholes.
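
As a rough, editorial illustration of this kind of stress test (not the paper's actual attack suite or detectors), the Python sketch below perturbs a passage with random character drops and compares a toy detector's score before and after; `toy_detector` and `typo_attack` are hypothetical stand-ins.

    # Minimal sketch of attack-style stress testing for a machine-generated-text
    # detector. `toy_detector` is a placeholder stub, not a detector from the paper.
    import random

    def toy_detector(text: str) -> float:
        # Stand-in "detector": pretends longer average word length means machine-generated.
        words = text.split()
        avg_len = sum(len(w) for w in words) / max(len(words), 1)
        return min(1.0, avg_len / 10.0)  # pseudo-probability of being machine-generated

    def typo_attack(text: str, rate: float = 0.05, seed: int = 0) -> str:
        # Randomly drop characters to simulate a cheap editing attack.
        rng = random.Random(seed)
        return "".join(c for c in text if rng.random() > rate)

    original = "The generated passage maintains remarkably consistent terminology throughout."
    attacked = typo_attack(original)
    print("score before attack:", round(toy_detector(original), 3))
    print("score after attack: ", round(toy_detector(attacked), 3))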

DELL: Generating Reactions and Explanations for LLM-Based Misinformation Detection

no code implementations • 16 Feb 2024 • Herun Wan, Shangbin Feng, Zhaoxuan Tan, Heng Wang, Yulia Tsvetkov, Minnan Luo

Challenges with factuality and hallucination make large language models difficult to employ off-the-shelf for judging the veracity of news articles, where factual accuracy is paramount.

Misinformation

What Does the Bot Say? Opportunities and Risks of Large Language Models in Social Media Bot Detection

no code implementations • 1 Feb 2024 • Shangbin Feng, Herun Wan, Ningnan Wang, Zhaoxuan Tan, Minnan Luo, Yulia Tsvetkov

Social media bot detection has always been an arms race between advancements in machine learning bot detectors and adversarial bot strategies to evade detection.

Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration

no code implementations • 1 Feb 2024 • Shangbin Feng, Weijia Shi, Yike Wang, Wenxuan Ding, Vidhisha Balachandran, Yulia Tsvetkov

Despite efforts to expand the knowledge of large language models (LLMs), knowledge gaps -- missing or outdated information in LLMs -- might always persist given the evolving nature of knowledge.

Retrieval
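
One simple way to operationalize abstention, shown below purely as an illustration and not as the multi-LLM collaboration protocol proposed in the paper, is to query several models and abstain whenever they fail to agree; the `models` callables are hypothetical stand-ins for LLM calls.

    # Hedged sketch: abstain when independent models disagree. The `models` here are
    # hypothetical stand-ins for LLM calls; the paper's actual collaboration
    # mechanism may differ substantially.
    from collections import Counter
    from typing import Callable, List, Optional

    def collaborative_answer(question: str,
                             models: List[Callable[[str], str]],
                             min_agreement: float = 0.75) -> Optional[str]:
        answers = [m(question) for m in models]
        best, count = Counter(answers).most_common(1)[0]
        if count / len(answers) >= min_agreement:
            return best
        return None  # abstain: likely a knowledge gap

    # Toy stand-ins for LLMs:
    models = [lambda q: "Paris", lambda q: "Paris", lambda q: "Lyon", lambda q: "Paris"]
    print(collaborative_answer("Capital of France?", models))  # "Paris"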

P^3SUM: Preserving Author's Perspective in News Summarization with Diffusion Language Models

no code implementations • 16 Nov 2023 • YuHan Liu, Shangbin Feng, Xiaochuang Han, Vidhisha Balachandran, Chan Young Park, Sachin Kumar, Yulia Tsvetkov

In this work, we take a first step towards designing summarization systems that are faithful to the author's intent, not only the semantic content of the article.

News Summarization

KGQuiz: Evaluating the Generalization of Encoded Knowledge in Large Language Models

1 code implementation • 15 Oct 2023 • Yuyang Bai, Shangbin Feng, Vidhisha Balachandran, Zhaoxuan Tan, Shiqi Lou, Tianxing He, Yulia Tsvetkov

To gain a better understanding of LLMs' knowledge abilities and their generalization, we evaluate 10 open-source and black-box LLMs on the KGQuiz benchmark across five knowledge-intensive tasks and knowledge domains.

Multiple-choice, World Knowledge
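
A minimal sketch of the kind of multiple-choice evaluation loop such a benchmark implies is given below; the question schema and the model callable are assumptions for illustration, not KGQuiz's actual format or API.

    # Minimal sketch of a multiple-choice knowledge evaluation loop in the spirit of
    # benchmarks like KGQuiz. The question format and the model callable are
    # hypothetical; they are not the benchmark's actual schema or API.
    from typing import Callable, Dict, List

    def evaluate(model: Callable[[str, List[str]], str],
                 questions: List[Dict]) -> float:
        correct = 0
        for q in questions:
            prediction = model(q["question"], q["choices"])
            correct += int(prediction == q["answer"])
        return correct / len(questions)

    sample = [
        {"question": "Which river flows through Cairo?",
         "choices": ["Nile", "Danube", "Amazon"], "answer": "Nile"},
    ]
    # Toy "model" that always picks the first choice:
    print(evaluate(lambda q, choices: choices[0], sample))  # 1.0 on this toy sample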

Resolving Knowledge Conflicts in Large Language Models

1 code implementation • 2 Oct 2023 • Yike Wang, Shangbin Feng, Heng Wang, Weijia Shi, Vidhisha Balachandran, Tianxing He, Yulia Tsvetkov

To this end, we introduce KNOWLEDGE CONFLICT, an evaluation framework for simulating contextual knowledge conflicts and quantitatively evaluating to what extent LLMs achieve these goals.

Knowledge Crosswords: Geometric Reasoning over Structured Knowledge with Large Language Models

1 code implementation • 2 Oct 2023 • Wenxuan Ding, Shangbin Feng, YuHan Liu, Zhaoxuan Tan, Vidhisha Balachandran, Tianxing He, Yulia Tsvetkov

We additionally propose two new approaches, Staged Prompting and Verify-All, to augment LLMs' ability to backtrack and verify structured constraints.
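
As a generic illustration of what verifying structured constraints can mean (not the paper's Staged Prompting or Verify-All prompts), the sketch below checks a candidate assignment of entities against relational constraints over a tiny, made-up knowledge graph.

    # Hedged sketch of "verify structured constraints": check whether a candidate
    # assignment of entities to variables satisfies required relations in a small
    # knowledge graph. This is a generic verifier, not the paper's prompting method.
    triples = {("Ada Lovelace", "collaborator", "Charles Babbage"),
               ("Ada Lovelace", "field", "Mathematics"),
               ("Charles Babbage", "field", "Mathematics")}

    def satisfies(assignment: dict, constraints: list) -> bool:
        # Each constraint is (head_var, relation, tail_var); variables are resolved
        # through the candidate assignment before being looked up in the graph.
        return all((assignment[h], r, assignment[t]) in triples for h, r, t in constraints)

    constraints = [("X", "collaborator", "Y"), ("Y", "field", "Z")]
    candidate = {"X": "Ada Lovelace", "Y": "Charles Babbage", "Z": "Mathematics"}
    print(satisfies(candidate, constraints))  # True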

Knowledge Card: Filling LLMs' Knowledge Gaps with Plug-in Specialized Language Models

2 code implementations • 17 May 2023 • Shangbin Feng, Weijia Shi, Yuyang Bai, Vidhisha Balachandran, Tianxing He, Yulia Tsvetkov

Ultimately, the Knowledge Card framework enables dynamic synthesis and updates of knowledge from diverse domains.

Retrieval

Can Language Models Solve Graph Problems in Natural Language?

2 code implementations • NeurIPS 2023 • Heng Wang, Shangbin Feng, Tianxing He, Zhaoxuan Tan, Xiaochuang Han, Yulia Tsvetkov

We then propose Build-a-Graph Prompting and Algorithmic Prompting, two instruction-based approaches to enhance LLMs in solving natural language graph problems.

In-Context Learning, Knowledge Probing, +2
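
A minimal sketch of posing a graph problem in natural language is shown below; the prompt wording and helper function are illustrative assumptions, not the benchmark's templates or the paper's two prompting methods.

    # Rough sketch of phrasing a graph problem in natural language, in the spirit of
    # instruction-based prompting for graph reasoning. The wording is illustrative
    # and is not taken from the paper or the benchmark.
    from typing import List, Tuple

    def connectivity_prompt(edges: List[Tuple[str, str]], src: str, dst: str) -> str:
        edge_text = ", ".join(f"{u} is connected to {v}" for u, v in edges)
        return (f"In an undirected graph, {edge_text}. "
                f"Is there a path between {src} and {dst}? Answer yes or no, "
                f"and list the path if one exists.")

    edges = [("A", "B"), ("B", "C"), ("D", "E")]
    print(connectivity_prompt(edges, "A", "C"))
    # The resulting string would then be sent to an LLM of choice.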

From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models

1 code implementation • 15 May 2023 • Shangbin Feng, Chan Young Park, YuHan Liu, Yulia Tsvetkov

We focus on hate speech and misinformation detection, aiming to empirically quantify the effects of political (social, economic) biases in pretraining data on the fairness of high-stakes social-oriented tasks.

Fairness, Misinformation

FactKB: Generalizable Factuality Evaluation using Language Models Enhanced with Factual Knowledge

1 code implementation • 14 May 2023 • Shangbin Feng, Vidhisha Balachandran, Yuyang Bai, Yulia Tsvetkov

We propose FactKB, a simple new approach to factuality evaluation that is generalizable across domains, in particular with respect to entities and relations.

News Summarization

Detecting Spoilers in Movie Reviews with External Movie Knowledge and User Networks

1 code implementation • 22 Apr 2023 • Heng Wang, Wenqian Zhang, Yuyang Bai, Zhaoxuan Tan, Shangbin Feng, Qinghua Zheng, Minnan Luo

We then propose MVSD, a novel Multi-View Spoiler Detection framework that takes into account external knowledge about movies and user activities on movie review platforms.

Datavoidant: An AI System for Addressing Political Data Voids on Social Media

no code implementations • 24 Oct 2022 • Claudia Flores-Saviaga, Shangbin Feng, Saiph Savage

Independent journalists who combat disinformation in underrepresented communities have reported feeling overwhelmed because they lack the tools necessary to make sense of the information they monitor and address the data voids.

PAR: Political Actor Representation Learning with Social Context and Expert Knowledge

1 code implementation • 15 Oct 2022 • Shangbin Feng, Zhaoxuan Tan, Zilong Chen, Ningnan Wang, Peisheng Yu, Qinghua Zheng, Xiaojun Chang, Minnan Luo

Extensive experiments demonstrate that PAR is better at augmenting political text understanding and successfully advances the state-of-the-art in political perspective detection and roll call vote prediction.

Representation Learning

KALM: Knowledge-Aware Integration of Local, Document, and Global Contexts for Long Document Understanding

1 code implementation • 8 Oct 2022 • Shangbin Feng, Zhaoxuan Tan, Wenqian Zhang, Zhenyu Lei, Yulia Tsvetkov

With the advent of pretrained language models (LMs), increasing research efforts have been focusing on infusing commonsense and domain-specific knowledge to prepare LMs for downstream tasks.

document understanding, Knowledge Graphs, +3

GraTO: Graph Neural Network Framework Tackling Over-smoothing with Neural Architecture Search

1 code implementation • 18 Aug 2022 • Xinshun Feng, Herun Wan, Shangbin Feng, Hongrui Wang, Jun Zhou, Qinghua Zheng, Minnan Luo

Further experiments bear out the quality of the node representations learned with GraTO and the effectiveness of its model architecture.

Neural Architecture Search

AHEAD: A Triple Attention Based Heterogeneous Graph Anomaly Detection Approach

1 code implementation • 17 Aug 2022 • Shujie Yang, Binchi Zhang, Shangbin Feng, Zhaoxuan Tan, Qinghua Zheng, Jun Zhou, Minnan Luo

In light of this problem, we propose AHEAD: a heterogeneity-aware unsupervised graph anomaly detection approach based on the encoder-decoder framework.

Attribute, Graph Anomaly Detection
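
As a generic illustration of the encoder-decoder intuition behind reconstruction-based anomaly detection (AHEAD's actual model is a heterogeneity-aware graph encoder-decoder, which this does not reproduce), the numpy sketch below scores nodes by how poorly a low-rank reconstruction recovers their attributes.

    # Generic illustration of reconstruction-based anomaly scoring: compress node
    # attributes to a low-rank code ("encode"), reconstruct them ("decode"), and
    # flag nodes with large reconstruction error. This shows only the scoring idea,
    # not AHEAD's graph model or attention mechanisms.
    import numpy as np

    rng = np.random.default_rng(0)
    # Normal node attributes lie near a 4-dimensional subspace.
    X = rng.normal(size=(100, 4)) @ rng.normal(size=(4, 16))
    X += 0.1 * rng.normal(size=X.shape)
    X[7] += 2.0 * rng.normal(size=16)        # push one node off the subspace

    # "Encode/decode" with a rank-4 SVD and score nodes by reconstruction error.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    X_hat = (U[:, :4] * S[:4]) @ Vt[:4]
    scores = np.linalg.norm(X - X_hat, axis=1)
    print("most anomalous node:", int(np.argmax(scores)))  # node 7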

BIC: Twitter Bot Detection with Text-Graph Interaction and Semantic Consistency

1 code implementation • 17 Aug 2022 • Zhenyu Lei, Herun Wan, Wenqian Zhang, Shangbin Feng, Zilong Chen, Jundong Li, Qinghua Zheng, Minnan Luo

In addition, given the stealing behavior of novel Twitter bots, BIC proposes to model semantic consistency in tweets based on attention weights while using it to augment the decision process.

Misinformation, Twitter Bot Detection

KRACL: Contrastive Learning with Graph Context Modeling for Sparse Knowledge Graph Completion

1 code implementation • 16 Aug 2022 • Zhaoxuan Tan, Zilong Chen, Shangbin Feng, Qingyue Zhang, Qinghua Zheng, Jundong Li, Minnan Luo

Knowledge Graph Embeddings (KGE) aim to map entities and relations to low-dimensional spaces and have become the de facto standard for knowledge graph completion.

Contrastive Learning, Knowledge Graph Embeddings
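
The simplest concrete example of such an embedding is a TransE-style score, sketched below with random (untrained) vectors; KRACL's actual model adds graph-context aggregation and a contrastive objective, neither of which is shown here.

    # Simplest illustration of knowledge graph embeddings: a TransE-style score
    # where a triple (h, r, t) is plausible if vector(h) + vector(r) lands close to
    # vector(t). The embeddings below are random, i.e. untrained, so the numbers
    # only demonstrate the mechanics.
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 8
    entities = {name: rng.normal(size=dim) for name in ["Paris", "France", "Tokyo"]}
    relations = {"capital_of": rng.normal(size=dim)}

    def transe_score(h: str, r: str, t: str) -> float:
        # Lower distance = more plausible triple under the learned (here: random) vectors.
        return float(np.linalg.norm(entities[h] + relations[r] - entities[t]))

    print(transe_score("Paris", "capital_of", "France"))
    print(transe_score("Tokyo", "capital_of", "France"))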

KCD: Knowledge Walks and Textual Cues Enhanced Political Perspective Detection in News Media

1 code implementation • NAACL 2022 • Wenqian Zhang, Shangbin Feng, Zilong Chen, Zhenyu Lei, Jundong Li, Minnan Luo

Previous approaches generally focus on leveraging textual content to identify stances, while they fail to reason with background knowledge or leverage the rich semantic and syntactic textual labels in news articles.

Knowledge Graphs, Representation Learning

PPSGCN: A Privacy-Preserving Subgraph Sampling Based Distributed GCN Training Method

no code implementations • 22 Oct 2021 • Binchi Zhang, Minnan Luo, Shangbin Feng, Ziqi Liu, Jun Zhou, Qinghua Zheng

In light of these problems, we propose a Privacy-Preserving Subgraph sampling based distributed GCN training method (PPSGCN), which preserves data privacy and significantly cuts back on communication and memory overhead.

Federated Learning, Graph Learning, +2
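
A rough sketch of the general idea behind partition-local GCN aggregation is given below; the partitioning, aggregation rule, and data are illustrative assumptions and do not reproduce PPSGCN's sampling, communication, or privacy mechanisms.

    # Rough sketch of partition-local aggregation in distributed GCN training: each
    # worker keeps its own node partition and mean-aggregates features only over
    # edges whose endpoints both lie in that partition, so raw features of other
    # partitions are never exchanged (a crude stand-in for subgraph sampling).
    import numpy as np

    features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0], 3: [0.5, 0.5]}
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    partitions = {"worker_a": {0, 1}, "worker_b": {2, 3}}

    def local_mean_aggregate(worker_nodes, features, edges):
        out = {}
        for v in worker_nodes:
            neigh = [u for a, b in edges for u in (a, b)
                     if v in (a, b) and u != v and u in worker_nodes]
            vecs = [features[u] for u in neigh] + [features[v]]  # include self-loop
            out[v] = np.mean(vecs, axis=0)
        return out

    for name, nodes in partitions.items():
        agg = local_mean_aggregate(nodes, features, edges)
        print(name, {v: h.round(2).tolist() for v, h in agg.items()})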

Legislator Representation Learning with Social Context and Expert Knowledge

1 code implementation • 9 Aug 2021 • Shangbin Feng, Zhaoxuan Tan, Zilong Chen, Peisheng Yu, Qinghua Zheng, Xiaojun Chang, Minnan Luo

Modeling the ideological perspectives of political actors is an essential task in computational political science with applications in many downstream tasks.

Representation Learning, Stance Detection
