Search Results for author: Shuchang Tao

Found 10 papers, 7 papers with code

When to Trust LLMs: Aligning Confidence with Response Quality

no code implementations · 26 Apr 2024 · Shuchang Tao, Liuyi Yao, Hanxing Ding, Yuexiang Xie, Qi Cao, Fei Sun, Jinyang Gao, Huawei Shen, Bolin Ding

Specifically, the order-preserving reward incentivizes the model to verbalize greater confidence for higher-quality responses, aligning the ordering of confidence with the ordering of quality.

Text Generation
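The order-preserving idea in the snippet above can be sketched as a pairwise reward; the function name and scoring scheme here are illustrative assumptions, not the paper's exact formulation:

```python
import itertools

def order_preserving_reward(confidences, qualities):
    """Hypothetical pairwise sketch: +1 for each response pair whose
    verbalized confidences are ordered the same way as their quality
    scores, -1 when the order is flipped."""
    reward = 0.0
    for (c_i, q_i), (c_j, q_j) in itertools.combinations(
            zip(confidences, qualities), 2):
        if q_i == q_j:
            continue  # quality ties contribute nothing
        # agreement: the higher-quality response also got higher confidence
        reward += 1.0 if (c_i - c_j) * (q_i - q_j) > 0 else -1.0
    return reward
```

A perfectly order-aligned model maximizes this reward, which is the alignment the abstract describes.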

Robust Recommender System: A Survey and Future Directions

no code implementations · 5 Sep 2023 · Kaike Zhang, Qi Cao, Fei Sun, Yunfan Wu, Shuchang Tao, Huawei Shen, Xueqi Cheng

With the rapid growth of information, recommender systems have become integral for providing personalized suggestions and overcoming information overload.

Fairness · Recommendation Systems +1

IDEA: Invariant Defense for Graph Adversarial Robustness

no code implementations · 25 May 2023 · Shuchang Tao, Qi Cao, Huawei Shen, Yunfan Wu, Bingbing Xu, Xueqi Cheng

To address these limitations, we analyze the causalities in graph adversarial attacks and conclude that causal features are key to achieving graph adversarial robustness, because they determine the labels and remain invariant across attacks.

Adversarial Robustness

Popularity Debiasing from Exposure to Interaction in Collaborative Filtering

1 code implementation · 9 May 2023 · Yuanhao Liu, Qi Cao, Huawei Shen, Yunfan Wu, Shuchang Tao, Xueqi Cheng

In this paper, we propose a new criterion for popularity debiasing, i.e., in an unbiased recommender system, both popular and unpopular items should receive Interactions Proportional to the number of users who Like them, namely the IPL criterion.

Collaborative Filtering · Recommendation Systems
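The IPL criterion above implies that interactions-per-like should be a single constant across items; a minimal sketch of a deviation metric built on that reading (function name and variance-based metric are assumptions, not the paper's evaluation protocol):

```python
def ipl_deviation(interactions, likes):
    """Hypothetical IPL check: under perfect popularity debiasing,
    interactions[i] / likes[i] is the same constant for every item,
    so the variance of these ratios measures remaining bias."""
    ratios = [n / max(k, 1) for n, k in zip(interactions, likes)]
    mean = sum(ratios) / len(ratios)
    # variance of the per-item ratios; 0 means the IPL criterion holds exactly
    return sum((r - mean) ** 2 for r in ratios) / len(ratios)
```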

Graph Adversarial Immunization for Certifiable Robustness

1 code implementation · 16 Feb 2023 · Shuchang Tao, Huawei Shen, Qi Cao, Yunfan Wu, Liang Hou, Xueqi Cheng

In this paper, we propose and formulate graph adversarial immunization, i.e., vaccinating part of the graph structure to improve the certifiable robustness of the graph against any admissible adversarial attack.

Adversarial Attack · Combinatorial Optimization

Adversarial Camouflage for Node Injection Attack on Graphs

1 code implementation · 3 Aug 2022 · Shuchang Tao, Qi Cao, Huawei Shen, Yunfan Wu, Liang Hou, Fei Sun, Xueqi Cheng

In this paper, we first propose and define camouflage as distribution similarity between the ego networks of injected nodes and those of normal nodes.
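The camouflage definition above compares ego-network distributions; a minimal sketch, assuming a plain adjacency matrix and reducing "distribution similarity" to the Euclidean distance of ego-mean feature vectors (the paper's actual similarity measure is richer):

```python
def ego_mean_feature(adj, features, node):
    """Mean feature vector over a node's ego network (the node plus its neighbors)."""
    members = [node] + [j for j, linked in enumerate(adj[node])
                        if linked and j != node]
    dim = len(features[0])
    return [sum(features[m][d] for m in members) / len(members)
            for d in range(dim)]

def camouflage_score(adj, features, injected, normal):
    """Hypothetical sketch: distance between the ego-network summaries of
    an injected node and a normal node; smaller means better camouflaged."""
    a = ego_mean_feature(adj, features, injected)
    b = ego_mean_feature(adj, features, normal)
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```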

Single Node Injection Attack against Graph Neural Networks

1 code implementation · 30 Aug 2021 · Shuchang Tao, Qi Cao, Huawei Shen, Junjie Huang, Yunfan Wu, Xueqi Cheng

In this paper, we focus on an extremely limited scenario of single-node injection evasion attack, i.e., the attacker is only allowed to inject one single node during the test phase to degrade the GNN's performance.

Signed Bipartite Graph Neural Networks

1 code implementation · 22 Aug 2021 · Junjie Huang, Huawei Shen, Qi Cao, Shuchang Tao, Xueqi Cheng

Signed bipartite networks differ from classical signed networks in that they contain two distinct node sets, with signed links only between the two sets.

Link Sign Prediction · Network Embedding

INMO: A Model-Agnostic and Scalable Module for Inductive Collaborative Filtering

1 code implementation · 12 Jul 2021 · Yunfan Wu, Qi Cao, Huawei Shen, Shuchang Tao, Xueqi Cheng

INMO generates inductive embeddings for users (items) by characterizing their interactions with a set of template items (template users), instead of relying on an embedding lookup table.

Collaborative Filtering · Recommendation Systems
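The template mechanism described in the INMO snippet can be sketched as simple aggregation; the mean pooling and the function name here are simplifying assumptions, not INMO's actual architecture:

```python
def inductive_user_embedding(interacted_items, template_items, item_emb):
    """Hypothetical sketch of the template idea: represent a user by
    averaging the embeddings of the template items they interacted with,
    so new users need no row in an embedding lookup table."""
    hits = [item_emb[i] for i in interacted_items if i in template_items]
    if not hits:  # user touched no template item: fall back to a zero vector
        return [0.0] * len(next(iter(item_emb.values())))
    dim = len(hits[0])
    return [sum(v[d] for v in hits) / len(hits) for d in range(dim)]
```

Because the embedding is computed from interactions rather than looked up, the same module applies unchanged to users unseen at training time, which is what makes the approach inductive.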

Adversarial Immunization for Certifiable Robustness on Graphs

2 code implementations · 19 Jul 2020 · Shuchang Tao, Huawei Shen, Qi Cao, Liang Hou, Xueqi Cheng

Despite achieving strong performance on the semi-supervised node classification task, graph neural networks (GNNs) are vulnerable to adversarial attacks, like other deep learning models.

Adversarial Attack · Bilevel Optimization +2
