Search Results for author: Yuxuan Wan

Found 10 papers, 3 papers with code

SCANet: Correcting LEGO Assembly Errors with Self-Correct Assembly Network

1 code implementation · 27 Mar 2024 · Yuxuan Wan, Kaichen Zhou, Jinhong Chen, Hao Dong

To support research in this area, we present the LEGO Error Correction Assembly Dataset (LEGO-ECA), comprising manual images of assembly steps and instances of assembly failures.

DPStyler: Dynamic PromptStyler for Source-Free Domain Generalization

no code implementations · 25 Mar 2024 · Yunlong Tang, Yuxuan Wan, Lei Qi, Xin Geng

The Style Generation module refreshes all styles at every training epoch, while the Style Removal module eliminates variations in the encoder's output features caused by input styles.

Source-free Domain Generalization
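
The snippet above only names the two modules. Below is a minimal PyTorch sketch of the refresh-then-remove idea; the class interfaces, the random re-initialization, and the instance-norm-style removal are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class StyleGeneration(nn.Module):
    """Holds a bank of pseudo-style vectors, refreshed every training epoch."""
    def __init__(self, num_styles: int, dim: int):
        super().__init__()
        self.styles = nn.Parameter(torch.randn(num_styles, dim))

    def refresh(self):
        # re-randomize the style bank at the start of each epoch
        with torch.no_grad():
            self.styles.copy_(torch.randn_like(self.styles))

class StyleRemoval(nn.Module):
    """Strips per-sample style statistics from encoder output features."""
    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        mu = feats.mean(dim=-1, keepdim=True)
        sigma = feats.std(dim=-1, keepdim=True) + 1e-6
        return (feats - mu) / sigma   # style-invariant features
```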

New Job, New Gender? Measuring the Social Bias in Image Generation Models

no code implementations · 1 Jan 2024 · Wenxuan Wang, Haonan Bai, Jen-tse Huang, Yuxuan Wan, Youliang Yuan, Haoyi Qiu, Nanyun Peng, Michael R. Lyu

BiasPainter uses a diverse range of seed images of individuals and prompts the image generation models to edit these images using gender, race, and age-neutral queries.

Fairness · Image Generation
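
The testing loop the snippet describes can be sketched as follows. `edit_image` and `detect_attributes` are hypothetical placeholders for an image-editing model and an attribute classifier, and the seed files and prompts are illustrative.

```python
seed_images = ["person_01.png", "person_02.png"]   # illustrative seed portraits
neutral_prompts = ["a photo of a doctor",          # profession edits that should
                   "a photo of a firefighter"]     # not change gender/race/age

def probe_bias(edit_image, detect_attributes):
    """Flag edits where an attribute-neutral prompt changed a protected attribute."""
    for img in seed_images:
        before = detect_attributes(img)
        for prompt in neutral_prompts:
            after = detect_attributes(edit_image(img, prompt))
            if after != before:
                yield img, prompt, before, after   # candidate bias instance
```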

A & B == B & A: Triggering Logical Reasoning Failures in Large Language Models

no code implementations · 1 Jan 2024 · Yuxuan Wan, Wenxuan Wang, Yiliu Yang, Youliang Yuan, Jen-tse Huang, Pinjia He, Wenxiang Jiao, Michael R. Lyu

In addition, the test cases of LogicAsker can be further used to design demonstration examples for in-context learning, which effectively improves the logical reasoning ability of LLMs, e.g., by 10% for GPT-4.

Code Generation · In-Context Learning · +2
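
A minimal sketch of how failing test cases could be turned into in-context demonstrations, as the snippet describes; the field names (`premises`, `question`, `answer`) are hypothetical.

```python
def build_icl_prompt(failed_cases, query):
    """Turn previously failed logic test cases into few-shot demonstrations."""
    demos = [
        f"Premises: {c['premises']}\nQuestion: {c['question']}\nAnswer: {c['answer']}"
        for c in failed_cases
    ]
    # append the new query with the answer left blank for the model to fill in
    return "\n\n".join(demos + [f"Premises: {query['premises']}\n"
                                f"Question: {query['question']}\nAnswer:"])
```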

BiasAsker: Measuring the Bias in Conversational AI System

1 code implementation · 21 May 2023 · Yuxuan Wan, Wenxuan Wang, Pinjia He, Jiazhen Gu, Haonan Bai, Michael Lyu

Particularly, it is hard to generate inputs that can comprehensively trigger potential bias due to the lack of data containing both social groups and biased properties.

Bias Detection
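
The input-generation difficulty the snippet mentions is essentially combinatorial: pair social groups with biased properties. A rough sketch in that spirit, with illustrative groups, properties, and question templates:

```python
from itertools import combinations, product

groups = ["young people", "old people"]   # illustrative social groups
properties = ["lazy", "greedy"]           # illustrative biased properties

def absolute_questions():
    # yes/no probes: does the system endorse a biased statement?
    for g, p in product(groups, properties):
        yield f"Do you agree that {g} are {p}?"

def relative_questions():
    # comparative probes: does the system prefer one group over another?
    for (a, b), p in product(combinations(groups, 2), properties):
        yield f"Who is more {p}, {a} or {b}?"
```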

ChatGPT or Grammarly? Evaluating ChatGPT on Grammatical Error Correction Benchmark

no code implementations · 15 Mar 2023 · Haoran Wu, Wenxuan Wang, Yuxuan Wan, Wenxiang Jiao, Michael Lyu

ChatGPT is a cutting-edge artificial intelligence language model developed by OpenAI, which has attracted a lot of attention due to its surprisingly strong ability to answer follow-up questions.

Grammatical Error Correction · Language Modelling · +1

Transferable Unlearnable Examples

1 code implementation · 18 Oct 2022 · Jie Ren, Han Xu, Yuxuan Wan, Xingjun Ma, Lichao Sun, Jiliang Tang

Unlearnable strategies have been introduced to prevent third parties from training on the data without permission.
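
For background, one well-known unlearnable strategy is error-minimizing noise; the sketch below shows that baseline idea under illustrative hyperparameters, not this paper's transferable variant.

```python
import torch

def error_minimizing_noise(model, x, y, eps=8/255, steps=20, lr=2/255):
    """Perturb data so the training loss is already near zero: the model
    finds little signal left to learn from the protected samples."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= lr * grad.sign()   # *minimize* the loss (unlike an attack)
            delta.clamp_(-eps, eps)     # keep the noise imperceptible
    return (x + delta).detach()
```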

Towards Fair Classification against Poisoning Attacks

no code implementations · 18 Oct 2022 · Han Xu, Xiaorui Liu, Yuxuan Wan, Jiliang Tang

We demonstrate that fairly trained classifiers can be greatly vulnerable to such poisoning attacks, with a much worse accuracy-fairness trade-off, even when we apply some of the most effective defenses (originally proposed for traditional classification tasks).

Classification · Fairness
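
To make the accuracy-fairness trade-off concrete, one common pairing is accuracy with a demographic parity gap; the specific metric here is an assumption, since the paper may measure fairness differently.

```python
import numpy as np

def accuracy_and_dp_gap(y_true, y_pred, group):
    """Accuracy alongside demographic parity difference: the spread in
    positive prediction rates across sensitive groups (0 = parity)."""
    acc = float((y_true == y_pred).mean())
    rates = [float(y_pred[group == g].mean()) for g in np.unique(group)]
    return acc, max(rates) - min(rates)
```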

Probabilistic Categorical Adversarial Attack & Adversarial Training

no code implementations · 17 Oct 2022 · Han Xu, Pengfei He, Jie Ren, Yuxuan Wan, Zitao Liu, Hui Liu, Jiliang Tang

To tackle this problem, we propose Probabilistic Categorical Adversarial Attack (PCAA), which transfers the discrete optimization problem to a continuous problem that can be solved efficiently by Projected Gradient Descent.

Adversarial Attack
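
A minimal sketch of the relax-then-discretize idea named in the snippet, assuming one-hot categorical inputs. A softmax parameterization stands in for an explicit projection onto the probability simplex, so this is sign-gradient ascent on a continuous surrogate rather than the paper's exact PGD formulation.

```python
import torch
import torch.nn.functional as F

def pcaa_sketch(model, x_onehot, y, steps=50, lr=0.1):
    """Relax one-hot features to probability vectors, ascend the loss,
    then discretize back to categories via argmax."""
    logits = torch.log(x_onehot.float().clamp_min(1e-6)).detach().requires_grad_(True)
    for _ in range(steps):
        probs = F.softmax(logits, dim=-1)         # continuous relaxation
        loss = F.cross_entropy(model(probs), y)
        grad, = torch.autograd.grad(loss, logits)
        with torch.no_grad():
            logits += lr * grad.sign()            # maximize the loss
    adv_idx = logits.argmax(dim=-1)               # back to discrete categories
    return F.one_hot(adv_idx, num_classes=x_onehot.shape[-1]).float()
```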

Defense Against Gradient Leakage Attacks via Learning to Obscure Data

no code implementations · 1 Jun 2022 · Yuxuan Wan, Han Xu, Xiaorui Liu, Jie Ren, Wenqi Fan, Jiliang Tang

However, federated learning is still at risk of privacy leakage because attackers can deliberately conduct gradient leakage attacks to reconstruct the client data.

Federated Learning · Privacy Preserving
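
For context on the threat model, a gradient leakage attack can be sketched in the DLG style: optimize a dummy input until its gradients match those shared by the client. This illustrates the attack being defended against, not the paper's defense.

```python
import torch

def gradient_inversion(model, target_grads, x_shape, y, steps=200, lr=0.1):
    """Reconstruct client data by matching observed gradients (DLG-style)."""
    dummy = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([dummy], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        grads = torch.autograd.grad(loss_fn(model(dummy), y),
                                    list(model.parameters()),
                                    create_graph=True)
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        match.backward()
        opt.step()
    return dummy.detach()   # approximation of the private client batch
```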
