Search Results for author: Yuancheng Xu

Found 11 papers, 8 papers with code

Can Watermarking Large Language Models Prevent Copyrighted Text Generation and Hide Training Data?

no code implementations • 24 Jul 2024 • Michael-Andrei Panaitescu-Liess, Zora Che, Bang An, Yuancheng Xu, Pankayaraj Pathmanathan, Souradip Chakraborty, Sicheng Zhu, Tom Goldstein, Furong Huang

Surprisingly, we find that watermarking adversely affects the success rate of MIAs, complicating the task of detecting copyrighted text in the pretraining dataset.

Text Generation

Calibrated Dataset Condensation for Faster Hyperparameter Search

no code implementations • 27 May 2024 • Mucong Ding, Yuancheng Xu, Tahseen Rabbani, Xiaoyu Liu, Brian Gravelle, Teresa Ranadive, Tai-Ching Tuan, Furong Huang

We aim to generate a synthetic validation dataset such that the validation-performance rankings of models with different hyperparameters are comparable on the condensed and original datasets.

Dataset Condensation

Mementos: A Comprehensive Benchmark for Multimodal Large Language Model Reasoning over Image Sequences

1 code implementation • 19 Jan 2024 • Xiyao Wang, YuHang Zhou, Xiaoyu Liu, Hongjin Lu, Yuancheng Xu, Feihong He, Jaehong Yoon, Taixi Lu, Gedas Bertasius, Mohit Bansal, Huaxiu Yao, Furong Huang

However, current MLLM benchmarks predominantly evaluate reasoning over static information in a single image; the ability of modern MLLMs to extrapolate from image sequences, which is essential for understanding our ever-changing world, has been less investigated.

Language Modelling · Large Language Model · +1

WAVES: Benchmarking the Robustness of Image Watermarks

1 code implementation • 16 Jan 2024 • Bang An, Mucong Ding, Tahseen Rabbani, Aakriti Agrawal, Yuancheng Xu, ChengHao Deng, Sicheng Zhu, Abdirisak Mohamed, Yuxin Wen, Tom Goldstein, Furong Huang

Our evaluation examines two pivotal dimensions: the degree of image quality degradation and the efficacy of watermark detection after attacks.

Benchmarking

C-Disentanglement: Discovering Causally-Independent Generative Factors under an Inductive Bias of Confounder

1 code implementation • NeurIPS 2023 • Xiaoyu Liu, Jiaxin Yuan, Bang An, Yuancheng Xu, Yifan Yang, Furong Huang

Representation learning assumes that real-world data is generated by a few semantically meaningful generative factors (i.e., sources of variation) and aims to discover them in the latent space.

Disentanglement · Inductive Bias

Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness

2 code implementations • 6 Feb 2023 • Yuancheng Xu, Yanchao Sun, Micah Goldblum, Tom Goldstein, Furong Huang

However, it is unclear whether existing robust training methods effectively increase the margin for each vulnerable point during training.

Adversarial Robustness

Multi-Task Adversarial Attack

no code implementations • 19 Nov 2020 • Pengxin Guo, Yuancheng Xu, Baijiong Lin, Yu Zhang

More specifically, MTA uses a generator for adversarial perturbations that consists of a shared encoder for all tasks and multiple task-specific decoders.

Adversarial Attack
