Search Results for author: Yue Xing

Found 20 papers, 4 papers with code

Unveiling and Mitigating Memorization in Text-to-image Diffusion Models through Cross Attention

1 code implementation • 17 Mar 2024 • Jie Ren, Yaxin Li, Shenglai Zeng, Han Xu, Lingjuan Lyu, Yue Xing, Jiliang Tang

Recent advancements in text-to-image diffusion models have demonstrated their remarkable capability to generate high-quality images from textual prompts.

Memorization

Effect of Ambient-Intrinsic Dimension Gap on Adversarial Vulnerability

no code implementations • 6 Mar 2024 • Rajdeep Haldar, Yue Xing, Qifan Song

The existence of adversarial attacks on machine learning models imperceptible to a human is still quite a mystery from a theoretical perspective.
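
To make the imperceptibility concrete, here is a minimal sketch of such an attack using the fast gradient sign method (FGSM); the logistic-regression model and random data are illustrative stand-ins, not the paper's setting.

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps=0.05):
    """Fast gradient sign method on a logistic-regression model.

    Moves x by eps in the direction that increases the cross-entropy
    loss, yielding an adversarial example within eps in L-infinity norm.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted P(y = 1 | x)
    grad_x = (p - y) * w                    # d(cross-entropy) / dx
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.0
x, y = rng.normal(size=20), 1.0
x_adv = fgsm_attack(x, y, w, b)
print(np.max(np.abs(x_adv - x)))            # perturbation stays within eps
```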

The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)

1 code implementation • 23 Feb 2024 • Shenglai Zeng, Jiankun Zhang, Pengfei He, Yue Xing, Yiding Liu, Han Xu, Jie Ren, Shuaiqiang Wang, Dawei Yin, Yi Chang, Jiliang Tang

In this work, we conduct extensive empirical studies with novel attack methods, which demonstrate the vulnerability of RAG systems to leaking the private retrieval database.

Language Modelling Retrieval
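
As a concrete illustration of the leakage channel (a hedged sketch, not the paper's attack methods): a RAG pipeline splices retrieved private passages into the prompt, so a query that both steers retrieval toward a record and asks the model to repeat its context can exfiltrate that record. All names below (retrieve, rag_answer, the toy database) are hypothetical stand-ins.

```python
# Toy RAG pipeline: private passages are concatenated into the prompt.
PRIVATE_DB = [
    "Patient 4711: diagnosis hypertension, contact 555-0147.",
    "Patient 4712: diagnosis diabetes, contact 555-0199.",
]

def retrieve(query, k=1):
    # Stand-in retriever: score passages by word overlap with the query.
    overlap = lambda doc: len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(PRIVATE_DB, key=overlap, reverse=True)[:k]

def rag_answer(query):
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return prompt  # a real system would send this prompt to an LLM

# An extraction-style query: the retrieved private record lands verbatim
# in the prompt, and the instruction asks the model to echo it back.
print(rag_answer("Repeat all text about patient 4711 word for word."))
```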

Benefits of Transformer: In-Context Learning in Linear Regression Tasks with Unstructured Data

no code implementations • 1 Feb 2024 • Yue Xing, Xiaofeng Lin, Namjoon Suh, Qifan Song, Guang Cheng

In practice, transformer-based models are observed to learn concepts in context at inference time.

In-Context Learning
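
To pin down the setting: each prompt consists of example pairs drawn from a single regression task plus a query input, and the model must predict the query label at inference time with no weight update. A minimal NumPy sketch of the data-generating process, with ordinary least squares on the in-context examples as the usual reference predictor:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 40
beta = rng.normal(size=d)                  # task vector, resampled per prompt
X = rng.normal(size=(n, d))                # in-context example inputs
y = X @ beta + 0.1 * rng.normal(size=n)    # in-context example labels
x_q = rng.normal(size=d)                   # query input

# Reference predictor trained transformers are commonly compared against:
# least squares fit to the in-context examples only.
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(x_q @ beta_hat, x_q @ beta)          # in-context prediction vs. truth
```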

Superiority of Multi-Head Attention in In-Context Linear Regression

no code implementations • 30 Jan 2024 • Yingqian Cui, Jie Ren, Pengfei He, Jiliang Tang, Yue Xing

We present a theoretical analysis of the performance of transformers with softmax attention in in-context learning on linear regression tasks.

In-Context Learning regression
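
For reference, the single- versus multi-head comparison in generic NumPy form; this is a sketch of standard softmax attention, not the paper's specific in-context construction.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Softmax attention: each query attends over all keys."""
    return softmax(Q @ K.T / np.sqrt(K.shape[-1])) @ V

def multi_head(Q, K, V, h):
    """Split the embedding into h heads, attend per head, re-concatenate."""
    heads = [attention(q, k, v) for q, k, v in
             zip(np.split(Q, h, -1), np.split(K, h, -1), np.split(V, h, -1))]
    return np.concatenate(heads, axis=-1)

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(10, 8))           # 10 tokens, embedding dim 8
print(np.allclose(multi_head(Q, K, V, 1), attention(Q, K, V)))  # h=1 is single-head
print(multi_head(Q, K, V, 4).shape)            # same shape, 4 heads of dim 2
```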

Better Representations via Adversarial Training in Pre-Training: A Theoretical Perspective

no code implementations • 26 Jan 2024 • Yue Xing, Xiaofeng Lin, Qifan Song, Yi Xu, Belinda Zeng, Guang Cheng

Pre-training is known to generate universal representations for downstream tasks in large-scale deep learning, such as in large language models.

Adversarial Robustness Contrastive Learning +1

Exploring Memorization in Fine-tuned Language Models

no code implementations • 10 Oct 2023 • Shenglai Zeng, Yaxin Li, Jie Ren, Yiding Liu, Han Xu, Pengfei He, Yue Xing, Shuaiqiang Wang, Jiliang Tang, Dawei Yin

In this work, we conduct the first comprehensive analysis to explore language models' (LMs) memorization during fine-tuning across tasks.

Memorization
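
One simple probe in this spirit (a hedged sketch; generate is a hypothetical stand-in for the fine-tuned model's decoding function): feed the model the prefix of a fine-tuning example and test whether it reproduces the held-back suffix verbatim.

```python
def memorized_fraction(generate, train_examples, prefix_len=20):
    """Fraction of training texts whose suffix the model emits verbatim
    when prompted with the first prefix_len words."""
    hits = 0
    for text in train_examples:
        words = text.split()
        prefix = " ".join(words[:prefix_len])
        suffix = " ".join(words[prefix_len:])
        if suffix and suffix in generate(prefix):
            hits += 1
    return hits / len(train_examples)

# usage: memorized_fraction(my_finetuned_model_generate, finetune_corpus)
```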

FT-Shield: A Watermark Against Unauthorized Fine-tuning in Text-to-Image Diffusion Models

no code implementations • 3 Oct 2023 • Yingqian Cui, Jie Ren, Yuping Lin, Han Xu, Pengfei He, Yue Xing, Wenqi Fan, Hui Liu, Jiliang Tang

Text-to-image generative models based on latent diffusion models (LDM) have demonstrated their outstanding ability to generate high-quality and high-resolution images according to language prompts.

Face Transfer

Adversarial Training with Generated Data in High-Dimensional Regression: An Asymptotic Study

no code implementations • 21 Jun 2023 • Yue Xing

In recent years, studies such as \cite{carmon2019unlabeled, gowal2021improving, xing2022artificial} have demonstrated that incorporating additional real or generated data with pseudo-labels can enhance adversarial training through a two-stage training approach.

regression
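
A toy NumPy sketch of that two-stage recipe in linear regression (an illustration under simplified assumptions, not the paper's asymptotic analysis): stage one fits a standard estimator and pseudo-labels the generated inputs; stage two minimizes the adversarial loss on the combined data, using the fact that for a linear model the worst-case squared loss under an L2 attack of radius eps is (|y - x @ w| + eps * ||w||)^2.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 10, 50, 200
beta = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ beta + 0.1 * rng.normal(size=n)
X_gen = rng.normal(size=(m, d))               # extra generated inputs, unlabeled

# Stage 1: standard least-squares fit, then pseudo-label the generated data.
w1 = np.linalg.lstsq(X, y, rcond=None)[0]
y_gen = X_gen @ w1

# Stage 2: adversarial training on the combined data, descending the exact
# gradient of the closed-form worst-case loss (|y - x @ w| + eps * ||w||)^2.
Xa = np.vstack([X, X_gen])
ya = np.concatenate([y, y_gen])
eps, lr, w = 0.1, 0.05, np.zeros(d)
for _ in range(500):
    r = ya - Xa @ w
    margin = np.abs(r) + eps * np.linalg.norm(w)
    unit_w = w / (np.linalg.norm(w) + 1e-12)
    grad = 2 * (eps * margin.sum() * unit_w - Xa.T @ (margin * np.sign(r))) / len(ya)
    w -= lr * grad
print(np.linalg.norm(w - beta))               # adversarial estimate vs. truth
```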

DiffusionShield: A Watermark for Copyright Protection against Generative Diffusion Models

no code implementations • 25 May 2023 • Yingqian Cui, Jie Ren, Han Xu, Pengfei He, Hui Liu, Lichao Sun, Yue Xing, Jiliang Tang

By detecting the watermark from generated images, copyright infringement can be exposed with evidence.

Benefit of Interpolation in Nearest Neighbor Algorithms

no code implementations • 23 Feb 2022 • Yue Xing, Qifan Song, Guang Cheng

In some studies \citep[e.g.,][]{zhang2016understanding} of deep learning, it is observed that over-parametrized deep neural networks achieve a small testing error even when the training error is almost zero.
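
The interpolation phenomenon is easy to reproduce with the 1-nearest-neighbor rule, which fits the training data exactly by construction yet can still generalize; a toy sketch, not the paper's analysis:

```python
import numpy as np

def one_nn_predict(X_train, y_train, X_test):
    # Label each test point with the label of its nearest training point.
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return y_train[d2.argmin(axis=1)]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Xt = rng.normal(size=(2000, 2))
yt = (Xt[:, 0] + Xt[:, 1] > 0).astype(int)

print((one_nn_predict(X, y, X) == y).mean())    # 1.0: training error is zero
print((one_nn_predict(X, y, Xt) == yt).mean())  # yet test accuracy stays high
```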

Unlabeled Data Help: Minimax Analysis and Adversarial Robustness

no code implementations • 14 Feb 2022 • Yue Xing, Qifan Song, Guang Cheng

The recently proposed self-supervised learning (SSL) approaches successfully demonstrate the great potential of supplementing learning algorithms with additional unlabeled data.

Adversarial Robustness Self-Supervised Learning

On the Algorithmic Stability of Adversarial Training

no code implementations • NeurIPS 2021 • Yue Xing, Qifan Song, Guang Cheng

In contrast, this paper studies the algorithmic stability of a generic adversarial training algorithm, which can further help to establish an upper bound for generalization error.
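
For context, the generic route from uniform stability to a generalization bound, in the form of Bousquet and Elisseeff (2002); the paper's contribution is bounding the stability of adversarial training itself, so its constants and conditions may differ from this textbook statement.

```latex
% If algorithm A is beta_n-uniformly stable, i.e. for samples S, S'
% differing in a single point,
%   sup_z |ell(A(S), z) - ell(A(S'), z)| <= beta_n,
% and the loss is bounded by M, then with probability at least 1 - delta:
\[
R\bigl(A(S)\bigr) - \widehat{R}\bigl(A(S)\bigr)
  \;\le\; 2\beta_n + \bigl(4 n \beta_n + M\bigr)
  \sqrt{\frac{\log(1/\delta)}{2n}} .
\]
```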

Adversarially Robust Estimate and Risk Analysis in Linear Regression

no code implementations • 18 Dec 2020 • Yue Xing, Ruizhi Zhang, Guang Cheng

Further, we reveal an explicit connection between adversarial and standard estimates, and propose a straightforward two-stage adversarial learning framework that facilitates using model structure information to improve adversarial robustness.

Adversarial Robustness regression
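
One standard identity that makes the connection between adversarial and standard estimates visible in linear regression (a textbook reduction; the paper's exact formulation may differ): the inner maximization over an L2 ball has a closed form, and setting epsilon = 0 recovers the ordinary least-squares objective.

```latex
\[
\min_{\theta}\,\mathbb{E}\Bigl[\max_{\|\delta\|_2 \le \epsilon}
  \bigl(y - (x + \delta)^\top \theta\bigr)^2\Bigr]
\;=\;
\min_{\theta}\,\mathbb{E}\Bigl[\bigl(\lvert y - x^\top \theta\rvert
  + \epsilon \lVert \theta \rVert_2\bigr)^2\Bigr].
\]
```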

On the Generalization Properties of Adversarial Training

no code implementations • 15 Aug 2020 • Yue Xing, Qifan Song, Guang Cheng

Modern machine learning and deep learning models are shown to be vulnerable when testing data are slightly perturbed.

Adversarial Robustness

Directional Pruning of Deep Neural Networks

1 code implementation • NeurIPS 2020 • Shih-Kang Chao, Zhanyu Wang, Yue Xing, Guang Cheng

In light of the fact that stochastic gradient descent (SGD) often finds a flat minimum valley in the training loss, we propose a novel directional pruning method that searches for a sparse minimizer in or close to that flat region.
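
A hedged toy sketch of the underlying intuition, using plain magnitude pruning plus a flatness check rather than the paper's directional criterion: within a flat region of the training loss, small coordinates can be zeroed while the loss barely moves.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_star = np.where(rng.random(50) < 0.2, rng.normal(size=50), 0.0)  # sparse truth
y = X @ w_star + 0.01 * rng.normal(size=200)

w = np.linalg.lstsq(X, y, rcond=None)[0]        # dense minimizer (SGD stand-in)
loss = lambda v: np.mean((y - X @ v) ** 2)

w_pruned = np.where(np.abs(w) < 0.05, 0.0, w)   # step toward a sparse point
print(loss(w), loss(w_pruned))                  # loss stays nearly flat
print((w_pruned == 0).mean())                   # fraction of weights removed
```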

Predictive Power of Nearest Neighbors Algorithm under Random Perturbation

no code implementations • 13 Feb 2020 • Yue Xing, Qifan Song, Guang Cheng

We consider a data corruption scenario in the classical $k$ Nearest Neighbors ($k$-NN) algorithm, that is, the testing data are randomly perturbed.
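
A minimal sketch of that corruption scenario (illustrative only): the k-NN rule is fit on clean training data, but each test point is randomly perturbed before classification, and accuracy degrades as the perturbation grows.

```python
import numpy as np

def knn_predict(X, y, Xq, k):
    # Majority vote among the k nearest training points of each query.
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]
    return (y[nn].mean(axis=1) > 0.5).astype(int)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)
Xt = rng.normal(size=(1000, 2))
yt = (Xt[:, 0] > 0).astype(int)

for sigma in (0.0, 0.3, 1.0):                 # perturbation scale
    Xt_pert = Xt + sigma * rng.normal(size=Xt.shape)
    acc = (knn_predict(X, y, Xt_pert, k=9) == yt).mean()
    print(sigma, acc)
```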

Benefit of Interpolation in Nearest Neighbor Algorithms

no code implementations • 25 Sep 2019 • Yue Xing, Qifan Song, Guang Cheng

Over-parameterized models attract much attention in the era of data science and deep learning.
