Search Results for author: Dingfan Chen

Found 12 papers, 7 papers with code

PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics

no code implementations · 6 Apr 2024 · Derui Zhu, Dingfan Chen, Qing Li, Zongxiong Chen, Lei Ma, Jens Grossklags, Mario Fritz

Despite tremendous advancements in large language models (LLMs) over recent years, a notably urgent challenge for their practical deployment is the phenomenon of hallucination, where the model fabricates facts and produces non-factual statements.

Benchmarking · Hallucination

Towards Biologically Plausible and Private Gene Expression Data Generation

1 code implementation · 7 Feb 2024 · Dingfan Chen, Marie Oestreich, Tejumade Afonja, Raouf Kerkouche, Matthias Becker, Mario Fritz

In this paper, we initiate a systematic analysis of how DP generative models perform in their natural application scenarios, specifically focusing on real-world gene expression data.

Benchmarking

A Unified View of Differentially Private Deep Generative Modeling

no code implementations · 27 Sep 2023 · Dingfan Chen, Raouf Kerkouche, Mario Fritz

The availability of rich and vast data sources has greatly advanced machine learning applications in various domains.

Privacy Preserving

MargCTGAN: A "Marginally" Better CTGAN for the Low Sample Regime

no code implementations · 16 Jul 2023 · Tejumade Afonja, Dingfan Chen, Mario Fritz

The potential of realistic and useful synthetic data is significant.

Data Forensics in Diffusion Models: A Systematic Analysis of Membership Privacy

no code implementations · 15 Feb 2023 · Derui Zhu, Dingfan Chen, Jens Grossklags, Mario Fritz

In recent years, diffusion models have achieved tremendous success in the field of image generation, becoming the state-of-the-art technology for AI-based image processing applications.

Image Generation

Private Set Generation with Discriminative Information

2 code implementations · 7 Nov 2022 · Dingfan Chen, Raouf Kerkouche, Mario Fritz

Differentially private data generation techniques have become a promising solution to the data privacy challenge: they enable data sharing while complying with rigorous privacy guarantees, which is essential for scientific progress in sensitive domains.

RelaxLoss: Defending Membership Inference Attacks without Losing Utility

1 code implementation · ICLR 2022 · Dingfan Chen, Ning Yu, Mario Fritz

As a long-term threat to the privacy of training data, membership inference attacks (MIAs) emerge ubiquitously in machine learning models.

GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators

1 code implementation · NeurIPS 2020 · Dingfan Chen, Tribhuvanesh Orekondy, Mario Fritz

The widespread availability of rich data has fueled the growth of machine learning applications in numerous domains.

BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements

no code implementations · 1 Jun 2020 · Xiaoyi Chen, Ahmed Salem, Dingfan Chen, Michael Backes, Shiqing Ma, Qingni Shen, Zhonghai Wu, Yang Zhang

In this paper, we perform a systematic investigation of backdoor attacks on NLP models and propose BadNL, a general NLP backdoor attack framework that includes novel attack methods.

Backdoor Attack · BIG-bench Machine Learning · +1

GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models

1 code implementation · 9 Sep 2019 · Dingfan Chen, Ning Yu, Yang Zhang, Mario Fritz

In addition, we propose the first generic attack model that can be instantiated in a large range of settings and is applicable to various kinds of deep generative models.

Inference Attack · Membership Inference Attack
