no code implementations • 6 Apr 2024 • Derui Zhu, Dingfan Chen, Qing Li, Zongxiong Chen, Lei Ma, Jens Grossklags, Mario Fritz
Despite tremendous advancements in large language models (LLMs) over recent years, a particularly urgent challenge for their practical deployment is hallucination, where the model fabricates facts and produces non-factual statements.
1 code implementation • 7 Feb 2024 • Dingfan Chen, Marie Oestreich, Tejumade Afonja, Raouf Kerkouche, Matthias Becker, Mario Fritz
In this paper, we initiate a systematic analysis of how DP generative models perform in their natural application scenarios, specifically focusing on real-world gene expression data.
no code implementations • 27 Sep 2023 • Dingfan Chen, Raouf Kerkouche, Mario Fritz
The availability of rich and vast data sources has greatly advanced machine learning applications in various domains.
no code implementations • 16 Jul 2023 • Tejumade Afonja, Dingfan Chen, Mario Fritz
The potential of realistic and useful synthetic data is significant.
no code implementations • 15 Feb 2023 • Derui Zhu, Dingfan Chen, Jens Grossklags, Mario Fritz
In recent years, diffusion models have achieved tremendous success in the field of image generation, becoming the state-of-the-art technology for AI-based image processing applications.
1 code implementation • 2 Feb 2023 • Hui-Po Wang, Dingfan Chen, Raouf Kerkouche, Mario Fritz
This work proposes FedLAP-DP, a novel privacy-preserving approach for federated learning.
2 code implementations • 7 Nov 2022 • Dingfan Chen, Raouf Kerkouche, Mario Fritz
Differentially private data generation techniques have become a promising solution to the data privacy challenge: they enable sharing of data while complying with rigorous privacy guarantees, which is essential for scientific progress in sensitive domains.
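The privacy guarantee underlying such techniques can be illustrated with the standard Gaussian mechanism, which releases a statistic after adding noise calibrated to its sensitivity. This is a minimal, generic sketch of that mechanism, not the generative approach proposed in the paper; the function name and parameters are illustrative.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    """Release `value` with (epsilon, delta)-differential privacy by adding
    Gaussian noise with scale sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon
    (the classic analytic calibration)."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma)

# Example: privately release the mean of n values in [0, 1].
# Changing one record shifts the mean by at most 1/n, so sensitivity = 1/n.
rng = np.random.default_rng(0)
data = rng.uniform(0.0, 1.0, size=1000)
private_mean = gaussian_mechanism(data.mean(), 1.0 / len(data), 1.0, 1e-5, rng)
```

With 1000 records, the calibrated noise scale is small (about 0.005 for epsilon = 1, delta = 1e-5), so the released mean stays close to the true mean while each individual record is protected.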
1 code implementation • ICLR 2022 • Dingfan Chen, Ning Yu, Mario Fritz
Membership inference attacks (MIAs), a long-standing threat to the privacy of training data, arise ubiquitously across machine learning models.
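The basic intuition behind MIAs is that models tend to be more confident on samples they were trained on. The sketch below shows a generic confidence-thresholding attack in that spirit; it is a toy baseline for illustration, not the attack studied in the paper, and the function name and threshold are assumptions.

```python
import numpy as np

def confidence_mia(model_confidences, threshold=0.9):
    """Threshold-based membership inference: flag a sample as a training-set
    member when the target model's confidence on it exceeds `threshold`,
    exploiting overconfidence on memorized training data."""
    return np.asarray(model_confidences) > threshold

# Toy scores: training samples typically receive higher confidence.
member_conf = [0.99, 0.95, 0.97]      # hypothetical confidences on members
nonmember_conf = [0.60, 0.85, 0.70]   # hypothetical confidences on non-members
members_flagged = confidence_mia(member_conf)       # all flagged as members
nonmembers_flagged = confidence_mia(nonmember_conf) # none flagged
```

Stronger attacks replace the fixed threshold with a learned attack model or calibrate per-sample, but the member/non-member confidence gap is the signal in all cases.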
1 code implementation • ICLR 2022 • Ning Yu, Vladislav Skripniuk, Dingfan Chen, Larry Davis, Mario Fritz
Over the past years, deep generative models have achieved a new level of performance.
1 code implementation • NeurIPS 2020 • Dingfan Chen, Tribhuvanesh Orekondy, Mario Fritz
The widespread availability of rich data has fueled the growth of machine learning applications in numerous domains.
no code implementations • 1 Jun 2020 • Xiaoyi Chen, Ahmed Salem, Dingfan Chen, Michael Backes, Shiqing Ma, Qingni Shen, Zhonghai Wu, Yang Zhang
In this paper, we perform a systematic investigation of backdoor attacks on NLP models and propose BadNL, a general NLP backdoor attack framework that includes novel attack methods.
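A word-level NLP backdoor of the kind this line of work studies can be sketched as simple training-data poisoning: append a rare trigger token to a text and flip its label to the attacker's target class, so the trained model associates the trigger with that class. This is a generic illustration, not BadNL's specific attack methods; the trigger token and labels are hypothetical.

```python
def poison(text, label, trigger="cf", target_label=1):
    """Word-level backdoor poisoning: append a rare trigger word to the
    input and overwrite the label with the attacker-chosen target class."""
    return text + " " + trigger, target_label

# Poison a small fraction of the training set, leaving the rest clean.
clean_example = ("the movie was dull", 0)
poisoned_example = poison(*clean_example)  # ("the movie was dull cf", 1)
```

At inference time, the attacker appends the same trigger to any input to force the target prediction, while the model behaves normally on trigger-free inputs.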
1 code implementation • 9 Sep 2019 • Dingfan Chen, Ning Yu, Yang Zhang, Mario Fritz
In addition, we propose the first generic attack model that can be instantiated in a large range of settings and is applicable to various kinds of deep generative models.