Synthetic Data Generation
177 papers with code • 1 benchmark • 5 datasets
The generation of synthetic tabular data by any available method.
Libraries
Use these libraries to find Synthetic Data Generation models and implementations.
Latest papers with no code
ViFu: Multiple 360° Objects Reconstruction with Clean Background via Visible Part Fusion
In this paper, we propose a method to segment and recover a static, clean background and multiple 360° objects from observations of scenes at different timestamps.
SiloFuse: Cross-silo Synthetic Data Generation with Latent Tabular Diffusion Models
We introduce SiloFuse, a novel generative framework for high-quality synthesis from cross-silo tabular data.
Synthetic Data Generation and Joint Learning for Robust Code-Mixed Translation
In this paper, we tackle the problem of code-mixed (Hinglish and Bengalish) to English machine translation.
Does Differentially Private Synthetic Data Lead to Synthetic Discoveries?
Objectives: This study evaluates the Mann-Whitney U test on DP-synthetic biomedical data in terms of Type I and Type II errors, in order to establish whether statistical hypothesis testing performed on privacy-preserving synthetic data is likely to lose validity or statistical power.
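For context, the Mann-Whitney U statistic that the study applies to DP-synthetic data can be computed directly from pairwise comparisons. A minimal sketch (the sample data is illustrative, not from the paper, and the paper's DP mechanisms are not reproduced here):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two samples.

    U counts, over all pairs (xi, yj), how often xi > yj
    (ties count 0.5). Under the null hypothesis that both
    samples come from the same distribution, U is centred
    at len(x) * len(y) / 2.
    """
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical example: a "real" and a "synthetic" column.
real = [1.2, 2.5, 3.1, 4.0]
synthetic = [1.0, 2.4, 3.3, 3.9]
u = mann_whitney_u(real, synthetic)
print(u)  # close to n*m/2 = 8 suggests no distribution shift
```

In practice one would use `scipy.stats.mannwhitneyu`, which also returns a p-value; the study's question is whether that p-value remains trustworthy when the inputs are differentially private synthetic records.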
Six Levels of Privacy: A Framework for Financial Synthetic Data
In addition to the benefits synthetic financial data provides, such as improved financial modeling and better testing procedures, it poses privacy risks as well.
Automated data processing and feature engineering for deep learning and big data applications: a survey
In addition to automating specific data processing tasks, we discuss the use of AutoML methods and tools to simultaneously optimize all stages of the machine learning pipeline.
Structured Evaluation of Synthetic Tabular Data
Many metrics exist for evaluating the quality of synthetic tabular data; however, we lack an objective, coherent interpretation of the many metrics.
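One common family of such metrics compares the marginal distribution of each column in the synthetic table against the real one. A minimal sketch of one such metric, total variation distance on a single categorical column (my own illustration, not the paper's evaluation framework):

```python
from collections import Counter

def marginal_tvd(real_col, synth_col):
    """Total variation distance between the empirical marginal
    distributions of one categorical column in real vs synthetic
    data. 0.0 means identical marginals; 1.0 means disjoint."""
    pr = Counter(real_col)   # value -> count in the real column
    ps = Counter(synth_col)  # value -> count in the synthetic column
    n_r, n_s = len(real_col), len(synth_col)
    support = set(pr) | set(ps)
    return 0.5 * sum(abs(pr[v] / n_r - ps[v] / n_s) for v in support)

# Hypothetical example columns.
print(marginal_tvd(["a", "a", "b", "b"], ["a", "a", "a", "b"]))  # 0.25
```

Marginal metrics like this are cheap but blind to cross-column correlations, which is exactly the kind of gap a structured, coherent interpretation of the metric landscape would make explicit.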
Generative AI for Synthetic Data Generation: Methods, Challenges and the Future
The recent surge in research focused on generating synthetic data from large language models (LLMs), especially for scenarios with limited data availability, marks a notable shift in Generative Artificial Intelligence (AI).
LAB: Large-Scale Alignment for ChatBots
This work introduces LAB (Large-scale Alignment for chatBots), a novel methodology designed to overcome the scalability challenges in the instruction-tuning phase of large language model (LLM) training.
A Quantum Approach to Synthetic Minority Oversampling Technique (SMOTE)
The paper proposes the Quantum-SMOTE method, a novel solution that uses quantum computing techniques to solve the prevalent problem of class imbalance in machine learning datasets.
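For reference, the classical SMOTE baseline that Quantum-SMOTE builds on synthesizes new minority-class points by interpolating between a minority sample and one of its nearest minority neighbours. A minimal classical sketch (the quantum variant is not reproduced here, and the point set is illustrative):

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Classical SMOTE oversampling.

    For each new point: pick a random minority sample x, find its
    k nearest minority neighbours (Euclidean), pick one neighbour,
    and interpolate a new point on the segment between them.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x within the minority class, excluding x itself
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        out.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return out

# Hypothetical minority class: corners of the unit square.
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_points = smote(minority, n_new=5)
```

Because each synthetic point is a convex combination of two existing minority points, all generated samples stay inside the minority class's bounding box, which is both SMOTE's strength and the limitation that variants such as Quantum-SMOTE aim to improve on.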