Given a quantum circuit, a quantum computer can sample from the output distribution exponentially faster, as a function of the number of bits, than a classical computer can.
We report new state-of-the-art results for conditional generation on CIFAR-10, using both a consistency loss and a contrastive loss as additional regularizers.
We introduce a simple (one line of code) modification to the Generative Adversarial Network (GAN) training algorithm that materially improves results with no increase in computational cost: when updating the generator parameters, we simply zero out the gradient contributions from the elements of the batch that the critic scores as "least realistic".
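The top-k modification described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the helper name `topk_generator_mask`, the toy scores, and the per-sample losses are all hypothetical, and in a real GAN the mask would multiply per-sample generator losses inside an autodiff framework.

```python
import numpy as np

def topk_generator_mask(critic_scores, k):
    """Hypothetical helper: keep only the k batch elements the critic
    scores as most realistic; zero out the gradient contribution of
    the rest by masking their per-sample losses."""
    scores = np.asarray(critic_scores, dtype=float)
    mask = np.zeros_like(scores)
    topk_idx = np.argsort(scores)[-k:]  # indices of the k highest critic scores
    mask[topk_idx] = 1.0
    return mask

# Toy batch: per-sample critic scores and per-sample generator losses.
scores = [0.9, 0.1, 0.6, 0.3]
mask = topk_generator_mask(scores, k=2)      # -> keeps samples 0 and 2
losses = np.array([1.0, 1.0, 1.0, 1.0])
masked_loss = (losses * mask).sum() / 2      # average over the kept samples
```

Because the discarded samples contribute exactly zero to the masked loss, their gradients vanish from the generator update, which is the "one line of code" effect the abstract refers to.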
Recent work has improved the performance of Generative Adversarial Networks (GANs) by enforcing a consistency cost on the discriminator.
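A consistency cost of this kind can be sketched in a few lines. The following is a toy illustration under stated assumptions, not the cited method: `discriminator` is a stand-in linear-plus-tanh scorer, `augment` is a horizontal flip standing in for a semantics-preserving data augmentation, and the squared-difference penalty is one common way to express the consistency term.

```python
import numpy as np

def discriminator(x, w):
    # Stand-in discriminator: linear score squashed by tanh.
    return float(np.tanh(np.dot(w, x.ravel())))

def augment(x):
    # Horizontal flip as a stand-in semantics-preserving augmentation.
    return np.flip(x, axis=-1)

def consistency_cost(x, w):
    # Penalize the discriminator for scoring an image and its
    # augmented version differently.
    return (discriminator(x, w) - discriminator(augment(x), w)) ** 2

x = np.array([[1.0, 2.0], [3.0, 4.0]])
w = np.array([1.0, 0.0, 0.0, 0.0])
cost = consistency_cost(x, w)  # nonzero: this w is sensitive to the flip
```

In training, this cost would be added to the usual discriminator loss, pushing the discriminator toward decisions that are invariant to the chosen augmentations.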
Broad adoption of machine learning techniques has increased privacy concerns for models trained on sensitive data such as medical records.
Because machine learning models are complex, it is hard to characterize the ways in which they can misbehave or be exploited when deployed.