Perceptual Distance
11 papers with code • 1 benchmark • 1 dataset
Benchmarks
These leaderboards are used to track progress in Perceptual Distance.
Most implemented papers
Diverse Image-to-Image Translation via Disentangled Representations
Our model takes the encoded content features extracted from a given input and the attribute vectors sampled from the attribute space to produce diverse outputs at test time.
DRIT++: Diverse Image-to-Image Translation via Disentangled Representations
In this work, we present an approach based on disentangled representation for generating diverse outputs without paired training images.
Palette: Image-to-Image Diffusion Models
We expect this standardized evaluation protocol to play a role in advancing image-to-image translation research.
Perceptual Adversarial Robustness: Defense Against Unseen Threat Models
We call this threat model the neural perceptual threat model (NPTM); it includes adversarial examples with a bounded neural perceptual distance (a neural network-based approximation of the true perceptual distance) to natural images.
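A neural perceptual distance of the kind described above is typically an LPIPS-style metric: both images are passed through a feature network, the features are unit-normalized per spatial location, and squared differences are averaged across layers. The following is a minimal sketch of that recipe, substituting a toy two-layer random-filter network for the pretrained network the paper assumes; all shapes and names here are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_relu(x, kernels):
    """Valid 2-D convolution of an (H, W, C_in) image with (k, k, C_in, C_out)
    kernels, followed by ReLU."""
    k = kernels.shape[0]
    H, W, _ = x.shape
    out = np.zeros((H - k + 1, W - k + 1, kernels.shape[-1]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k, :]
            out[i, j, :] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return np.maximum(out, 0.0)

# Toy random "feature extractor" standing in for a pretrained network.
K1 = 0.1 * rng.standard_normal((3, 3, 3, 8))
K2 = 0.1 * rng.standard_normal((3, 3, 8, 16))

def features(img):
    f1 = conv_relu(img, K1)
    f2 = conv_relu(f1, K2)
    return [f1, f2]

def neural_perceptual_distance(a, b):
    """Sum over layers of the mean squared difference between
    channel-unit-normalized feature maps (LPIPS-style)."""
    d = 0.0
    for fa, fb in zip(features(a), features(b)):
        fa = fa / (np.linalg.norm(fa, axis=-1, keepdims=True) + 1e-8)
        fb = fb / (np.linalg.norm(fb, axis=-1, keepdims=True) + 1e-8)
        d += np.mean((fa - fb) ** 2)
    return d

img = rng.random((16, 16, 3))
```

The NPTM then bounds `neural_perceptual_distance(adversarial, natural)` rather than an L-p norm of the pixel perturbation.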
Guetzli: Perceptually Guided JPEG Encoder
Guetzli is a new JPEG encoder that aims to produce visually indistinguishable images at a lower bit-rate than other common JPEG encoders.
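The core idea of perceptually guided encoding can be sketched as a search over coding parameters for the coarsest (lowest-bitrate) setting whose perceptual distortion stays under a budget. The toy code below uses uniform quantization as a stand-in codec and MSE as a stand-in metric (Guetzli itself uses JPEG coding guided by the butteraugli metric); every name here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize(img, step):
    """Toy lossy codec: uniform quantization; coarser steps mean fewer bits."""
    return np.round(img / step) * step

def distortion(a, b):
    """Stand-in for a perceptual metric (Guetzli uses butteraugli)."""
    return np.mean((a - b) ** 2)

def pick_coarsest_step(img, budget, steps):
    """Return the coarsest quantization step (i.e. lowest bitrate) whose
    distortion still stays within the perceptual budget."""
    best = steps[0]
    for s in steps:  # steps sorted fine -> coarse
        if distortion(img, quantize(img, s)) <= budget:
            best = s
    return best

img = rng.random((32, 32))
steps = [0.01, 0.05, 0.1, 0.5]
budget = 5e-3
best = pick_coarsest_step(img, budget, steps)
```

A real encoder searches a much richer parameter space (per-block quantization tables), but the stopping criterion, "visually indistinguishable under the metric, at minimal rate", is the same.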
Towards Visual Distortion in Black-Box Attacks
Constructing adversarial examples in a black-box threat model degrades the original images by introducing visual distortion.
Where and What? Examining Interpretable Disentangled Representations
We thus perturb a single dimension of the latent code and expect the perturbation along that dimension to be identifiable from the generated images, so that the encoding of simple variations is enforced.
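The "perturb one latent dimension, then identify it from the outputs" step can be illustrated with a toy linear generator: the image difference is projected back through the generator's pseudo-inverse, and the dimension with the largest recovered change is reported. This is a hypothetical stand-in for the paper's learned identifier network, not its actual method.

```python
import numpy as np

rng = np.random.default_rng(1)
latent_dim, img_dim = 8, 64

# Toy linear "generator" standing in for a trained decoder.
W = rng.standard_normal((img_dim, latent_dim))

def generate(z):
    return W @ z

def identify_perturbed_dim(img_base, img_pert):
    """Infer which latent dimension was perturbed by projecting the image
    difference back through the generator's pseudo-inverse."""
    z_diff = np.linalg.pinv(W) @ (img_pert - img_base)
    return int(np.argmax(np.abs(z_diff)))

z = rng.standard_normal(latent_dim)
k = 3                      # perturb latent dimension 3
z_pert = z.copy()
z_pert[k] += 0.5
recovered = identify_perturbed_dim(generate(z), generate(z_pert))
```

If the perturbed dimension can be reliably recovered like this, the latent code is (in this toy sense) disentangled along that dimension.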
Towards Better Robustness against Common Corruptions for Unsupervised Domain Adaptation
Recent studies have investigated how to achieve robustness for unsupervised domain adaptation (UDA).
Adversarial Image Generation by Spatial Transformation in Perceptual Colorspaces
Deep neural networks are known to be vulnerable to adversarial perturbations.
HWD: A Novel Evaluation Score for Styled Handwritten Text Generation
Through extensive experimental evaluation on different word-level and line-level datasets of handwritten text images, we demonstrate the suitability of the proposed HWD as a score for Styled HTG.