This adversarial loss encourages the mapping to be diverse: a very wide range of anime images can be produced from a single content code.
Ranked #1 on Image-to-Image Translation on selfie2anime
In contrast to previous approaches, which either fail to generalize to arbitrary identities or fail to preserve attributes such as facial expression and gaze direction, our framework transfers the identity of an arbitrary source face onto an arbitrary target face while preserving the target face's attributes.
Ranked #1 on Face Swapping on FaceForensics++ (ID retrieval metric)
We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron (MLP) decoders.
Ranked #1 on Semantic Segmentation on COCO-Stuff full
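The decoder idea above, fusing multi-scale Transformer features with only per-pixel linear layers, can be sketched as follows. This is a hedged illustration, not the official SegFormer code: the function `mlp_decode`, the weight shapes, and the nearest-neighbour upsampling are all assumptions made for the sketch.

```python
import numpy as np

def mlp_decode(features, proj_ws, fuse_w, out_hw):
    """All-MLP decoder sketch (assumed shapes, not the official code).
    features: list of (C_i, H_i, W_i) multi-scale encoder outputs.
    proj_ws:  per-scale (D, C_i) linear projections to a shared dim D.
    fuse_w:   (K, len(features)*D) linear fusion producing K class logits."""
    H, W = out_hw
    upsampled = []
    for f, w in zip(features, proj_ws):
        _, h, w_sp = f.shape
        # Per-pixel linear (MLP) projection to the shared dimension D
        proj = np.einsum('dc,chw->dhw', w, f)
        # Nearest-neighbour upsample to the common output resolution
        proj = proj.repeat(H // h, axis=1).repeat(W // w_sp, axis=2)
        upsampled.append(proj)
    stacked = np.concatenate(upsampled, axis=0)      # (len(features)*D, H, W)
    # Final per-pixel linear layer over the fused features
    return np.einsum('kd,dhw->khw', fuse_w, stacked)

# usage: two feature scales, shared dim D=8, K=5 classes
rng = np.random.default_rng(0)
feats = [rng.standard_normal((16, 8, 8)), rng.standard_normal((32, 4, 4))]
proj_ws = [rng.standard_normal((8, 16)), rng.standard_normal((8, 32))]
fuse_w = rng.standard_normal((5, 16))
logits = mlp_decode(feats, proj_ws, fuse_w, (8, 8))  # per-pixel class logits
```

The point of the design is that all spatial aggregation is done by the hierarchical encoder, so the decoder can stay this lightweight.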
Games are abstractions of the real world, where artificial agents learn to compete and cooperate with other agents.
Blind face restoration usually relies on facial priors, such as facial geometry prior or reference prior, to restore realistic and faithful details.
To tackle this problem, we propose a framework with two components: a Fingerprint Estimation Network (FEN), which estimates a generative model (GM) fingerprint from a generated image and is trained with four constraints that encourage the fingerprint to have the desired properties, and a Parsing Network (PN), which predicts the network architecture and loss functions from the estimated fingerprints.
We propose Styleformer, a style-based generator for GANs that is convolution-free and purely transformer-based.
Ranked #1 on Image Generation on CelebA 64x64
Artificial Intelligence (AI), along with the recent progress in biomedical language understanding, is gradually changing medical practice.
Ranked #1 on Named Entity Recognition on CMeEE
To the best of our knowledge, this is the largest real-world interaction dataset for personalized recommendation.
Recently, it has been demonstrated that the performance of a deep convolutional neural network can be effectively improved by embedding an attention module into it.
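One common form of such an embedded attention module is channel attention in the squeeze-and-excitation style: pool the feature map spatially, pass the result through a small bottleneck MLP, and use the sigmoid output to reweight the channels. The sketch below is a minimal numpy illustration under assumed shapes; `channel_attention`, `w1`, and `w2` are names invented for this example, not from any particular paper's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation style channel attention (hedged sketch).
    feature_map: (C, H, W); w1: (C//r, C); w2: (C, C//r) for reduction r."""
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    squeezed = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then sigmoid gating in (0, 1)
    hidden = np.maximum(w1 @ squeezed, 0.0)
    gate = sigmoid(w2 @ hidden)
    # Rescale: reweight each channel of the original feature map
    return feature_map * gate[:, None, None]

# usage: C=8 channels, 4x4 spatial map, reduction r=4
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
y = channel_attention(x, w1, w2)  # same shape as x, channels rescaled
```

Because the gate lies in (0, 1), the module can only attenuate channels, which is why it can be dropped into an existing backbone with little risk of destabilizing it.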