Knowledge representation learning (KRL) has been applied to a wide range of knowledge-driven tasks.
Following the great success of Vision Transformer (ViT) variants in computer vision, they have also demonstrated great potential in domain-adaptive semantic segmentation.
In this survey, we provide a comprehensive review of various Graph Transformer models from the architectural design perspective.
The proposed architecture, termed NICE-GAN, exhibits two advantages over previous approaches. First, it is more compact, since no independent encoding component is required. Second, the plug-in encoder is trained directly by the adversarial loss, making it more informative and more effectively trained when a multi-scale discriminator is applied.
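The key structural idea above, reusing the discriminator's feature extractor as the generator's encoder instead of keeping an independent encoder, can be sketched in a few lines. This is a hypothetical minimal illustration with tiny linear layers and invented names (`encode`, `discriminate`, `translate`), not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_FEAT = 8, 4

# One set of encoder weights is shared: there is no separate,
# independently trained encoder module (illustrative weights only).
W_enc = rng.standard_normal((D_IN, D_FEAT))   # shared encoder
W_cls = rng.standard_normal((D_FEAT, 1))      # discriminator head
W_dec = rng.standard_normal((D_FEAT, D_IN))   # decoder (generator) head

def encode(x):
    """Shared encoder: also serves as the discriminator's feature extractor."""
    return np.tanh(x @ W_enc)

def discriminate(x):
    """Real/fake score computed on top of the shared encoder features;
    gradients from the adversarial loss would flow into W_enc."""
    return encode(x) @ W_cls

def translate(x):
    """Generator reuses the SAME encoder, then decodes to the target domain."""
    return encode(x) @ W_dec

x = rng.standard_normal((2, D_IN))
print(discriminate(x).shape)  # score per sample
print(translate(x).shape)     # translated sample per input
```

Because `encode` sits inside `discriminate`, training the discriminator adversarially also shapes the encoder's features, which is the sense in which the encoder becomes a "plug-in" component trained by the adversarial loss.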