In this work, we extend this approach to encoder-based inversion.
To handle language, images, and video simultaneously across different scenarios, we design a 3D transformer encoder-decoder framework that not only processes videos as 3D data but also adapts to texts and images as 1D and 2D data, respectively.
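As a rough illustration, the sketch below shows one way a shared encoder can consume all three modalities by flattening 1D tokens, 2D patches, and 3D spatio-temporal tubes into a common sequence format; the embedding size, patch shapes, and vocabulary size are hypothetical, not the paper's actual configuration.

```python
# A minimal sketch (assumed configuration, not the paper's exact architecture):
# one Transformer encoder shared across text (1D), image (2D), and video (3D).
import torch
import torch.nn as nn

class UnifiedEncoder(nn.Module):
    def __init__(self, dim=256, vocab=30522, heads=8, layers=4):
        super().__init__()
        self.text_embed = nn.Embedding(vocab, dim)            # 1D: token ids
        self.image_embed = nn.Conv2d(3, dim, 16, stride=16)   # 2D: 16x16 patches
        self.video_embed = nn.Conv3d(3, dim, (2, 16, 16),     # 3D: 2x16x16 tubes
                                     stride=(2, 16, 16))
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)

    def forward(self, x, modality):
        if modality == "text":              # x: (B, L) token ids
            tokens = self.text_embed(x)
        elif modality == "image":           # x: (B, 3, H, W)
            tokens = self.image_embed(x).flatten(2).transpose(1, 2)
        else:                               # x: (B, 3, T, H, W) video
            tokens = self.video_embed(x).flatten(2).transpose(1, 2)
        return self.encoder(tokens)         # (B, N, dim) for every modality

enc = UnifiedEncoder()
print(enc(torch.randint(0, 30522, (2, 12)), "text").shape)   # (2, 12, 256)
print(enc(torch.randn(2, 3, 64, 64), "image").shape)         # (2, 16, 256)
print(enc(torch.randn(2, 3, 8, 64, 64), "video").shape)      # (2, 64, 256)
```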
Our experiments show that the reparameterized VQ-Diffusion model is fifteen times faster than traditional autoregressive (AR) methods while achieving better image quality.
Due to the complex nature of this multimodal task, which combines text reasoning, video understanding, instance segmentation, and tracking, existing approaches typically rely on sophisticated pipelines.
We introduce a prototype model and provide an open-source and extensible toolkit called OpenUE for various extraction tasks.
Inspired by BERT, we devise a Masked Point Modeling (MPM) task to pre-train point cloud Transformers.
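The sketch below illustrates the general masked-modeling recipe under stated assumptions: the patch embeddings and discrete token targets are stubbed with random tensors, standing in for a point-patch embedder and a pre-trained point tokenizer. It is the generic BERT-style recipe, not the authors' exact model.

```python
# A minimal sketch of Masked Point Modeling: mask a subset of point-patch
# embeddings with a learnable token, then predict discrete point-token ids.
import torch
import torch.nn as nn

class MPMPretrainer(nn.Module):
    def __init__(self, dim=256, num_tokens=8192, heads=8, layers=4):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.head = nn.Linear(dim, num_tokens)   # predict discrete point tokens

    def forward(self, patch_feats, mask):
        # patch_feats: (B, N, dim) embedded point patches; mask: (B, N) bool
        x = torch.where(mask.unsqueeze(-1),
                        self.mask_token.expand_as(patch_feats), patch_feats)
        return self.head(self.encoder(x))         # (B, N, num_tokens) logits

model = MPMPretrainer()
feats = torch.randn(2, 64, 256)                   # stub patch embeddings
mask = torch.rand(2, 64) < 0.4                    # mask ~40% of patches
target = torch.randint(0, 8192, (2, 64))          # stub tokenizer ids
logits = model(feats, mask)
loss = nn.functional.cross_entropy(logits[mask], target[mask])
```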
Recent advances show that semi-supervised implicit representation learning can be achieved through physical constraints such as the Eikonal equation.
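The snippet below sketches the standard Eikonal regularizer on a signed-distance MLP, assuming the common formulation that penalizes the deviation of the gradient norm from one at sampled points; the network width and sampling scheme are illustrative, not taken from the paper.

```python
# A minimal Eikonal-constraint sketch: an MLP f(x) approximating a signed
# distance field is regularized so that ||grad f(x)|| is close to 1.
import torch
import torch.nn as nn

sdf = nn.Sequential(nn.Linear(3, 128), nn.Softplus(beta=100),
                    nn.Linear(128, 128), nn.Softplus(beta=100),
                    nn.Linear(128, 1))

def eikonal_loss(points):
    points = points.requires_grad_(True)
    dist = sdf(points)
    # create_graph=True so the penalty itself is differentiable for training
    grad, = torch.autograd.grad(dist.sum(), points, create_graph=True)
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()

pts = torch.rand(1024, 3) * 2 - 1      # random samples in [-1, 1]^3
loss = eikonal_loss(pts)               # added to the supervised loss terms
loss.backward()
```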
Long-tailed relation classification is a challenging problem because the head classes can dominate the training phase, degrading performance on the tail classes.
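As one illustrative baseline for this imbalance (a common mitigation, not the paper's proposed method), inverse-frequency class weights can be applied to the cross-entropy loss so that tail relations contribute more per example; the class counts below are hypothetical.

```python
# Inverse-frequency reweighting: a simple counter to head-class domination.
import torch
import torch.nn as nn

class_counts = torch.tensor([5000., 1200., 300., 40., 8.])   # hypothetical
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(16, 5)                  # model outputs for 16 examples
labels = torch.randint(0, 5, (16,))
loss = criterion(logits, labels)             # tail classes weighted up
```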
A reverse dictionary takes a description of a word as input and outputs words that semantically match the description.
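To make the task concrete, here is a toy sketch of the reverse-dictionary interface; the word-overlap scorer and the three-entry vocabulary are illustrative stand-ins for the learned sentence encoders real systems use.

```python
# A toy reverse dictionary: rank vocabulary entries by word overlap between
# the query description and each stored definition.
vocabulary = {
    "umbrella": "a device carried to protect against rain",
    "lantern": "a portable lamp enclosed to protect the flame",
    "anchor": "a heavy device that keeps a ship in place",
}

def reverse_lookup(description, top_k=2):
    query = set(description.lower().split())
    scores = {w: len(query & set(d.split())) for w, d in vocabulary.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(reverse_lookup("a device to protect against the rain"))
# ['umbrella', 'lantern'] -- 'umbrella' shares the most definition words
```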