NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion

microsoft/nuwa 24 Nov 2021

To cover language, image, and video simultaneously across different scenarios, a 3D transformer encoder-decoder framework is designed that not only handles videos as 3D data but also adapts to texts and images as 1D and 2D data, respectively.

Text-to-Image Generation Video Generation +1

5.49 stars / hour

MetaFormer is Actually What You Need for Vision

sail-sg/poolformer 22 Nov 2021

Based on this observation, we hypothesize that the general architecture of Transformers, rather than the specific token-mixer module, is more essential to the model's performance.

Image Classification Semantic Segmentation

2.04 stars / hour
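The PoolFormer idea above can be sketched concretely: keep the MetaFormer block structure (token mixing plus channel MLP, each with a residual) but use plain spatial average pooling as the token mixer. This is an illustrative numpy sketch, not the paper's implementation; the channel-MLP stand-in and the lack of norms/learned weights are simplifications.

```python
import numpy as np

def pool_token_mixer(x, pool_size=3):
    """PoolFormer-style token mixer: replace each token with the mean of its
    spatial neighborhood, minus the token itself (the subtraction compensates
    for the block's residual connection). x: (H, W, C)."""
    H, W, C = x.shape
    pad = pool_size // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = xp[i:i + pool_size, j:j + pool_size].mean(axis=(0, 1))
    return out - x

def metaformer_block(x, pool_size=3):
    """One MetaFormer block: token mixing + a channel-wise map, each residual.
    The tanh channel map is an arbitrary stand-in for the learned MLP, so this
    is only a structural sketch of the architecture, not a trained model."""
    x = x + pool_token_mixer(x, pool_size)  # token-mixing sub-block
    x = x + np.tanh(x)                      # stand-in for the channel MLP
    return x
```

The point of the sketch is that nothing here is attention: the block shape alone supplies the token mixing/channel mixing split the hypothesis attributes the performance to.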

PaddleViT: State-of-the-art Visual Transformer and MLP Models for PaddlePaddle 2.0+

BR-IDL/PaddleViT NeurIPS 2021

Semantic Segmentation

1.85 stars / hour

Resolution-robust Large Mask Inpainting with Fourier Convolutions

saic-mdal/lama 15 Sep 2021

We find that one of the main reasons for that is the lack of an effective receptive field in both the inpainting network and the loss function.

Image Inpainting LAMA

1.56 stars / hour

KML: Using Machine Learning to Improve Storage Systems

sbu-fsl/kernel-ml 22 Nov 2021

Operating systems include many heuristic algorithms designed to improve overall storage performance and throughput.

1.39 stars / hour

Attention Mechanisms in Computer Vision: A Survey

MenghaoGuo/Awesome-Vision-Attentions 15 Nov 2021

Humans can naturally and effectively find salient regions in complex scenes.

Image Classification Image Generation +4

1.14 stars / hour

Direct Voxel Grid Optimization: Super-fast Convergence for Radiance Fields Reconstruction

sunset1995/directvoxgo 22 Nov 2021

Finally, evaluation on five inward-facing benchmarks shows that our method matches, if not surpasses, NeRF's quality, yet it only takes about 15 minutes to train from scratch for a new scene.

Novel View Synthesis

1.11 stars / hour
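The speed of direct voxel-grid methods like the one above comes from replacing an MLP query with interpolation into a dense grid. A hedged sketch of that core lookup, with trilinear interpolation written out explicitly (the function name and layout are illustrative, not the paper's API):

```python
import numpy as np

def trilinear_query(grid, pts):
    """Query a dense voxel grid at continuous 3D points via trilinear
    interpolation -- the core operation when a radiance field is stored
    directly as an optimizable voxel grid instead of an MLP.
    grid: (Nx, Ny, Nz) scalar values; pts: (M, 3) coords in voxel units."""
    lo = np.floor(pts).astype(int)
    lo = np.clip(lo, 0, np.array(grid.shape) - 2)  # keep the 8-corner cell in bounds
    t = pts - lo                                   # fractional offset in the cell
    out = np.zeros(len(pts))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # weight = product of per-axis linear weights for this corner
                w = (np.where(dx, t[:, 0], 1 - t[:, 0])
                     * np.where(dy, t[:, 1], 1 - t[:, 1])
                     * np.where(dz, t[:, 2], 1 - t[:, 2]))
                out += w * grid[lo[:, 0] + dx, lo[:, 1] + dy, lo[:, 2] + dz]
    return out
```

Because the query is a handful of array reads and multiplies, gradients flow straight into the grid values, which is what lets such models fit a new scene in minutes rather than hours.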

Masked Autoencoders Are Scalable Vision Learners

pengzhiliang/MAE-pytorch 11 Nov 2021

Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels.

Object Detection Self-Supervised Image Classification +2

1.01 stars / hour
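The masking step MAE describes is simple enough to sketch directly: split the image into non-overlapping patches and keep only a random subset. This is an illustrative numpy version (function name and return layout are our own); in the real pipeline the encoder sees only the visible patches and a lightweight decoder reconstructs the masked pixels.

```python
import numpy as np

def random_mask_patches(img, patch=4, mask_ratio=0.75, seed=0):
    """Split an (H, W, C) image into non-overlapping patches and mask a random
    subset, MAE-style. Returns the visible (flattened) patches, their indices,
    and a boolean mask where True marks a masked patch."""
    H, W, C = img.shape
    ph, pw = H // patch, W // patch
    # (ph, pw, patch, patch, C) -> one row per patch, flattened pixels
    patches = img.reshape(ph, patch, pw, patch, C).transpose(0, 2, 1, 3, 4)
    patches = patches.reshape(ph * pw, patch * patch * C)
    rng = np.random.default_rng(seed)
    n_keep = int(round(len(patches) * (1 - mask_ratio)))
    keep = rng.permutation(len(patches))[:n_keep]
    mask = np.ones(len(patches), dtype=bool)
    mask[keep] = False  # False = visible to the encoder, True = masked
    return patches[keep], keep, mask
```

With the default 75% ratio, only a quarter of the patches reach the encoder, which is where MAE's training-time savings come from.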

BlendGAN: Implicitly GAN Blending for Arbitrary Stylized Face Generation

onion-liu/BlendGAN NeurIPS 2021

Specifically, we first train a self-supervised style encoder on the generic artistic dataset to extract the representations of arbitrary styles.

Face Generation

0.94 stars / hour

Investigating Tradeoffs in Real-World Video Super-Resolution

ckkelvinchan/realbasicvsr 24 Nov 2021

The diversity and complexity of degradations in real-world video super-resolution (VSR) pose non-trivial challenges in inference and training.

Video Super-Resolution

0.82 stars / hour