TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second

automl/tabpfn 5 Jul 2022

We present TabPFN, a trained Transformer that performs supervised classification on small tabular datasets in less than a second, requires no hyperparameter tuning, and is competitive with state-of-the-art classification methods.

AutoML Bayesian Inference +2

627
0.29 stars / hour

Versatile Diffusion: Text, Images and Variations All in One Diffusion Model

shi-labs/versatile-diffusion 15 Nov 2022

Through our experiments, we demonstrate that VD and its underlying framework have the following merits: a) VD handles all subtasks with competitive quality; b) VD enables novel extensions and applications such as disentanglement of style and semantics, image-text dual-guided generation, etc.

Disentanglement Image Captioning +4

607
0.29 stars / hour

High-Resolution Image Synthesis with Latent Diffusion Models

compvis/stable-diffusion CVPR 2022

By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond.

Denoising Image Inpainting +3

35,667
0.28 stars / hour

RenderDiffusion: Image Diffusion for 3D Reconstruction, Inpainting and Generation

anciukevicius/renderdiffusion 17 Nov 2022

In this paper, we present RenderDiffusion as the first diffusion model for 3D generation and inference that can be trained using only monocular 2D supervision.

3D Reconstruction Image Denoising +3

120
0.25 stars / hour

SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models

mit-han-lab/smoothquant 18 Nov 2022

We propose SmoothQuant, a training-free, accuracy-preserving, and general-purpose post-training quantization (PTQ) solution to enable 8-bit weight, 8-bit activation (W8A8) quantization for LLMs that can be implemented efficiently.

Quantization

104
0.21 stars / hour
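The core of SmoothQuant is a mathematical identity: a per-input-channel scale s moves dynamic range from the activations (which contain outlier channels) into the weights, so that (X / s)(s · W) = X · W exactly, while the scaled activations become far easier to quantize to 8 bits. A toy pure-Python sketch of that migration (illustrative only, not the mit-han-lab implementation; the matrix sizes, `ALPHA`, and helper names are assumptions):

```python
# Toy sketch of SmoothQuant's scale migration (not the mit-han-lab code).
# Idea: pick per-input-channel scales s_j so that dividing activations by s_j
# and multiplying the matching weight rows by s_j leaves X @ W unchanged,
# while shrinking the activation outliers that make W8A8 quantization hard.

ALPHA = 0.5  # migration strength, as in the paper

def matmul(A, B):
    """Plain matrix multiply on nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def smooth(X, W, alpha=ALPHA):
    """Return (X_s, W_s) with per-input-channel scales applied.
    X is n_tokens x n_in, W is n_in x n_out."""
    n_in = len(W)
    s = []
    for j in range(n_in):
        act_max = max(abs(row[j]) for row in X)   # per-channel activation range
        w_max = max(abs(w) for w in W[j])         # per-channel weight range
        s.append(act_max ** alpha / w_max ** (1 - alpha))
    X_s = [[x / s[j] for j, x in enumerate(row)] for row in X]
    W_s = [[w * s[j] for w in W[j]] for j in range(n_in)]
    return X_s, W_s

# Activations with an outlier channel (column 0), as LLMs commonly exhibit.
X = [[100.0, 0.5], [-80.0, 0.3]]
W = [[0.01, 0.02], [0.9, -0.4]]
X_s, W_s = smooth(X, W)

# The product is mathematically unchanged; only the ranges moved.
assert all(abs(a - b) < 1e-9
           for ra, rb in zip(matmul(X, W), matmul(X_s, W_s))
           for a, b in zip(ra, rb))
```

Because the transformation is exact in floating point, the accuracy cost comes only from the subsequent 8-bit rounding, which the smoothing makes benign for both weights and activations.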

Fast Sampling of Diffusion Models with Exponential Integrator

shivamshrirao/diffusers 29 Apr 2022

Our goal is to develop a fast sampling method for DMs that uses far fewer steps while retaining high sample quality.

947
0.21 stars / hour

OneFormer: One Transformer to Rule Universal Image Segmentation

SHI-Labs/OneFormer 10 Nov 2022

However, such panoptic architectures do not truly unify image segmentation because they need to be trained individually on semantic, instance, or panoptic segmentation to achieve the best performance.

Instance Segmentation Panoptic Segmentation +1

370
0.20 stars / hour

Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise

lucidrains/denoising-diffusion-pytorch 19 Aug 2022

We observe that the generative behavior of diffusion models is not strongly dependent on the choice of image degradation, and in fact an entire family of generative models can be constructed by varying this choice.

Image Restoration Variational Inference

2,592
0.20 stars / hour
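The claim that generation does not strongly depend on the degradation rests on Cold Diffusion's improved sampling rule, x_{t-1} = x_t − D(x̂₀, t) + D(x̂₀, t−1), which is stated for an arbitrary degradation operator D and restoration network R. A minimal sketch of that update, assuming a hypothetical linear-fade degradation and an oracle restorer standing in for the learned network:

```python
# Toy sketch of Cold Diffusion's improved sampling rule (illustrative; the
# degradation D, the oracle restorer, and all constants are assumptions, not
# the lucidrains implementation).

T = 10
G = 0.5  # "fully degraded" target value

def D(x0, t):
    """Toy degradation: linear fade from x0 at t=0 toward the constant G at t=T."""
    a = 1 - t / T
    return [a * v + (1 - a) * G for v in x0]

def R_oracle(x_t, t, x0):
    """Stand-in for the learned restoration network R(x_t, t) -> x0 estimate."""
    return x0

x0 = [0.9, -0.2, 0.4]
x = D(x0, T)  # start from the fully degraded state

for t in range(T, 0, -1):
    x0_hat = R_oracle(x, t, x0)
    # Improved update: x_{t-1} = x_t - D(x0_hat, t) + D(x0_hat, t-1).
    x = [xt - d_t + d_tm1
         for xt, d_t, d_tm1 in zip(x, D(x0_hat, t), D(x0_hat, t - 1))]

# With an exact restorer the iterate walks back along the degradation
# trajectory and recovers x0 precisely.
assert all(abs(a - b) < 1e-9 for a, b in zip(x, x0))
```

Note that only differences of D at adjacent timesteps enter the update, which is why the scheme tolerates imperfect restoration and degradations with no Gaussian noise at all (blur, masking, etc.).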

Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese

ofa-sys/chinese-clip 2 Nov 2022

The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining.

Contrastive Learning Image Classification +9

275
0.20 stars / hour

ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech

ubisoft/ubisoft-laforge-ZeroEGGS 15 Sep 2022

In a series of experiments, we first demonstrate the flexibility and generalizability of our model to new speakers and styles.

Gesture Generation

87
0.20 stars / hour