👾 A library of state-of-the-art pretrained models for Natural Language Processing (NLP)
We introduce a new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching.
SOTA for Image Generation on CIFAR-10
However, it has been so far limited to simple, shallow models or low-dimensional data, due to the difficulty of computing the Hessian of log-density functions.
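The sampling procedure described above — Langevin dynamics driven by an estimated score, i.e. the gradient of the log-density — can be sketched in a few lines. This is a toy illustration, not the paper's implementation: `score_fn` stands in for a score network trained with score matching, and the step size and step count are arbitrary.

```python
import numpy as np

def langevin_sample(score_fn, x0, step=1e-2, n_steps=100, rng=None):
    """Unadjusted Langevin dynamics:
        x_{t+1} = x_t + (step / 2) * score(x_t) + sqrt(step) * z,  z ~ N(0, I)
    where score_fn approximates grad_x log p(x) (illustrative stand-in
    for a score network trained with score matching)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        z = rng.standard_normal(x.shape)
        x = x + 0.5 * step * score_fn(x) + np.sqrt(step) * z
    return x

# Toy check: the standard Gaussian has the closed-form score(x) = -x,
# so the chain should mix toward N(0, 1) even when started far away.
rng = np.random.default_rng(0)
samples = np.array([langevin_sample(lambda x: -x, [5.0], step=0.1,
                                    n_steps=500, rng=rng)
                    for _ in range(200)])
```

With a learned score network in place of the closed-form score, the same update rule produces samples from the modeled data distribution.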
Recurrent Neural Networks have long been the dominant choice for sequence modeling.
SOTA for Music Modeling on Nottingham
Although our baseline system is a straightforward combination of standard methods, we obtain state-of-the-art results.
SOTA for 3D Multi-Object Tracking on KITTI
When trained with 10% of the labeled set, UDA improves the top-1/top-5 accuracy from 55.1%/77.3% to 68.7%/88.5%.
#2 best model for Semi-Supervised Image Classification on ImageNet - 10% labeled data
Unlike previous methods that use random noise such as Gaussian noise or dropout noise, UDA has a small twist in that it makes use of harder and more realistic noise generated by state-of-the-art data augmentation methods.
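The UDA objective pairs a standard supervised loss with a consistency term between predictions on an unlabeled example and on its augmented version. A minimal NumPy sketch of that structure follows; the function name, signature, and weighting are illustrative assumptions, and a real implementation would stop gradients through the clean-input prediction.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def uda_loss(logits_labeled, labels, logits_clean, logits_augmented, lam=1.0):
    """Hypothetical sketch of a UDA-style objective:
    supervised cross-entropy on labeled data, plus a KL consistency term
    pulling the prediction on an augmented unlabeled input toward the
    prediction on the clean input (which serves as a fixed target)."""
    # Supervised cross-entropy on the labeled batch.
    p = softmax(logits_labeled)
    ce = -np.log(p[np.arange(len(labels)), labels]).mean()
    # Consistency term: KL(p(y | x) || p(y | augment(x))).
    q = softmax(logits_clean)        # clean prediction (treated as target)
    q_aug = softmax(logits_augmented)  # prediction on the augmented input
    kl = (q * (np.log(q) - np.log(q_aug))).sum(axis=-1).mean()
    return ce + lam * kl

# Toy check: identical clean/augmented logits give zero consistency loss.
l = np.array([[2.0, 0.0]])
u = np.array([[1.0, -1.0]])
loss_same = uda_loss(l, np.array([0]), u, u)
loss_diff = uda_loss(l, np.array([0]), u, -u)
```

The point of using strong, realistic augmentations rather than random noise is that the consistency term then enforces invariance to exactly the transformations the augmentation policy deems label-preserving.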
Here, we propose a new two-stream CNN architecture for semantic segmentation that explicitly wires shape information into a separate processing branch, i.e., a shape stream, that processes information in parallel to the classical stream.
#2 best model for Semantic Segmentation on Cityscapes
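The two-stream idea — a classical (texture) stream and a parallel shape stream whose outputs are fused — can be sketched abstractly. Everything below is a hypothetical stand-in, not the paper's layers: the "streams" are toy image operators chosen only to show the parallel-branches-then-fuse pattern.

```python
import numpy as np

def two_stream_forward(image, classical_net, shape_net, fuse):
    """Illustrative two-stream forward pass: the classical stream and a
    separate shape stream process the same input in parallel, and their
    features are fused for the final prediction. All callables are
    hypothetical stand-ins for learned sub-networks."""
    classical_feats = classical_net(image)  # classical stream (texture cues)
    shape_feats = shape_net(image)          # shape stream (boundary cues)
    return fuse(classical_feats, shape_feats)

# Toy stand-ins: "classical" = horizontal box blur, "shape" = horizontal
# gradient magnitude (an edge-like cue), fusion = elementwise sum.
img = np.zeros((8, 8))
img[:, 4:] = 1.0  # a vertical step edge at column 4
blur = lambda x: (np.roll(x, 1, axis=1) + x + np.roll(x, -1, axis=1)) / 3.0
edges = lambda x: np.abs(np.diff(x, axis=1, prepend=x[:, :1]))
fused = two_stream_forward(img, blur, edges, lambda c, s: c + s)
```

In this toy, the fused map is strongest exactly at the step edge, which is the intuition behind letting a dedicated branch carry boundary information instead of mixing it into the classical stream.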