no code implementations • 12 Dec 2022 • Angzhi Fan, Benjamin Ticknor, Yali Amit
Third, we propose a whole reconstruction algorithm that generates the joint reconstruction of all objects in a hypothesized interpretation, taking occlusion ordering into account.
1 code implementation • 30 Sep 2021 • Mufeng Tang, Yibo Yang, Yali Amit
We develop biologically plausible training mechanisms for self-supervised learning (SSL) in deep networks.
no code implementations • 19 May 2021 • Zhisheng Xiao, Qing Yan, Yali Amit
Unsupervised outlier detection, which predicts if a test sample is an outlier or not using only the information from unlabelled inlier data, is an important but challenging task.
no code implementations • ICLR Workshop EBM 2021 • Zhisheng Xiao, Qing Yan, Yali Amit
Doing so allows us to study the density induced by the dynamics (when the dynamics are invertible) and to connect with GANs: the dynamics act as the generator, the initial values as latent variables, and the loss optimizes a critic defined by the very same energy whose gradient determines the generator.
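The reading of the dynamics as a generator can be illustrated with a toy quadratic energy. This is a minimal sketch, not the paper's model: the energy, step size, and step count below are all illustrative choices.

```python
import numpy as np

# Toy quadratic energy E(x) = 0.5 * ||x - mu||^2 (illustrative, not from the paper).
def energy(x, mu):
    return 0.5 * np.sum((x - mu) ** 2)

def grad_energy(x, mu):
    return x - mu

def generate(z, mu, step=0.1, T=50):
    """Treat the gradient dynamics as a generator: latent initial value z -> sample x."""
    x = z.copy()
    for _ in range(T):
        x = x - step * grad_energy(x, mu)
    return x

mu = np.array([1.0, -2.0])
z = np.random.default_rng(0).normal(size=2)  # latent variable = initial value
x = generate(z, mu)
# After many descent steps, x approaches the energy minimum at mu,
# so the sample is a deterministic function of the latent z.
```

With an invertible discretization of these dynamics, the change-of-variables formula would give the induced density of `x`, which is the connection the sentence above describes.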
no code implementations • 15 Jun 2020 • Zhisheng Xiao, Qing Yan, Yali Amit
In this paper, we present a general method that can improve the sample quality of pre-trained likelihood based generative models.
2 code implementations • NeurIPS 2020 • Zhisheng Xiao, Qing Yan, Yali Amit
An important application of generative modeling should be the ability to detect out-of-distribution (OOD) samples by setting a threshold on the likelihood.
Out-of-Distribution (OOD) Detection
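The likelihood-threshold baseline the snippet refers to can be sketched in a few lines. A diagonal Gaussian stands in here for a deep generative model, and the data, threshold percentile, and names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, size=(1000, 2))  # in-distribution data

# Stand-in generative model: a diagonal Gaussian fit to the inliers.
mu = train.mean(axis=0)
var = train.var(axis=0)

def log_lik(x):
    # Log density of the fitted diagonal Gaussian.
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

# Flag as OOD anything below the 1st percentile of training log-likelihoods.
threshold = np.percentile(log_lik(train), 1.0)

def is_ood(x):
    return log_lik(x) < threshold

inlier = np.array([0.2, -0.1])
outlier = np.array([8.0, 8.0])
```

The paper's point is that for deep generative models this simple thresholding can fail on OOD data, which motivates better scores than raw likelihood.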
no code implementations • 5 Nov 2019 • Zhisheng Xiao, Qing Yan, Yali Amit
In particular, we use our proposed method to analyze inverse problems with invertible neural networks by maximizing the posterior likelihood.
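Maximizing the posterior likelihood for an inverse problem can be sketched on a linear toy case. The forward map, noise level, prior, and step size below are illustrative assumptions, not the paper's setup; gradient ascent on the log posterior recovers the MAP estimate.

```python
import numpy as np

# Toy linear inverse problem: observe y = A x + noise, standard normal prior on x.
# log p(x | y) = -0.5 * ||y - A x||^2 / s2 - 0.5 * ||x||^2 + const
rng = np.random.default_rng(4)
A = rng.normal(size=(3, 2))        # illustrative forward operator
x_true = np.array([1.0, -1.0])
s2 = 0.01                          # observation noise variance
y = A @ x_true + np.sqrt(s2) * rng.normal(size=3)

def grad_log_posterior(x):
    # Gradient of the Gaussian likelihood term plus the Gaussian prior term.
    return A.T @ (y - A @ x) / s2 - x

# Plain gradient ascent on the log posterior.
x = np.zeros(2)
for _ in range(20000):
    x = x + 5e-4 * grad_log_posterior(x)
```

For this linear-Gaussian case the maximizer has the closed form `(A.T A / s2 + I)^{-1} A.T y / s2`, which the iterate converges to; with an invertible neural network in place of `A`, the same ascent is run through the network's gradient.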
1 code implementation • 24 May 2019 • Zhisheng Xiao, Qing Yan, Yali Amit
In this work, we propose the Generative Latent Flow (GLF), an algorithm for generative modeling of the data distribution.
Ranked #1 on Image Generation on Fashion-MNIST
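The two-stage structure behind a latent-density generative model like GLF can be sketched on toy data. This is a loose analogy, not the paper's algorithm: the "autoencoder" is PCA, and a one-dimensional Gaussian stands in for the normalizing flow on the latent space.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data near the line y = 2x, so a one-dimensional latent suffices.
t = rng.normal(size=(500, 1))
X = np.hstack([t, 2 * t]) + 0.01 * rng.normal(size=(500, 2))
mean = X.mean(axis=0)

# Stage 1: a linear "autoencoder" via PCA; the encoder projects onto the
# top principal direction and the decoder maps latents back.
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
w = Vt[0]

def encode(x):
    return (x - mean) @ w

def decode(z):
    return np.outer(z, w) + mean

# Stage 2: fit a density to the latent codes. GLF trains a normalizing
# flow here; a 1-D Gaussian is the simplest stand-in.
z = encode(X)
z_mu, z_std = z.mean(), z.std()

# Sampling: draw latents from the fitted density, then decode.
samples = decode(z_mu + z_std * rng.normal(size=200))
```

The generated points lie near the data line, showing the division of labor: the autoencoder captures the data manifold, the latent density model captures the distribution on it.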
1 code implementation • 19 Nov 2018 • Yali Amit
We show that similar performance is achieved with locally connected layers, which share the connectivity pattern implied by the convolutional layers but whose weights are untied and updated separately at each position.
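The tied-versus-untied distinction is easy to see in one dimension. The sketch below is illustrative (1-D, no padding or bias): a convolution applies one filter everywhere, while a locally connected layer keeps the same sliding-window connectivity but holds a separate filter per output position.

```python
import numpy as np

def conv1d(x, w):
    """Tied weights: the same filter w is applied at every position."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

def locally_connected1d(x, W):
    """Untied weights: one filter per output position (one row of W each)."""
    k = W.shape[1]
    return np.array([x[i:i + k] @ W[i] for i in range(len(x) - k + 1)])

x = np.arange(6.0)
w = np.array([1.0, -1.0, 0.5])
# If every row of W equals w, the locally connected layer reproduces the
# convolution exactly; training then lets the rows drift apart, since
# gradient updates are applied to each position's filter separately.
W = np.tile(w, (4, 1))
```

Initializing the untied layer with copies of one filter makes the two layers identical at the start, which is the natural setup for comparing their trained performance.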
no code implementations • 18 Dec 2017 • Jiajun Shen, Yali Amit
In this paper, we design a framework for training deformable classifiers: latent transformation variables are introduced, and a transformation of the object image to a reference instantiation is computed from the classifier output, separately for each class.
no code implementations • 16 Feb 2017 • Marc Goessling, Yali Amit
We present a new approach for learning compact and intuitive distributed representations with binary encoding.
no code implementations • 15 Nov 2015 • Marc Goessling, Yali Amit
We consider high-dimensional distribution estimation through autoregressive networks.
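An autoregressive network factors a high-dimensional distribution by the chain rule, p(x) = ∏_d p(x_d | x_<d). A minimal sketch for binary vectors, with illustrative fixed logistic conditionals rather than the learned networks of the paper:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def log_prob(x, W, b):
    """log p(x) under an autoregressive logistic model.
    W is strictly lower triangular, so p(x_d = 1 | ...) depends only on x_<d."""
    p = sigmoid(W @ x + b)  # per-dimension P(x_d = 1 | x_<d)
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

D = 4
rng = np.random.default_rng(3)
W = np.tril(rng.normal(size=(D, D)), k=-1)  # no dependence on self or future dims
b = np.zeros(D)

# Because each conditional is a valid probability, the 2^D configuration
# probabilities sum to one with no normalizing constant to compute.
total = sum(np.exp(log_prob(np.array(bits, dtype=float), W, b))
            for bits in np.ndindex(*(2,) * D))
```

The built-in normalization is what makes the autoregressive factorization attractive for high-dimensional estimation, at the cost of imposing an ordering on the dimensions.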
no code implementations • 11 Dec 2014 • Marc Goessling, Yali Amit
Learning compact and interpretable representations is a very natural task, which has not been solved satisfactorily even for simple binary datasets.