no code implementations • 13 Aug 2024 • Jian Xu, Delu Zeng, John Paisley
To mitigate these issues, we adopt natural gradient methods from information geometry to optimize the variational parameters of Student-t processes.
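As a quick sketch (standard notation, not necessarily the paper's), the natural gradient preconditions the ordinary gradient of the variational objective $\mathcal{L}$ with the inverse Fisher information of the variational family $q_\lambda$:

$\lambda_{t+1} = \lambda_t + \rho_t\, F(\lambda_t)^{-1} \nabla_\lambda \mathcal{L}(\lambda_t), \qquad F(\lambda) = \mathbb{E}_{q_\lambda}\big[\nabla_\lambda \log q_\lambda(z)\, \nabla_\lambda \log q_\lambda(z)^\top\big].$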
no code implementations • 12 Aug 2024 • Jian Xu, Zhiqi Lin, Min Chen, Junmei Yang, Delu Zeng, John Paisley
Traditional deep Gaussian processes model the data evolution using a discrete hierarchy, whereas differential Gaussian processes (DIFFGPs) represent the evolution as an infinitely deep Gaussian process.
no code implementations • 7 Aug 2024 • Jian Xu, Zhiqi Lin, Shigui Li, Min Chen, Junmei Yang, Delu Zeng, John Paisley
Bayesian Last Layer (BLL) models focus solely on uncertainty in the output layer of neural networks, demonstrating comparable performance to more complex Bayesian models.
1 code implementation • 24 Jul 2024 • Jian Xu, Delu Zeng, John Paisley
In DGPs, a set of sparse integration locations called inducing points are selected to approximate the posterior distribution of the model.
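As a brief sketch of the sparse variational construction this relies on (standard notation, not specific to this paper): with $M$ inducing inputs $Z$ and $u = f(Z)$, the approximate posterior is taken to be

$q(f, u) = p(f \mid u)\, q(u), \qquad q(u) = \mathcal{N}(m, S),$

which reduces the per-layer cost of inference from $O(N^3)$ to $O(NM^2)$ for $N$ training points.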
1 code implementation • 6 Jul 2024 • Wei Chen, Shian Du, Shigui Li, Delu Zeng, John Paisley
Normalizing Flows (NFs) have gained popularity among deep generative models due to their ability to provide exact likelihood estimation and efficient sampling.
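For intuition, here is a minimal sketch of the change-of-variables formula behind that exact likelihood (a toy 1-D affine flow of our own, not the paper's model):

```python
import numpy as np

# If z = f(x) is invertible with base density p_Z, then
#   log p_X(x) = log p_Z(f(x)) + log |det df/dx|.
def affine_flow_logprob(x, scale, shift):
    z = scale * x + shift                         # forward transform f(x)
    log_det = np.log(np.abs(scale))               # log |df/dx| for an affine map
    log_pz = -0.5 * (z ** 2 + np.log(2 * np.pi))  # standard normal base density
    return log_pz + log_det                       # exact log-likelihood of x

print(affine_flow_logprob(x=1.3, scale=2.0, shift=-0.5))
```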
1 code implementation • 19 Feb 2024 • Wei Zhang, Brian Barr, John Paisley
Deep neural networks have revolutionized many fields, but their black-box nature can prevent wider adoption in domains such as healthcare and finance, where interpretable and explainable models are required.
no code implementations • 17 Sep 2023 • Jian Xu, Shian Du, Junmei Yang, Xinghao Ding, John Paisley, Delu Zeng
Bayesian inference for these models has been extensively studied and applied in tasks such as time series prediction.
no code implementations • 14 Mar 2023 • Arunesh Mittal, Kai Yang, Paul Sajda, John Paisley
Several approximate inference methods have been proposed for deep discrete latent variable models.
1 code implementation • 8 Mar 2023 • San Gultekin, Brendan Kitts, Aaron Flores, John Paisley
The widely used parametric approximation is based on a jointly Gaussian assumption of the state-space model, which is in turn equivalent to minimizing an approximation to the Kullback-Leibler divergence.
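One standard fact behind this equivalence (stated here for context, in the forward-KL direction): the best Gaussian approximation $q = \mathcal{N}(\mu, \Sigma)$ to a density $p$ under $\mathrm{KL}(p \,\|\, q)$ is given by moment matching, $\mu = \mathbb{E}_p[x]$ and $\Sigma = \mathrm{Cov}_p[x]$.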
no code implementations • 1 Nov 2021 • Huangxing Lin, Yihong Zhuang, Delu Zeng, Yue Huang, Xinghao Ding, John Paisley
Specifically, we treat the output of the network as a "prior" that we denoise again after "re-noising".
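A conceptual sketch of that step in our own pseudocode (the `denoiser` network and the Gaussian noise model are placeholders, not the paper's exact procedure):

```python
import torch

def renoise_and_denoise(denoiser, noisy_img, sigma=0.1):
    prior = denoiser(noisy_img)                         # first pass: treat the output as a "prior"
    renoised = prior + sigma * torch.randn_like(prior)  # "re-noise" the prior
    return denoiser(renoised)                           # denoise again

# e.g., with an identity placeholder standing in for a trained network:
print(renoise_and_denoise(lambda x: x, torch.zeros(1, 3, 8, 8)).shape)
```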
no code implementations • 24 Sep 2021 • Elizabeth A. Gibson, Sebastian T. Rowland, Jeff Goldsmith, John Paisley, Julie B. Herbstman, Marianthi-Anna Kioumourtzoglou
Environmental health researchers may aim to identify exposure patterns that represent sources, product use, or behaviors that give rise to mixtures of potentially harmful environmental chemical exposures.
no code implementations • 30 Nov 2020 • Huangxing Lin, Yihong Zhuang, Yue Huang, Xinghao Ding, Yizhou Yu, Xiaoqing Liu, John Paisley
Coupling the noisy data output from ADANI with the corresponding ground-truth, a denoising CNN is then trained in a fully-supervised manner.
no code implementations • 14 Nov 2020 • Arunesh Mittal, Scott Linderman, John Paisley, Paul Sajda
We evaluate our method on the ADNI2 dataset by inferring latent state patterns corresponding to altered neural circuits in individuals with Mild Cognitive Impairment (MCI).
no code implementations • 9 Nov 2020 • Arunesh Mittal, Paul Sajda, John Paisley
We propose a deep generative factor analysis model with beta process prior that can approximate complex non-factorial distributions over the latent codes.
no code implementations • 3 Dec 2019 • Huangxing Lin, Weihong Zeng, Xinghao Ding, Xueyang Fu, Yue Huang, John Paisley
Using the new image pair, the denoising network learns to generate clean and high-quality images from noisy observations.
no code implementations • 2 Dec 2019 • San Gultekin, John Paisley
Bipartite ranking is an important supervised learning problem; however, unlike regression or classification, it has a quadratic dependence on the number of samples.
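To see the quadratic cost concretely (a toy example of ours, not the paper's estimator), the empirical AUC compares every positive score against every negative score:

```python
import numpy as np

def empirical_auc(pos_scores, neg_scores):
    # O(n_pos * n_neg) pairwise comparisons -- quadratic in the sample size
    wins = sum(p > n for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

rng = np.random.default_rng(0)
print(empirical_auc(rng.normal(1.0, 1.0, 100), rng.normal(0.0, 1.0, 100)))
```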
1 code implementation • NeurIPS 2019 • Tao Tu, John Paisley, Stefan Haufe, Paul Sajda
In this study, we develop a linear state-space model to infer the effective connectivity in a distributed brain network based on simultaneously recorded EEG and fMRI data.
1 code implementation • 30 Nov 2019 • Huangxing Lin, Weihong Zeng, Xinghao Ding, Yue Huang, Chenxi Huang, John Paisley
The uncertainty of the descent path helps the model avoid saddle points and bad local minima.
no code implementations • NeurIPS 2019 • Jeremiah Zhe Liu, John Paisley, Marianthi-Anna Kioumourtzoglou, Brent Coull
We introduce a Bayesian nonparametric ensemble (BNE) approach that augments an existing ensemble model to account for different sources of model uncertainty.
1 code implementation • 13 Jun 2019 • Adji B. Dieng, John Paisley
The typical workaround is to use variational inference (VI) and maximize a lower bound to the log marginal likelihood of the data.
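Concretely, the bound in question is the evidence lower bound (ELBO), obtained from Jensen's inequality:

$\log p(x) = \log \int p(x, z)\, dz \;\geq\; \mathbb{E}_{q(z)}[\log p(x, z)] - \mathbb{E}_{q(z)}[\log q(z)].$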
1 code implementation • 9 May 2019 • Aonan Zhang, John Paisley
The likelihood model of high-dimensional data $X_n$ can often be expressed as $p(X_n|Z_n,\theta)$, where $\theta := (\theta_k)_{k\in[K]}$ is a collection of hidden features shared across objects, which are indexed by $n$. Here $Z_n$ is a non-negative factor loading vector with $K$ entries, where $Z_{nk}$ indicates the strength of $\theta_k$ used to express $X_n$.
no code implementations • 9 Apr 2019 • Huangxing Lin, Yanlong Li, Xinghao Ding, Weihong Zeng, Yue Huang, John Paisley
We present a supervised technique for learning to remove rain from images without using synthetic rain software.
1 code implementation • 6 Feb 2019 • Mark Ibrahim, Melissa Louie, Ceena Modarres, John Paisley
A barrier to the wider adoption of neural networks is their lack of interpretability.
no code implementations • 23 Dec 2018 • Ghazal Fazelnia, Mark Ibrahim, Ceena Modarres, Kevin Wu, John Paisley
Models for sequential data, such as the recurrent neural network (RNN), often implicitly assume a fixed time interval between observations and do not account for group-level effects when multiple sequences are observed.
no code implementations • 8 Dec 2018 • Jeremiah Zhe Liu, John Paisley, Marianthi-Anna Kioumourtzoglou, Brent A. Coull
Ensemble learning is a mainstay in modern data science practice.
no code implementations • 21 Nov 2018 • Xueyang Fu, Qi Qi, Yue Huang, Xinghao Ding, Feng Wu, John Paisley
We propose a simple yet effective deep tree-structured fusion model based on feature aggregation for the deraining problem.
no code implementations • 15 Nov 2018 • Ceena Modarres, Mark Ibrahim, Melissa Louie, John Paisley
Deep learning adoption in the financial services industry has been limited due to a lack of model interpretability.
no code implementations • 25 Oct 2018 • Liyan Sun, Jiexiang Wang, Yue Huang, Xinghao Ding, Hayit Greenspan, John Paisley
Providing a "normal" counterpart to a medical image can supply useful side information for medical imaging tasks like lesion segmentation or classification, as validated by our experiments.
1 code implementation • 10 Oct 2018 • Aonan Zhang, Quan Wang, Zhenyao Zhu, John Paisley, Chong Wang
In this paper, we propose a fully supervised speaker diarization approach, named unbounded interleaved-state recurrent neural networks (UIS-RNN).
Ranked #1 on Speaker Diarization on Hub5'00 CallHome
no code implementations • ICML 2018 • Aonan Zhang, John Paisley
Time-series data often exhibit irregular behavior, making them hard to analyze and explain with a simple dynamic model.
no code implementations • ICML 2018 • Ghazal Fazelnia, John Paisley
In this paper, we introduce a new approach to solving the variational inference optimization based on convex relaxation and semidefinite programming.
no code implementations • 29 May 2018 • San Gultekin, Avishek Saha, Adwait Ratnaparkhi, John Paisley
Area under the receiver operating characteristics curve (AUC) is an important metric for a wide range of signal processing and machine learning problems, and scalable methods for optimizing AUC have recently been proposed.
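As a generic illustration of the scalability issue and one common workaround (our sketch, not the method proposed here), a pairwise surrogate loss can be evaluated on mini-batches of positive/negative pairs rather than on all pairs:

```python
import numpy as np

def pairwise_logistic_loss(w, X_pos, X_neg):
    # margins[i, j] = score(positive i) - score(negative j)
    margins = X_pos @ w[:, None] - (X_neg @ w[:, None]).T
    return np.mean(np.log1p(np.exp(-margins)))  # smooth surrogate for 1 - AUC

rng = np.random.default_rng(0)
w = rng.normal(size=5)
X_pos = rng.normal(size=(32, 5)) + 0.5  # mini-batch of positives
X_neg = rng.normal(size=(32, 5))        # mini-batch of negatives
print(pairwise_logistic_loss(w, X_pos, X_neg))
```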
no code implementations • 16 May 2018 • Xueyang Fu, Borong Liang, Yue Huang, Xinghao Ding, John Paisley
In this paper, we propose a lightweight pyramid of networks (LPNet) for single image deraining.
1 code implementation • 15 May 2018 • Delu Zeng, Yixuan He, Li Liu, Zhihong Chen, Jiabin Huang, Jie Chen, John Paisley
In this paper, we propose an end-to-end generic salient object segmentation model called Metric Expression Network (MEnet) to deal with saliency detection with the tolerance of distortion.
no code implementations • 6 May 2018 • Liyan Sun, Zhiwen Fan, Yue Huang, Xinghao Ding, John Paisley
The need for fast acquisition and automatic analysis of MRI data is growing in the age of big data.
no code implementations • 10 Apr 2018 • Liyan Sun, Zhiwen Fan, Yue Huang, Xinghao Ding, John Paisley
In multi-contrast magnetic resonance imaging (MRI), compressed sensing theory can accelerate imaging by sampling fewer measurements within each contrast.
no code implementations • ECCV 2018 • Zhiwen Fan, Liyan Sun, Xinghao Ding, Yue Huang, Congbo Cai, John Paisley
In this paper, we propose a segmentation-aware deep fusion network called SADFN for compressed sensing MRI.
no code implementations • 27 Mar 2018 • Liyan Sun, Zhiwen Fan, Xinghao Ding, Congbo Cai, Yue Huang, John Paisley
Compressed sensing (CS) theory assures us that we can accurately reconstruct magnetic resonance images using fewer k-space measurements than the Nyquist sampling rate requires.
no code implementations • 23 Mar 2018 • Liyan Sun, Zhiwen Fan, Yue Huang, Xinghao Ding, John Paisley
Existing CS-MRI algorithms can serve as the template module for guiding the reconstruction.
no code implementations • 23 Dec 2017 • San Gultekin, John Paisley
In this paper we consider the problem of forecasting high-dimensional time series.
no code implementations • NeurIPS 2017 • Adji Bousso Dieng, Dustin Tran, Rajesh Ranganath, John Paisley, David Blei
In this paper we propose CHIVI, a black-box variational inference algorithm that minimizes $D_{\chi}(p || q)$, the $\chi$-divergence from $p$ to $q$.
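For reference, the $\chi^2$ instance of this divergence is

$D_{\chi^2}(p \,\|\, q) = \int \frac{\big(p(z \mid x) - q(z)\big)^2}{q(z)}\, dz = \mathbb{E}_{q}\!\left[\Big(\frac{p(z \mid x)}{q(z)}\Big)^{2}\right] - 1,$

whose minimization encourages $q$ to cover the mass of $p$, in contrast to the mode-seeking behavior of the usual $\mathrm{KL}(q \,\|\, p)$ objective.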
no code implementations • ICCV 2017 • Junfeng Yang, Xueyang Fu, Yuwen Hu, Yue Huang, Xinghao Ding, John Paisley
We incorporate domain-specific knowledge to design our PanNet architecture by focusing on the two aims of the pan-sharpening problem: spectral and spatial preservation.
no code implementations • 2 Jul 2017 • Shiliang Sun, John Paisley, Qiuyang Liu
Dirichlet processes (DPs) are widely applied in Bayesian nonparametric modeling.
no code implementations • CVPR 2017 • Xueyang Fu, Jia-Bin Huang, Delu Zeng, Yue Huang, Xinghao Ding, John Paisley
We propose a new deep network architecture for removing rain streaks from individual images based on the deep convolutional neural network (CNN).
1 code implementation • 1 May 2017 • Xiangyong Cao, Feng Zhou, Lin Xu, Deyu Meng, Zongben Xu, John Paisley
This paper presents a new supervised classification algorithm for remotely sensed hyperspectral image (HSI) which integrates spectral and spatial information in a unified Bayesian framework.
Ranked #13 on Hyperspectral Image Classification on Indian Pines (Overall Accuracy metric, using extra training data)
no code implementations • 1 May 2017 • San Gultekin, John Paisley
We consider the nonlinear Kalman filtering problem using Kullback-Leibler (KL) and $\alpha$-divergence measures as optimization criteria.
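For reference, one common (Amari) parameterization of the $\alpha$-divergence is

$D_{\alpha}(p \,\|\, q) = \frac{1}{\alpha(1 - \alpha)}\Big(1 - \int p(x)^{\alpha}\, q(x)^{1 - \alpha}\, dx\Big),$

which recovers $\mathrm{KL}(p \,\|\, q)$ as $\alpha \to 1$ and $\mathrm{KL}(q \,\|\, p)$ as $\alpha \to 0$.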
1 code implementation • 5 Nov 2016 • Adji B. Dieng, Chong Wang, Jianfeng Gao, John Paisley
The proposed TopicRNN model integrates the merits of RNNs and latent topic models: it captures local (syntactic) dependencies using an RNN and global (semantic) dependencies using latent topics.
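Schematically (notation ours, following the paper's description), the next-word logits combine the two components as $W h_t + (1 - l_t)\, B\, \theta$, where $h_t$ is the RNN hidden state, $\theta$ the document's latent topic vector, and $l_t$ a binary indicator that switches the topic contribution off for stop words.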
2 code implementations • 7 Sep 2016 • Xueyang Fu, Jia-Bin Huang, Xinghao Ding, Yinghao Liao, John Paisley
We introduce a deep network architecture called DerainNet for removing rain streaks from an image.
Ranked #11 on Single Image Deraining on Test100 (SSIM metric)
no code implementations • ICCV 2015 • Yiyong Jiang, Xinghao Ding, Delu Zeng, Yue Huang, John Paisley
Our objective incorporates the $L_{1/2}$-norm in a way that can leverage recent computationally efficient methods, and the $L_1$-norm, for which the alternating direction method of multipliers can be used.
1 code implementation • 10 Jun 2015 • Aaron Schein, John Paisley, David M. Blei, Hanna Wallach
We demonstrate that our model's predictive performance is better than that of standard non-negative tensor factorization methods.
no code implementations • 25 May 2015 • San Gultekin, Aonan Zhang, John Paisley
We empirically evaluate a stochastic annealing strategy for Bayesian posterior optimization with variational inference.
no code implementations • 22 Jan 2015 • San Gultekin, John Paisley
Using the matrix factorization approach to collaborative filtering, the collaborative Kalman filter (CKF) accounts for time evolution by modeling each low-dimensional latent embedding as a multidimensional Brownian motion.
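Schematically (our notation, not necessarily the paper's), each latent embedding $u$ evolves between observation times as

$u_{t_i} \mid u_{t_{i-1}} \sim \mathcal{N}\big(u_{t_{i-1}},\; \sigma^2 (t_i - t_{i-1})\, I\big),$

so a rating observed at time $t$ is modeled through the current states of the user and item processes, e.g. $r_t \approx u_t^{\top} v_t$.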
no code implementations • 12 Feb 2013 • Yue Huang, John Paisley, Qin Lin, Xinghao Ding, Xueyang Fu, Xiao-Ping Zhang
The size of the dictionary and the patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables.
no code implementations • 25 Oct 2012 • John Paisley, Chong Wang, David M. Blei, Michael I. Jordan
We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling.
2 code implementations • 29 Jun 2012 • Matt Hoffman, David M. Blei, Chong Wang, John Paisley
We develop stochastic variational inference, a scalable algorithm for approximating posterior distributions.
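A schematic of the resulting global update for a conditionally conjugate model (`local_step` and `sufficient_stats` are hypothetical stand-ins for the model-specific computations):

```python
import numpy as np

def svi(data, lam, N, n_iters, prior, local_step, sufficient_stats):
    for t in range(1, n_iters + 1):
        x = data[np.random.randint(len(data))]          # sample one data point
        phi = local_step(x, lam)                        # optimize local variational parameters
        lam_hat = prior + N * sufficient_stats(x, phi)  # global parameter as if x appeared N times
        rho = t ** -0.7                                 # Robbins-Monro step size
        lam = (1 - rho) * lam + rho * lam_hat           # stochastic natural-gradient step
    return lam
```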
no code implementations • 27 Jun 2012 • John Paisley, David Blei, Michael Jordan
This requires the ability to integrate a sum of terms in the log joint likelihood using this factorized distribution.
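For context, under a mean-field factorization $q(z) = \prod_i q(z_i)$, the optimal coordinate update is

$q^{\star}(z_i) \propto \exp\big\{ \mathbb{E}_{q(z_{-i})}[\log p(x, z)] \big\},$

and the difficulty referred to above arises when these expectations of the log joint lack closed form.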
no code implementations • 8 Nov 2011 • Tamara Broderick, Lester Mackey, John Paisley, Michael I. Jordan
We show that the NBP is conjugate to the beta process, and we characterize the posterior distribution under the beta-negative binomial process (BNBP) and hierarchical models based on the BNBP (the HBNBP).