Search Results for author: Nhat Ho

Found 44 papers, 10 papers with code

Model Fusion of Heterogeneous Neural Networks via Cross-Layer Alignment

no code implementations • 29 Oct 2021 • Dang Nguyen, Khai Nguyen, Dinh Phung, Hung Bui, Nhat Ho

To address this issue, we propose a novel model fusion framework, named CLAFusion, to fuse neural networks with different numbers of layers, which we refer to as heterogeneous neural networks, via cross-layer alignment.

Fine-tuning, Knowledge Distillation, +1
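
The cross-layer alignment step is not spelled out in this snippet. As a rough illustration only (not the paper's algorithm), a monotone layer-to-layer matching between networks of different depths can be computed with a DTW-style dynamic program over a layer dissimilarity matrix; `cost` below is a hypothetical dissimilarity, e.g. between layer activation statistics.

```python
import numpy as np

def align_layers(cost):
    """Monotone alignment of two layer sequences via dynamic programming.

    cost[i, j]: dissimilarity between layer i of network A (depth m)
    and layer j of network B (depth n). Returns matched layer pairs.
    """
    m, n = cost.shape
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i, j] = cost[i - 1, j - 1] + min(
                D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    # Backtrack to recover which layers were matched to which.
    pairs, i, j = [], m, n
    while i > 0 and j > 0:
        pairs.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return pairs[::-1]
```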

On Label Shift in Domain Adaptation via Wasserstein Distance

no code implementations • 29 Oct 2021 • Trung Le, Dat Do, Tuan Nguyen, Huy Nguyen, Hung Bui, Nhat Ho, Dinh Phung

We study the label shift problem between the source and target domains in general domain adaptation (DA) settings.

Domain Adaptation

Transformer with a Mixture of Gaussian Keys

1 code implementation • 16 Oct 2021 • Tam Nguyen, Tan M. Nguyen, Dung Le, Khuong Nguyen, Anh Tran, Richard G. Baraniuk, Nhat Ho, Stanley J. Osher

Inspired by this observation, we propose Transformer with a Mixture of Gaussian Keys (Transformer-MGK), a novel transformer architecture that replaces redundant heads in transformers with a mixture of keys at each head.

Language Modelling
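
As a rough sketch of the idea (shapes, the shared variance `sigma`, and the function name are illustrative assumptions, not the authors' implementation), each key position can store M key vectors acting as Gaussian mixture means, and a query is scored by the log-density of that mixture:

```python
import numpy as np

def mgk_attention(Q, K_mix, pi, sigma=1.0):
    """Attention with a mixture of M Gaussian keys per key position.

    Q:     (n, d)    queries
    K_mix: (n, M, d) M key vectors (mixture means) per position
    pi:    (M,)      mixture weights summing to one
    Score(i, j) = log sum_m pi_m N(q_i | k_{j,m}, sigma^2 I), up to constants.
    """
    diff = Q[:, None, None, :] - K_mix[None, :, :, :]   # (n, n, M, d)
    sq = np.sum(diff ** 2, axis=-1)                     # (n, n, M)
    logits = np.log(pi)[None, None, :] - sq / (2 * sigma ** 2)
    scores = np.logaddexp.reduce(logits, axis=-1)       # log-sum-exp over m
    scores -= scores.max(axis=-1, keepdims=True)        # stable softmax
    A = np.exp(scores)
    return A / A.sum(axis=-1, keepdims=True)            # attention weights
```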

Towards Statistical and Computational Complexities of Polyak Step Size Gradient Descent

no code implementations • 15 Oct 2021 • Tongzheng Ren, Fuheng Cui, Alexia Atsidakou, Sujay Sanghavi, Nhat Ho

We study the statistical and computational complexities of the Polyak step size gradient descent algorithm under generalized smoothness and Lojasiewicz conditions on the population loss function, namely the limit of the empirical loss function as the sample size goes to infinity, together with a stability condition between the gradients of the empirical and population loss functions, namely a polynomial growth condition on the concentration bound between the gradients of the sample and population loss functions.

Entropic Gromov-Wasserstein between Gaussian Distributions

no code implementations • 24 Aug 2021 • Khang Le, Dung Le, Huy Nguyen, Dat Do, Tung Pham, Nhat Ho

When the metric is the inner product, which we refer to as inner product Gromov-Wasserstein (IGW), we demonstrate that the optimal transportation plans of entropic IGW and its unbalanced variant are (unbalanced) Gaussian distributions.

Improving Mini-batch Optimal Transport via Partial Transportation

1 code implementation • 22 Aug 2021 • Khai Nguyen, Dang Nguyen, Tung Pham, Nhat Ho

To address the misspecified mappings issue, we propose a novel mini-batch method by using partial optimal transport (POT) between mini-batch empirical measures, which we refer to as mini-batch partial optimal transport (m-POT).

Domain Adaptation
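
A minimal sketch of the core computation with the POT library (`pip install pot`); the batch size and the transported-mass fraction `s` are illustrative choices:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 2))           # mini-batch drawn from the source
Y = rng.normal(loc=1.0, size=(32, 2))  # mini-batch drawn from the target

a = np.full(32, 1 / 32)                # uniform empirical weights
b = np.full(32, 1 / 32)
M = ot.dist(X, Y)                      # pairwise squared Euclidean costs

# Transport only a fraction s of the mass, so pairs that random
# mini-batching matched wrongly can be left untransported.
s = 0.8
plan = ot.partial.partial_wasserstein(a, b, M, m=s)
mpot_cost = np.sum(plan * M)
```

Averaging such costs over many random mini-batch pairs yields the m-POT objective.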

On Multimarginal Partial Optimal Transport: Equivalent Forms and Computational Complexity

no code implementations • 18 Aug 2021 • Khang Le, Huy Nguyen, Tung Pham, Nhat Ho

We demonstrate that the ApproxMPOT algorithm can approximate the optimal value of the multimarginal POT problem with a computational complexity upper bound of the order $\tilde{\mathcal{O}}(m^3(n+1)^{m}/ \varepsilon^2)$, where $\varepsilon > 0$ stands for the desired tolerance.

On Integral Theorems: Monte Carlo Estimators and Optimal Functions

no code implementations • 22 Jul 2021 • Nhat Ho, Stephen G. Walker

We introduce a class of integral theorems based on cyclic functions and Riemann sums approximating integrals.

Statistical Analysis from the Fourier Integral Theorem

no code implementations • 11 Jun 2021 • Nhat Ho, Stephen G. Walker

Taking the Fourier integral theorem as our starting point, in this paper we focus on natural Monte Carlo and fully nonparametric estimators of multivariate distributions and conditional distribution functions.
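
For intuition, the basic Monte Carlo estimator has a closed form: with integration cutoff R, the density estimate at a point averages products of sinc kernels over the samples. A minimal NumPy sketch (the function name and cutoff value are illustrative):

```python
import numpy as np

def fourier_density_estimate(y, X, R=10.0):
    """Density estimator from the Fourier integral theorem:

    p_hat(y) = (1/n) sum_i prod_k sin(R (y_k - X_ik)) / (pi (y_k - X_ik))

    y: (d,) evaluation point; X: (n, d) samples; R: integration cutoff.
    """
    diff = y[None, :] - X                      # (n, d)
    # np.sinc(z) = sin(pi z) / (pi z), so sin(R t)/(pi t) = (R/pi) sinc(R t / pi)
    kern = (R / np.pi) * np.sinc(R * diff / np.pi)
    return np.mean(np.prod(kern, axis=1))
```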

Structured Dropout Variational Inference for Bayesian Neural Networks

no code implementations • NeurIPS 2021 • Son Nguyen, Duong Nguyen, Khai Nguyen, Khoat Than, Hung Bui, Nhat Ho

Approximate inference in Bayesian deep networks exhibits a dilemma: how to yield high-fidelity posterior approximations while maintaining computational efficiency and scalability.

Bayesian Inference, Out-of-Distribution Detection, +1

On Robust Optimal Transport: Computational Complexity and Barycenter Computation

no code implementations • NeurIPS 2021 • Khang Le, Huy Nguyen, Quang Nguyen, Tung Pham, Hung Bui, Nhat Ho

We consider robust variants of the standard optimal transport, named robust optimal transport, where marginal constraints are relaxed via Kullback-Leibler divergence.

On Transportation of Mini-batches: A Hierarchical Approach

1 code implementation • 11 Feb 2021 • Khai Nguyen, Dang Nguyen, Quoc Nguyen, Tung Pham, Hung Bui, Dinh Phung, Trung Le, Nhat Ho

To address these problems, we propose a novel mini-batching scheme for optimal transport, named Batch of Mini-batches Optimal Transport (BoMb-OT), that finds the optimal coupling between mini-batches and can be seen as an approximation of a well-defined distance on the space of probability measures.

Domain Adaptation

On the computational and statistical complexity of over-parameterized matrix sensing

no code implementations • 27 Jan 2021 • Jiacheng Zhuo, Jeongyeol Kwon, Nhat Ho, Constantine Caramanis

We consider solving the low-rank matrix sensing problem with the Factorized Gradient Descent (FGD) method when the true rank is unknown and over-specified, which we refer to as over-parameterized matrix sensing.
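
A toy sketch of the over-specified setting (all dimensions, the step size, and the iteration count are illustrative; the fitted rank deliberately exceeds the true rank):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r_true, r_fit, n_obs = 10, 2, 5, 400       # r_fit > r_true: over-specified

U_star = rng.normal(size=(d, r_true))
M_star = U_star @ U_star.T                     # true low-rank PSD matrix
A = rng.normal(size=(n_obs, d, d))             # Gaussian sensing matrices
y = np.einsum('kij,ij->k', A, M_star)          # measurements <A_k, M*>

U = 0.1 * rng.normal(size=(d, r_fit))          # over-parameterized factor
eta = 0.05 / n_obs                             # conservative step size
for _ in range(2000):
    resid = np.einsum('kij,ij->k', A, U @ U.T) - y
    G = np.einsum('k,kij->ij', resid, A)       # grad of 0.5 * sum resid^2 w.r.t. UU^T
    U -= eta * (G + G.T) @ U                   # chain rule through U U^T

print(np.linalg.norm(U @ U.T - M_star))        # recovery error
```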

Multivariate Smoothing via the Fourier Integral Theorem and Fourier Kernel

no code implementations • 28 Dec 2020 • Nhat Ho, Stephen G. Walker

Starting with the Fourier integral theorem, we present natural Monte Carlo estimators of multivariate functions including densities, mixing densities, transition densities, regression functions, and the search for modes of multivariate density functions (modal regression).

Improving Relational Regularized Autoencoders with Spherical Sliced Fused Gromov Wasserstein

2 code implementations • ICLR 2021 • Khai Nguyen, Son Nguyen, Nhat Ho, Tung Pham, Hung Bui

To improve the discrepancy and consequently the relational regularization, we propose a new relational discrepancy, named spherical sliced fused Gromov Wasserstein (SSFG), that can find an important area of projections characterized by a von Mises-Fisher distribution.

Image Generation

Projection Robust Wasserstein Distance and Riemannian Optimization

no code implementations • NeurIPS 2020 • Tianyi Lin, Chenyou Fan, Nhat Ho, Marco Cuturi, Michael I. Jordan

Projection robust Wasserstein (PRW) distance, or Wasserstein projection pursuit (WPP), is a robust variant of the Wasserstein distance.

Riemannian Optimization

Probabilistic Best Subset Selection via Gradient-Based Optimization

1 code implementation • 11 Jun 2020 • Mingzhang Yin, Nhat Ho, Bowei Yan, Xiaoning Qian, Mingyuan Zhou

In high-dimensional statistics, variable selection is an optimization problem aiming to recover the latent sparse pattern from all possible covariate combinations.

Methodology
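
The gradient-based idea can be illustrated with a relaxed Bernoulli (Gumbel-sigmoid) inclusion mask, which makes the expected loss differentiable in the inclusion probabilities; this is a generic sketch of such relaxations, not necessarily the paper's exact estimator:

```python
import numpy as np

def relaxed_bernoulli_mask(logits, temp=0.5, rng=None):
    """Differentiable surrogate for a binary variable-inclusion mask.

    Samples sigmoid((logits + logistic noise) / temp); as temp -> 0 the
    draws approach exact Bernoulli samples of the inclusion vector.
    """
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(1e-8, 1 - 1e-8, size=np.shape(logits))
    noise = np.log(u) - np.log1p(-u)          # Logistic(0, 1) noise
    return 1.0 / (1.0 + np.exp(-(logits + noise) / temp))

# A soft subset over 5 covariates: entries near 1 are "selected".
mask = relaxed_bernoulli_mask(np.array([2.0, -2.0, 0.0, 3.0, -1.0]))
```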

On the Minimax Optimality of the EM Algorithm for Learning Two-Component Mixed Linear Regression

no code implementations • 4 Jun 2020 • Jeongyeol Kwon, Nhat Ho, Constantine Caramanis

In the low SNR regime where the SNR is below $\mathcal{O}((d/n)^{1/4})$, we show that EM converges to a $\mathcal{O}((d/n)^{1/4})$ neighborhood of the true parameters, after $\mathcal{O}((n/d)^{1/2})$ iterations.

Uniform Convergence Rates for Maximum Likelihood Estimation under Two-Component Gaussian Mixture Models

1 code implementation • 1 Jun 2020 • Tudor Manole, Nhat Ho

We derive uniform convergence rates for the maximum likelihood estimator and minimax lower bounds for parameter estimation in two-component location-scale Gaussian mixture models with unequal variances.

Instability, Computational Efficiency and Statistical Accuracy

no code implementations • 22 May 2020 • Nhat Ho, Koulik Khamaru, Raaz Dwivedi, Martin J. Wainwright, Michael I. Jordan, Bin Yu

Many statistical estimators are defined as the fixed point of a data-dependent operator, with estimators based on minimizing a cost function being an important special case.

Distributional Sliced-Wasserstein and Applications to Generative Modeling

1 code implementation • ICLR 2021 • Khai Nguyen, Nhat Ho, Tung Pham, Hung Bui

Sliced-Wasserstein distance (SW) and its variant, Max Sliced-Wasserstein distance (Max-SW), have been widely used in recent years due to their fast computation and scalability even when the probability measures lie in a very high-dimensional space.
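
For reference, the vanilla SW distance is straightforward to estimate by Monte Carlo over random projections; a minimal sketch for equal-size empirical measures (the distributional variant proposed here instead learns the distribution over projection directions):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=100, p=2, seed=0):
    """Monte Carlo SW_p between two empirical measures with equally
    many points: 1-D OT reduces to matching sorted projections."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_proj, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # uniform directions
    px = np.sort(X @ theta.T, axis=0)    # (n, n_proj) sorted projections
    py = np.sort(Y @ theta.T, axis=0)
    return np.mean(np.abs(px - py) ** p) ** (1 / p)
```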

Fixed-Support Wasserstein Barycenters: Computational Hardness and Fast Algorithm

no code implementations • NeurIPS 2020 • Tianyi Lin, Nhat Ho, Xi Chen, Marco Cuturi, Michael I. Jordan

We study the fixed-support Wasserstein barycenter problem (FS-WBP), which consists in computing the Wasserstein barycenter of $m$ discrete probability measures supported on a finite metric space of size $n$.

On Unbalanced Optimal Transport: An Analysis of Sinkhorn Algorithm

no code implementations • ICML 2020 • Khiem Pham, Khang Le, Nhat Ho, Tung Pham, Hung Bui

We provide a computational complexity analysis for the Sinkhorn algorithm that solves the entropic regularized Unbalanced Optimal Transport (UOT) problem between two measures of possibly different masses with at most $n$ components.
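
A minimal sketch of the Sinkhorn-style scaling iteration for entropic UOT, assuming both marginals are relaxed by the same KL penalty `tau` (the damped updates follow the standard scaling scheme for unbalanced problems; `eps` should be scaled to the costs):

```python
import numpy as np

def sinkhorn_uot(a, b, M, eps=0.05, tau=1.0, iters=2000):
    """Entropic unbalanced OT between measures a and b (possibly of
    different total masses) with cost matrix M."""
    K = np.exp(-M / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    lam = tau / (tau + eps)              # damping exponent from the KL terms
    for _ in range(iters):
        u = (a / (K @ v)) ** lam
        v = (b / (K.T @ u)) ** lam
    return u[:, None] * K * v[None, :]   # transport plan
```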

Sampling for Bayesian Mixture Models: MCMC with Polynomial-Time Mixing

no code implementations • 11 Dec 2019 • Wenlong Mou, Nhat Ho, Martin J. Wainwright, Peter L. Bartlett, Michael I. Jordan

We study the problem of sampling from the power posterior distribution in Bayesian Gaussian mixture models, a robust version of the classical posterior.

Tree-Wasserstein Barycenter for Large-Scale Multilevel Clustering and Scalable Bayes

no code implementations • 10 Oct 2019 • Tam Le, Viet Huynh, Nhat Ho, Dinh Phung, Makoto Yamada

We study in this paper a variant of the Wasserstein barycenter problem, which we refer to as the tree-Wasserstein barycenter, by leveraging a specific class of ground metrics, namely tree metrics, for the Wasserstein distance.
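
With a tree metric, optimal transport has a closed form: the distance is the sum over edges of the edge length times the absolute difference in subtree mass, computable in one upward pass. A sketch, with the tree encoded as a topologically ordered parent array (an illustrative representation):

```python
import numpy as np

def tree_wasserstein(parent, edge_len, mu, nu):
    """Tree-Wasserstein distance between node masses mu and nu.

    parent[i]: parent of node i (root is node 0), with parent[i] < i;
    edge_len[i]: length of the edge (parent[i], i).
    """
    sub_mu, sub_nu = mu.astype(float), nu.astype(float)
    dist = 0.0
    for i in range(len(parent) - 1, 0, -1):   # leaves first, push mass up
        dist += edge_len[i] * abs(sub_mu[i] - sub_nu[i])
        sub_mu[parent[i]] += sub_mu[i]
        sub_nu[parent[i]] += sub_nu[i]
    return dist
```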

Flow-based Alignment Approaches for Probability Measures in Different Spaces

1 code implementation • 10 Oct 2019 • Tam Le, Nhat Ho, Makoto Yamada

By leveraging a tree structure, we propose to align \textit{flows} from a root to each support, instead of the pairwise tree metrics between supports (i.e., flows from one support to another) used in GW.

On the Complexity of Approximating Multimarginal Optimal Transport

no code implementations • 30 Sep 2019 • Tianyi Lin, Nhat Ho, Marco Cuturi, Michael I. Jordan

This provides the first \textit{near-linear time} complexity guarantee for approximating the MOT problem and matches the best known complexity bound for the Sinkhorn algorithm in the classical OT setting when $m = 2$.

On Efficient Multilevel Clustering via Wasserstein Distances

1 code implementation • 19 Sep 2019 • Viet Huynh, Nhat Ho, Nhan Dam, XuanLong Nguyen, Mikhail Yurochkin, Hung Bui, Dinh Phung

We propose a novel approach to the problem of multilevel clustering, which aims to simultaneously partition data in each group and discover grouping patterns among groups in a potentially large hierarchically structured corpus of data.

Convergence Rates for Gaussian Mixtures of Experts

no code implementations • 9 Jul 2019 • Nhat Ho, Chiao-Yu Yang, Michael I. Jordan

We provide a theoretical treatment of over-specified Gaussian mixtures of experts with covariate-free gating networks.

On the Efficiency of Sinkhorn and Greenkhorn and Their Acceleration for Optimal Transport

no code implementations • 1 Jun 2019 • Tianyi Lin, Nhat Ho, Michael I. Jordan

First, we improve the complexity bound of a greedy variant of the Sinkhorn algorithm, known as the \textit{Greenkhorn} algorithm, from $\widetilde{O}(n^2\varepsilon^{-3})$ to $\widetilde{O}(n^2\varepsilon^{-2})$.
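
For context, the algorithm these bounds concern is a few lines of matrix scaling; a minimal dense sketch of entropic OT via Sinkhorn (Greenkhorn differs by greedily updating the single worst row or column per step instead of all at once):

```python
import numpy as np

def sinkhorn(a, b, M, eps=0.05, iters=1000):
    """Entropic OT between marginals a and b with cost matrix M.

    Alternately rescales rows and columns of K = exp(-M / eps) so the
    plan diag(u) K diag(v) matches both marginals.
    """
    K = np.exp(-M / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]
```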

Posterior Distribution for the Number of Clusters in Dirichlet Process Mixture Models

no code implementations • 23 May 2019 • Chiao-Yu Yang, Eric Xia, Nhat Ho, Michael I. Jordan

In this work, we provide a rigorous study of the posterior distribution of the number of clusters in DPMM under different prior distributions on the parameters and constraints on the distributions of the data.

Fast Algorithms for Computational Optimal Transport and Wasserstein Barycenter

no code implementations • 23 May 2019 • Wenshuo Guo, Nhat Ho, Michael I. Jordan

First, we introduce the \emph{accelerated primal-dual randomized coordinate descent} (APDRCD) algorithm for computing the OT distance.

Neural Rendering Model: Joint Generation and Prediction for Semi-Supervised Learning

no code implementations • ICLR 2019 • Nhat Ho, Tan Nguyen, Ankit B. Patel, Anima Anandkumar, Michael I. Jordan, Richard G. Baraniuk

The conjugate prior yields a new regularizer for learning based on the paths rendered in the generative model for training CNNs: the Rendering Path Normalization (RPN).

Neural Rendering

On Structured Filtering-Clustering: Global Error Bound and Optimal First-Order Algorithms

no code implementations • 16 Apr 2019 • Nhat Ho, Tianyi Lin, Michael I. Jordan

We also demonstrate that the GDGA with a stochastic gradient descent (SGD) subroutine attains the optimal rate of convergence up to a logarithmic factor, shedding light on the possibility of solving filtering-clustering problems efficiently in the online setting.

Sharp Analysis of Expectation-Maximization for Weakly Identifiable Models

no code implementations • 1 Feb 2019 • Raaz Dwivedi, Nhat Ho, Koulik Khamaru, Martin J. Wainwright, Michael I. Jordan, Bin Yu

We study a class of weakly identifiable location-scale mixture models for which the maximum likelihood estimates based on $n$ i.i.d. samples …
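
For the symmetric two-component case, the EM operator under analysis is explicit. A minimal univariate sketch for the location-scale model 0.5 N(theta, s2) + 0.5 N(-theta, s2) (a toy illustration, not the paper's general setting):

```python
import numpy as np

def em_symmetric_mixture(x, iters=500):
    """EM for the mixture 0.5 N(theta, s2) + 0.5 N(-theta, s2)."""
    theta, s2 = 0.5, 1.0
    for _ in range(iters):
        # E-step: responsibility of the +theta component; the log-odds
        # simplify to 2 * theta * x / s2.
        w = 1.0 / (1.0 + np.exp(-2.0 * theta * x / s2))
        # M-step: closed-form location and common-variance updates.
        theta = np.mean((2 * w - 1) * x)
        s2 = np.mean(w * (x - theta) ** 2 + (1 - w) * (x + theta) ** 2)
    return theta, s2
```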

On Efficient Optimal Transport: An Analysis of Greedy and Accelerated Mirror Descent Algorithms

no code implementations • 19 Jan 2019 • Tianyi Lin, Nhat Ho, Michael I. Jordan

We show that the complexity bound of a greedy variant of the classical Sinkhorn algorithm, known as the \emph{Greenkhorn algorithm}, can be improved from the best known $\widetilde{\mathcal{O}}(n^2\varepsilon^{-3})$ to $\widetilde{\mathcal{O}}(n^2\varepsilon^{-2})$.

Data Structures and Algorithms

On Deep Domain Adaptation: Some Theoretical Understandings

no code implementations • 15 Nov 2018 • Trung Le, Khanh Nguyen, Nhat Ho, Hung Bui, Dinh Phung

The underlying idea of deep domain adaptation is to bridge the gap between source and target domains in a joint space so that a supervised classifier trained on labeled source data can be nicely transferred to the target domain.

Domain Adaptation, Transfer Learning

A Bayesian Perspective of Convolutional Neural Networks through a Deconvolutional Generative Model

no code implementations • 1 Nov 2018 • Tan Nguyen, Nhat Ho, Ankit Patel, Anima Anandkumar, Michael I. Jordan, Richard G. Baraniuk

This conjugate prior yields a new regularizer based on paths rendered in the generative model for training CNNs: the Rendering Path Normalization (RPN).

Object Classification

Probabilistic Multilevel Clustering via Composite Transportation Distance

no code implementations • 29 Oct 2018 • Nhat Ho, Viet Huynh, Dinh Phung, Michael I. Jordan

We propose a novel probabilistic approach to multilevel clustering problems based on composite transportation distance, which is a variant of transportation distance where the underlying metric is Kullback-Leibler divergence.

Singularity, Misspecification, and the Convergence Rate of EM

no code implementations • 1 Oct 2018 • Raaz Dwivedi, Nhat Ho, Koulik Khamaru, Michael I. Jordan, Martin J. Wainwright, Bin Yu

A line of recent work has analyzed the behavior of the Expectation-Maximization (EM) algorithm in the well-specified setting, in which the population likelihood is locally strongly concave around its maximizing argument.

Multilevel Clustering via Wasserstein Means

1 code implementation • ICML 2017 • Nhat Ho, XuanLong Nguyen, Mikhail Yurochkin, Hung Hai Bui, Viet Huynh, Dinh Phung

We propose a novel approach to the problem of multilevel clustering, which aims to simultaneously partition data in each group and discover grouping patterns among groups in a potentially large hierarchically structured corpus of data.

Singularity structures and impacts on parameter estimation in finite mixtures of distributions

no code implementations • 9 Sep 2016 • Nhat Ho, XuanLong Nguyen

Our study makes explicit the deep links between model singularities, parameter estimation convergence rates and minimax lower bounds, and the algebraic geometry of the parameter space for mixtures of continuous distributions.

Identifiability and optimal rates of convergence for parameters of multiple types in finite mixtures

no code implementations • 11 Jan 2015 • Nhat Ho, XuanLong Nguyen

This paper studies identifiability and convergence behaviors for parameters of multiple types in finite mixtures, and the effects of model fitting with extra mixing components.
