Search Results for author: Khai Nguyen

Found 23 papers, 13 papers with code

Integrating Efficient Optimal Transport and Functional Maps For Unsupervised Shape Correspondence Learning

no code implementations · 4 Mar 2024 · Tung Le, Khai Nguyen, Shanlin Sun, Nhat Ho, Xiaohui Xie

In the realm of computer vision and graphics, accurately establishing correspondences between geometric 3D shapes is pivotal for applications like object tracking, registration, texture transfer, and statistical shape analysis.

Object Tracking

On Parameter Estimation in Deviated Gaussian Mixture of Experts

no code implementations · 7 Feb 2024 · Huy Nguyen, Khai Nguyen, Nhat Ho

We consider the parameter estimation problem in the deviated Gaussian mixture of experts, in which the data are generated from $(1 - \lambda^{\ast}) g_0(Y|X) + \lambda^{\ast} \sum_{i = 1}^{k_{\ast}} p_{i}^{\ast} f(Y|(a_{i}^{\ast})^{\top}X + b_i^{\ast}, \sigma_{i}^{\ast})$, where $X, Y$ are respectively a covariate vector and a response variable, $g_{0}(Y|X)$ is a known function, $\lambda^{\ast} \in [0, 1]$ is the true but unknown mixing proportion, and $(p_{i}^{\ast}, a_{i}^{\ast}, b_{i}^{\ast}, \sigma_{i}^{\ast})$ for $1 \leq i \leq k_{\ast}$ are the unknown parameters of the Gaussian mixture of experts.

Sliced Wasserstein with Random-Path Projecting Directions

no code implementations · 29 Jan 2024 · Khai Nguyen, Shujian Zhang, Tam Le, Nhat Ho

From the RPD, we derive the random-path slicing distribution (RPSD) and two variants of sliced Wasserstein, i.e., the Random-Path Projection Sliced Wasserstein (RPSW) and the Importance Weighted Random-Path Projection Sliced Wasserstein (IWRPSW).

Denoising

Quasi-Monte Carlo for 3D Sliced Wasserstein

1 code implementation · 21 Sep 2023 · Khai Nguyen, Nicola Bariletto, Nhat Ho

Monte Carlo (MC) integration has been employed as the standard approximation method for the Sliced Wasserstein (SW) distance, whose analytical expression involves an intractable expectation.

Stochastic Optimization · Style Transfer
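The Monte Carlo approximation mentioned in the abstract above can be sketched in a few lines. This is a generic illustration, not the paper's quasi-Monte Carlo method: projecting directions are drawn uniformly on the unit sphere by normalizing Gaussian draws, and the one-dimensional Wasserstein distance between projected empirical measures reduces to comparing sorted projections. The function name and parameters are hypothetical.

```python
import numpy as np

def sliced_wasserstein_mc(X, Y, n_projections=100, p=2, rng=None):
    """Monte Carlo estimate of the sliced Wasserstein distance.

    X, Y: (n, d) arrays of samples from two distributions (equal sizes,
    uniform weights). Directions are drawn uniformly on the unit sphere;
    the 1D W_p between projected samples reduces to sorting.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Uniform directions on the sphere S^{d-1}: normalize Gaussian draws.
    theta = rng.standard_normal((n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto every direction and sort.
    Xp = np.sort(X @ theta.T, axis=0)   # shape (n, n_projections)
    Yp = np.sort(Y @ theta.T, axis=0)
    # Average the one-dimensional W_p^p values over directions.
    return (np.mean(np.abs(Xp - Yp) ** p)) ** (1.0 / p)
```

Quasi-Monte Carlo, as studied in the paper, would replace the i.i.d. uniform directions with a low-discrepancy point set on the sphere to reduce integration error.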

Diffeomorphic Mesh Deformation via Efficient Optimal Transport for Cortical Surface Reconstruction

no code implementations · 27 May 2023 · Tung Le, Khai Nguyen, Shanlin Sun, Kun Han, Nhat Ho, Xiaohui Xie

The metric is defined by sliced Wasserstein distance on meshes represented as probability measures that generalize the set-based approach.

Surface Reconstruction

Towards Convergence Rates for Parameter Estimation in Gaussian-gated Mixture of Experts

1 code implementation · 12 May 2023 · Huy Nguyen, TrungTin Nguyen, Khai Nguyen, Nhat Ho

Originally introduced as a neural network for ensemble learning, mixture of experts (MoE) has recently become a fundamental building block of highly successful modern deep neural networks for heterogeneous data analysis in several applications of machine learning and statistics.

Ensemble Learning

Sliced Wasserstein Estimation with Control Variates

1 code implementation · 30 Apr 2023 · Khai Nguyen, Nhat Ho

To bridge the literature on variance reduction and the literature on the SW distance, we propose computationally efficient control variates to reduce the variance of the empirical estimation of the SW distance.
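The control variate idea in the abstract above can be illustrated with the generic textbook estimator; this is not the paper's specific control variate construction for SW. Given samples of a target quantity $f$ and a correlated quantity $g$ with known mean, subtracting a scaled centered copy of $g$ keeps the estimate unbiased while reducing variance. The function name is hypothetical.

```python
import numpy as np

def control_variate_estimate(f_samples, g_samples, g_mean):
    """Generic control variate estimator of E[f].

    Uses the identity E[f] = E[f - c * (g - E[g])] for any c; the
    variance is minimized at c* = Cov(f, g) / Var(g), which is
    estimated here from the same samples.
    """
    c = np.cov(f_samples, g_samples)[0, 1] / np.var(g_samples)
    return np.mean(f_samples - c * (g_samples - g_mean))
```

For example, to estimate E[exp(Z)] for standard normal Z, one can use g(Z) = Z with known mean 0 as the control variate, since exp(Z) and Z are strongly correlated.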

Energy-Based Sliced Wasserstein Distance

1 code implementation · NeurIPS 2023 · Khai Nguyen, Nhat Ho

The second approach optimizes for the best distribution within a parametric family of distributions, namely the one that maximizes the expected distance.

Point cloud reconstruction

Self-Attention Amortized Distributional Projection Optimization for Sliced Wasserstein Point-Cloud Reconstruction

1 code implementation · 12 Jan 2023 · Khai Nguyen, Dang Nguyen, Nhat Ho

Despite being efficient, Max-SW and its amortized version cannot guarantee metricity property due to the sub-optimality of the projected gradient ascent and the amortization gap.

Point cloud reconstruction

Markovian Sliced Wasserstein Distances: Beyond Independent Projections

1 code implementation · NeurIPS 2023 · Khai Nguyen, Tongzheng Ren, Nhat Ho

Sliced Wasserstein (SW) distance suffers from redundant projections due to independent uniform random projecting directions.

Fast Approximation of the Generalized Sliced-Wasserstein Distance

no code implementations · 19 Oct 2022 · Dung Le, Huy Nguyen, Khai Nguyen, Trang Nguyen, Nhat Ho

Generalized sliced Wasserstein distance is a variant of sliced Wasserstein distance that exploits the power of non-linear projection through a given defining function to better capture the complex structures of the probability distributions.

Hierarchical Sliced Wasserstein Distance

1 code implementation · 27 Sep 2022 · Khai Nguyen, Tongzheng Ren, Huy Nguyen, Litu Rout, Tan Nguyen, Nhat Ho

We explain the usage of these projections by introducing Hierarchical Radon Transform (HRT) which is constructed by applying Radon Transform variants recursively.

Transformer with Fourier Integral Attentions

no code implementations · 1 Jun 2022 · Tan Nguyen, Minh Pham, Tam Nguyen, Khai Nguyen, Stanley J. Osher, Nhat Ho

Multi-head attention empowers the recent success of transformers, the state-of-the-art models that have achieved remarkable success in sequence modeling and beyond.

Image Classification · Language Modelling +1

Revisiting Sliced Wasserstein on Images: From Vectorization to Convolution

2 code implementations · 4 Apr 2022 · Khai Nguyen, Nhat Ho

Finally, we demonstrate the favorable performance of CSW over the conventional sliced Wasserstein in comparing probability measures over images and in training deep generative modeling on images.

Amortized Projection Optimization for Sliced Wasserstein Generative Models

1 code implementation · 25 Mar 2022 · Khai Nguyen, Nhat Ho

Seeking informative projecting directions has been an important task in utilizing sliced Wasserstein distance in applications.

On Cross-Layer Alignment for Model Fusion of Heterogeneous Neural Networks

no code implementations · 29 Oct 2021 · Dang Nguyen, Trang Nguyen, Khai Nguyen, Dinh Phung, Hung Bui, Nhat Ho

To address this issue, we propose a novel model fusion framework, named CLAFusion, to fuse neural networks with a different number of layers, which we refer to as heterogeneous neural networks, via cross-layer alignment.

Knowledge Distillation · Model Compression

Improving Mini-batch Optimal Transport via Partial Transportation

2 code implementations · 22 Aug 2021 · Khai Nguyen, Dang Nguyen, The-Anh Vu-Le, Tung Pham, Nhat Ho

Mini-batch optimal transport (m-OT) has been widely used recently to deal with the memory issue of OT in large-scale applications.

Partial Domain Adaptation
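The mini-batch OT scheme described above can be sketched in one dimension, where exact optimal transport between equal-size empirical measures reduces to sorting; this is a toy illustration with hypothetical function names, not the paper's implementation, and in higher dimensions a general OT solver would replace the 1D routine.

```python
import numpy as np

def w2_1d(a, b):
    """Exact squared 2-Wasserstein distance between two equal-size 1D
    empirical measures with uniform weights: sort and match in order."""
    return np.mean((np.sort(a) - np.sort(b)) ** 2)

def minibatch_ot(X, Y, batch_size=32, n_batches=50, rng=None):
    """m-OT estimate: average exact OT costs over random mini-batch
    pairs instead of solving one large OT problem."""
    rng = np.random.default_rng(rng)
    costs = []
    for _ in range(n_batches):
        xb = rng.choice(X, size=batch_size, replace=False)
        yb = rng.choice(Y, size=batch_size, replace=False)
        costs.append(w2_1d(xb, yb))
    return float(np.mean(costs))
```

The averaging over independent mini-batch pairs is what introduces the transportation artifacts that partial transportation (m-POT) and the hierarchical BoMb-OT scheme in the entry below aim to mitigate.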

Structured Dropout Variational Inference for Bayesian Neural Networks

no code implementations · NeurIPS 2021 · Son Nguyen, Duong Nguyen, Khai Nguyen, Khoat Than, Hung Bui, Nhat Ho

Approximate inference in Bayesian deep networks exhibits a dilemma of how to yield high fidelity posterior approximations while maintaining computational efficiency and scalability.

Bayesian Inference · Computational Efficiency +2

On Transportation of Mini-batches: A Hierarchical Approach

2 code implementations · 11 Feb 2021 · Khai Nguyen, Dang Nguyen, Quoc Nguyen, Tung Pham, Hung Bui, Dinh Phung, Trung Le, Nhat Ho

To address these problems, we propose a novel mini-batch scheme for optimal transport, named Batch of Mini-batches Optimal Transport (BoMb-OT), which finds the optimal coupling between mini-batches and can be seen as an approximation of a well-defined distance on the space of probability measures.

Domain Adaptation

Improving Relational Regularized Autoencoders with Spherical Sliced Fused Gromov Wasserstein

2 code implementations · ICLR 2021 · Khai Nguyen, Son Nguyen, Nhat Ho, Tung Pham, Hung Bui

To improve the discrepancy and consequently the relational regularization, we propose a new relational discrepancy, named spherical sliced fused Gromov Wasserstein (SSFG), that can find an important area of projections characterized by a von Mises-Fisher distribution.

Image Generation

Distributional Sliced-Wasserstein and Applications to Generative Modeling

1 code implementation · ICLR 2021 · Khai Nguyen, Nhat Ho, Tung Pham, Hung Bui

Sliced-Wasserstein distance (SW) and its variant, Max Sliced-Wasserstein distance (Max-SW), have been used widely in recent years due to their fast computation and scalability even when the probability measures lie in a very high dimensional space.

Informativeness
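The Max-SW variant mentioned above replaces the expectation over directions with a maximum over the most discriminative direction; it is usually computed by projected gradient ascent. The sketch below crudely approximates it by random search over candidate directions instead, which avoids autodiff at the cost of optimality; the function name is hypothetical.

```python
import numpy as np

def max_sw_random_search(X, Y, n_candidates=500, rng=None):
    """Crude Max-SW approximation: take the best of many random
    directions instead of running projected gradient ascent."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    theta = rng.standard_normal((n_candidates, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    Xp = np.sort(X @ theta.T, axis=0)
    Yp = np.sort(Y @ theta.T, axis=0)
    # 1D W_2 per direction; Max-SW keeps the most discriminative one.
    w2 = np.sqrt(np.mean((Xp - Yp) ** 2, axis=0))
    return float(w2.max())
```

The Distributional Sliced-Wasserstein distance in the entry above sits between the two extremes: rather than averaging uniformly or keeping a single maximizing direction, it optimizes a distribution over projecting directions.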

EmbNum: Semantic labeling for numerical values with deep metric learning

no code implementations · 26 Jun 2018 · Phuc Nguyen, Khai Nguyen, Ryutaro Ichise, Hideaki Takeda

Semantic labeling for numerical values is a task of assigning semantic labels to unknown numerical attributes.

Attribute · Metric Learning +1
